Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins, and the images the build
log shows deletion being attempted on were in fact deleted, so I think
this is primarily an exit code / API issue of some sort.)
By default, tmpfs on Docker 1.6 is 64MB, which is too
small for Go builds on the Kube project (binary size, etc.).
This moves the release build to use a non-tmpfs work dir.
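For context, the change amounts to something like the following sketch (the path and variable name are assumptions, not the actual build script):

    # A disk-backed work dir instead of the 64MB tmpfs; cleaned up on exit.
    RELEASE_WORKDIR=$(mktemp -d /var/tmp/kube-release.XXXXXX)
    trap 'rm -rf "${RELEASE_WORKDIR}"' EXIT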
Also, by pre-staging and pushing all at once, and by doing the ACL
modification in parallel, this shaves the push time down anyway, despite
the extra I/O.
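Roughly, the new flow looks like this sketch (the bucket name, layout, and exact gsutil flags are illustrative assumptions, not the actual script):

    # Stage everything locally first, then push it in one parallel copy.
    STAGE_DIR=$(mktemp -d)
    cp -r _output/release-tars/* "${STAGE_DIR}/"
    gsutil -m cp -r "${STAGE_DIR}"/* "gs://example-kube-release/devel/${KUBE_VERSION}/"
    # Fix up ACLs in parallel as well (-m fans the grants out across objects).
    gsutil -m acl ch -g AllUsers:R "gs://example-kube-release/devel/${KUBE_VERSION}/**"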
Along the way: updates to longer hashes, à la #6615.
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because previously the whole directory was
sent to the docker daemon for each build.)
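As a sketch of the new layout (binary names, base image, and the BUILD_DIR/BIN_DIR variables are illustrative assumptions):

    # Per-binary build contexts: each directory holds just one binary and a
    # plain Dockerfile, so no "-f Dockerfile.foo" is needed and the context
    # sent to the docker daemon stays small. Builds run in parallel.
    for binary in kube-apiserver kube-controller-manager kube-scheduler; do
      ctx="${BUILD_DIR}/${binary}"
      mkdir -p "${ctx}"
      cp "${BIN_DIR}/${binary}" "${ctx}/"
      printf 'FROM busybox\nADD %s /%s\n' "${binary}" "${binary}" > "${ctx}/Dockerfile"
      docker build -t "${binary}:${KUBE_VERSION}" "${ctx}" &
    done
    wait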
Also some minor cleanup.
Fixes #6463
This allows us to continue on branches like v0.14.1 and v0.14.2, and tag on
that branch. But it will merge those tags into master so git describe
picks up the changes.
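One way that merge could look, as a sketch (the tag name is hypothetical and the real tooling may differ):

    # Hypothetical tag made on a release branch.
    TAG=v0.14.2
    git checkout master
    # The "ours" strategy records the merge (so "git describe" on master can
    # reach the tag) without taking any of the release branch's content.
    git merge -s ours -m "Merge tag ${TAG} into master" "${TAG}"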
This does some git magic to make sure you do not tag a branch with
v0.13.3 unless that branch is a descendant of the release-0.13 branch
upstream.
Don't allow v0.13.4 if v0.13.3 doesn't exist.
Don't allow v0.13.3 if v0.13.3 already exists.
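In git terms, the checks amount to roughly this sketch (ref names are assumptions):

    NEW_TAG=v0.13.3                      # tag being created
    PREVIOUS_TAG=v0.13.2                 # previous patch release
    RELEASE_BRANCH=upstream/release-0.13

    # The commit being tagged must be a descendant of the release branch.
    git merge-base --is-ancestor "${RELEASE_BRANCH}" HEAD \
      || { echo "HEAD is not a descendant of ${RELEASE_BRANCH}" >&2; exit 1; }

    # The previous patch release must already exist ...
    git rev-parse -q --verify "refs/tags/${PREVIOUS_TAG}" >/dev/null \
      || { echo "${PREVIOUS_TAG} does not exist yet" >&2; exit 1; }

    # ... and the new tag must not exist already.
    if git rev-parse -q --verify "refs/tags/${NEW_TAG}" >/dev/null; then
      echo "${NEW_TAG} already exists" >&2
      exit 1
    fi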
We keep getting tags and branches wonky. This will land
v0.14.0 as an ancestor of master
v0.14.1 will NOT be an ancestor of master
I can do the latter; it's only a touch of git gymnastics, not really hard,
but we only occasionally do that today. (0.13.1 is an ancestor of master,
but 0.13.2 is not.)
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of this file in GCS, verifies that we're pushing a
newer build, and updates it if so. (i.e. it handles the case of
pushing a 0.13.1 and then later pushing a 0.12.3.) This follows the
pattern of the ci/ build, which Jenkins just updates unconditionally.
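The "only move forward" check could look roughly like this (the object path, version format, and sort -V comparison are assumptions about the approach, not the exact function):

    LATEST_FILE="gs://example-kube-release/release/latest.txt"  # hypothetical object
    NEW_VERSION="v0.13.1"
    CURRENT_VERSION=$(gsutil cat "${LATEST_FILE}" 2>/dev/null || echo "v0.0.0")

    # Only move "latest" forward: if what is already published sorts higher,
    # leave it alone (so pushing 0.12.3 after 0.13.1 is a no-op here).
    HIGHEST=$(printf '%s\n%s\n' "${CURRENT_VERSION}" "${NEW_VERSION}" | sort -V | tail -n1)
    if [[ "${HIGHEST}" == "${NEW_VERSION}" && "${CURRENT_VERSION}" != "${NEW_VERSION}" ]]; then
      echo -n "${NEW_VERSION}" | gsutil cp - "${LATEST_FILE}"
    else
      echo "Not updating latest: ${CURRENT_VERSION} is already published." >&2
    fi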
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.