kubernetes/federation
Merge pull request #39898 from ixdy/bazel-release-tars (commit b29d9cdbcf, merged by the Kubernetes Submit Queue)
Automatic merge from submit-queue

Build release tars using bazel

**What this PR does / why we need it**: builds equivalents of the various Kubernetes release tarballs, using only Bazel.

For example, you can now do
```console
$ make bazel-release
$ hack/e2e.go -v -up -test -down
```

**Special notes for your reviewer**: this currently depends on 3b29803eb5, which I have yet to turn into a pull request, since I'm still trying to figure out whether this is the best approach.

Basically, the issue comes from the way we generate the various server Docker image tarfiles and load them on nodes (sketched in the console block after this list):
* we `md5sum` the binary being encapsulated (e.g. kube-proxy) and save that to `$binary.docker_tag` in the server tarball
* we then build the docker image and tag using that md5sum (e.g. `gcr.io/google_containers/kube-proxy:$MD5SUM`)
* we `docker save` this image, which embeds the full tag in the `$binary.tar` file.
* on cluster startup, we `docker load` these tarballs, so the images come up with the tag we created at build time. The nodes then use the `$binary.docker_tag` file to find the right image.
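A rough reconstruction of that flow, with illustrative paths and names (the real build scripts differ in detail):

```console
$ # build side (hypothetical sketch of the steps above)
$ tag=$(md5sum kube-proxy | awk '{print $1}')
$ echo "$tag" > kube-proxy.docker_tag        # shipped alongside the image in the server tarball
$ docker build -t "gcr.io/google_containers/kube-proxy:$tag" .
$ docker save "gcr.io/google_containers/kube-proxy:$tag" > kube-proxy.tar
$ # node side, at cluster startup
$ docker load < kube-proxy.tar
```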

With the current bazel `docker_build` rule, the tag isn't saved in the docker image tar, so the node is unable to find the image after `docker load`ing it.
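The symptom after loading such a tar looks roughly like this (the image ID, age, and size here are made up):

```console
$ docker load < kube-proxy.tar
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
<none>       <none>   4a1f3b2c5d6e   47 years ago   45.2 MB
```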

My changes to the rule save the tag in the docker image tar, though I don't know if there are subtle issues with it. (Maybe we want to only tag when `--stamp` is given?)

Also, the docker images produced by bazel have their timestamps set to the Unix epoch, which is not great for debugging. That might be another thing to gate on `--stamp`.
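You can see the epoch timestamp directly (the image name here is illustrative):

```console
$ docker inspect --format '{{.Created}}' gcr.io/google_containers/kube-proxy:$tag
1970-01-01T00:00:00Z
```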

Long story short, we probably need to follow up with bazel folks on the best way to solve this problem.

**Release note**:

```release-note
NONE
```
Merged: 2017-01-18 14:24:48 -08:00
| Path | Last commit | Date |
| ---- | ----------- | ---- |
| apis | generated: staging | 2017-01-17 16:17:20 -05:00 |
| client | generated changes | 2017-01-17 08:32:05 -05:00 |
| cluster | Always --pull in docker build to ensure recent base images | 2017-01-10 16:21:05 -08:00 |
| cmd | move admission to genericapiserver | 2017-01-18 08:15:19 -05:00 |
| deploy | Update charts image version. | 2016-10-12 00:15:01 -07:00 |
| develop | Build release tarballs in bazel and add make bazel-release rule | 2017-01-13 16:17:44 -08:00 |
| docs/api-reference | generated: change to WatchEvent from Event | 2017-01-06 23:45:05 -05:00 |
| manifests | use etcd2 as storage-backend for federation until it is completely tested with etcd3 | 2017-01-10 15:14:40 +05:30 |
| pkg | refactor: use metav1.ObjectMeta in other types | 2017-01-17 16:17:19 -05:00 |
| registry/cluster | refactor: use metav1.ObjectMeta in other types | 2017-01-17 16:17:19 -05:00 |
| BUILD | Build release tarballs in bazel and add make bazel-release rule | 2017-01-13 16:17:44 -08:00 |
| Makefile | Separate the build recipe in federation Makefile into separate phases. | 2016-08-29 14:16:39 -07:00 |
| OWNERS | Add colhom to federation OWNERS | 2016-06-27 13:16:43 -07:00 |
| README.md | Fix doc links in Federation readme | 2016-11-07 11:31:08 +00:00 |

# Cluster Federation

Kubernetes Cluster Federation enables users to federate multiple Kubernetes clusters. Please see the user guide and the admin guide for more details about setting up and using the Cluster Federation.

## Building Kubernetes Cluster Federation

Please see the Kubernetes Development Guide for initial setup. Once you have the development environment set up as explained in that guide, you also need to install `jq`.

Building cluster federation artifacts should be as simple as running:

```console
$ make build
```

You can specify the Docker registry used to tag the images via the `KUBE_REGISTRY` environment variable. Please make sure that you use the same value in all subsequent commands.
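For example (the registry name here is illustrative):

```console
$ export KUBE_REGISTRY=gcr.io/myproject   # substitute your own registry
$ make build
```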

To push the built docker images to the registry, run:

```console
$ make push
```

To initialize the deployment (this pulls the installer images), run:

```console
$ make init
```

To deploy the clusters and install the federation components, edit the `${KUBE_ROOT}/_output/federation/config.json` file to describe your clusters and run:

```console
$ make deploy
```

To turn down the federation components and tear down the clusters, run:

```console
$ make destroy
```

## Ideas for improvement

  1. Continue with the destroy phase even in the face of errors.

     The bash script sets `set -e` (errexit), which causes the script to exit at the very first error. That is a sensible default when deploying components, but not when destroying or cleaning up, where the script should press on and remove as much as it can; see the sketch after this list.
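A minimal sketch of that pattern, assuming a hypothetical `destroy_component` function in the cleanup script (the component names are illustrative):

```sh
# Hypothetical cleanup phase: record failures but keep going.
set +e   # disable errexit for the destroy/cleanup phase
rc=0
for component in federation-apiserver federation-controller-manager; do
  destroy_component "$component" || { echo "warning: failed to destroy ${component}" >&2; rc=1; }
done
exit "$rc"
```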
