Files are taken from cluster/network-plugins/{bin,conf} to be consumed within a Vagrant kube-up.sh environment.
Paths used for the configuration files and the 'cni' name of the network provider are all taken from the Kubernetes documentation. Use of NETWORK_PROVIDER=cni is documented as usable (as are its effects on the runtime args of kubelet); however, the actual implementation in the Salt automation doesn't seem to exist.
This change attempts to fix that for the Vagrant use case.
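For reference, this is the documented usage the change is meant to make work, as a minimal sketch (kube-up.sh reads both variables from the environment):

```sh
# Documented invocation for the Vagrant + CNI case:
export KUBERNETES_PROVIDER=vagrant
export NETWORK_PROVIDER=cni   # plugins/config come from cluster/network-plugins/{bin,conf}
cluster/kube-up.sh
```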
Automatic merge from submit-queue
use apply instead of create to setup namespaces and tokens in addon manager
When the addon manager restarts, it takes ~15 minutes (1000 seconds) to start the sync loop because it retries creation of the namespace and tokens 100 times. `create` fails if the tokens already exist, so just use `apply`.
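A sketch of the difference (the manifest name is illustrative):

```sh
# `kubectl create` exits non-zero if the object already exists -- the source of
# the 100-retry loop -- while `kubectl apply` is idempotent across restarts.
kubectl create -f kube-system-namespace.yaml   # fails if the namespace exists
kubectl apply  -f kube-system-namespace.yaml   # succeeds either way
```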
Automatic merge from submit-queue
Juju kube up
I found some problems with the kube-up script that this pull request addresses. We didn't have the kubectl binary in the correct location.
This just changes where we download the package from the master, and fixes the kube-down.sh script to remove those files.
We rename it to EPHEMERAL_BLOCK_DEVICE_MAPPINGS, and we also change the value
so that it starts with a `,`, instead of always inserting a comma before it.
In this way the value can be empty.
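A minimal sketch of the concatenation this enables (the JSON mapping itself is illustrative):

```sh
# The leading comma is part of the value, so callers concatenate it directly;
# an empty value then degenerates cleanly to "no ephemeral mappings".
EPHEMERAL_BLOCK_DEVICE_MAPPINGS=',{"DeviceName":"/dev/sdc","VirtualName":"ephemeral0"}'
BLOCK_DEVICE_MAPPINGS="[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"VolumeSize\":32}}${EPHEMERAL_BLOCK_DEVICE_MAPPINGS}]"
```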
Also, if the user sets the (currently experimental) KUBE_AWS_STORAGE
environment variable to "ebs", then we will not mount any instance storage,
which will cause the machines to use EBS storage instead.
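Usage would then look like this (sketch):

```sh
# Experimental: skip mounting instance storage so machines use EBS instead.
export KUBE_AWS_STORAGE=ebs
cluster/kube-up.sh
```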
format-disks used to run with non-strict bash semantics, but this changed in
1.2 as we now merge it into the GCE script, so pipefail and errexit are both
set.
However, the way we list the ephemeral disks, by piping to grep, would cause a
non-zero exit code if there were no ephemeral disks.
Tolerate failure here by adding `|| true`. The metadata service call is unlikely
to fail, so we continue to ignore that possibility.
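A sketch of the tolerated pipeline (169.254.169.254 is the standard EC2 metadata endpoint):

```sh
set -o errexit -o pipefail
# Under pipefail, a grep that matches nothing fails the pipeline, and errexit
# would then abort the whole script -- hence the trailing `|| true`.
ephemeral_disks=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ \
  | grep ephemeral || true)
```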
Automatic merge from submit-queue
Fix so setup-files don't recreate/invalidate certificates that already exist
Fixes: #23197 and a lot of other DNS and dashboard issues
This is quite critical for `docker`-based users and should be considered as a **cherrypick-candidate** as it makes a lot of people wonder why Dashboard and/or DNS doesn't work. Example: https://github.com/kubernetes/dashboard/issues/374
Earlier, when you shut your `docker.md` cluster down and started it again, all ServiceAccounts became invalidated by `setup-files`, which happily ran once again and replaced all files. That made `apiserver` and `controller-manager` pick up the new certs (or there was a race condition; they _could_ have picked up the old certs too, but that's unlikely), while the old certs were put into `/var/run/secrets`, because the ServiceAccounts' Secrets were stored in etcd, which `setup-files` didn't touch.
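The fix boils down to guarding the generation step, roughly like this (the path and helper name are hypothetical):

```sh
# Only generate certs/tokens when none exist yet, so a restarted cluster keeps
# the certs that the ServiceAccount Secrets in etcd still reference.
if [[ ! -f /srv/kubernetes/ca.crt ]]; then
  generate_certs_and_tokens   # hypothetical helper
fi
```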
@fgrzadkowski @huggsboson @thockin @mikedanese @vishh @pwittrock @eparis @bgrant0607
Automatic merge from submit-queue
Trusty: Regional release .tar.gz support
@zmerlynn and @roberthbailey please review it. This change is to support the feature added in PR #22234. The entire logic is pretty much the same as in #22234, with only a few minor changes in implementation.
I manually ran the e2e tests with `export RELEASE_REGION_FALLBACK=true` on two clusters: (1) Trusty on master, nodes on ContainerVM; (2) master and nodes all on Trusty. All tests are green. I haven't figured out a way to simulate regional fallback, but I did test the function download_or_bust() out of the box.
cc/ @wonderfly @dchen1107 @fabioy FYI.
Automatic merge from submit-queue
Create a new Deployment in kube-system for every version.
It appears that version numbers have already been properly added to these files. This is a small change to delete an old deployment entirely, so we can create a new one per version (like replication controllers).
We'll want to change this back once the kube-addons support deployments in a later version.
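Roughly, the per-version update becomes (names and versions are hypothetical):

```sh
# Delete the previous version's Deployment outright, then create a new,
# version-suffixed one -- mirroring how replication controllers are handled.
kubectl delete deployment kube-dns-v10 --namespace=kube-system --ignore-not-found
kubectl create -f kube-dns-deployment-v11.yaml --namespace=kube-system
```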
Mostly doc updates and cruft removal
- describe conformance test policy and howto in e2e-tests.md
- rm e2e test info from testing.md in the name of DRY
- rm cluster/test-conformance.sh; unusable in release tar, not e2e.go
- update e2e test link in write-a-getting-started-guide.md
There are actually two `roles` settings in the Ubuntu installation scripts.
One is `roles` as a string, which can be set as an env variable and then used in the scripts.
The other is `roles` as an array, which is used internally to
locate a specific role by offset.
This patch distinguishes the two meanings by declaring the second
as `roles_array`, thus eliminating the ambiguity.
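A sketch of the distinction (the role values are illustrative):

```sh
# String form, settable from the environment:
export roles="ai i i"
# Array form, used internally to locate a specific role by offset:
roles_array=(${roles})
echo "${roles_array[1]}"   # -> i
```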
When using this flag, the following warning is shown:
Flag --api-version has been deprecated, flag is no longer respected and will be deleted in the next release
Stop using the flag in the validate-cluster.sh script to avoid the warning.
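The change is essentially this (the exact kubectl call in validate-cluster.sh is abbreviated here):

```sh
# Before: the deprecated global flag triggered the warning on every run.
# kubectl get nodes --api-version=v1
# After: drop the flag; it is no longer respected anyway.
kubectl get nodes
```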
This commit imports the latest development focus from the Charmer team
working to deliver Kubernetes charms with Juju.
Notable Changes:
- The charm is now assembled from layers in $JUJU_ROOT/layers
- Previously, the Juju provider would compile and fat-pack the charms; this
new approach delivers the entirety of Kubernetes via hyperkube.
- Adds KubeDNS as part of `cluster/kube-up.sh` and the verification step
- Removes the hard-coded port 8080 for the Kubernetes Master
- Includes TLS validation
- Validates kubernetes config from leader charm
- Targets Juju 2.0 commands
This should allow the non_masquerade_cidr option to be configured
in /etc/salt/minion.d/grains.conf, allowing the flag to be used by kubelet
in /etc/sysconfig/kubelet. The default configuration is set in pillar.
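A minimal sketch of the plumbing, assuming the conventional 10.0.0.0/8 default:

```sh
# Grain written at provisioning time (the value comes from env/pillar):
cat <<EOF >>/etc/salt/minion.d/grains.conf
grains:
  non_masquerade_cidr: 10.0.0.0/8
EOF
# Salt then renders it into the kubelet args in /etc/sysconfig/kubelet, e.g.:
#   --non-masquerade-cidr=10.0.0.0/8
```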