Variables $ENABLE_CLUSTER_MONITORING and $ENABLE_CLUSTER_UI are currently set in cluster/vagrant/config-default.sh but are not passed to the master VM. Therefore, cluster/saltbase/salt/kube-addons/init.sls does not have these variables, and the add-ons cannot be enabled.
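A minimal sketch of the plumbing the fix needs, assuming the master provisioning step writes a Salt pillar file that init.sls can read (the pillar path and key names below are illustrative, not taken from the actual patch):

```sh
# Run during master provisioning: forward the add-on toggles from
# config-default.sh into a Salt pillar so kube-addons/init.sls can
# test them. Pillar file and key names are assumptions.
mkdir -p /srv/salt-overlay/pillar
{
  echo "enable_cluster_monitoring: '${ENABLE_CLUSTER_MONITORING}'"
  echo "enable_cluster_ui: '${ENABLE_CLUSTER_UI}'"
} >> /srv/salt-overlay/pillar/cluster-params.sls
```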
The error message thrown when KUBERNETES_PROVIDER is vagrant and the
required vagrant plugin cannot be found is ambiguous. This change does
not alter functionality; it only provides more feedback about the
source of the error.
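For illustration, a check along these lines would name the missing piece explicitly; the plugin name vagrant-vbguest is only an example, not necessarily the plugin the scripts actually require:

```sh
# Fail early with a specific message instead of an ambiguous error
# later during kube-up.
if ! vagrant plugin list | grep -q 'vagrant-vbguest'; then
  echo "Vagrant plugin 'vagrant-vbguest' is not installed." >&2
  echo "Install it with: vagrant plugin install vagrant-vbguest" >&2
  exit 1
fi
```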
MASTER_IP and MINION_IP_BASE are hard-coded in vagrant's
config-default.sh, and the values correspond to virtualbox's default
subnet. On hosts that have both virtualbox and another provider
installed, attempting to deploy kubernetes with the non-virtualbox
provider is likely to result in broken networking. This change allows
the addresses to be overridden via the environment so that more
appropriate values can be used.
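A sketch of the override pattern in config-default.sh; the literal addresses are just the familiar VirtualBox-subnet defaults, used here as placeholders:

```sh
# Keep the VirtualBox-friendly defaults, but let the environment win.
export MASTER_IP=${MASTER_IP-10.245.1.2}
export MINION_IP_BASE=${MINION_IP_BASE-10.245.2.}
```

A libvirt user could then run, for example, MASTER_IP=192.168.121.10 MINION_IP_BASE=192.168.121. cluster/kube-up.sh without editing the file.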
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
This will probably work on other up-to-date providers, but beware:
the symptom of breakage would be that service proxying stops working.
1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
1. Distributes the token via the salt-overlay, and via Salt, to /var/lib/kube-proxy/kubeconfig (see the sketch after this list).
1. Changes the kube-proxy args:
   - use the --kubeconfig argument
   - change the --master argument from http://MASTER:7080 to https://MASTER
     (http -> https, explicit port 7080 -> implied 443)
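A rough sketch of steps 1 and 2, reusing the token-generation idiom already used for the kubelet token; the overlay path and the exact kubeconfig layout are illustrative rather than copied from the patch:

```sh
# 1. Generate a bearer token for kube-proxy.
KUBE_PROXY_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null \
    | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)

# 2. Drop a kubeconfig into the salt-overlay; Salt then installs it
#    on each node as /var/lib/kube-proxy/kubeconfig.
mkdir -p /srv/salt-overlay/salt/kube-proxy
cat <<EOF >/srv/salt-overlay/salt/kube-proxy/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kube-proxy
  user:
    token: ${KUBE_PROXY_TOKEN}
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: proxy-to-local
current-context: proxy-to-local
EOF
```

With that in place, step 3 amounts to starting the daemon with roughly kube-proxy --master=https://MASTER --kubeconfig=/var/lib/kube-proxy/kubeconfig.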
Possible ways this might break other distros:
Mitigation: there is a default (empty) kubeconfig file. If a distro
does not populate the salt-overlay, it should get the empty file,
which parses to an empty object and which, combined with the
--master argument, should still work (see the sketch after the
mitigation list below).
Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
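The empty-kubeconfig fallback mentioned above is just that, an empty file; sketched here under the path from the list:

```sh
# Default empty kubeconfig for distros that never populate the
# salt-overlay: it parses to an empty Config object, so kube-proxy
# falls back to whatever --master specifies.
mkdir -p /var/lib/kube-proxy
touch /var/lib/kube-proxy/kubeconfig
```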
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)
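For reference, an instance attribute of that name would be read on a GCE node from the metadata server roughly as below; the attribute name comes from the rename above, while the consuming script is not part of this message:

```sh
# Fetch the kubelet's bearer token under its new metadata key.
curl --silent --fail -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kubelet-token'
```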
Having separate tokens for kubelet and kube-proxy permits
applying the principle of least privilege, makes it easy to
rate-limit the clients separately, and allows apiserver logs
to be annotated with the client identity at a finer grain
than just source IP.