GCE does this in its per-provider scripts; this change does the same for AWS and
lets other providers do the same. I believe kube2sky requires 10.0.0.1 as a SAN.
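For illustration only, a minimal sketch of how a provider script might include the service IP as a subjectAltName when issuing the apiserver certificate with openssl; the file names, the MASTER_IP value, and the CA paths are assumptions, not the actual script:

```bash
# Hypothetical sketch: issue an apiserver cert whose SANs include the
# kubernetes service IP (10.0.0.1) alongside the master address.
MASTER_IP="1.2.3.4"            # assumed master address
SERVICE_IP="10.0.0.1"          # the SAN kube2sky needs

cat > san.cnf <<EOF
[req]
distinguished_name = req_dn
req_extensions = v3_req
prompt = no
[req_dn]
CN = kubernetes-master
[v3_req]
subjectAltName = IP:${MASTER_IP},IP:${SERVICE_IP}
EOF

openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -config san.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 365 -extensions v3_req -extfile san.cnf
```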
If we're going to have a persistent disk on /mnt/master-pd, it seems risky
that /mnt is sometimes itself a mounted volume.
New consistent approach: we mount volumes under /mnt/<name>.
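A rough sketch of the convention (the device name and filesystem type are illustrative; the real per-provider scripts differ):

```bash
# Hypothetical sketch: mount the master persistent disk under /mnt/master-pd
# instead of taking over /mnt itself.
device="/dev/xvdb"              # assumed attached EBS/PD device
mount_point="/mnt/master-pd"    # per-volume directory under /mnt/<name>

mkdir -p "${mount_point}"
# Format only if the device has no filesystem yet (first boot).
if ! blkid "${device}" >/dev/null 2>&1; then
  mkfs -t ext4 "${device}"
fi
mount "${device}" "${mount_point}"
```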
For parity with GCE, we really want to support aufs.
But we previously supported btrfs, so we want to keep exposing that option.
Most of the work here is required for aufs; we let advanced users choose
devicemapper or btrfs if they have a setup that works for those configurations.
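A sketch of the kind of switch this enables; the DOCKER_STORAGE variable name and the default are assumptions for illustration:

```bash
# Hypothetical sketch: let advanced users pick the Docker storage driver,
# defaulting to aufs for parity with GCE.
DOCKER_STORAGE="${DOCKER_STORAGE:-aufs}"

case "${DOCKER_STORAGE}" in
  aufs)
    DOCKER_OPTS="--storage-driver=aufs"
    ;;
  btrfs)
    DOCKER_OPTS="--storage-driver=btrfs"
    ;;
  devicemapper)
    DOCKER_OPTS="--storage-driver=devicemapper"
    ;;
  *)
    echo "Unknown DOCKER_STORAGE: ${DOCKER_STORAGE}" >&2
    exit 1
    ;;
esac
```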
The latest salt version breaks the container_bridge.py _state function.
We can lock to the same version as GCE. This is not a full fix,
because we can't update to the latest salt without breaking GCE,
but it at least unbreaks AWS and keeps it in sync with GCE.
This isn't a straight copy from GCE, because we still use
the salt master on AWS (for now).
Fixes #8114
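A sketch of what pinning might look like on a Debian/Ubuntu node; the version string is purely illustrative, not the version GCE actually pins:

```bash
# Hypothetical sketch: install and hold a specific salt-minion version
# rather than whatever "latest" the repository serves.
SALT_VERSION="2014.1.13+ds-1"   # illustrative only
apt-get install -y "salt-minion=${SALT_VERSION}"
apt-mark hold salt-minion
```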
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers,
but beware: the symptom of breakage would be that service proxying
stops working.
1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
1. Distributes the token via the salt-overlay, and via salt to /var/lib/kube-proxy/kubeconfig.
1. Changes the kube-proxy args (see the sketch after this list):
- use the --kubeconfig argument
- changes --master argument from http://MASTER:7080 to https://MASTER
- http -> https
- explicit port 7080 -> implied 443
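Roughly, the distributed file and the new invocation look like the following; the token placeholder, cluster settings, and the MASTER placeholder are illustrative:

```bash
# Hypothetical sketch of the distributed kubeconfig and the new invocation.
cat > /var/lib/kube-proxy/kubeconfig <<EOF
apiVersion: v1
kind: Config
users:
- name: kube-proxy
  user:
    token: <kube-proxy token from the salt-overlay>
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: service-account-context
current-context: service-account-context
EOF

# Before: kube-proxy --master=http://MASTER:7080
# After:
kube-proxy --master=https://MASTER --kubeconfig=/var/lib/kube-proxy/kubeconfig
```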
Possible ways this might break other distros:
Mitigation: there is a default empty kubeconfig file.
If the distro does not populate the salt-overlay, then
it should get the empty file, which parses to an empty
object and, combined with the --master argument,
should still work.
Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
This is a stop-gap fix; we'd really like to use EC2 instance ids, but that is
blocked by #7092, or by changing that health check to not assume that the node
name is resolvable.
This stop-gap essentially reverts #7072 for AWS.
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)
Having separate tokens for kubelet and kube-proxy permits
applying the principle of least privilege, makes it easy to
rate limit the clients separately, and allows annotating
apiserver logs with the client identity at a finer grain
than just the source IP.
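For illustration, token generation along these lines; the exact commands and the known-tokens file name in the per-provider scripts may differ:

```bash
# Hypothetical sketch: generate independent random tokens for the kubelet
# and kube-proxy, and record them in a static token file for the apiserver.
gen_token() {
  dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null
}
KUBELET_TOKEN=$(gen_token)
KUBE_PROXY_TOKEN=$(gen_token)

# Assumed known_tokens.csv format: token,user,uid
echo "${KUBELET_TOKEN},kubelet,kubelet" >> known_tokens.csv
echo "${KUBE_PROXY_TOKEN},kube_proxy,kube_proxy" >> known_tokens.csv
```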
We were specifying a region, but naming it as a zone in util.sh.
The zone matters just as much as the region, e.g. for EBS volumes.
We also change the config to require a Zone, not a Region,
but we fall back to getting the information from the metadata service.
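On the node side, the fallback can read the zone from the EC2 metadata service and derive the region from it; a minimal sketch, assuming the ZONE/REGION variable names:

```bash
# Hypothetical sketch: fall back to the EC2 metadata service when no zone
# is configured; the region is the zone minus its trailing letter.
if [[ -z "${ZONE:-}" ]]; then
  ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
fi
REGION="${ZONE%?}"   # e.g. us-east-1a -> us-east-1
echo "Using zone ${ZONE} (region ${REGION})"
```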