For parity with GCE, we really want to support aufs, but we previously
supported btrfs, so we want to keep exposing that option.
Most of the work here is required for aufs; advanced users can still
choose devicemapper or btrfs if they have a setup that works for those
configurations.
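As an illustration only, the choice could surface as a single config
variable defaulting to aufs; DOCKER_STORAGE is an assumed name here, not
necessarily what the scripts use:

    # Illustrative sketch: let advanced users pick the Docker storage driver,
    # defaulting to aufs for parity with GCE. DOCKER_STORAGE is an assumed name.
    DOCKER_STORAGE="${DOCKER_STORAGE:-aufs}"

    case "${DOCKER_STORAGE}" in
      aufs|btrfs|devicemapper)
        echo "Using Docker storage driver: ${DOCKER_STORAGE}"
        ;;
      *)
        echo "Unsupported DOCKER_STORAGE value: ${DOCKER_STORAGE}" >&2
        exit 1
        ;;
    esac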
Whenever we do a list, we now filter on tags so that we only see resources
belonging to our cluster (a CLI-level sketch follows the list below).
Also, rationalize all the DescribeX calls:
* They all take a request object (so that we can pass filters)
* They do paging if that is required (and return the underlying resources)
* They wrap any error with an "error while listing X: %v" message
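Purely as an illustration of the tag filtering mentioned above, the
equivalent query at the AWS CLI level might look like this; the
KubernetesCluster tag name and CLUSTER_ID variable are assumptions, not
necessarily the exact names used:

    # Illustration only: list instances restricted to this cluster's tag,
    # and report failures with a consistent message.
    CLUSTER_ID="${CLUSTER_ID:-kubernetes}"

    instance_ids=$(aws ec2 describe-instances \
        --filters "Name=tag:KubernetesCluster,Values=${CLUSTER_ID}" \
        --query "Reservations[].Instances[].InstanceId" \
        --output text) \
      || { echo "error while listing instances" >&2; exit 1; }

    echo "Instances in cluster ${CLUSTER_ID}: ${instance_ids}"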
The default behaviour when setting up a cluster is to use the Amazon-assigned
public IP, which changes between reboots. If MASTER_RESERVED_IP is set to
'auto', a new Elastic IP will be allocated and assigned to the master. If
MASTER_RESERVED_IP is set to an existing Elastic IP, that IP will be used.
If something fails, the original Amazon-assigned IP will be used.
When MASTER_RESERVED_IP is set to an Elastic IP from AWS, aws/util.sh will
associate it with the master instance and assign it to KUBE_MASTER_IP. If
MASTER_RESERVED_IP is not set, a new Elastic IP will be requested from Amazon.
This allows cluster certificates to be generated for an IP that doesn't change
between stopping and starting cluster instances.
The requested Elastic IP is not released when kube-down.sh is run. I think
this is good, because the user may have created DNS records pointing at it,
and it would be bad if the IP were released. It can be reused through
MASTER_RESERVED_IP when setting up the cluster again.
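A minimal sketch of the 'auto' and explicit-Elastic-IP cases with the AWS
CLI, assuming an EC2-VPC setup; this is not the actual util.sh code,
master_instance_id and allocation_id are placeholder names, and the fallback
to the Amazon-assigned IP on failure is omitted:

    # Sketch only: allocate or reuse an Elastic IP and attach it to the master.
    if [[ "${MASTER_RESERVED_IP:-}" == "auto" ]]; then
      # 'auto': allocate a fresh Elastic IP for the master.
      KUBE_MASTER_IP=$(aws ec2 allocate-address --domain vpc \
          --query PublicIp --output text)
    elif [[ -n "${MASTER_RESERVED_IP:-}" ]]; then
      # Reuse an Elastic IP the user already owns.
      KUBE_MASTER_IP="${MASTER_RESERVED_IP}"
    fi

    if [[ -n "${KUBE_MASTER_IP:-}" ]]; then
      # Look up the allocation id for the address and associate it with
      # the master instance (master_instance_id is a placeholder).
      allocation_id=$(aws ec2 describe-addresses \
          --public-ips "${KUBE_MASTER_IP}" \
          --query "Addresses[0].AllocationId" --output text)
      aws ec2 associate-address \
          --instance-id "${master_instance_id}" \
          --allocation-id "${allocation_id}"
    fi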
The latest Salt version breaks the container_bridge.py _state function.
We can lock to the same version as GCE. This is not a full fix, because we
can't update to the latest Salt without breaking GCE, but it at least
unbreaks AWS and keeps it in sync with GCE.
This isn't a straight copy from GCE, because we still use the salt master
on AWS (for now).
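As a generic illustration of version pinning only (not necessarily the
mechanism these scripts use), an apt-based install could be held at a fixed
release; the actual pinned version is not specified here, so SALT_VERSION
must be supplied by the caller:

    # Placeholder sketch: pin Salt to a fixed release and hold it so a later
    # 'apt-get upgrade' cannot pull in the breaking version.
    : "${SALT_VERSION:?set SALT_VERSION to the release GCE pins to}"
    apt-get install -y "salt-minion=${SALT_VERSION}" "salt-master=${SALT_VERSION}"
    apt-mark hold salt-minion salt-master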
Fixes #8114
We were specifying a region, but naming it as a zone in util.sh
The zone matters just as much as the region, e.g. for EBS volumes.
We also change the config to require a Zone, not a Region, but we fall back
to the metadata service for the information.
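A rough sketch of that fallback, assuming the standard EC2 instance metadata
endpoint; ZONE and AWS_REGION are placeholder names rather than the exact
variables in util.sh:

    # Sketch: read the availability zone from instance metadata when it is not
    # configured, then derive the region by dropping the zone's letter suffix.
    if [[ -z "${ZONE:-}" ]]; then
      ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    fi
    # e.g. us-west-2a -> us-west-2
    AWS_REGION="${ZONE%?}"
    echo "Zone: ${ZONE}, Region: ${AWS_REGION}"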