Automatic merge from submit-queue

Migrates addons from RCs to Deployments

Fixes #33698. Below addons are being migrated:
- kube-dns
- GLBC default backend
- Dashboard UI
- Kibana

For the new Deployments, the version suffixes are removed from their names. Version-related labels are also removed, because they are confusing and no longer needed given how Deployments and the new Addon Manager work.

The `replicas` field in the `kube-dns` Deployment manifest is removed for the incoming DNS horizontal autoscaling feature (#33239). The `replicas` field in the `Dashboard` Deployment manifest is also removed because the rescheduler e2e test scales it manually. Some resource-limit-related fields in `heapster-controller.yaml` are removed, as they will be set by the `addon resizer` containers. Detailed reasons are in #34513.

Three e2e tests are modified:
- `rescheduler.go`: changed to resize the Dashboard UI Deployment instead of the ReplicationController.
- `addon_update.go`: some namespace-related changes to make it compatible with the new Addon Manager.
- `dns_autoscaling.go`: changed to examine the kube-dns Deployment instead of the ReplicationController.

Both of the above two tests passed on my own cluster. The upgrade process --- from old addons with RCs to new addons with Deployments --- was also tested and worked as expected.

The last commit upgrades the Addon Manager to v6.0. It is still a work in progress and is currently waiting for #35220 to be finished. (The Addon Manager image in use comes from a non-official registry, but it mostly works, except for some corner cases.)

@piosz @gmarek could you please review the heapster part and the rescheduler test?
@mikedanese @thockin
cc @kubernetes/sig-cluster-lifecycle

---

Notes:
- The kube-dns manifest still uses *-rc.yaml for the new Deployment. The stale file names are preserved here to get a faster review. I may send out a PR to reorganize kube-dns's file names after this.
- The Heapster Deployment's name remains in the old style (with the `-v1.2.0` suffix) to avoid having to describe this upgrade transition explicitly. This way we don't need to attach fake `apply` labels to the old Deployments.
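As a rough illustration of the kind of change the modified `rescheduler.go` test performs, a Deployment is resized where an RC was resized before. This is only a sketch; the exact resource names and namespace (`kube-system`, `kubernetes-dashboard`) are assumptions, not taken from the diff itself:

```shell
# Before the migration, the e2e test would have scaled a versioned RC, e.g.:
#   kubectl --namespace=kube-system scale rc kubernetes-dashboard-v1.4.0 --replicas=2
# After the migration, the versionless Deployment is scaled instead:
kubectl --namespace=kube-system scale deployment kubernetes-dashboard --replicas=2

# Deployments also make subsequent addon upgrades declarative: re-applying an
# updated manifest (file name here is hypothetical) triggers a rolling update
# instead of requiring the Addon Manager to delete and recreate an RC.
kubectl --namespace=kube-system apply -f kubernetes-dashboard.yaml
```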
SaltStack configuration
This is the root of the SaltStack configuration for Kubernetes. A high-level overview of the Kubernetes SaltStack configuration can be found in the docs tree.
This SaltStack configuration currently applies to default
configurations for Debian-on-GCE, Fedora-on-Vagrant, Ubuntu-on-AWS and
Ubuntu-on-Azure. (That doesn't mean it can't be made to apply to an
arbitrary configuration, but those are only the in-tree OS/IaaS
combinations supported today.) As you peruse the configuration, these
are shorthanded as `gce`, `vagrant`, `aws`, and `azure-legacy` in
`grains.cloud`; the documentation in this tree uses the same shorthand
for convenience.
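To make the `grains.cloud` shorthand concrete, here is a sketch of targeting minions by that grain from the salt master. The grain values (`gce`, `vagrant`) come from the text above; `test.ping` and `grains.item` are standard Salt execution functions, and a running salt-master with connected minions is assumed:

```shell
# Ping only the minions whose cloud grain is "gce":
salt -G 'cloud:gce' test.ping

# Inspect the cloud grain on Vagrant-provisioned minions:
salt -G 'cloud:vagrant' grains.item cloud
```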
See more: