Automatic merge from submit-queue

Upgrade addon-manager with kubectl apply

The first step of #33698. Use `kubectl apply` to replace addon-manager's previous logic.

The most important issue this PR targets is the upgrade from 1.4 to 1.5. The procedure is as follows (a minimal sketch of this flow appears after this description):

1. Precondition: after the master is upgraded, the new addon-manager starts and all the old resources on the nodes are running normally.
2. Annotate the old ReplicationController resources with kubectl.kubernetes.io/last-applied-configuration="".
3. Call `kubectl apply --prune=false` on the addons folder to create the new addons, including the new Deployments.
4. Wait one minute for the new addons to be spun up.
5. Enter the periodic loop of `kubectl apply --prune=true`. The old RCs will be pruned on the first call.

Procedure for a normal startup:

1. Addon-manager starts and no addon resources are running.
2. Annotate nothing.
3. Call `kubectl apply --prune=false` to create all new addons.
4. The remaining steps are the same as in the upgrade case.

Remaining issues:

- Need to add a `--type` flag to `kubectl apply --prune`, mentioned [here](https://github.com/kubernetes/kubernetes/pull/33075#discussion_r80814070).
- This addon-manager does not work properly with the current heapster Deployment, which runs [addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer) in the same pod and changes the resource limit configuration through the apiserver, so `kubectl apply` fights with the addon-resizer. Maybe we should remove the initial resource limit field from the configuration file for this specific Deployment, as we removed the replica count.

@mikedanese @thockin @bprashanth

---

Below are some details that may need clarification; feel free to **omit** them if they are too verbose:

- During the upgrade, the old RCs will not fight with the new Deployments in the overlap period, even if they use the same labels in their templates:
  - The Deployment will not recognize the old pods because it needs to match an additional "pod-template-hash" label.
  - The ReplicationController will not manage the new pods (created by the Deployment) because of the [`controllerRef`](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/controller-ref.md) feature.
- As we are moving all addons to Deployments, all old RCs will be removed. Attaching the empty annotation to the RCs is only to let `kubectl apply --prune` recognize them; the content of the annotation does not matter.
- We might also need to annotate other resource types if we plan to upgrade them in the 1.5 release:
  - They do not need this placeholder annotation if they keep the same name; `kubectl apply` can recognize them by name/type/namespace.
  - Otherwise, attaching empty annotations to them still works. Since the plan is to use a label selector for annotating, some unrelated old resources may also receive empty annotations; they fall into one of two cases:
    - Resources that need to be bumped to a newer version (mainly due to a significant update that changes disallowed fields and therefore cannot be handled by the update behavior of `kubectl apply`) are fine with this placeholder annotation: the old resources will be deleted and new resources created, and the annotation content does not matter.
    - Resources that need to stay under the management of `kubectl apply` are also fine, because `kubectl apply` will [generate a 3-way merge patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strategicpatch/patch.go#L1202-L1226); the empty annotation is harmless.
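For illustration only, here is a minimal bash sketch of the upgrade flow described above. It is not the actual addon-manager script; the addons path, label selector, namespace, and sleep intervals are assumptions made for this example.

```bash
#!/bin/bash
# Sketch of the addon upgrade flow (illustrative; not the real addon-manager).
ADDON_PATH="/etc/kubernetes/addons"                   # assumed manifest location
ADDON_SELECTOR="kubernetes.io/cluster-service=true"   # assumed addon label selector
NAMESPACE="kube-system"

# Step 2: mark the old RCs so that `kubectl apply --prune` can recognize them.
# The annotation content does not matter; an empty value is enough.
kubectl annotate --overwrite rc -l "${ADDON_SELECTOR}" --namespace="${NAMESPACE}" \
  kubectl.kubernetes.io/last-applied-configuration=""

# Step 3: create the new addons (including the new Deployments) without pruning yet.
kubectl apply -f "${ADDON_PATH}" --recursive --prune=false

# Step 4: give the new addons time to spin up.
sleep 60

# Step 5: periodic reconcile loop; the old RCs are pruned on the first iteration.
while true; do
  kubectl apply -f "${ADDON_PATH}" --recursive --prune=true -l "${ADDON_SELECTOR}"
  sleep 60
done
```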
Directory contents:

- Directories: addons, aws, azure, azure-legacy, centos, gce, gke, images, juju, kubemark, lib, libvirt-coreos, local, mesos/docker, openstack-heat, ovirt, photon-controller, rackspace, saltbase, skeleton, ubuntu, vagrant, vsphere
- Files: common.sh, get-kube-binaries.sh, get-kube-local.sh, get-kube.sh, kube-down.sh, kube-push.sh, kube-up.sh, kube-util.sh, kubectl.sh, log-dump.sh, options.md, OWNERS, README.md, test-e2e.sh, test-network.sh, test-smoke.sh, update-storage-objects.sh, validate-cluster.sh
Cluster Configuration
Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to kube-deploy. Deployments in this directory will continue to be maintained and supported at their current level of support.
The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.
See the getting-started guides for examples of how to use the scripts.
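As a brief illustration, the typical entry point is `kube-up.sh`, with the target cloud selected via the `KUBERNETES_PROVIDER` environment variable; the provider value below is just an example.

```bash
# Illustrative only: bring up and tear down a cluster using the scripts
# in this directory. The provider shown (gce) is an example choice.
export KUBERNETES_PROVIDER=gce   # selects the cluster/gce/ configuration
./cluster/kube-up.sh             # creates master, nodes, networking, DNS

# When finished, tear the cluster down again.
./cluster/kube-down.sh
```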
`cloudprovider/config-default.sh` contains a set of tweakable definitions/parameters for the cluster.
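As a hedged example, parameters defined in `config-default.sh` can typically be overridden through environment variables exported before invoking `kube-up.sh`; the specific variable names below are illustrative and may differ between providers and releases, so check your provider's `config-default.sh` for the authoritative list.

```bash
# Illustrative override of config-default.sh parameters before cluster creation.
# Variable names are examples and vary by provider/release.
export KUBERNETES_PROVIDER=gce
export NUM_NODES=3                    # number of worker nodes (assumed variable name)
export KUBE_GCE_ZONE=us-central1-b    # GCE zone (assumed variable name)
./cluster/kube-up.sh
```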
The heavy lifting of configuring the VMs is done by SaltStack.