kubeadm's current implementation of component config support is "kind" centric.
This has its downsides. Namely:
- Kind names, and the number of kinds in a group, can change between config
versions, and newer kinds may end up ignored. This makes detecting a version
change considerably harder.
- A component config can have only one kind that is managed by kubeadm.
Thus a more appropriate way to identify component configs is required.
Probably the best solution identified so far is a config group.
A group name is unlikely to change between versions, while the kind names and
structure can.
Tracking component configs by group name allows us to:
- Spot config version changes more easily and manage alternate versions.
- Support more than one kind in a config group/version.
- Abstract component configs by hiding their exact structure.
Hence, this change rips out the old kind-based support for component configs
and replaces it with a group-name-based one. This also has the following
extra benefits:
- More tests were added.
- kubeadm now errors out if an unsupported version of a known component group
is used.
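For illustration only, here is a minimal sketch of what group-keyed tracking could look like. All names below (`componentConfigHandler`, `known`, the pinned versions) are hypothetical and only illustrate the idea; kubeadm's real componentconfigs code differs.

```go
// Package componentconfigs: a hypothetical sketch of tracking component
// configs by group name instead of by kind.
package componentconfigs

import "k8s.io/apimachinery/pkg/runtime/schema"

// componentConfigHandler describes how kubeadm manages every supported
// version (and every kind) of a single component config group.
type componentConfigHandler struct {
	// GroupVersion is the version kubeadm currently writes for this group.
	GroupVersion schema.GroupVersion
	// FromDocumentMap builds the config from user-supplied YAML documents,
	// erroring out on unsupported versions of the group.
	FromDocumentMap func(docs map[schema.GroupVersionKind][]byte) error
}

// known is keyed by group name. Group names stay stable across config
// versions, so a version bump, a renamed kind, or an extra kind in the group
// does not change the key used to track the config.
var known = map[string]*componentConfigHandler{
	"kubelet.config.k8s.io":   {GroupVersion: schema.GroupVersion{Group: "kubelet.config.k8s.io", Version: "v1beta1"}},
	"kubeproxy.config.k8s.io": {GroupVersion: schema.GroupVersion{Group: "kubeproxy.config.k8s.io", Version: "v1alpha1"}},
}
```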
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
RBAC construction helpers are part of the Kubernetes internal APIs. As such,
we cannot use them once we move to staging.
Hence, replace their use with manual RBAC rule construction.
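For illustration, a hedged sketch of what manual construction looks like with the staging `k8s.io/api/rbac/v1` types; the role name and rule shown here are examples, not necessarily the exact objects kubeadm creates.

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The rule is spelled out literally with the staging rbac/v1 types instead
	// of being built via the helpers that live inside k8s.io/kubernetes.
	role := rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "kubeadm:nodes-kubeadm-config", // illustrative name
			Namespace: metav1.NamespaceSystem,
		},
		Rules: []rbacv1.PolicyRule{
			{
				Verbs:         []string{"get"},
				APIGroups:     []string{""},
				Resources:     []string{"configmaps"},
				ResourceNames: []string{"kubeadm-config"},
			},
		},
	}
	fmt.Printf("%s grants %v on %v\n", role.Name, role.Rules[0].Verbs, role.Rules[0].ResourceNames)
}
```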
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
During the upgrade process, `kubeadm` will take the current
`ClusterConfiguration`, update the `KubernetesVersion` to the latest
version, and call `UploadConfiguration`.
This change makes sure that when this mutation happens, not only is the
`ClusterStatus` mutated, but also the `ClusterConfiguration` object
inside the `kubeadm-config` ConfigMap, so that it contains the
new `KubernetesVersion`.
Add the functionality to support `CreateOrMutateConfigMap` and `MutateConfigMap` (see the sketch after this list).
* `CreateOrMutateConfigMap` will try to create a given ConfigMap object; if this ConfigMap
already exists, a new version of the resource will be retrieved from the server and a
mutator callback will be called on it. Then, an `Update` of the mutated object will be
performed. If there's a conflict during this `Update` operation, retry until no conflict
happens. On every retry the object is refreshed from the server to the latest version.
* `MutateConfigMap` will try to get the latest version of the ConfigMap from the server,
call the mutator callback and then try to `Update` the mutated object. If there's a
conflict during this `Update` operation, retry until no conflict happens. On every retry
the object is refreshed from the server to the latest version.
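Below is a minimal sketch of how such helpers can be built on client-go's conflict-retry utility. It assumes an older, non-context-aware client-go and a hypothetical `mutationBackoff`; kubeadm's actual implementation may differ in these details.

```go
package apiclient

import (
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// ConfigMapMutator is called with the latest server-side version of a ConfigMap.
type ConfigMapMutator func(*v1.ConfigMap) error

// mutationBackoff is a hypothetical retry policy; kubeadm's real backoff differs.
var mutationBackoff = wait.Backoff{Steps: 10, Duration: 10 * time.Millisecond, Factor: 1.0, Jitter: 0.1}

// MutateConfigMap gets the latest version of the ConfigMap, applies the
// mutator callback and updates it. On an Update conflict it retries, fetching
// a fresh copy of the object before every new attempt.
func MutateConfigMap(client clientset.Interface, meta metav1.ObjectMeta, mutator ConfigMapMutator) error {
	return retry.RetryOnConflict(mutationBackoff, func() error {
		cm, err := client.CoreV1().ConfigMaps(meta.Namespace).Get(meta.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if err := mutator(cm); err != nil {
			return err
		}
		_, err = client.CoreV1().ConfigMaps(meta.Namespace).Update(cm)
		return err
	})
}

// CreateOrMutateConfigMap tries to create the ConfigMap; if it already exists,
// it falls back to mutating the copy stored on the server.
func CreateOrMutateConfigMap(client clientset.Interface, cm *v1.ConfigMap, mutator ConfigMapMutator) error {
	if _, err := client.CoreV1().ConfigMaps(cm.ObjectMeta.Namespace).Create(cm); err != nil {
		if !apierrors.IsAlreadyExists(err) {
			return err
		}
		return MutateConfigMap(client, cm.ObjectMeta, mutator)
	}
	return nil
}
```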
Add unit tests for `MutateConfigMap` (see the test sketch after this list):
* One test checks that in case of no conflicts, the update of the
given ConfigMap happens without any issues.
* Another test mimics 5 consecutive CONFLICT responses when updating
the given ConfigMap; the sixth attempt then succeeds.
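A hedged sketch of how the conflict-mimicking test can be written against the `MutateConfigMap` sketch above, using client-go's fake clientset with a prepended reactor; the test name, data and backoff assumption are illustrative.

```go
package apiclient

import (
	"errors"
	"testing"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/fake"
	clienttesting "k8s.io/client-go/testing"
)

func TestMutateConfigMapRetriesOnConflict(t *testing.T) {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "kubeadm-config", Namespace: metav1.NamespaceSystem},
		Data:       map[string]string{"key": "old"},
	}
	client := fake.NewSimpleClientset(cm)

	conflicts := 0
	client.PrependReactor("update", "configmaps", func(action clienttesting.Action) (bool, runtime.Object, error) {
		if conflicts < 5 {
			conflicts++
			return true, nil, apierrors.NewConflict(
				schema.GroupResource{Resource: "configmaps"}, cm.Name, errors.New("fake conflict"))
		}
		// Handled=false lets the default object-tracker reactor perform the update.
		return false, nil, nil
	})

	err := MutateConfigMap(client, cm.ObjectMeta, func(cm *v1.ConfigMap) error {
		cm.Data["key"] = "new"
		return nil
	})
	if err != nil {
		t.Fatalf("MutateConfigMap returned an error: %v", err)
	}

	updated, err := client.CoreV1().ConfigMaps(cm.Namespace).Get(cm.Name, metav1.GetOptions{})
	if err != nil {
		t.Fatalf("could not fetch the ConfigMap: %v", err)
	}
	if updated.Data["key"] != "new" {
		t.Errorf("expected mutated value %q, got %q", "new", updated.Data["key"])
	}
}
```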
Add ResetClusterStatusForNode(), which clears a given
control-plane node's APIEndpoint from the ClusterStatus
key in the kubeadm ConfigMap on "kubeadm reset".
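For illustration, a simplified sketch of such a mutation built on the `MutateConfigMap` helper sketched earlier; the `clusterStatus` type and the plain YAML round-trip are stand-ins for kubeadm's real API types and codecs.

```go
package apiclient

import (
	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// clusterStatus is a simplified stand-in for kubeadm's ClusterStatus type:
// apiEndpoints maps a control-plane node name to its advertised API endpoint.
type clusterStatus struct {
	APIVersion   string                 `json:"apiVersion,omitempty"`
	Kind         string                 `json:"kind,omitempty"`
	APIEndpoints map[string]interface{} `json:"apiEndpoints"`
}

// resetClusterStatusForNode drops nodeName's APIEndpoint from the
// "ClusterStatus" key of the kubeadm-config ConfigMap. It is written as a
// mutator callback for the MutateConfigMap helper sketched earlier.
func resetClusterStatusForNode(nodeName string) func(*v1.ConfigMap) error {
	return func(cm *v1.ConfigMap) error {
		status := &clusterStatus{}
		if err := yaml.Unmarshal([]byte(cm.Data["ClusterStatus"]), status); err != nil {
			return err
		}
		delete(status.APIEndpoints, nodeName)
		updated, err := yaml.Marshal(status)
		if err != nil {
			return err
		}
		cm.Data["ClusterStatus"] = string(updated)
		return nil
	}
}
```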
- move most output unrelated to phases to klog.V(1) (see the snippet after this list)
- rename some prefixes for consistency - e.g.
[kubelet] -> [kubelet-start]
- control-plane-prepare: print details for each generated CP
component manifest.
- uppercase the info text for all "[reset].." lines
- modify the text for one line in reset
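A tiny illustrative snippet (not actual kubeadm code) of the verbosity split: detail goes to `klog.V(1)`, while user-facing phase output keeps the bracketed, phase-named prefix.

```go
package main

import (
	"flag"
	"fmt"

	"k8s.io/klog"
)

func main() {
	// Register klog's flags so the -v level can be raised (e.g. -v=1).
	klog.InitFlags(nil)
	flag.Parse()

	// Detail that is not part of a phase's user-facing output moves to V(1)...
	klog.V(1).Infof("[kubelet-start] writing kubelet configuration to disk")
	// ...while phase output keeps a bracketed prefix named after the phase
	// (e.g. [kubelet] was renamed to [kubelet-start]).
	fmt.Println("[kubelet-start] Starting the kubelet")
	klog.Flush()
}
```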
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Kubeadm HA upgrade
**What this PR does / why we need it**:
This PR implements one of the actions defined by https://github.com/kubernetes/kubeadm/issues/751 (the checklist for implementing HA in kubeadm). See [KEP 0015](https://github.com/kubernetes/community/blob/master/keps/sig-cluster-lifecycle/0015-kubeadm-join-master.md) for more context.
With this PR, kubeadm implements a new command, `kubeadm upgrade node experimental-control-plane`, that manages the upgrade of control plane components on a secondary control plane instance.
The entire workflow in case of HA clusters will be:
- Upgrade the control plane:
  - run `kubeadm upgrade apply` on the first control plane instance
  - run `kubeadm upgrade node experimental-control-plane` on secondary control plane instances
- Upgrade nodes
**Special notes for your reviewer**:
/CC @timothysc @luxas @chuckha @kubernetes/sig-cluster-lifecycle-pr-reviews
**Release note**:
```
kubeadm now has the `kubeadm upgrade node experimental-control-plane` command for upgrading secondary control plane instances created with `kubeadm join --experimental-control-plane`.
```
This follows the pattern `kubectl` uses for logging.
There are two remaining glog.Infof calls that cannot be removed easily.
One comes from kubelet validation, which calls features.SetFromMap.
The other comes from test/e2e during kernel validation.
Mostly fixes kubernetes/kubeadm#852
Signed-off-by: Chuck Ha <ha.chuck@gmail.com>