- Mark the "node-role.kubernetes.io/master" key for labels
and taints as deprecated.
- During "kubeadm init/join" apply the label
"node-role.kubernetes.io/control-plane" to new control-plane nodes,
next to the existing "node-role.kubernetes.io/master" label.
- During "kubeadm upgrade apply", find all Nodes with the "master"
label and also apply the "control-plane" label to them
(if they don't have it).
- During upgrade health checks, collect Nodes labeled both "master"
and "control-plane".
- Rename the constants.ControlPlane{Taint|Toleration} to
constants.OldControlPlane{Taint|Toleration} to manage the transition.
- Mark constants.OldControlPlane{Taint|Toleration} as deprecated.
- Use constants.OldControlPlane{Taint|Toleration} instead of
constants.ControlPlane{Taint|Toleration} everywhere.
- Introduce constants.ControlPlane{Taint|Toleration}.
- Add constants.ControlPlaneToleration to the kube-dns / CoreDNS
Deployments to make them anticipate the introduction
of the "node-role.kubernetes.io/control-plane:NoSchedule"
taint (constants.ControlPlaneTaint) on kubeadm control-plane Nodes
(a sketch of this taint/toleration pair follows).
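For illustration, here is a minimal Go sketch of what this taint/toleration
pair looks like in terms of the core v1 types (the exact definitions in
kubeadm's constants package may differ):
```
package constants

import v1 "k8s.io/api/core/v1"

// ControlPlaneTaint is a sketch of the new taint placed on kubeadm
// control-plane Nodes.
var ControlPlaneTaint = v1.Taint{
	Key:    "node-role.kubernetes.io/control-plane",
	Effect: v1.TaintEffectNoSchedule,
}

// ControlPlaneToleration lets workloads such as CoreDNS tolerate that
// taint. With no Operator set, it defaults to "Equal" with an empty value,
// which matches the value-less taint above.
var ControlPlaneToleration = v1.Toleration{
	Key:    "node-role.kubernetes.io/control-plane",
	Effect: v1.TaintEffectNoSchedule,
}
```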
Back in the v1alpha2 days, the fuzzer test needed to be disabled. To ensure
that there were no config breaks and that everything worked correctly,
extensive replacement tests were put in place that also functioned as unit
tests for the kubeadm config utils.
The fuzzer test has been re-enabled for a long time now and there's no need
for these replacements. Hence, over time most of them were disabled, deleted,
or refactored. The last remnants are part of the LoadJoinConfigurationFromFile test.
The test data for those old tests remains largely unused today, but it still
receives updates, as it contains kubelet's and kube-proxy's component configs.
Updates to these configs are usually done because their maintainers need to
add a new field.
Hence, to clean up old code and reduce the maintenance burden, the last test
that depends on this test data is finally refactored into a simple unit test
of `LoadJoinConfigurationFromFile`.
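After this cleanup, the test boils down to something like the following
sketch (the config document and assertions here are illustrative, not the
actual test data):
```
package config

import (
	"os"
	"path/filepath"
	"testing"
)

func TestLoadJoinConfigurationFromFile(t *testing.T) {
	// A hypothetical minimal v1beta2 JoinConfiguration document.
	cfgData := []byte(`apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "abcdef.0123456789abcdef"
    apiServerEndpoint: "1.2.3.4:6443"
    unsafeSkipCAVerification: true
`)
	cfgPath := filepath.Join(t.TempDir(), "join.yaml")
	if err := os.WriteFile(cfgPath, cfgData, 0644); err != nil {
		t.Fatalf("failed to write config file: %v", err)
	}

	cfg, err := LoadJoinConfigurationFromFile(cfgPath)
	if err != nil {
		t.Fatalf("LoadJoinConfigurationFromFile returned an error: %v", err)
	}
	if cfg.Discovery.BootstrapToken.APIServerEndpoint != "1.2.3.4:6443" {
		t.Errorf("unexpected API server endpoint: %q",
			cfg.Discovery.BootstrapToken.APIServerEndpoint)
	}
}
```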
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Component configs are used by kubeadm upgrade plan at the moment. However,
they can prevent kubeadm upgrade plan from functioning if an attempt is made
to load an unsupported version of a component config. For that reason, it's
best to simply stop loading component configs as part of the kubeadm config
load process.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
When doing the very first upgrade from a cluster that contains the
source of truth in the ClusterStatus struct, the new kubeadm logic
will try to retrieve this information from annotations.
This changeset adds a special case to both the etcd and API server endpoint
retrieval, so that they won't retry in this situation. The logic will still
retry on any unknown error, but will not retry in the following cases:
- The etcd annotations do not contain etcd endpoints, but the list of etcd
pods is non-empty. This means that we listed at least one etcd pod, but
the pods are missing the annotation.
- The API server annotation is not found on the API server pod for a given
node name, but no errors aside from that one were found. This means that
the API server pod is present, but is missing the annotation.
In both cases there is no point in retrying, so skipping the retries speeds
up the upgrade path when coming from a pre-existing cluster. A sketch of
this short-circuit follows.
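The short-circuit can be sketched on top of wait.PollImmediate; the label
selector and annotation key mirror what kubeadm uses for etcd static pods,
but the function itself is illustrative:
```
package etcd

import (
	"context"
	"errors"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// getEtcdEndpoints retries on unknown errors, but treats "pods exist yet
// lack the annotation" as terminal: waiting longer cannot make the
// annotation appear.
func getEtcdEndpoints(client clientset.Interface) ([]string, error) {
	var endpoints []string
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(metav1.NamespaceSystem).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "component=etcd,tier=control-plane"})
		if err != nil {
			return false, nil // unknown error: keep retrying
		}
		endpoints = endpoints[:0]
		for _, pod := range pods.Items {
			if url, ok := pod.Annotations["kubeadm.kubernetes.io/etcd.advertise-client-urls"]; ok {
				endpoints = append(endpoints, url)
			}
		}
		if len(endpoints) == 0 && len(pods.Items) > 0 {
			// We listed at least one etcd pod, but none carries the
			// annotation: returning an error stops the polling.
			return false, errors.New("etcd pods found, but the endpoint annotation is missing")
		}
		return len(endpoints) > 0, nil
	})
	return endpoints, err
}
```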
While `ClusterStatus` will still be maintained and uploaded, it will no
longer be used by the internal `kubeadm` logic to determine the etcd
endpoints.
The only exception is during the first upgrade cycle (`kubeadm upgrade
apply`, `kubeadm upgrade node`), in which we will fall back to the
ClusterStatus to let the upgrade path add the required annotations to
the newly created static pods.
The warning message
```
[config] WARNING: Ignored YAML document with GroupVersionKind ...
```
is printed for all GVKs that are not part of the kubeadm core types.
This is wrong, as the component config types are supported: they are
successfully parsed and used despite the warning being printed for them too.
Hence, this simple fix first checks whether the group of the GVK is a
supported component config group, and prints the warning only if it is not.
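A minimal sketch of the fix, with isComponentConfigGroup standing in for
kubeadm's actual lookup of supported component config groups:
```
package config

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// isComponentConfigGroup is a hypothetical stand-in for kubeadm's lookup
// of supported component config groups.
func isComponentConfigGroup(group string) bool {
	supported := map[string]bool{
		"kubelet.config.k8s.io":   true,
		"kubeproxy.config.k8s.io": true,
	}
	return supported[group]
}

// warnIfIgnored prints the warning only for GVKs that are neither kubeadm
// core types nor supported component configs.
func warnIfIgnored(gvk schema.GroupVersionKind) {
	if gvk.Group == "kubeadm.k8s.io" || isComponentConfigGroup(gvk.Group) {
		return // supported: parsed and used elsewhere, no warning
	}
	fmt.Printf("[config] WARNING: Ignored YAML document with GroupVersionKind %v\n", gvk)
}
```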
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
If the user has modified the kubelet.conf after TLS bootstrap in a way that
renders it invalid, the function getNodeNameFromKubeletConfig() can
panic. This was observed to trigger in "kubeadm reset" use cases.
Add basic validation and unit tests around parsing the kubelet.conf
with the aforementioned function.
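The hardened parsing could look roughly like this sketch (the function in
kubeadm takes a file path and differs in details):
```
package kubeconfig

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"strings"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// nodeNameFromKubeletConfig validates every lookup in the kubeconfig so a
// hand-edited, invalid kubelet.conf yields an error instead of a panic.
func nodeNameFromKubeletConfig(config *clientcmdapi.Config) (string, error) {
	ctx, ok := config.Contexts[config.CurrentContext]
	if !ok || ctx == nil {
		return "", errors.New("kubelet.conf: current context not found")
	}
	authInfo, ok := config.AuthInfos[ctx.AuthInfo]
	if !ok || authInfo == nil {
		return "", errors.New("kubelet.conf: user entry for the current context not found")
	}
	if len(authInfo.ClientCertificateData) == 0 {
		return "", errors.New("kubelet.conf: no embedded client certificate data")
	}
	block, _ := pem.Decode(authInfo.ClientCertificateData)
	if block == nil {
		return "", errors.New("kubelet.conf: unable to PEM-decode the client certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// kubelet client certs carry the node name as "system:node:<name>".
	const prefix = "system:node:"
	cn := cert.Subject.CommonName
	if !strings.HasPrefix(cn, prefix) || cn == prefix {
		return "", errors.New("kubelet.conf: client certificate CN is not of the form system:node:<name>")
	}
	return strings.TrimPrefix(cn, prefix), nil
}
```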
kubeadm's current implementation of component config support is "kind" centric.
This has its downsides. Namely:
- Kind names and numbers can change between config versions.
Newer kinds can be ignored. Therefore, detection of a version change is
considerably harder.
- A component config can have only one kind that is managed by kubeadm.
Thus a more appropriate way to identify component configs is required.
Probably the best solution identified so far is a config group.
A group name is unlikely to change between versions, while the kind names and
structure can.
Tracking component configs by group name allows us to:
- Spot more easily config version changes and manage alternate versions.
- Support more than one kind in a config group/version.
- Abstract component configs by hiding their exact structure.
Hence, this change rips out the old kind-based support for component configs
and replaces it with a group-name-based one. This also has the following
extra benefits:
- More tests were added.
- kubeadm now errors out if an unsupported version of a known component group
is used.
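An illustrative sketch of what group-based tracking looks like; the handler
type and the known map are stand-ins for the actual kubeadm structures:
```
package componentconfigs

import "k8s.io/apimachinery/pkg/runtime/schema"

// handler bundles everything kubeadm needs to manage one component config
// group; in the real code this includes defaulting, loading from the
// cluster, and marshalling callbacks alongside the GroupVersion.
type handler struct {
	// GroupVersion is the group (and currently supported version)
	// managed by this handler.
	GroupVersion schema.GroupVersion
}

// known maps a config group name to its handler. Keying by group survives
// kind renames between versions and permits several kinds per group.
var known = map[string]*handler{
	"kubelet.config.k8s.io": {
		GroupVersion: schema.GroupVersion{Group: "kubelet.config.k8s.io", Version: "v1beta1"},
	},
	"kubeproxy.config.k8s.io": {
		GroupVersion: schema.GroupVersion{Group: "kubeproxy.config.k8s.io", Version: "v1alpha1"},
	},
}
```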
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Whenever kubeadm needs to fetch its configuration from the cluster, it gets
the component configuration of all supported components (currently only kubelet
and kube-proxy). However, kube-proxy is deemed an optional component and its
installation may be skipped (by skipping the addon/kube-proxy phase on init).
When kube-proxy's installation is skipped, its config map is not created, and
all kubeadm operations that fetch the config from the cluster are bound to
fail with "not found" or "forbidden" (because of missing RBAC rules) errors.
To fix this issue, we have to ignore the 403 and 404 errors returned on an
attempt to fetch kube-proxy's component config from the cluster.
The `GetFromKubeProxyConfigMap` function now supports returning nil for both
the error and the object to indicate just such a case.
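A simplified sketch of the tolerant fetch, using the real apierrors helpers
from k8s.io/apimachinery (the actual function's signature differs):
```
package componentconfigs

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// getKubeProxyConfigMap treats a missing ConfigMap (404) or missing RBAC
// rules (403) as "kube-proxy was skipped": both the object and the error
// are nil in that case.
func getKubeProxyConfigMap(client clientset.Interface) (*v1.ConfigMap, error) {
	cm, err := client.CoreV1().ConfigMaps(metav1.NamespaceSystem).
		Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	if apierrors.IsNotFound(err) || apierrors.IsForbidden(err) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return cm, nil
}
```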
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
MarshalClusterConfigurationToBytes has the capability to output the component
configs, as separate YAML documents, besides the kubeadm ClusterConfiguration
kind. This is no longer necessary for the following reasons:
- All current use cases of this function require only the ClusterConfiguration.
- It will output component configs only if they are not the default ones. This
can produce non-deterministic output and, thus, cause potential problems.
- There are only hacky ways to dump just the ClusterConfiguration (without the
component configs).
Hence, we simplify things by replacing the function with direct calls to the
underlying MarshalToYamlForCodecs, thus marshalling only the
ClusterConfiguration when needed.
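A sketch of such a direct call (import paths follow the kubeadm tree, but
treat this as illustrative rather than the exact change):
```
package config

import (
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	kubeadmscheme "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme"
	kubeadmapiv1beta2 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2"
	kubeadmutil "k8s.io/kubernetes/cmd/kubeadm/app/util"
)

// marshalClusterConfiguration marshals only the ClusterConfiguration
// document, with no component configs appended.
func marshalClusterConfiguration(cfg *kubeadmapi.ClusterConfiguration) ([]byte, error) {
	return kubeadmutil.MarshalToYamlForCodecs(cfg,
		kubeadmapiv1beta2.SchemeGroupVersion, kubeadmscheme.Codecs)
}
```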
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
During control plane joins, sometimes the control plane returns an
unexpected error when trying to download the `kubeadm-config` ConfigMap.
This is a workaround for this issue until the root cause is completely
identified and fixed.
Ideally, this commit should be reverted in the near future.
Ever since v1alpha3, InitConfiguration has contained ClusterConfiguration
embedded in it. This was done to mimic the internal InitConfiguration, which
in turn is used throughout the kubeadm code base as if it were the old
MasterConfiguration of v1alpha2.
This, however, is confusing to users who vendor in kubeadm, as the embedded
ClusterConfiguration inside InitConfiguration is not marshalled to YAML.
For that to happen, special care must be taken for the ClusterConfiguration
field to be marshalled separately.
Thus, to make things smooth for users and to reduce third party exposure to
technical debt, this change removes the ClusterConfiguration embedding from
InitConfiguration.
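The versioned type then ends up shaped roughly like the sketch below (fields
trimmed; the referenced types come from the same API package):
```
// InitConfiguration no longer embeds ClusterConfiguration; users provide
// the two kinds as separate YAML documents instead.
type InitConfiguration struct {
	metav1.TypeMeta `json:",inline"`

	BootstrapTokens  []BootstrapToken        `json:"bootstrapTokens,omitempty"`
	NodeRegistration NodeRegistrationOptions `json:"nodeRegistration,omitempty"`
	// ... further node-local, non-cluster-wide fields ...
}
```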
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
There are a couple of problems with regard to `omitempty` in v1beta1:
- It is not applied to certain fields. This makes emitting YAML configuration
files in the v1beta1 config format verbose for both kubeadm and third party
Go tools. Certain fields that were never given an explicit value would
show up in the marshalled YAML document. This can cause confusion and even
misconfiguration.
- It can be used in inappropriate places. In this case it's used on fields
that always need to be serialized. The only such field at the moment is
`NodeRegistrationOptions.Taints`. If the `Taints` field is nil, then it's
defaulted to a slice containing a single control plane node taint. If it's
an empty slice, no taints are applied and the cluster thus behaves differently.
With that in mind, a Go program that uses v1beta1 with `omitempty` on the
`Taints` field has no way to specify an explicit empty slice of taints, as
this would get lost after marshalling to YAML.
To fix these issues, the following is done in this change:
- A whole bunch of additional `omitempty` markers are placed on many fields
in v1beta2.
- `omitempty` is removed from `NodeRegistrationOptions.Taints`.
- A test that verifies the ability to specify an empty slice value for
`Taints` is included (see the sketch after this list for the nil vs. empty
distinction).
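The nil vs. empty slice distinction can be demonstrated with a small
standalone program (using sigs.k8s.io/yaml, which marshals via the JSON tags):
```
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

type withOmitempty struct {
	Taints []v1.Taint `json:"taints,omitempty"`
}

type withoutOmitempty struct {
	Taints []v1.Taint `json:"taints"`
}

func main() {
	// nil means "apply the default control plane taint";
	// an explicit empty slice means "apply no taints at all".
	a, _ := yaml.Marshal(withOmitempty{Taints: []v1.Taint{}})
	b, _ := yaml.Marshal(withoutOmitempty{Taints: []v1.Taint{}})
	fmt.Printf("%s", a) // {}         -- the explicit empty slice is lost
	fmt.Printf("%s", b) // taints: [] -- the intent survives marshalling
}
```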
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
This change introduces config fields to the v1beta2 format that allow a
certificate key to be specified in the config file. This certificate key is a
hex encoded AES key that is used to encrypt the certificates and keys needed
for secondary control plane nodes to join. The same key is used for
decryption during control plane join.
It is important to note that this key is never uploaded to the cluster. It
can only be specified on either the command line or in the config file.
The new fields can be used like so:
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
certificateKey: "yourSecretHere"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
controlPlane:
  certificateKey: "yourSecretHere"
---
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
These are based on recommendations from
[staticcheck](http://staticcheck.io/).
- Remove unused struct fields
- Remove unused functions
- Remove unused variables
- Remove unused constants
- Miscellaneous cleanups
In the case where newControlPlane is true, we don't go through
getNodeRegistration() and initcfg.NodeRegistration.CRISocket is empty.
This forces DetectCRISocket() to be called later on and, if there is more
than one CRI installed on the system, it will error out, asking the user
to provide an override for the CRI socket. Even if the user provides an
override, the call to DetectCRISocket() can happen too early and thus ignore
it (while still erroring out).
However, if newControlPlane == true, initcfg.NodeRegistration is not used at
all and is overwritten later on.
Thus it's necessary to supply some default value that will avoid the call to
DetectCRISocket(); and since initcfg.NodeRegistration is discarded, setting
whatever value here is harmless.
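A sketch of the workaround (the constant name matches the kubeadm tree of
that era, but treat the snippet as illustrative):
```
package config

import (
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	kubeadmconstants "k8s.io/kubernetes/cmd/kubeadm/app/constants"
)

// setDefaultCRISocket covers the newControlPlane case: NodeRegistration is
// discarded and overwritten later, so any placeholder that avoids a
// premature DetectCRISocket() call is fine here.
func setDefaultCRISocket(initcfg *kubeadmapi.InitConfiguration, newControlPlane bool) {
	if newControlPlane && initcfg.NodeRegistration.CRISocket == "" {
		initcfg.NodeRegistration.CRISocket = kubeadmconstants.DefaultDockerCRISocket
	}
}
```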
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>