If the user has modified the kubelet.conf after the TLS bootstrap in a way
that makes it invalid, the function getNodeNameFromKubeletConfig() can
panic. This was observed to trigger in "kubeadm reset" use cases.
Add basic validation and unit tests around parsing the kubelet.conf
with the aforementioned function.
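For illustration, a rough sketch of the kind of defensive parsing meant here.
The clientcmd and x509 APIs are real, but the body is only a simplified
approximation of getNodeNameFromKubeletConfig(), not the actual kubeadm code:

```go
package kubeconfig

import (
	"crypto/x509"
	"encoding/pem"
	"strings"

	"github.com/pkg/errors"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// getNodeNameFromKubeletConfig derives the node name from the client
// certificate in kubelet.conf. Every map lookup and decode step is checked,
// so a hand-edited, invalid file returns an error instead of panicking.
func getNodeNameFromKubeletConfig(config *clientcmdapi.Config) (string, error) {
	ctx, ok := config.Contexts[config.CurrentContext]
	if !ok || ctx == nil {
		return "", errors.Errorf("invalid kubeconfig: context %q not found", config.CurrentContext)
	}
	authInfo, ok := config.AuthInfos[ctx.AuthInfo]
	if !ok || authInfo == nil {
		return "", errors.Errorf("invalid kubeconfig: user %q not found", ctx.AuthInfo)
	}
	if len(authInfo.ClientCertificateData) == 0 {
		return "", errors.New("invalid kubeconfig: no client certificate data")
	}
	block, _ := pem.Decode(authInfo.ClientCertificateData)
	if block == nil {
		return "", errors.New("invalid kubeconfig: client certificate is not PEM encoded")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", errors.Wrap(err, "invalid kubeconfig: cannot parse client certificate")
	}
	// Node client certificates have a CN of the form "system:node:<name>".
	return strings.TrimPrefix(cert.Subject.CommonName, "system:node:"), nil
}
```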
kubeadm's current implementation of component config support is "kind" centric.
This has its downsides. Namely:
- Kind names and their number can change between config versions, and newer
kinds can end up ignored. This makes detecting a config version change
considerably harder.
- A component config can have only one kind that is managed by kubeadm.
Thus a more appropriate way to identify component configs is required.
Probably the best solution identified so far is a config group.
A group name is unlikely to change between versions, while the kind names and
structure can.
Tracking component configs by group name allows us to:
- More easily spot config version changes and manage alternate versions.
- Support more than one kind in a config group/version.
- Abstract component configs by hiding their exact structure.
Hence, this change rips out the old kind-based support for component configs
and replaces it with a group-name-based one. This also has the following
extra benefits:
- More tests were added.
- kubeadm now errors out if an unsupported version of a known component group
is used.
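A minimal sketch of the group-name-based tracking idea. The handler type and
the known map are illustrative names rather than kubeadm's actual internals;
the group names themselves are the real component config groups:

```go
package componentconfigs

import "k8s.io/apimachinery/pkg/runtime/schema"

// handler describes how kubeadm manages a single component config group.
// It can cover any number of kinds within that group.
type handler struct {
	// GroupVersion is the version of the group that kubeadm currently targets.
	GroupVersion schema.GroupVersion
	// ... defaulting, loading and validation callbacks would live here.
}

// known maps a stable group name to its handler. Group names are unlikely
// to change between config versions, so an unsupported version of a known
// group can be detected and rejected with a clear error.
var known = map[string]*handler{
	"kubelet.config.k8s.io":   {GroupVersion: schema.GroupVersion{Group: "kubelet.config.k8s.io", Version: "v1beta1"}},
	"kubeproxy.config.k8s.io": {GroupVersion: schema.GroupVersion{Group: "kubeproxy.config.k8s.io", Version: "v1alpha1"}},
}
```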
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Whenever kubeadm needs to fetch its configuration from the cluster, it gets
the component configuration of all supported components (currently only kubelet
and kube-proxy). However, kube-proxy is deemed an optional component and its
installation may be skipped (by skipping the addon/kube-proxy phase on init).
When kube-proxy's installation is skipped, its config map is not created and
all kubeadm operations that fetch the config from the cluster are bound to
fail with "not found" or "forbidden" (because of missing RBAC rules) errors.
To fix this issue, we have to ignore the 403 and 404 errors returned when
attempting to fetch kube-proxy's component config from the cluster.
The `GetFromKubeProxyConfigMap` function now supports returning nil for both
error and object to indicate just such a case.
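A simplified sketch of the tolerant fetch. The parameter list and the decoding
step are approximated and elided; apierrors.IsNotFound/IsForbidden are the
standard apimachinery error helpers:

```go
package componentconfigs

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clientset "k8s.io/client-go/kubernetes"
)

// GetFromKubeProxyConfigMap fetches the kube-proxy component config, but
// tolerates a skipped addon: 403/404 become a nil config with no error.
func GetFromKubeProxyConfigMap(client clientset.Interface) (runtime.Object, error) {
	configMap, err := client.CoreV1().ConfigMaps(metav1.NamespaceSystem).Get("kube-proxy", metav1.GetOptions{})
	if apierrors.IsNotFound(err) || apierrors.IsForbidden(err) {
		// The addon/kube-proxy phase was skipped during init, hence the
		// ConfigMap (or the RBAC needed to read it) does not exist.
		// Returning nil, nil signals "no config, but not an error".
		return nil, nil
	}
	if err != nil {
		return nil, err
	}

	// ... decode the kube-proxy component config from configMap.Data ...
	_ = configMap
	return nil, nil
}
```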
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
During control-plane joins, sometimes the control plane returns an
expected error when trying to download the `kubeadm-config` ConfigMap.
This is a workaround for this issue until the root cause is completely
identified and fixed.
Ideally, this commit should be reverted in the near future.
In the case where newControlPlane is true, we don't go through
getNodeRegistration() and initcfg.NodeRegistration.CRISocket is empty.
This forces DetectCRISocket() to be called later on, and if there is more than
one CRI installed on the system, it will error out, asking the user to
provide an override for the CRI socket. Even if the user provides an
override, the call to DetectCRISocket() can happen too early and thus ignore it
(while still erroring out).
However, if newControlPlane == true, initcfg.NodeRegistration is not used at
all and it's overwritten later on.
Thus it's necessary to supply some default value that avoids the call to
DetectCRISocket(), and since initcfg.NodeRegistration is discarded anyway,
whatever value is set here is harmless.
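A minimal sketch of the idea, with a hypothetical helper name and a
placeholder socket value (the exact value used does not matter here):

```go
package config

import (
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
)

// setDefaultCRISocket is a hypothetical helper illustrating the fix: when
// newControlPlane is true, NodeRegistration is discarded and overwritten
// later, so any non-empty placeholder prevents a premature DetectCRISocket().
func setDefaultCRISocket(initcfg *kubeadmapi.InitConfiguration, newControlPlane bool) {
	if !newControlPlane {
		// For existing nodes the socket is read from the node's
		// annotations via getNodeRegistration(), so nothing to do here.
		return
	}
	// The concrete value is irrelevant; it only needs to be non-empty.
	initcfg.NodeRegistration.CRISocket = "/var/run/dockershim.sock"
}
```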
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Add ResetClusterStatusForNode() that clears a given
control-plane node's APIEndpoint from the ClusterStatus
key in the kubeadm ConfigMap on "kubeadm reset".
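A simplified sketch of the core of such a helper; loading the ClusterStatus
from the kubeadm ConfigMap and writing it back are intentionally left out:

```go
package cluster // hypothetical placement for this sketch

import (
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
)

// resetClusterStatusForNode removes the given control-plane node's
// APIEndpoint from ClusterStatus; the caller is expected to persist the
// updated status back into the kubeadm ConfigMap.
func resetClusterStatusForNode(nodeName string, status *kubeadmapi.ClusterStatus) {
	delete(status.APIEndpoints, nodeName)
}
```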
Currently ConfigFileAndDefaultsToInternalConfig and
FetchConfigFromFileOrCluster are used to default and load InitConfiguration
from a file or the cluster. These two APIs do a couple of completely separate
things, depending on how they are invoked. In the case of
ConfigFileAndDefaultsToInternalConfig, an InitConfiguration can be either
defaulted (with external override parameters) or loaded from a file.
With FetchConfigFromFileOrCluster, an InitConfiguration is loaded either from
a file or from the config map in the cluster.
The two share some functionality, but not much code. They are also quite
difficult to use and sometimes even error prone.
To solve the issues, the following steps were taken:
- Introduce DefaultedInitConfiguration, which returns a defaulted,
version-agnostic InitConfiguration. The function takes an InitConfiguration
for overriding the defaults.
- Introduce LoadInitConfigurationFromFile, which loads, converts, validates and
defaults an InitConfiguration from file.
- Introduce FetchInitConfigurationFromCluster that fetches InitConfiguration
from the config map.
- Reduce, when possible, the usage of ConfigFileAndDefaultsToInternalConfig by
replacing it with DefaultedInitConfiguration or LoadInitConfigurationFromFile
invocations.
- Replace all usages of FetchConfigFromFileOrCluster with calls to
LoadInitConfigurationFromFile or FetchInitConfigurationFromCluster.
- Delete FetchConfigFromFileOrCluster as it's no longer used.
- Rename ConfigFileAndDefaultsToInternalConfig to
LoadOrDefaultInitConfiguration in order to better describe what the function
is actually doing.
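Illustrative signatures of the resulting helpers. The parameter lists are
simplified, the versioned API group used for the overrides is only an
assumption, and the bodies are elided:

```go
package config

import (
	clientset "k8s.io/client-go/kubernetes"
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	kubeadmapiv1beta1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1"
)

// DefaultedInitConfiguration returns a defaulted, version-agnostic
// InitConfiguration; the passed-in versioned config overrides the defaults.
func DefaultedInitConfiguration(overrides *kubeadmapiv1beta1.InitConfiguration) (*kubeadmapi.InitConfiguration, error) {
	panic("elided in this sketch")
}

// LoadInitConfigurationFromFile loads, converts, validates and defaults an
// InitConfiguration from a config file on disk.
func LoadInitConfigurationFromFile(cfgPath string) (*kubeadmapi.InitConfiguration, error) {
	panic("elided in this sketch")
}

// FetchInitConfigurationFromCluster fetches the InitConfiguration from the
// kubeadm-config ConfigMap in the cluster.
func FetchInitConfigurationFromCluster(client clientset.Interface) (*kubeadmapi.InitConfiguration, error) {
	panic("elided in this sketch")
}

// LoadOrDefaultInitConfiguration (the renamed
// ConfigFileAndDefaultsToInternalConfig) loads from cfgPath when it is set
// and falls back to DefaultedInitConfiguration otherwise.
func LoadOrDefaultInitConfiguration(cfgPath string, overrides *kubeadmapiv1beta1.InitConfiguration) (*kubeadmapi.InitConfiguration, error) {
	panic("elided in this sketch")
}
```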
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The kubelet allows you to set `--pod-infra-container-image`
(also called `PodSandboxImage` in the kubelet config),
which can be a custom location for the "pause" image in the case
of Docker. Other CRIs are not supported.
Set the CLI flag for the Docker case in flags.go using
WriteKubeletDynamicEnvFile().
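A rough sketch of how the Docker-only flag could be assembled before being
written out with WriteKubeletDynamicEnvFile(); the function signature below
is a simplified assumption, not kubeadm's actual one:

```go
package kubelet

// buildKubeletArgMap assembles kubelet flags; the resulting map is later
// written to the dynamic env file via WriteKubeletDynamicEnvFile().
func buildKubeletArgMap(criSocket, pauseImage string) map[string]string {
	kubeletFlags := map[string]string{}

	// --pod-infra-container-image only applies to the Docker CRI;
	// other container runtimes manage their own sandbox image.
	if criSocket == "/var/run/dockershim.sock" && pauseImage != "" {
		kubeletFlags["pod-infra-container-image"] = pauseImage
	}

	return kubeletFlags
}
```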