Until now, users were always asked to manually convert a component config to a
version supported by kubeadm whenever kubeadm did not support its version.
This was true even for configs generated with older kubeadm versions, thus
forcing users to manually convert configs that kubeadm itself had generated.
That is neither appropriate nor user friendly, even though it tends to be the
most common case. Hence, kubeadm generated component configs stored in config
maps are now signed with a SHA256 checksum. If a config loaded by kubeadm from
a config map has a valid signature, it is considered "kubeadm generated" and,
if a version migration is required, the config is automatically discarded and
a new one is generated.
If there is no checksum or the checksum does not match, the config is
considered "user supplied" and, if a version migration is required, kubeadm
bails out with an error, requiring manual config migration (as it does today).
The behavior when supplying component configs on the kubeadm command line
does not change. kubeadm will still bail out with an error requiring migration
if it recognizes their groups but not their versions.
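A minimal sketch of the checksum scheme described above, assuming the config
payload lives under a "config" data key and the signature in an annotation
(both names are illustrative, not the definitive implementation):

```go
package componentconfigs

import (
	"crypto/sha256"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// ChecksumForConfigMap hashes the raw component config payload stored in the
// ConfigMap. The "config" data key is an assumption made for illustration.
func ChecksumForConfigMap(cm *v1.ConfigMap) string {
	sum := sha256.Sum256([]byte(cm.Data["config"]))
	return fmt.Sprintf("sha256:%x", sum)
}

// IsKubeadmGenerated returns true when the stored checksum matches the current
// contents, i.e. kubeadm may safely discard and regenerate the config.
// The annotation key is likewise an assumption.
func IsKubeadmGenerated(cm *v1.ConfigMap) bool {
	signed, ok := cm.Annotations["kubeadm.kubernetes.io/component-config-hash"]
	return ok && signed == ChecksumForConfigMap(cm)
}
```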
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
kubelet.DownloadConfig is an old utility function which takes a client set and
a kubelet version, uses them to fetch the kubelet component config from a
config map, and places it in a local file. This function is simple to use, but
it is dangerous and unnecessary. In practically all cases the kubelet
configuration is already present locally and does not need to be fetched from
a config map on the cluster (it just needs to be stored in a file).
Furthermore, kubelet.DownloadConfig does not use the kubeadm component configs
module in any way. Hence, a kubelet configuration fetched with it is not
patched, validated, or otherwise processed by kubeadm in any way other than
being piped to a file.
This patch replaces all but a single kubelet.DownloadConfig invocation with
equivalents that get the local copy of the kubelet component config and just
store it in a file. The sole remaining invocation covers the
`kubeadm upgrade node --kubelet-version` case.
In addition to that, a possible panic in kubelet.DownloadConfig is fixed, and
the function now takes the kubelet version parameter as a string.
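The replacement pattern, roughly (a hedged sketch; the helper name, signature
and paths are assumptions for illustration):

```go
package kubelet

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// WriteConfigToDisk stores the already-present, in-memory kubelet component
// config (serialized to cfgBytes by the caller) in the kubelet config file,
// instead of downloading it from a ConfigMap on the cluster.
func WriteConfigToDisk(cfgBytes []byte, kubeletDir string) error {
	if err := os.MkdirAll(kubeletDir, 0700); err != nil {
		return err
	}
	return ioutil.WriteFile(filepath.Join(kubeletDir, "config.yaml"), cfgBytes, 0644)
}
```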
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
kubeadm currently sets the IPv6DualStack feature gate on the kubelet command line.
However, the kubelet is gradually moving away from command line flags towards component config use.
Hence, we should set the IPv6DualStack feature gate in the component config instead.
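For illustration, against the public kubelet v1beta1 config API (kubeadm's
internal wiring may differ):

```go
package kubelet

import kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"

// enableIPv6DualStack flips the feature gate in the component config rather
// than appending --feature-gates=IPv6DualStack=true to the kubelet flags.
func enableIPv6DualStack(cfg *kubeletconfigv1beta1.KubeletConfiguration) {
	if cfg.FeatureGates == nil {
		cfg.FeatureGates = map[string]bool{}
	}
	cfg.FeatureGates["IPv6DualStack"] = true
}
```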
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
- Uses correct pause image for Windows
- Omits systemd-specific flags
- Adds a common build flags function to be used by Linux and Windows
- Uses user configured image repository for Windows pause image
If a Node name in the cluster is already taken and this Node is Ready,
prevent TLS bootstrap on "kubeadm join" and exit early.
This change requires that a new ClusterRole is granted to the
"system:bootstrappers:kubeadm:default-node-token" group to be
able get Nodes in the cluster. The same group already has access
to obtain objects such as the KubeletConfiguration and kubeadm's
ClusterConfiguration.
The motivation for this change is to prevent undefined behavior
and a potential control-plane breakdown if such a cluster
ends up racing to have two nodes with the same name for long
periods of time.
The following values are validated, in order of precedence
from lowest to highest:
- actual hostname
- NodeRegistration.Name (or "--node-name") from JoinConfiguration
- "--hostname-override" passed via kubeletExtraArgs
If the user decides to not let kubeadm know about a custom node name
and to instead override the hostname from a kubelet systemd unit file,
kubeadm will not be able to detect the problem.
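A rough sketch of the check itself (modern client-go signatures; the error
wording is illustrative):

```go
package join

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// checkNodeNameAvailable fails early if a Ready Node already owns the name
// this node is about to join with.
func checkNodeNameAvailable(client clientset.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil // the name is free, proceed with TLS bootstrap
	}
	if err != nil {
		return err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
			return fmt.Errorf("a Node named %q already exists and is Ready; refusing to bootstrap", nodeName)
		}
	}
	return nil
}
```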
kubeadm's current implementation of component config support is "kind" centric.
This has its downsides. Namely:
- Kind names and their count can change between config versions, and newer
kinds can go unnoticed. Therefore, detecting a version change is
considerably harder.
- A component config can have only one kind that is managed by kubeadm.
Thus a more appropriate way to identify component configs is required.
Probably the best solution identified so far is a config group.
A group name is unlikely to change between versions, while the kind names and
structure can.
Tracking component configs by group name allows us to:
- More easily spot config version changes and manage alternate versions.
- Support more than one kind in a config group/version.
- Abstract component configs by hiding their exact structure.
Hence, this change rips out the old kind-based support for component configs
and replaces it with a group-name-based one. This also has the following
extra benefits:
- More tests were added.
- kubeadm now errors out if an unsupported version of a known component group
is used.
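A rough illustration of the group-keyed shape this enables (type and field
names are assumptions, not the actual kubeadm internals):

```go
package componentconfigs

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientset "k8s.io/client-go/kubernetes"
)

// handler describes how kubeadm manages all kinds of one component config group.
type handler struct {
	// fromCluster fetches and decodes the group's config from the cluster,
	// whatever kinds and versions it currently contains.
	fromCluster func(client clientset.Interface) (runtime.Object, error)
}

// known is keyed by group name, which stays stable across config versions
// even when kind names and structures change.
var known = map[string]*handler{
	"kubelet.config.k8s.io":   {},
	"kubeproxy.config.k8s.io": {},
}
```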
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
- Don't always print to stdout that the kubelet is starting;
instead delegate this to the callers of TryStartKubelet.
- Add a new root kubeadm init phase called "kubelet-finalize"
- Add a sub-phase to "kubelet-finalize"
called "experimental-cert-rotation"
- "cert-rotation" performs the following actions:
- tries to guess if kubelet client cert rotation is enabled
- updates the kubelet.conf to use the rotatable cert/key
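Roughly what the kubelet.conf rewrite looks like (a sketch; the rotated
certificate path is the kubelet's usual rotation target and is assumed here):

```go
package kubeletfinalize

import "k8s.io/client-go/tools/clientcmd"

// useRotatableClientCert points kubelet.conf at the certificate/key bundle
// that the kubelet keeps rotated on disk, instead of embedded cert data.
func useRotatableClientCert(kubeconfigPath string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return err
	}
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem" // assumed default path
	for _, authInfo := range cfg.AuthInfos {
		authInfo.ClientCertificateData = nil
		authInfo.ClientKeyData = nil
		authInfo.ClientCertificate = pem
		authInfo.ClientKey = pem
	}
	return clientcmd.WriteToFile(*cfg, kubeconfigPath)
}
```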
This change removes dependencies on the internal types of the kubelet and
kube-proxy component configs. Along with that, defaulting and validation are
removed as well. kubeadm will display a warning that it did not verify the
component config upon load.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Propagates the kubeadm FG to the individual k8s components
on the control-plane node.
* Note: Users who want to join worker nodes to the cluster
will have to specify the dual-stack FG to kubelet using the
nodeRegistration.kubeletExtraArgs option as part of their
join config. Alternatively, they can use KUBELET_EXTRA_ARGS.
kubeadm FG: kubernetes/kubeadm#1612
RBAC construction helpers are part of the Kubernetes internal APIs. As such,
we cannot use them once we move to staging.
Hence, replace their use with manual RBAC rule construction.
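For example, a rule previously produced by an internal helper can be spelled
out directly with the staged rbac/v1 types (the specific verbs and resources
below are illustrative):

```go
package rbac

import rbacv1 "k8s.io/api/rbac/v1"

// nodesGetRule is the hand-written equivalent of what a helper such as
// rbacv1helpers.NewRule("get").Groups("").Resources("nodes").RuleOrDie()
// would have produced.
var nodesGetRule = rbacv1.PolicyRule{
	Verbs:     []string{"get"},
	APIGroups: []string{""},
	Resources: []string{"nodes"},
}
```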
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
During control plane joins, sometimes the control plane returns an
unexpected error when trying to download the `kubeadm-config` ConfigMap.
This is a workaround for this issue until the root cause is completely
identified and fixed.
Ideally, this commit should be reverted in the near future.
- move most output unrelated to phases to klog.V(1)
- rename some prefixes for consistency - e.g.
[kubelet] -> [kubelet-start]
- control-plane-prepare: print details for each generated CP
component manifest.
- uppercase the info text for all "[reset].." lines
- modify the text for one line in reset
For historical reasons InitConfiguration is used almost everywhere in kubeadm
as a carrier of various configuration components such as ClusterConfiguration,
local API server endpoint, node registration settings, etc.
Since v1alpha2, InitConfiguration is meant to be used solely as a way to supply
the kubeadm init configuration from a config file. Its usage outside of this
context is caused by technical debt; it's clunky and requires hacks to fetch a
working InitConfiguration from the cluster (as it's not stored in the config
map in its entirety).
This change is a small step towards removing all unnecessary usages of
InitConfiguration. It reduces its usage by replacing it in some places with
some of the following:
- ClusterConfiguration only.
- APIEndpoint (as local API server endpoint).
- NodeRegistrationOptions only.
- Some combinations of the above types or, if only single fields from them
are used, just those fields.
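A small hypothetical example of the direction of the change, using fields that
do exist on the split-out types:

```go
package util

import (
	"fmt"

	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
)

// localAPIEndpointURL needs only the local API server endpoint, so it takes
// APIEndpoint directly instead of the whole InitConfiguration carrier.
func localAPIEndpointURL(endpoint *kubeadmapi.APIEndpoint) string {
	return fmt.Sprintf("https://%s:%d", endpoint.AdvertiseAddress, endpoint.BindPort)
}
```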
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
In order to allow for a smoother UX with CRIs other than Docker, we have to
make the --cri-socket command line flag optional when just one CRI is
installed.
This change does that by doing the following:
- Introduce a new runtime function (DetectCRISocket) that attempts to
detect a CRI socket or returns an appropriate error (see the sketch below).
- Default to using the above function if --cri-socket is not specified and
CRISocket in NodeRegistrationOptions is empty.
- Stop statically defaulting to DefaultCRISocket and rename it to
DefaultDockerCRISocket. Its use is now narrowed to the "Docker or not"
distinction and to tests.
- Introduce an AddCRISocketFlag function that adds the --cri-socket flag to a
flagSet. Use it in all commands that support --cri-socket.
- Remove the deprecated --cri-socket-path flag from kubeadm config images pull
and deprecate --cri-socket in kubeadm upgrade apply.
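The detection idea, sketched (the socket paths and error text are assumptions,
not the exact implementation):

```go
package utilruntime

import (
	"errors"
	"fmt"
	"os"
)

// detectCRISocket succeeds only when exactly one well-known CRI socket exists.
func detectCRISocket() (string, error) {
	knownSockets := []string{
		"/var/run/dockershim.sock",
		"/run/containerd/containerd.sock",
		"/var/run/crio/crio.sock",
	}
	var found []string
	for _, socket := range knownSockets {
		if _, err := os.Stat(socket); err == nil {
			found = append(found, socket)
		}
	}
	switch len(found) {
	case 1:
		return found[0], nil
	case 0:
		return "", errors.New("no supported CRI socket was detected; please specify --cri-socket")
	default:
		return "", fmt.Errorf("multiple CRI sockets detected (%v); please specify --cri-socket", found)
	}
}
```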
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Bump MinimumControlPlaneVersion and MinimumKubeletVersion to v1.12 and update
any related tests.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The reason for the bump is the new functionality of the
k8s.io/utils/exec package which allows
- to get a hold of the process' std{out,err} as `io.Reader`s
- to `Start` a process and `Wait` for it
This should help address #70890 by allowing the process' std{out,err}
to be wrapped with an `io.LimitedReader`.
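A sketch of the pattern this enables (the exact method set of the updated
interface should be treated as an assumption here):

```go
package probe

import (
	"io"
	"io/ioutil"

	utilexec "k8s.io/utils/exec"
)

const maxReadLength = 10 * 1024 // cap on probe output; illustrative value

// runWithLimitedOutput starts the command, reads at most maxReadLength bytes
// of its stdout through an io.LimitedReader, and then waits for it to exit.
func runWithLimitedOutput(cmd utilexec.Cmd) ([]byte, error) {
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	data, err := ioutil.ReadAll(&io.LimitedReader{R: stdout, N: maxReadLength})
	if err != nil {
		return nil, err
	}
	return data, cmd.Wait()
}
```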
It also updates
- k8s.io/kubernetes/pkg/probe/exec.FakeCmd
- k8s.io/kubernetes/pkg/kubelet/prober.execInContainer
- k8s.io/kubernetes/cmd/kubeadm/app/phases/kubelet.fakeCmd
to implement the changed interface.
The dependency on 'k8s.io/utils/pointer' has also been bumped to the new
version in some staging repos:
- apiserver
- kube-controller-manager
- kube-scheduler
The kubelet allows you to set `--pod-infra-container-image`
(also called `PodSandboxImage` in the kubelet config),
which can be a custom location for the "pause" image in the case
of Docker. Other CRIs are not supported.
Set the CLI flag for the Docker case in flags.go using
WriteKubeletDynamicEnvFile().
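Roughly the logic involved (a hedged sketch; the Docker socket path and helper
name are assumptions made for illustration):

```go
package kubelet

// podInfraContainerImageFlag returns the extra kubelet flag for the Docker
// case only; other CRIs configure the sandbox image on the runtime side.
func podInfraContainerImageFlag(criSocket, pauseImage string) map[string]string {
	flags := map[string]string{}
	if criSocket == "/var/run/dockershim.sock" {
		flags["pod-infra-container-image"] = pauseImage
	}
	return flags
}
```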
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
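The typical pattern, since klog (unlike glog) does not register its flags
automatically:

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Explicitly register klog's flags (-v, -logtostderr, ...) before parsing.
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.V(1).Info("klog flags registered and parsed")
}
```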
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135