During Runner data initialization, if the value of the
"--skip-phases" flag was empty, set {init|join}Runner.Options.SkipPhases
to the {Init|Join}Configuration.SkipPhases value (a sketch follows
the list below).
- Add the SkipPhases field to the public v1beta3 API as a []string (omitempty)
- Add the field in the internal type
- Run generators
- Adapt v1beta2 converter for JoinConfiguration
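A minimal sketch of the fallback, using stand-in types rather than
the exact kubeadm ones:

    // Stand-ins for the real initOptions / InitConfiguration types.
    type options struct{ skipPhases []string }
    type configuration struct{ SkipPhases []string }

    // defaultSkipPhases inherits SkipPhases from the configuration
    // when "--skip-phases" was not set on the command line.
    func defaultSkipPhases(opts *options, cfg *configuration) {
        if len(opts.skipPhases) == 0 {
            opts.skipPhases = cfg.SkipPhases
        }
    }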
Ideally this should be part of dockershim/CRI and not handled on
the kubeadm side.
Remove the detection:
- during preflight
- during kubelet config defaulting
Update dependencies and the test images to use pause 3.5. We also
provide a changelog entry for the new container image version.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
- Remove the deprecated --csr* flags from "init phase certs"
- Deprecate the same flags for "certs renew".
For both cases users should be using "certs generate-csr".
The command "kubeadm config view" was deprecated in 1.19.
Remove it as scheduled in 1.22.
The replacement is to use kubectl:
kubectl get cm -n kube-system kubeadm-config -o=jsonpath="{.data.ClusterConfiguration}"
- Remove the ClusterStatus object from v1beta3 and the internal type
- Deprecate a couple of phases that were specifically designed / named to
modify the ClusterStatus object
- Adapt logic around annotation vs ClusterStatus retrieval
- Update unit tests
- Run generators
Running "go test ./cmd/kubeadm/app/..." results in these 3 files
being generated, since we have more callers to the functions
for generating unique private keys during pkiutil tests.
Add the files to ensure they are not generated locally all the time.
Kubeadm no longer supports kube-dns, and CoreDNS is the only
supported DNS server. Remove ClusterConfiguration.DNS.Type,
which was used to set the DNS server type, from v1beta3.
- Pin the ClusterConfiguration when fuzzing
the internal InitConfiguration that embeds it. Kubeadm uses
a separate construct for this embedding in the internal type,
and this round trip is not viable (see the fuzzer sketch after
this list).
- Remove the artificial calls to SetDefaults_ClusterConfiguration()
in v1beta{2|3}'s converters from public to internal InitConfiguration.
- Make sure the internal InitConfiguration.ClusterConfiguration is
defaulted in initconfiguration.go instead.
- scheme: switch to:
utilruntime.Must(scheme.SetVersionPriority(v1beta3.SchemeGroupVersion))
- change all imports in the code base from v1beta2 to v1beta3
- rename all import aliases for kubeadmapiv1beta2 to "kubeadmapiv".
This allows smaller diffs when changing the default public API.
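A hedged sketch of the pinning in a gofuzz custom function; the
function wiring is an assumption, with "kubeadm" denoting the
internal API package and "fuzz" github.com/google/gofuzz:

    // Fuzz everything except the embedded ClusterConfiguration, which
    // is pinned to its empty value because the round trip is known-lossy.
    func fuzzInitConfiguration(obj *kubeadm.InitConfiguration, c fuzz.Continue) {
        c.FuzzNoCustom(obj)
        obj.ClusterConfiguration = kubeadm.ClusterConfiguration{}
    }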
The v1beta1/2 API doc.go files include an example
flag for the kubelet binary "cgroup-driver" under
"kubeletExtraArgs".
This flag is deprecated and should not be in the examples.
Add "v" instead which is one of the flags we know will
not be deprecated soon.
This is part of the "master" -> "control-plane" rename
that we missed. It's not critical for 1.21 as the
"control-plane" taint is still not added to CP nodes,
but it would be best to add the toleration preemptively
like the KEP planned.
The kubeadm documentation instructs users to set the container
runtime cgroup driver to "systemd", since kubeadm manages the
kubelet via the systemd init system. The kubelet default, however,
is "cgroupfs". For new clusters, set the driver to "systemd" unless
the user is explicit about it. The same defaulting does not happen
during "upgrade".
Pass the flag --pod-infra-container-image to the kubelet not only
for Docker but for all container runtimes.
This flag tells the kubelet to special case the image and not garbage
collect it.
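Illustrative sketch of the change; the flag-map helper is
hypothetical, "kubeadmapi" denotes the internal API package, and
images.GetPauseImage is the existing kubeadm helper for resolving
the pause image:

    // addPauseImageFlag passes the flag unconditionally, instead of
    // only when the container runtime is Docker.
    func addPauseImageFlag(kubeletFlags map[string]string, cfg *kubeadmapi.ClusterConfiguration) {
        kubeletFlags["pod-infra-container-image"] = images.GetPauseImage(cfg)
    }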
Looks like there is a bit of an issue in Blunderbuss (the Prow
plugin) where it prefers to pick reviewers from a parent OWNERS
file instead of using an approver from the current OWNERS file as
an additional reviewer.
Updates kubeadm version resolution to use the Kubernetes community
infra bucket to fetch the appropriate k8s CI versions. The images are
already being pulled from the community infra bucket, meaning that a
mismatch can occur when the CI version is fetched from the Google
infra bucket but the image is not yet present on k8s infra.
Follow-up to kubernetes/kubernetes#97087
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
Originally raised as an issue with passing invalid versions to
"upgrade plan", but it has been determined that with air-gapped
environments and development versions it is not possible to fully
address that issue.
One thing that was identified, though, is that we can do a better
job in how we output the upgrade plan information. Kubeadm outputs
the requested version as "Latest stable version", though that may
not actually be the case. For this instance, change this to "Target
version" to be a little more accurate.
Then in the component upgrade table that is emitted, the last column
of AVAILABLE isn't quite right either. Also change this to TARGET to
reflect that this is the version we are targeting to upgrade to,
regardless of its availability.
There could be some improvements in checking available versions,
particularly in air gapped environments, to make sure we actually have
access to the requested version. But this at least clarifies some of the
output a bit.
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Add DefaultedStaticInitConfiguration(), which can be
used instead of DefaultedInitConfiguration() during unit tests.
The latter can be slow since it performs dynamic defaulting.
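For example, a test could do the following (sketch; assumes the
helper lives next to DefaultedInitConfiguration in the config util
package and shares its return shape):

    func TestSomething(t *testing.T) {
        // Static defaulting only: no network calls or runtime
        // detection, which keeps unit tests fast.
        cfg, err := configutil.DefaultedStaticInitConfiguration()
        if err != nil {
            t.Fatal(err)
        }
        _ = cfg
    }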
Apply the label
"node.kubernetes.io/exclude-from-external-load-balancers"
to control-plane nodes, to preserve backwards compatibility
with the legacy mode where "master" nodes were excluded from
load balancers.
During upgrade, the CoreDNS migration library seems to require
that the input version doesn't have the "v" prefix.
Fixes a bug where the user cannot run commands such as
"kubeadm upgrade plan" if they have CoreDNS `v1.8.0` installed.
Presumably this is caused by the fact that previously the image tag
didn't have a "v" prefix.
Fixes an issue where some kubeadm phases fail if a certificate file
contains a certificate chain with one or more intermediate CA
certificates. The validation algorithm has been changed from requiring
that a certificate was signed directly by the root CA to requiring that
there is a valid certificate chain back to the root CA.
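A hedged sketch of the new approach using crypto/x509; the function
name and argument layout are illustrative:

    // verifyAgainstCA accepts any valid chain through intermediates
    // back to the root CA, instead of requiring a direct root signature.
    func verifyAgainstCA(cert, root *x509.Certificate, intermediates []*x509.Certificate) error {
        roots := x509.NewCertPool()
        roots.AddCert(root)
        inter := x509.NewCertPool()
        for _, c := range intermediates {
            inter.AddCert(c)
        }
        _, err := cert.Verify(x509.VerifyOptions{Roots: roots, Intermediates: inter})
        return err
    }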
In kubeadm etcd join there is a bug where,
if a peer already exists in etcd, kubeadm attempts to mitigate
by continuing and generating the etcd manifest file. However,
this existing "member name" may actually be unset, causing
subsequent etcd consistency checks to fail.
This change checks if the member name is empty; if it is,
it sets the member name to the node name and resumes.
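A minimal sketch of the fix, assuming the etcd Member type from
go.etcd.io/etcd's etcdserverpb package:

    // ensureMemberName falls back to the node name when etcd reports
    // an existing member whose name was never set, so that subsequent
    // consistency checks compare against a non-empty name.
    func ensureMemberName(member *etcdserverpb.Member, nodeName string) {
        if member.Name == "" {
            member.Name = nodeName
        }
    }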
The error messages when the user feeds an invalid discovery token CA
hash are vague. Make sure to:
- Print the list of supported hash formats (currently only "sha256").
- Wrap the error from pubKeyPins.Allow() with a descriptive message.
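A hedged sketch of the improved call site; pubKeyPins is a
pubkeypin.Set and "errors" is github.com/pkg/errors:

    func validateCAHashes(pubKeyPins *pubkeypin.Set, caCertHashes []string) error {
        // Wrap the raw parsing error with the supported formats list.
        if err := pubKeyPins.Allow(caCertHashes...); err != nil {
            return errors.Wrap(err, "invalid discovery token CA certificate hash (supported formats: sha256)")
        }
        return nil
    }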
- Mark the "node-role.kubernetes.io/master" key for labels
and taints as deprecated.
- During "kubeadm init/join" apply the label
"node-role.kubernetes.io/control-plane" to new control-plane nodes,
next to the existing "node-role.kubernetes.io/master" label.
- During "kubeadm upgrade apply", find all Nodes with the "master"
label and also apply the "control-plane" label to them
(if they don't have it).
- During upgrade health-checks collect Nodes labeled both "master"
and "control-plane".
- Rename the constants.ControlPlane{Taint|Toleration} to
constants.OldControlPlane{Taint|Toleration} to manage the transition.
- Mark constants.OldControlPlane{Taint|Toleration} as deprecated.
- Use constants.OldControlPlane{Taint|Toleration} instead of
constants.ControlPlane{Taint|Toleration} everywhere.
- Introduce constants.ControlPlane{Taint|Toleration}.
- Add constants.ControlPlaneToleration to the kube-dns / CoreDNS
Deployments to make them anticipate the introduction
of the "node-role.kubernetes.io/control-plane:NoSchedule"
taint (constants.ControlPlaneTaint) on kubeadm control-plane Nodes.
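For illustration, the CoreDNS Deployment then tolerates both taints
(sketch using k8s.io/api/core/v1; the constant wiring is omitted):

    // Tolerate the legacy "master" taint and the upcoming
    // "control-plane" taint on control-plane Nodes.
    func controlPlaneTolerations() []v1.Toleration {
        return []v1.Toleration{
            {Key: "node-role.kubernetes.io/master", Effect: v1.TaintEffectNoSchedule},
            {Key: "node-role.kubernetes.io/control-plane", Effect: v1.TaintEffectNoSchedule},
        }
    }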
Kubeadm should validate the podSubnet against the node-mask,
because incorrect values can cause the controller-manager to fail.
We don't need to calculate the node-cidr-masks, because those should
be provided by the user; if they are wrong, we fail in validation.
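A hedged sketch of the validation idea; names are illustrative:

    // validateNodeCidrMask checks that the user-provided node mask is
    // at least as long as the pod subnet prefix, since a shorter mask
    // cannot carve node CIDRs out of the subnet and crashes the
    // controller-manager.
    func validateNodeCidrMask(podSubnet string, nodeMask int) error {
        _, ipNet, err := net.ParseCIDR(podSubnet)
        if err != nil {
            return err
        }
        prefix, _ := ipNet.Mask.Size()
        if nodeMask < prefix {
            return fmt.Errorf("node CIDR mask /%d is smaller than the pod subnet prefix /%d", nodeMask, prefix)
        }
        return nil
    }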
Currently the "generate-csr" command does not have any output.
Pass an io.Writer (bound to os.Stdout from /cmd) to the functions
responsible for generating the kubeconfig files and the cert keys
and CSRs.
If nil is passed, these functions don't output anything.
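A minimal sketch of the pattern; the helper name is illustrative:

    // notify prints progress only when a writer was provided; passing
    // a nil io.Writer keeps the functions silent.
    func notify(out io.Writer, format string, args ...interface{}) {
        if out != nil {
            fmt.Fprintf(out, format+"\n", args...)
        }
    }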
Deprecate the experimental command "alpha self-hosting" and its
sub-command "pivot" that can be used to create a self-hosting
control-plane from static Pods.
The kubeconfig phase of "kubeadm init" detects external CA mode
and skips the generation of kubeconfig files. The kubeconfig
handling during control-plane join executes
CreateJoinControlPlaneKubeConfigFiles() which requires the presence
of ca.key when preparing the spec of a kubeconfig file and prevents
usage of external CA mode.
Modify CreateJoinControlPlaneKubeConfigFiles() to skip generating
the kubeconfig files if external CA mode is detected.
- Modify validateCACertAndKey() to print warnings for missing keys
instead of erroring out.
- Update unit tests.
This allows doing a CP node join in a case where the user has:
- copied shared certificates to the new CP node, but not copied
ca.key files, treating the cluster CAs as external
- signed other required certificates in advance
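A hedged sketch of how external CA mode can be detected; the helper
is illustrative and uses only the standard library:

    // usingExternalCA reports true when the CA certificate exists but
    // its private key does not, meaning kubeadm must not attempt to sign.
    func usingExternalCA(certsDir string) (bool, error) {
        if _, err := os.Stat(filepath.Join(certsDir, "ca.crt")); err != nil {
            return false, err
        }
        _, err := os.Stat(filepath.Join(certsDir, "ca.key"))
        if os.IsNotExist(err) {
            return true, nil
        }
        return false, err
    }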
The flag was deprecated because it is problematic: it allows
overriding the kubelet configuration that is downloaded
from the cluster during upgrade.
Kubeadm node upgrades already download the KubeletConfiguration
and store it in the internal ClusterConfiguration type. It is then
only a matter of writing that KubeletConfiguration to disk.
For external CA users that have prepared the kubeconfig files
for components, they might wish to provide a custom API server URL.
When performing validation on these kubeconfig files, instead of
erroring out on such custom URLs, show a klog Warning.
This allows flexibility around topology setup, where users
wish to make the kubeconfigs point to the ControlPlaneEndpoint instead
of the LocalAPIEndpoint.
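Illustrative sketch of the softened check, using k8s.io/klog/v2;
the helper name is hypothetical:

    // checkServerURL warns instead of failing when the kubeconfig
    // points at a custom API server URL, e.g. the ControlPlaneEndpoint.
    func checkServerURL(path, actual, expected string) {
        if actual != expected {
            klog.Warningf("kubeconfig %q uses API server URL %q instead of the expected %q; continuing", path, actual, expected)
        }
    }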
Fix the validation in ValidateKubeconfigsForExternalCA, which
expected all kubeconfig files to use the ControlPlaneEndpoint; the
kube-scheduler and kube-controller-manager now use the
LocalAPIEndpoint.