Panicking when not running in a test and the component-base/version
variables are empty is not ideal. At some point sections
of kubeadm could be exposed as a library, and if these sections
import the constants package, they would panic for library
users unless those users set the version information in component-base
with ldflags.
Instead:
- If the component-base version is empty, return a placeholder version
that should indicate to users who build kubeadm that something is not
right (e.g. they did not use 'make'). During library usage or unit
tests this version should not be relevant.
- Update unit tests to use hardcoded versions instead of the versions
from the constants package. Using the constants package for testing
is good but during unit tests these versions are already placeholders
since unit tests do not populate the actual component-base versions
(e.g. 1.23).
Tests under /app and /test would fail if the current/minimum k8s version
is dynamically populated from the version in the kubeadm binary.
Adapt the tests to support that.
Kubeadm requires manual version updates of its current supported k8s
control plane version and minimally supported k8s control plane and
kubelet versions every release cycle.
To avoid that, in constants.go:
- Add the helper function getSkewedKubernetesVersion() that can be
used to retrieve a MAJOR.(MINOR+n).0 version of k8s. It currently
uses the kubeadm version populated in "component-base/version" during
the kubeadm build process.
- Use the function to set existing version constants (variables).
Update util/config/common.go#NormalizeKubernetesVersion() to
tolerate the case where a k8s version in the ClusterConfiguration
is too old for the kubeadm binary to use during code freeze.
Include unit tests for the new utilities.
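For illustration, a minimal sketch of how such a helper could look
(the exact signature and placeholder handling in constants.go may differ):

    package constants

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/version"
        componentversion "k8s.io/component-base/version"
    )

    // getSkewedKubernetesVersion returns a MAJOR.(MINOR+n).0 version derived
    // from the version that the build process stored in component-base.
    func getSkewedKubernetesVersion(n int) *version.Version {
        v, err := version.ParseSemantic(componentversion.Get().String())
        if err != nil {
            // component-base/version was not populated (e.g. 'make' was not
            // used); return a placeholder that makes the problem visible.
            return version.MustParseSemantic("v1.0.0")
        }
        return version.MustParseSemantic(
            fmt.Sprintf("v%d.%d.0", v.Major(), int(v.Minor())+n))
    }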
This change optimizes the kubeadm/etcd `AddMember` client-side function
by stopping early in the backoff loop when a peer conflict is found
(indicating the member has already been added to the etcd cluster). In
this situation, the function stops early and instead calls
`ListMembers` to fetch and return the current list of members. With this
optimization, front-loading a `ListMembers` call is no longer necessary,
as this returns a functionally equivalent response.
This helps reduce the time taken in situations where an initial client
request to add a member is accepted by the server but fails client-side,
for example when network latency causes the request to time out after it
was sent and accepted by the cluster. In that case, the following loop
would keep failing with an `ErrPeerURLExist` response and would be stuck
until the backoff timeout was met (currently roughly 2min30sec).
Testing Done:
* Manual testing with an etcd cluster. The initial `AddMember` call was
successful, and the etcd manifest file was identical to the one produced
by prior versions of this code. Subsequent calls to add the same member
succeeded immediately (retaining idempotency), and the resulting
manifest file also remained identical to the previous version. The
difference, this time, is that the call finished ~2min25sec faster in
an identical test in the same environment.
The purell package at github.com/PuerkitoBio/purell is no longer maintained. In the k/k repo it was used by the kubeadm package for normalizing URLs. This commit removes the dependency on this package and creates a local function for normalizing URLs within the preflight package under cmd/kubeadm.
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
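A hedged sketch of what such a local normalization function could look
like; the steps kubeadm actually needs are a small subset of what purell
offered:

    package preflight

    import (
        "net/url"
        "strings"
    )

    // normalizeURLString lowercases the scheme and host of a URL and
    // removes any default port, returning the normalized string.
    func normalizeURLString(u string) (string, error) {
        parsed, err := url.Parse(u)
        if err != nil {
            return "", err
        }
        parsed.Scheme = strings.ToLower(parsed.Scheme)
        parsed.Host = strings.ToLower(parsed.Host)
        // Strip default ports (http:80, https:443).
        if (parsed.Scheme == "http" && parsed.Port() == "80") ||
            (parsed.Scheme == "https" && parsed.Port() == "443") {
            parsed.Host = parsed.Hostname()
        }
        return parsed.String(), nil
    }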
chore: add new line at end of the file
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
fix: remove unused mod from vendor modules file
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
During operations such as "upgrade", kubeadm fetches the
ClusterConfiguration object from the kubeadm ConfigMap.
However, since node specifics are also required, it wraps the object in
an InitConfiguration object. The function responsible for that is:
app/util/config#FetchInitConfigurationFromCluster().
A problem with this function (and sub-calls) is that it ignores
the static defaults applied from versioned types
(e.g. v1beta3/defaults.go) and only applies dynamic defaults for:
- API endpoints
- node registration
- etc...
The introduction of Init|JoinConfiguration.ImagePullPolicy now
has static defaulting of the NodeRegistration object with a default
policy of "PullIfNotPresent". Respect this defaulting by constructing
a defaulted internal InitConfiguration from
FetchInitConfigurationFromCluster() and only then apply the dynamic
defaults over it.
This fixes a bug where "kubeadm upgrade ..." fails when pulling images
due to an empty ("") ImagePullPolicy. We could assume that an empty
string means the default policy at runtime in:
cmd/kubeadm/app/preflight/checks.go#ImagePullCheck()
but that might actually not be the user intent during "init" and "join",
due to e.g. a typo. Similarly, we don't allow empty tokens
at runtime and error out.
Instead of dynamically defaulting NodeRegistration.ImagePullPolicy,
which is what is commonly done when defaulting depends on host state
(e.g. hostname), statically default it in v1beta3/defaults.go.
- Remove defaulting in checks.go
- Add one more unit test in checks_test.go
- Adapt v1beta2 conversion and fuzzer / round tripping tests
This also results in the default being visible when calling:
"kubeadm config print ...".
Given bootstraptoken/v1 is now a separate GV, there is no need
to duplicate the API and utilities inside v1beta3 and the internal
version.
v1beta2 must continue to use its local copy, since output/v1alpha1
embeds the v1beta2.BootstrapToken object. See issue 2427 in k/kubeadm.
- Make v1beta3 use bootstraptoken/v1 instead of local copies
- Make the internal API use bootstraptoken/v1
- Update validation, /cmd, /util and other packages
- Update v1beta2 conversion
Package bootstraptoken contains an API and utilities wrapping the
"bootstrap.kubernetes.io/token" Secret type to ease its usage in kubeadm.
The API is released as v1, since these utilities have been part of a
GA workflow for 10+ releases.
The "bootstrap.kubernetes.io/token" Secret type is also GA.
During "join" of new control plane machines, kubeadm would
download shared certificates and keys from the cluster stored
in a Secret. Based on the contents of an entry in the Secret,
it would use helper functions from client-go to either write
it as public key, cert (mode 644) or as a private key (mode 600).
The existing logic always writes both keys and certs with mode 600.
Allow detecting publicly readable data properly and writing some files
with mode 644.
First check the data with ParsePrivateKeyPEM(); if this passes
there must be at least one private key and the file should be written
with mode 600 as private. If that fails, validate if the data contains
public keys with ParsePublicKeysPEM() and write the file as public
(mode 644).
As a result of this new logic, and given the current set of managed
kubeadm files, .key files will end up with 600, while .crt and .pub
files will end up with 644.
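A condensed sketch of the detection flow, assuming client-go's keyutil
helpers (writeCertOrKey is an illustrative name, not the actual kubeadm
function):

    package copycerts

    import (
        "os"

        keyutil "k8s.io/client-go/util/keyutil"
    )

    func writeCertOrKey(path string, data []byte) error {
        // If the data parses as a private key it must not be world readable.
        if _, err := keyutil.ParsePrivateKeyPEM(data); err == nil {
            return os.WriteFile(path, data, 0600)
        }
        // Otherwise, if it parses as public material (certs, public keys),
        // it is safe to write it as world readable.
        if _, err := keyutil.ParsePublicKeysPEM(data); err == nil {
            return os.WriteFile(path, data, 0644)
        }
        // Fall back to the restrictive mode for unrecognized data.
        return os.WriteFile(path, data, 0600)
    }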
Add {Init|Join}Configuration.Patches, which is a structure that
contains patch related options. Currently it only has the "Directory"
field which is the same option as the existing --experimental-patches
flag.
The value of the --[experimental-]patches flag overrides this value
if both a flag and a config are passed during "init" or "join".
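The added structure has roughly the following shape in v1beta3:

    // Patches contains options related to applying patches to components
    // deployed by kubeadm.
    type Patches struct {
        // Directory is a path to a directory that contains files named
        // "target[suffix][+patchtype].extension". It is the same option
        // as the --experimental-patches flag.
        Directory string `json:"directory,omitempty"`
    }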
The feature of "patches" in kubeadm has been in Alpha for a few
releases. It has not received major bug reports from users.
Deprecate the --experimental-patches flag and add --patches.
Both flags are allowed to be mixed with --config.
If the user has not specified a pull policy, we must assume a default
of v1.PullIfNotPresent.
Add some extra verbose output to help users monitor what policy is
used and what images are skipped / pulled.
Use "fallthrough" so that the "v1.PullAlways" case is also handled.
Update unit test.
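A sketch of the policy handling with "fallthrough"; the imagePuller
interface is an assumed stand-in for kubeadm's container runtime
abstraction:

    package preflight

    import (
        "fmt"

        "github.com/pkg/errors"
        v1 "k8s.io/api/core/v1"
    )

    // imagePuller is an assumed stand-in for kubeadm's container runtime
    // interface.
    type imagePuller interface {
        ImageExists(image string) (bool, error)
        PullImage(image string) error
    }

    func pullImages(runtime imagePuller, policy v1.PullPolicy, images []string) error {
        for _, image := range images {
            switch policy {
            case v1.PullNever:
                fmt.Printf("skipping pull of %q (policy %q)\n", image, policy)
            case v1.PullIfNotPresent:
                if exists, err := runtime.ImageExists(image); err == nil && exists {
                    fmt.Printf("image %q is present; skipping pull\n", image)
                    continue
                }
                fallthrough // the image is missing; pull it just like PullAlways
            case v1.PullAlways:
                fmt.Printf("pulling %q (policy %q)\n", image, policy)
                if err := runtime.PullImage(image); err != nil {
                    return errors.Wrapf(err, "failed to pull image %q", image)
                }
            default:
                return errors.Errorf("unsupported pull policy %q", policy)
            }
        }
        return nil
    }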
In the Alpha stage of the feature in kubeadm to support
a rootless control plane, the allocation and assignment of
UID/GIDs to containers in the static pods will be automated.
This automation will require management of users and groups
in /etc/passwd and /etc/group.
The tools on Linux for user/group management are inconsistent
and non-standardized. Using them would also require including a number
of additional dependencies in the DEB/RPMs, while complicating the UX
for non-package-manager users.
The format of /etc/passwd and /etc/group is standardized.
Add code for managing (adding and deleting) a set of managed
users and groups in these files.
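Since the format is standardized (for /etc/passwd:
name:password:UID:GID:GECOS:directory:shell), entries can be parsed
with plain string handling. A minimal illustrative sketch:

    package users

    import (
        "fmt"
        "strconv"
        "strings"
    )

    type entry struct {
        name     string
        uid, gid int64
    }

    // parsePasswdLine parses a single /etc/passwd line into its
    // colon-separated fields.
    func parsePasswdLine(line string) (*entry, error) {
        fields := strings.Split(line, ":")
        if len(fields) != 7 {
            return nil, fmt.Errorf("expected 7 fields, got %d", len(fields))
        }
        uid, err := strconv.ParseInt(fields[2], 10, 64)
        if err != nil {
            return nil, err
        }
        gid, err := strconv.ParseInt(fields[3], 10, 64)
        if err != nil {
            return nil, err
        }
        return &entry{name: fields[0], uid: uid, gid: gid}, nil
    }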
During Runner data initialization, if the value for the flag
"--skip-phases" was empty, set the {init|join}Runner.Options.SkipPhases
to the {Init|Join}Configuration.SkipPhases value.
- Add the field SkipPhases in the public v1beta3 as a []string (omitempty)
- Add the field in the internal type
- Run generators
- Adapt v1beta2 converter for JoinConfiguration
Ideally this should be part of dockershim/CRI and not on the
side of kubeadm.
Remove the detection:
- during preflight
- during kubelet config defaulting
Update dependencies and the test images to use pause 3.5. We also
provide a changelog entry for the new container image version.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
- Remove the deprecated --csr* flags "init phase certs"
- Deprecate the same flags for "certs renew".
For both cases users should be using "certs generate-csr".
The command "kubeadm config view" was deprecated in 1.19.
Remove it as scheduled in 1.22.
The replacement is to use kubectl:
kubectl get cm -n kube-system kubeadm-config -o=jsonpath="{.data.ClusterConfiguration}"
- Remove the object from v1beta3 and the internal type
- Deprecate a couple of phases that were specifically designed / named to
modify the ClusterStatus object
- Adapt logic around annotation vs ClusterStatus retrieval
- Update unit tests
- Run generators
Running "go test ./cmd/kubeadm/app/..." results in these 3 files
being generated, since we have more callers to the functions
for generating unique private keys during pkiutil tests.
Add the files to ensure they are not generated locally all the time.
Kubeadm no longer supports kube-dns and CoreDNS is the only
supported DNS server. Remove ClusterConfiguration.DNS.Type
from v1beta3 that is used to set the DNS server type.
- Pin the ClusterConfiguration when fuzzing
the internal InitConfiguration that embeds it. Kubeadm includes
separate constructs for this embedding in the internal type
and this round trip is not viable.
- Remove the artificial calls to SetDefaults_ClusterConfiguration()
in v1beta{2|3}'s converters from public to internal InitConfiguration.
- Make sure the internal InitConfiguration.ClusterConfiguration is
defaulted in initconfiguration.go instead.
- scheme: switch to:
utilruntime.Must(scheme.SetVersionPriority(v1beta3.SchemeGroupVersion))
- change all imports in the code base from v1beta2 to v1beta3
- rename all import aliases for kubeadmapiv1beta2 to "kubeadmapiv";
this allows smaller diffs when changing the default public API.
The v1beta1/2 API doc.go files include an example
flag for the kubelet binary "cgroup-driver" under
"kubeletExtraArgs".
This flag is deprecated and should not be in the examples.
Add "v" instead which is one of the flags we know will
not be deprecated soon.
This is part of the "master" -> "control-plane" rename
that we missed. It's not critical for 1.21 as the
"control-plane" taint is still not added to CP nodes,
but it would be best to add the toleration preemptively
like the KEP planned.
The kubeadm documentation instructs users to set the container
runtime driver to "systemd", since kubeadm manages a kubelet via
the systemd init system. The kubelet default however is "cgroupfs".
For new clusters set the driver to "systemd" unless the user
is explicit about it. The same defaulting would not happen
during "upgrade".
Pass the flag --pod-infra-container-image to the kubelet not only
for Docker but for all container runtimes.
This flag tells the kubelet to special case the image and not garbage
collect it.
Looks like there is a bit of an issue in Blunderbuss (a Prow plugin)
where it prefers to pick reviewers from a parent OWNERS file,
instead of using an approver from the current OWNERS file as
an additional reviewer.
Updates kubeadm version resolution to use kubernetes community infra
bucket to fetch appropriate k8s ci versions. The images are already
being pulled from the kubernetes community infra bucket meaning that a
mismatch can occur when the ci version is fetched from the google infra
bucket and the image is not yet present on k8s infra.
Follow-up to kubernetes/kubernetes#97087
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
Originally raised as an issue about passing invalid versions to
"upgrade plan", but it has been determined that, given air-gapped
environments and development versions, it is not possible to fully
address that issue.
But one thing that was identified was that we can do a better job in how
we output the upgrade plan information. Kubeadm outputs the requested
version as "Latest stable version", though that may not actually be the
case. For this instance, we want to change this to "Target version" to
be a little more accurate.
Then in the component upgrade table that is emitted, the last column of
AVAILABLE isn't quite right either. Also changing this to TARGET to
reflect that this is the version we are targeting to upgrade to,
regardless of its availability.
There could be some improvements in checking available versions,
particularly in air gapped environments, to make sure we actually have
access to the requested version. But this at least clarifies some of the
output a bit.
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Add DefaultedStaticInitConfiguration(), which can be
used instead of DefaultedInitConfiguration() during unit tests.
The latter can be slow since it performs dynamic defaulting.
Apply the label:
"node.kubernetes.io/exclude-from-external-load-balancers"
to control plane nodes to preserve backwards compatibility
with the legacy mode where "master" nodes were excluded from
LBs.
During upgrade, the CoreDNS migration library seems to require
that the input version doesn't have the "v" prefix.
Fixes a bug where the user cannot run commands such as
"kubeadm upgrade plan" if they have CoreDNS `v1.8.0` installed.
Presumably this is caused by the fact that previously the image tag
didn't have a "v" prefix.
Fixes an issue where some kubeadm phases fail if a certificate file
contains a certificate chain with one or more intermediate CA
certificates. The validation algorithm has been changed from requiring
that a certificate was signed directly by the root CA to requiring that
there is a valid certificate chain back to the root CA.
In kubeadm etcd join there is a bug where,
if a peer already exists in etcd, kubeadm attempts to mitigate
by continuing and generating the etcd manifest file. However,
the existing "member name" may actually be unset, causing
subsequent etcd consistency checks to fail.
This change checks if the member name is empty; if it is,
it sets the member name to the node name and resumes.
The error messages when the user feeds an invalid discovery token CA
hash are vague. Make sure to:
- Print the list of supported hash formats (currently only "sha256").
- Wrap the error from pubKeyPins.Allow() with a descriptive message.
- Mark the "node-role.kubernetes.io/master" key for labels
and taints as deprecated.
- During "kubeadm init/join" apply the label
"node-role.kubernetes.io/control-plane" to new control-plane nodes,
next to the existing "node-role.kubernetes.io/master" label.
- During "kubeadm upgrade apply", find all Nodes with the "master"
label and also apply the "control-plane" label to them
(if they don't have it).
- During upgrade health-checks collect Nodes labeled both "master"
and "control-plane".
- Rename the constants.ControlPlane{Taint|Toleration} to
constants.OldControlPlane{Taint|Toleration} to manage the transition.
- Mark constants.OldControlPlane{Taint|Toleration} as deprecated.
- Use constants.OldControlPlane{Taint|Toleration} instead of
constants.ControlPlane{Taint|Toleration} everywhere.
- Introduce constants.ControlPlane{Taint|Toleration}.
- Add constants.ControlPlaneToleration to the kube-dns / CoreDNS
Deployments to make them anticipate the introduction
of the "node-role.kubernetes.io/control-plane:NoSchedule"
taint (constants.ControlPlaneTaint) on kubeadm control-plane Nodes.
Validate the podSubnet against the node mask, because incorrect
values can cause the kube-controller-manager to fail.
We don't need to calculate the node CIDR masks, because those should
be provided by the user; if they are wrong, we fail in validation.
Currently the "generate-csr" command does not have any output.
Pass an io.Writer (bound to os.Stdout from /cmd) to the functions
responsible for generating the kubeconfig / certs keys and CSRs.
If nil is passed these functions don't output anything.
Deprecate the experimental command "alpha self-hosting" and its
sub-command "pivot" that can be used to create a self-hosting
control-plane from static Pods.
The kubeconfig phase of "kubeadm init" detects external CA mode
and skips the generation of kubeconfig files. The kubeconfig
handling during control-plane join executes
CreateJoinControlPlaneKubeConfigFiles() which requires the presence
of ca.key when preparing the spec of a kubeconfig file and prevents
usage of external CA mode.
Modify CreateJoinControlPlaneKubeConfigFiles() to skip generating
the kubeconfig files if external CA mode is detected.
- Modify validateCACertAndKey() to print warnings for missing keys
instead of erroring out.
- Update unit tests.
This allows doing a CP node join in a case where the user has:
- copied shared certificates to the new CP node, but not copied
ca.key files, treating the cluster CAs as external
- signed other required certificates in advance
The flag was deprecated as it is problematic since it allows
overrides of the kubelet configuration that is downloaded
from the cluster during upgrade.
Kubeadm node upgrades already download the KubeletConfiguration
and store it in the internal ClusterConfiguration type. It is then
only a matter of writing that KubeletConfiguration to disk.
For external CA users that have prepared the kubeconfig files
for components, they might wish to provide a custom API server URL.
When performing validation on these kubeconfig files, instead of
erroring out on such custom URLs, show a klog Warning.
This allows flexibility around topology setup, where users
wish to make the kubeconfigs point to the ControlPlaneEndpoint instead
of the LocalAPIEndpoint.
Fix validation in ValidateKubeconfigsForExternalCA expecting
all kubeconfig files to use the CPE. The kube-scheduler and
kube-controller-manager now use LAE.
This PR specifies minimum control plane version,
kubelet version and current K8s version for v1.20.
Signed-off-by: Kommireddy Akhilesh <akhileshkommireddy2412@gmail.com>
Client side period validation of certificates should not be
fatal, as local clock skews are not so uncommon. The validation
should be left to the running servers.
- Remove this validation from TryLoadCertFromDisk().
- Add a new function ValidateCertPeriod(), that can be used for this
purpose on demand.
- In phases/certs add a new function CheckCertificatePeriodValidity()
that will print warnings if a certificate does not pass period
validation, and caches certificates that were already checked.
- Use the function in a number of places where certificates
are loaded from disk.
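A sketch of what the on-demand period validation amounts to, with an
assumed offset parameter to shift the comparison time:

    package certs

    import (
        "crypto/x509"
        "fmt"
        "time"
    )

    // validateCertPeriod returns an error if the certificate is not yet
    // valid or has expired, shifting "now" by the given offset.
    func validateCertPeriod(cert *x509.Certificate, offset time.Duration) error {
        period := fmt.Sprintf("NotBefore: %v, NotAfter: %v", cert.NotBefore, cert.NotAfter)
        now := time.Now().Add(offset)
        if now.Before(cert.NotBefore) {
            return fmt.Errorf("the certificate is not valid yet: %s", period)
        }
        if now.After(cert.NotAfter) {
            return fmt.Errorf("the certificate has expired: %s", period)
        }
        return nil
    }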
The isCoreDNSVersionSupported() check assumes that
there is a running kubelet that manages the CoreDNS containers.
If the containers are still being created, it is not possible to fetch
their image digest. To work around that, a poll could be used in
isCoreDNSVersionSupported() to wait until the CoreDNS Pods
are running. However, depending on timing and a CNI plugin
yet to be installed, this can cause problems related to
addon idempotency of "kubeadm init", because if the CoreDNS
Pods are waiting for another step they will never get running.
Remove the function isCoreDNSVersionSupported() and assume that
the version is always supported. Rely on the Corefile migration
library to error out if it must.
- Ensure the directory is created with 0700 via a new function
called CreateDataDirectory().
- Call this function in the init phases instead of the manual call
to MkdirAll.
- Call this function when joining control-plane nodes with local etcd.
If the directory creation is left to the kubelet via the
static Pod hostPath mounts, it will end up with 0755
which is not desired.
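A minimal sketch of the new helper, assuming it simply wraps os.MkdirAll
with the restrictive mode:

    package etcd

    import (
        "os"

        "github.com/pkg/errors"
    )

    // CreateDataDirectory creates the etcd data directory with mode 0700,
    // instead of leaving creation to the kubelet's hostPath handling,
    // which would use 0755.
    func CreateDataDirectory(dataDir string) error {
        if err := os.MkdirAll(dataDir, 0700); err != nil {
            return errors.Wrapf(err, "failed to create etcd data directory %q", dataDir)
        }
        return nil
    }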
A bug was discovered in the `enforceRequirements` func for `upgrade plan`.
If a command line argument that specifies the target Kubernetes version is
supplied, the returned `ClusterConfiguration` by `enforceRequirements` will
have its `KubernetesVersion` field set to the new version.
If no version was specified, the returned `KubernetesVersion` points to the
currently installed one.
This remained undetected for a couple of reasons
- It's only `upgrade plan` that allows the version command line argument to
be optional (in `upgrade apply` it's mandatory)
- Prior to 1.19, the implementation of `upgrade plan` did not make use of the
`KubernetesVersion` returned by `enforceRequirements`.
`upgrade plan` supports this optional command line argument to enable
air-gapped setups (as not specifying a version on the command line will end up
looking for the latest version over the Internet).
Hence, the only option is to make `enforceRequirements` consistent in the
`upgrade plan` case and always return the currently installed version in the
`KubernetesVersion` field.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Pinning the kube-controller-manager and kube-scheduler kubeconfig files
to point to the control-plane-endpoint can be problematic during
immutable upgrades if one of these components ends up contacting an N-1
kube-apiserver:
https://kubernetes.io/docs/setup/release/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager
For example, the components can send a request for a non-existing API
version.
Instead of using the CPE for these components, use the LocalAPIEndpoint.
This guarantees that the components would talk to the local
kube-apiserver, which should be the same version, unless the user
explicitly patched manifests.
A check that verifies that kubeadm does not "upgrade" to an older release was
overly optimized by skipping upgrade if the new version is the same as the old
one. This somewhat makes sense, but that way changes in any of the etcd fields
in the ClusterConfiguration won't be applied if the etcd version is not
changed.
Hence, this simple change ensures that the upgrade is done even when no version
change takes place.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
* Creates private keys and CSR files for all the control-plane certificates
* Helps with External CA mode of kubeadm
Signed-off-by: Richard Wall <richard.wall@jetstack.io>
Back in the v1alpha2 days the fuzzer test needed to be disabled. To ensure that
there were no config breaks and everything worked correctly, extensive replacement
tests were put in place that functioned as unit tests for the kubeadm config utils
as well.
The fuzzer test has been reenabled for a long time now and there's no need for
these replacements. Hence, over time most of these were disabled, deleted and
refactored. The last remnants are part of the LoadJoinConfigurationFromFile test.
The test data for those old tests remains largely unused today, but it still receives
updates as it contains kubelet's and kube-proxy's component configs. Updates to these
configs are usually done because the maintainers of those need to add a new field.
Hence, to cleanup old code and reduce maintenance burden, the last test that depends
on this test data is finally refactored and cleaned up to represent a simple unit test
of `LoadJoinConfigurationFromFile`.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Over the course of recent development of the `componentconfigs` package,
it became evident that most of the tests in this package cannot be implemented without
using a component config. As all of the currently supported component configs are
external to the kubeadm project (kubelet and kube-proxy), practically all of the tests
in this package are now dependent on external code.
This is not desirable, because other component's configs may change frequently and
without much of a notice. In particular many configs add new fields without bumping their
versions. In addition to that, some components may be deprecated in the future and many
tests may use their configs as a placeholder of a component config just to test some
common functionality.
To top that, there are many tests that test the same common functionality several times
(for each different component config).
Thus a kubeadm managed replacement and a fake test environment are introduced.
The new test environment uses kubeadm's very own `ClusterConfiguration`.
ClusterConfiguration is normally not managed by the `componentconfigs` package.
It's only used, because of the following:
- It's a versioned API that is under the control of kubeadm maintainers. This enables us to test
the componentconfigs package more thoroughly without having to have full and always up to date
knowledge about the config of another component.
- Other components often introduce new fields in their configs without bumping up the config version.
This often requires the PR that introduces such new fields to touch kubeadm test code.
Doing so requires more work on the part of developers and reviewers. When kubeadm moves out of k/k,
this would allow for more sporadic breaks in kubeadm tests as PRs that merge in k/k and introduce
new fields won't be able to fix the tests in kubeadm.
- If we implement tests for all common functionality using the config of another component and it gets
deprecated and/or we stop supporting it in production, we'll have to focus on a massive test refactoring
or just continue importing this config just for test use.
Thus, to reduce maintenance costs without sacrificing test coverage, we introduce this mini-framework
and set of tests here which replace the normal component configs with a single one (`ClusterConfiguration`)
and test the component config independent logic of this package.
As a result of this, many of the older test cases are refactored and greatly simplified to reflect
on the new change as well. The old tests that are strictly tied to specific component configs
(like the defaulting tests) are left unchanged.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Kubeadm's setup of kube-controller-manager and kube-scheduler is
lacking the --port=0 option, which causes the components to enable
the insecure port by default and serve insecurely on the default
node interface.
Add --port=0 by default to both components. Users are still allowed
to explicitly set the flag (via extraArgs), which allows them
to override this default kubeadm behavior and enable the insecure port.
NOTE: the flag is deprecated and should be removed from kubeadm manifests
once it's removed from core.
`kubeadm config upload` is a GA command that has been deprecated and scheduled
for removal since Kubernetes 1.15 (released 06/19/2019). This change
finally removes it in Kubernetes 1.19 (planned for August 2020).
The original command has long since been replaced by a GA init phase:
`kubeadm init phase upload-config`
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Add PatchStaticPod() in staticpod/utils.go
Apply patches to static Pods in:
- phases/controlplane/CreateStaticPodFiles()
- phases/etcd/CreateLocalEtcdStaticPodManifestFile() and
CreateStackedEtcdStaticPodManifestFile()
Add unit tests and update Bazel.
This change enables kubeadm upgrade plan to print a state table with
information regarding known component config API groups. Most importantly this
information includes current and preferred version for each group and an
indication if a manual user upgrade is required.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
`kubeadm upgrade plan` is using the external (currently `v1alpha1`) types of
the kubeadm output API to collect upgrade plans. This is counter intuitive
since code structure gets bound to the whatever version the output API is at.
In addition to that, the versioned API is used only in the very last stages of
a machine readable output (which is currently not implemented).
Hence, to increase flexibility and keep up with the standard Kubernetes
ecosystem practice, `kubeadm upgrade plan` is migrated to use the internal
types of the output API.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
UploadConfiguration() now always retries the underlying API calls,
which can make TestUploadConfiguration run for a long time.
Remove the negative test cases, where errors are expected.
Negative test cases should be tested in app/util/apiclient,
where a short timeout / retry count should be possible for unit tests.
Currently, kubeadm would refuse to perform an upgrade (or even plan for one)
if it detects a user-supplied unsupported component config version. Hence,
users are required to manually upgrade their component configs and store them
in the config maps prior to executing `kubeadm upgrade plan` or
`kubeadm upgrade apply`.
This change introduces the ability to use the `--config` option of the
`kubeadm upgrade plan` and `kubeadm upgrade apply` commands to supply a YAML
file containing component configs to be used in place of the existing ones in
the cluster upon upgrade.
The old behavior where `--config` is used to reconfigure a cluster is still
supported. kubeadm automatically detects which behavior to use based on the
presence (or absence) of kubeadm config types (API group
`kubeadm.kubernetes.io`).
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
If an etcd member with the same address already exists, don't re-add it.
Instead, use the existing member list for creating the "initial cluster"
that is written into the static Pod manifest of this etcd server instance.
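A sketch of the existence check, assuming etcd's clientv3 member types
(the helper name is illustrative):

    package etcd

    import "go.etcd.io/etcd/api/v3/etcdserverpb"

    // memberWithPeerURL returns the existing member with the given peer
    // address, if any, so it can be reused when composing the
    // "initial cluster" string instead of re-adding the member.
    func memberWithPeerURL(members []*etcdserverpb.Member, peerURL string) *etcdserverpb.Member {
        for _, m := range members {
            for _, u := range m.PeerURLs {
                if u == peerURL {
                    return m
                }
            }
        }
        return nil
    }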
Component configs are used by kubeadm upgrade plan at the moment. However, they
can prevent kubeadm upgrade plan from functioning if loading of an unsupported
version of a component config is attempted. For that matter it's best to just
stop loading component configs as part of the kubeadm config load process.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
`getK8sVersionFromUserInput` would attempt to load the config from a user
specified YAML file (via the `--config` option of `kubeadm upgrade plan` or
`kubeadm upgrade apply`). This is done in order to fetch the `KubernetesVersion`
field of the `ClusterConfiguration`. The complete config is then immediately
discarded. The actual config that is used during the upgrade process is fetched
from within `enforceRequirements`.
This, along with the fact that `getK8sVersionFromUserInput` is always called
immediately after `enforceRequirements` makes it possible to merge the two.
Merging them would help us simplify things and avoid future problems in
component config related patches.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Until now, users were always asked to manually convert a component config to a
version supported by kubeadm, if kubeadm is not supporting its version.
This is true even for configs generated with older kubeadm versions, hence
getting users to make manual conversions on kubeadm generated configs.
This is not appropriate and user friendly, although, it tends to be the most
common case. Hence, we sign kubeadm generated component configs stored in
config maps with a SHA256 checksum. If a config is loaded by kubeadm from a
config map and has a valid signature it's considered "kubeadm generated" and if
a version migration is required, this config is automatically discarded and a
new one is generated.
If there is no checksum or the checksum is not matching, the config is
considered as "user supplied" and, if a version migration is required, kubeadm
will bail out with an error, requiring manual config migration (as it's today).
The behavior when supplying component configs on the kubeadm command line
does not change. Kubeadm would still bail out with an error requiring migration
if it can recognize their groups but not versions.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
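A sketch of how such a checksum could be computed over a ConfigMap
payload; the exact fields hashed and where the signature is stored are
assumptions for illustration:

    package componentconfigs

    import (
        "crypto/sha256"
        "fmt"
        "sort"

        v1 "k8s.io/api/core/v1"
    )

    // checksumForConfigMap hashes the ConfigMap payload so that kubeadm
    // can later distinguish its own generated configs from user-supplied
    // ones.
    func checksumForConfigMap(cm *v1.ConfigMap) string {
        keys := make([]string, 0, len(cm.Data))
        for k := range cm.Data {
            keys = append(keys, k)
        }
        sort.Strings(keys) // deterministic ordering across invocations

        hash := sha256.New()
        for _, k := range keys {
            hash.Write([]byte(cm.Data[k]))
        }
        return fmt.Sprintf("sha256:%x", hash.Sum(nil))
    }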
In case a malformed flag is passed to k8s components
such as "–foo", where "–" is not an ASCII dash character,
the components currently silently ignore the flag
and treat it as a positional argument.
Make k8s components/commands exit with an error if a positional argument
that is not empty is found. Include a custom error message for all
components except kubeadm, as cobra.NoArgs is used in a lot of
places already (can be fixed in a followup).
The kubelet already handles this properly - e.g.:
'unknown command: "–foo"'
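A sketch of a cobra Args validator implementing this behavior (noArgs is
an illustrative name):

    package main

    import (
        "fmt"

        "github.com/spf13/cobra"
    )

    // noArgs rejects any non-empty positional argument, so a malformed
    // flag such as "–foo" (non-ASCII dash) is reported instead of being
    // silently treated as a positional argument.
    func noArgs(cmd *cobra.Command, args []string) error {
        for _, arg := range args {
            if len(arg) > 0 {
                return fmt.Errorf("%q does not take any arguments, got %q", cmd.CommandPath(), args)
            }
        }
        return nil
    }

    // usage: cmd := &cobra.Command{Use: "kube-proxy", Args: noArgs, ...}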
This change affects:
- cloud-controller-manager
- kube-apiserver
- kube-controller-manager
- kube-proxy
- kubeadm {alpha|config|token|version}
- kubemark
Signed-off-by: Monis Khan <mok@vmware.com>
Signed-off-by: Lubomir I. Ivanov <lubomirivanov@vmware.com>
- Use a dummy nodename instead of OS hostname
- Inline toString() function
- Use backticks to wrap expected patch
- Remove redundant test name from error logs
kubelet.DownloadConfig is an old utility function which takes a client set and
a kubelet version, uses them to fetch the kubelet component config from a
config map, and places it in a local file. This function is simple to use, but
it is dangerous and unnecessary. Practically, in all cases the kubelet
configuration is present locally and does not need to be fetched from a config
map on the cluster (it just needs to be stored in a file).
Furthermore, kubelet.DownloadConfig does not use the kubeadm component configs
module in any way. Hence, a kubelet configuration fetched using it may not be
patched, validated, or otherwise processed in any way by kubeadm other than
piping it to a file.
This patch replaces all but a single kubelet.DownloadConfig invocation with
equivalents that get the local copy of the kubelet component config and just
store it in a file. The sole remaining invocation covers the
`kubeadm upgrade node --kubelet-version` case.
In addition to that, a possible panic is fixed in kubelet.DownloadConfig and
it now takes the kubelet version parameter as string.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
kubeadm is setting the IPv6DualStack feature gate in the command line of the kubelet.
However, the kubelet is gradually moving away from command line flags towards component config use.
Hence, we should set the IPv6DualStack feature gate in the component config instead.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
In slower setups it can take more time for the existing cluster
to be in a healthy state, so the existing backoff of ~50 seconds
is apparently not sufficient.
The client dial can also fail for similar reasons.
Improve kubeadm's join toleration of adding new etcd members.
Wrap both the client dial and member add in a longer backoff
(up to ~200 seconds).
This particular change should be backported to the support skew.
In a future change for master, all etcd client operations should be
made consistent so that the etcd logic is in a sane state.
specifically:
- cmd/kubeadm/.import-restrictions
- we don't need to explicitly allow k8s.io repos (external or published)
- rm pkg/controller/.import-restrictions
- pkg/client/unversioned was removed in 59042
- pkg/kubectl/.import-restrictions
- pkg/printers is no longer used
- pkg/api was masking all of the pkg/apis prefixes
- rm staging/src/k8s.io/code-generator/cmd/lister-gen/.import-restrictions
- noop / empty file
- test/e2e/framework/.import-restrictions
- we don't need to explicitly allow k8s.io repos (external or published)
yaml has comments, so we can explain why we have certain rules or
certain prefixes
for those files that weren't already commented yaml, I converted them to
yaml and took a best guess at comments based on the PRs that introduced
or updated them
Use an init container that performs the pre-pull of a component
and then start an instance of "pause" as a regular container to
get the DaemonSet Pod in a Running state.
More details on this change in the code comments.
The flag "--use-api" for "alpha certs renew" was deprecated in 1.18.
Remove the flag and related logic that executes certificate renewal
using "api/certificates/v1beta1". kubeadm continues to be able
to create CSR files and renew using the local CA on disk.
kubeadm init prints:
W0410 23:02:10.119723 13040 manifests.go:225] the default kube-apiserver
authorization-mode is "Node,RBAC"; using "Node,RBAC"
Add a new function compareAuthzModes() and a unit test for it.
Make sure the warning is printed only if the user modes don't match
the defaults.
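A sketch of what the comparison amounts to; modes are order-sensitive
for the kube-apiserver, so they are compared element-wise:

    package controlplane

    import "strings"

    // compareAuthzModes compares two comma-separated authorization-mode
    // strings; order matters to the kube-apiserver, so the slices are
    // compared element-wise.
    func compareAuthzModes(a, b string) bool {
        modesA := strings.Split(a, ",")
        modesB := strings.Split(b, ",")
        if len(modesA) != len(modesB) {
            return false
        }
        for i := range modesA {
            if modesA[i] != modesB[i] {
                return false
            }
        }
        return true
    }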
Allow overriding the dry-run temporary directory with
an env. variable (KUBEADM_INIT_DRYRUN_DIR).
Use the same variable in test/cmd/init_test.go.
This allows running integration tests as non-root.
Make getKubeadmPath() fetch the KUBEADM_PATH env. variable.
Panic if it's missing. Don't handle the "--kubeadm-path"
flag. Remove the same flag from the BUILD bazel test rule.
Don't handle "--kubeadm-cmd-skip"; usage of this flag is missing
from the code base.
Remove usage of "kubeadmCmdSkip", as the flag "--kubeadm-cmd-skip"
is never passed.
If the kube-proxy/dns ConfigMap are missing, show warnings and assume
that these addons were skipped during "kubeadm init",
and that their redeployment on upgrade is not desired.
TODO: remove this once "kubeadm upgrade apply" phases are supported:
https://github.com/kubernetes/kubeadm/issues/1318