StaticPodMirroringTimeout and StaticPodMirroringRetryInterval
are used only for an API call to get Pods(). The existing
constants.KubernetesAPICallRetryInterval
and kubeadmapi.GetActiveTimeouts().KubernetesAPICall.Duration
can be used for that instead.
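A minimal sketch of the suggested replacement (a fragment, not the
exact kubeadm code; "client" and "podName" are assumed to be in scope,
and the Pods() call shown is illustrative):

    err := wait.PollUntilContextTimeout(context.Background(),
        constants.KubernetesAPICallRetryInterval,
        kubeadmapi.GetActiveTimeouts().KubernetesAPICall.Duration,
        true, func(ctx context.Context) (bool, error) {
            // Treat any error as "not ready yet" and keep retrying.
            _, err := client.CoreV1().Pods(metav1.NamespaceSystem).Get(ctx, podName, metav1.GetOptions{})
            return err == nil, nil
        })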
Follow the same process for adding the Timeouts struct
to UpgradeConfiguration as was done for the
other API Kinds.
In the Timeouts struct include one new timeout:
- UpgradeManifests
When the flag is defaulted before writing the apiserver manifest,
the input "cfg" object should not be mutated.
If the "cfg" is mutated, the upload ClusterConfiguration
to the cluster will include the defaulting, which is not
needed.
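A minimal sketch of the non-mutating approach (the defaulting and
manifest-writing helper names are illustrative):

    // Default the flag on a deep copy so the caller's cfg stays untouched
    // and the uploaded ClusterConfiguration does not include the defaulting.
    cfgCopy := cfg.DeepCopy()
    defaultAPIServerFlags(cfgCopy)          // illustrative defaulting helper
    return writeAPIServerManifest(cfgCopy)  // illustrative manifest writer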
The idempotency.go file (perhaps not so accurately named) contains the
API calls that kubeadm makes against an API server using client-go.
Some users seem to have unstable setups where for unknown reasons
the API server can be unavailable or refuse to respond as expected.
Use PollUntilContextTimeout in all exported functions to ensure
that all such API calls are retryable.
NOTE: The context passed to PollUntilContextTimeout is not propagated
to the polled function. Instead, the polled function creates its own
context with 'ctx := context.Background()'. This is done to avoid
breaking expectations on the side of callers that expect
a certain type of error and not "context timeout" errors.
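As an illustration, a retry-able exported function could look roughly
like this (a sketch, not the exact kubeadm code; the interval and
timeout values are placeholders):

    package idempotency

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        clientset "k8s.io/client-go/kubernetes"
    )

    // CreateOrUpdateConfigMap retries the underlying API calls until they
    // succeed or the timeout expires.
    func CreateOrUpdateConfigMap(client clientset.Interface, cm *v1.ConfigMap) error {
        var lastErr error
        err := wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 30*time.Second, true,
            func(_ context.Context) (bool, error) {
                // Deliberately ignore the poll context and use a fresh one,
                // so callers see the real API error, not a context timeout.
                ctx := context.Background()
                if _, err := client.CoreV1().ConfigMaps(cm.Namespace).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
                    if !apierrors.IsAlreadyExists(err) {
                        lastErr = err
                        return false, nil // keep retrying
                    }
                    if _, err := client.CoreV1().ConfigMaps(cm.Namespace).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                        lastErr = err
                        return false, nil // keep retrying
                    }
                }
                return true, nil
            })
        if err != nil && lastErr != nil {
            return lastErr
        }
        return err
    }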
Additional changes:
- Make all context.TODO() -> context.Background()
- Update all unit tests and make sure during testing the retry
interval and timeout are short. Test coverage of idempotency.go
is at ~97%.
- Remove the TestMutateConfigMapWithConflict test. It does not
contribute much, because conflict handling is done on the API
server side, not on the kubeadm side. Simulating this is not
needed.
WaitForAllControlPlaneComponents is a new feature gate
that can be used to tell kubeadm to wait for all control plane
components and not only kube-apiserver.
- Add the Waiter function WaitForControlPlaneComponents
that waits for all control plane components in parallel, checking
each component's regular healthz endpoint for a 200 status (see the
sketch after this list).
- Add a new experimental phase to kubeadm join called "wait-control-plane".
A similar phase exists for kubeadm init.
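A minimal sketch of the parallel wait (the ports are the standard
localhost defaults, but the exact endpoints and helper shape are
illustrative, not the actual kubeadm implementation):

    package waiter // illustrative package

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"

        "golang.org/x/sync/errgroup"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // WaitForControlPlaneComponents probes each component's healthz endpoint
    // in parallel until it returns HTTP 200 or the timeout expires.
    func WaitForControlPlaneComponents(timeout time.Duration) error {
        client := &http.Client{Transport: &http.Transport{
            // Health ports serve self-signed certs; skip verification for probing.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        endpoints := map[string]string{
            "kube-apiserver":          "https://127.0.0.1:6443/healthz",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/healthz",
        }
        var g errgroup.Group
        for name, url := range endpoints {
            name, url := name, url // capture loop variables
            g.Go(func() error {
                err := wait.PollUntilContextTimeout(context.Background(),
                    time.Second, timeout, true,
                    func(ctx context.Context) (bool, error) {
                        resp, err := client.Get(url)
                        if err != nil {
                            return false, nil // not up yet; keep polling
                        }
                        resp.Body.Close()
                        return resp.StatusCode == http.StatusOK, nil
                    })
                if err != nil {
                    return fmt.Errorf("%s did not report healthy: %w", name, err)
                }
                return nil
            })
        }
        return g.Wait()
    }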
During initialization `kubeadm init` creates kubelet.conf with a
specific context name, and during the finalize phase validates that
this kubeconfig is not corrupted by checking for the presence of a
specific authinfo.
However:
* kubelet doesn't require a specific name for this context
* in external CA mode this kubeconfig can be created outside of
`kubeadm init`
This change updates the kubeadm finalize stage to avoid the overly
strict context check.
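A minimal sketch of the relaxed validation (the function name is
illustrative, not the actual kubeadm code): rather than requiring a
hardcoded context name, only require that the current context resolves
to an existing authinfo.

    package kubeconfig // illustrative package

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // validateKubeletKubeconfig checks that the current context, whatever
    // its name, points at an authinfo that actually exists in the file.
    func validateKubeletKubeconfig(path string) error {
        config, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        ctx, ok := config.Contexts[config.CurrentContext]
        if !ok {
            return fmt.Errorf("current context %q not found in %s", config.CurrentContext, path)
        }
        if _, ok := config.AuthInfos[ctx.AuthInfo]; !ok {
            return fmt.Errorf("authinfo %q referenced by context %q not found in %s",
                ctx.AuthInfo, config.CurrentContext, path)
        }
        return nil
    }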
Currently --rootfs does not work with "upgrade node" for control
plane nodes, because the only check for control plane nodes is
performed in newNodeOptions(), which runs before the root kubeadm
command is run; thus the chroot() path coming from --rootfs is not
applied yet.
To work around that, call the "isControlPlaneNode" check when
constructing the command data at command runtime.
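A minimal sketch, assuming the control plane check is file-based; the
manifest path matches kubeadm's default layout, and the helper shape
is illustrative. Calling this only at command runtime guarantees the
chroot() from --rootfs has already been applied:

    package upgrade // illustrative package

    import "os"

    // isControlPlaneNode reports whether this node hosts the kube-apiserver
    // static Pod manifest. It must run after chroot() for --rootfs to work.
    func isControlPlaneNode() bool {
        _, err := os.Stat("/etc/kubernetes/manifests/kube-apiserver.yaml")
        return err == nil
    }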
kubeadm upgrade checks the migration path for the existing CoreDNS
deployment pre-flight. Migration paths are defined for CoreDNS
versions, which are derived from the image tag used in the existing
deployment.
The kubeadm ClusterConfiguration.DNS.ImageMeta supports suffixing the
tag with a digest, but at upgrade time the version is not derived
correctly from an image with a digest suffix, because DeployedDNSAddon
does not deal with digests correctly. This commit makes DeployedDNSAddon
digest-aware.
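A minimal sketch of digest-aware tag parsing (the helper name is
illustrative, not the actual DeployedDNSAddon code):

    package dns // illustrative package

    import "strings"

    // dnsVersionFromImage extracts the version from an image reference,
    // e.g. "registry.k8s.io/coredns/coredns:v1.11.1@sha256:..." -> "v1.11.1".
    func dnsVersionFromImage(image string) string {
        // Strip a trailing digest, if present.
        if i := strings.Index(image, "@"); i != -1 {
            image = image[:i]
        }
        // The tag is whatever follows the last ":" after the final "/".
        if i := strings.LastIndex(image, ":"); i > strings.LastIndex(image, "/") {
            return image[i+1:]
        }
        return ""
    }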
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Previous v1beta4 work added support for
ClusterConfiguration.EncryptionAlgorithm; however, the possible
values were limited to just "RSA" (2048 key size) and "ECDSA" (P256).
Allow more arbitrary algorithm types that can also include the key
size or curve type encoded in the name:
"RSA-2048" (default), "RSA-3072", "RSA-4096" or "ECDSA-P256".
Update the deprecation notice of the PublicKeysECDSA FeatureGate,
as ideally it should be removed only after v1beta3 is removed.
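A minimal sketch of mapping the extended names to key parameters (the
helper name and return shape are illustrative, not the real kubeadm
code):

    package certs // illustrative package

    import "fmt"

    // keyParams maps an EncryptionAlgorithm name to an RSA key size or an
    // ECDSA curve name.
    func keyParams(alg string) (rsaBits int, ecdsaCurve string, err error) {
        switch alg {
        case "", "RSA-2048": // empty selects the default
            return 2048, "", nil
        case "RSA-3072":
            return 3072, "", nil
        case "RSA-4096":
            return 4096, "", nil
        case "ECDSA-P256":
            return 0, "P-256", nil
        default:
            return 0, "", fmt.Errorf("unsupported encryption algorithm %q", alg)
        }
    }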
The previous fix changed the behavior of
EnsureAdminClusterRoleBindingImpl under the assumption that the unit
test was correct and the real-world behavior was wrong. In fact,
the real-world behavior was already correct, and the unit test was
expecting the wrong result because of the difference in behavior
between the real and fake clients.