For historical reasons InitConfiguration is used almost everywhere in kubeadm
as a carrier of various configuration components such as ClusterConfiguration,
local API server endpoint, node registration settings, etc.
Since v1alpha2, InitConfiguration is meant to be used solely as a way to supply
the kubeadm init configuration from a config file. Its usage outside of this
context is technical debt: it's clunky and requires hacks to fetch a working
InitConfiguration from the cluster (as it's not stored in the config map in
its entirety).
This change is a small step towards removing all unnecessary usages of
InitConfiguration. It replaces InitConfiguration in some places with one or
more of the following (a before/after sketch follows the list):
- ClusterConfiguration only.
- APIEndpoint (as local API server endpoint).
- NodeRegistrationOptions only.
- Combinations of the above types, or, where only single fields from them
  are used, just those fields.
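As an illustration, here is a minimal before/after sketch of the kind of
narrowing this change performs. The types are cut-down stand-ins for the
kubeadm API types and the function names are hypothetical:

    package main

    import "fmt"

    // Cut-down stand-ins for the kubeadm API types (illustrative only).
    type ClusterConfiguration struct{ KubernetesVersion string }
    type APIEndpoint struct {
        AdvertiseAddress string
        BindPort         int32
    }
    type NodeRegistrationOptions struct{ Name string }
    type InitConfiguration struct {
        ClusterConfiguration
        LocalAPIEndpoint APIEndpoint
        NodeRegistration NodeRegistrationOptions
    }

    // Before: the whole InitConfiguration is threaded through even
    // though only two fields are actually used.
    func componentLabelsBefore(cfg *InitConfiguration) []string {
        return []string{cfg.KubernetesVersion, cfg.NodeRegistration.Name}
    }

    // After: only the narrower type (or plain field) that is needed.
    func componentLabelsAfter(cfg *ClusterConfiguration, nodeName string) []string {
        return []string{cfg.KubernetesVersion, nodeName}
    }

    func main() {
        cfg := &InitConfiguration{
            ClusterConfiguration: ClusterConfiguration{KubernetesVersion: "v1.13.0"},
            NodeRegistration:     NodeRegistrationOptions{Name: "node-0"},
        }
        fmt.Println(componentLabelsBefore(cfg))
        fmt.Println(componentLabelsAfter(&cfg.ClusterConfiguration, cfg.NodeRegistration.Name))
    }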
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
When 'kubeadm init ...' is used with an IPv6 kubeadm configuration,
kubeadm currently generates an etcd.yaml manifest that uses IP:port
combinations where the IP is an IPv6 address but is not enclosed in square
brackets, e.g.:
- --advertise-client-urls=https://fd00:20::2:2379
For IPv6 advertise addresses, this should be of the form:
- --advertise-client-urls=https://[fd00:20::2]:2379
The missing brackets around IPv6 addresses in cases like this cause failures
to bring up IPv6-only clusters with kubeadm, as described in
kubernetes/kubeadm issue #1212.
This format error is fixed by using net.JoinHostPort() to generate
URLs as shown above.
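A minimal, self-contained sketch of the approach (the flag name matches the
manifest lines above; the rest is illustrative):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // net.JoinHostPort brackets IPv6 literals automatically, so the
        // same code path produces correct URLs for IPv4 and IPv6.
        for _, ip := range []string{"10.0.0.2", "fd00:20::2"} {
            fmt.Printf("--advertise-client-urls=https://%s\n",
                net.JoinHostPort(ip, "2379"))
        }
        // Prints:
        //   --advertise-client-urls=https://10.0.0.2:2379
        //   --advertise-client-urls=https://[fd00:20::2]:2379
    }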
Fixes kubernetes/kubeadm Issue #1212
When performing an upgrade, kubeadm generates a new manifest and waits until
the kubelet restarts the corresponding pod. However, the kubelet won't
restart the pod if the manifest hasn't changed, which leaves kubeadm stuck
waiting for a restart that never happens.
Skipping the upgrade when the new component manifest is identical to the
current one solves this.
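A minimal sketch of such a skip check, assuming a byte-for-byte comparison of
the manifest files is sufficient (whether raw bytes or parsed pod specs are
compared is an implementation detail; the paths are hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // manifestsEqual reports whether two static pod manifests are
    // byte-for-byte identical.
    func manifestsEqual(currentPath, newPath string) (bool, error) {
        current, err := os.ReadFile(currentPath)
        if err != nil {
            return false, err
        }
        updated, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        return bytes.Equal(current, updated), nil
    }

    func main() {
        equal, err := manifestsEqual("/etc/kubernetes/manifests/kube-scheduler.yaml",
            "/tmp/kubeadm-upgrade/kube-scheduler.yaml")
        if err != nil {
            panic(err)
        }
        if equal {
            fmt.Println("manifest unchanged, skipping upgrade of this component")
        }
    }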
Fixes: kubernetes/kubeadm#1054
Previously, when kubeadm upgraded a static pod cluster, the old manifests
were deleted. This patch alters that behaviour: they are now stored in a
timestamped temporary directory.
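A sketch of the new behaviour, with illustrative paths and timestamp layout
(not necessarily the exact ones kubeadm uses):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func main() {
        // Move the old manifest into a timestamped backup directory
        // instead of deleting it.
        backupDir := filepath.Join("/etc/kubernetes/tmp",
            fmt.Sprintf("kubeadm-backup-manifests-%s",
                time.Now().Format("2006-01-02-15-04-05")))
        if err := os.MkdirAll(backupDir, 0700); err != nil {
            panic(err)
        }
        oldManifest := "/etc/kubernetes/manifests/kube-apiserver.yaml"
        if err := os.Rename(oldManifest,
            filepath.Join(backupDir, filepath.Base(oldManifest))); err != nil {
            panic(err)
        }
    }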
Currently, kubeadm performs an upgrade only if the etcd cluster is colocated
with the control plane node. As this is just one possible configuration,
kubeadm should also support upgrades with etcd clusters that are not local to
the node.
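A sketch of the configuration distinction involved, using cut-down stand-in
types (the real kubeadm API shape may differ):

    package main

    import "fmt"

    // Stand-ins for the etcd parts of the kubeadm configuration.
    type LocalEtcd struct{ DataDir string }
    type ExternalEtcd struct{ Endpoints []string }
    type Etcd struct {
        Local    *LocalEtcd
        External *ExternalEtcd
    }

    // etcdIsExternal reports whether etcd is managed outside the node,
    // in which case kubeadm should not try to upgrade a local etcd
    // static pod.
    func etcdIsExternal(e Etcd) bool {
        return e.External != nil
    }

    func main() {
        stacked := Etcd{Local: &LocalEtcd{DataDir: "/var/lib/etcd"}}
        external := Etcd{External: &ExternalEtcd{Endpoints: []string{"https://10.0.0.5:2379"}}}
        fmt.Println(etcdIsExternal(stacked), etcdIsExternal(external))
    }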
Signed-off-by: Craig Tracey <craigtracey@gmail.com>
Fix `rollbackEtcdData()` to return error=nil on success
`rollbackEtcdData()` used to always return an error, making the rest of the
upgrade code completely unreachable.
Ignore errors from `rollbackOldManifests()` during the rollback, since it
always returns an error.
Success of the rollback is gated with etcd L7 healthchecks.
Remove the logic implying that the etcd manifest should be rolled back when
`upgradeComponent()` fails
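A simplified sketch of the return-value bug described above and its fix
(`copyDir` stands in for the real recursive copy helper):

    package main

    import "fmt"

    // copyDir is a stand-in for the real recursive copy helper.
    func copyDir(src, dst string) error { return nil }

    // Before (buggy): the error is wrapped and returned unconditionally,
    // so the function reports failure even when the copy succeeded and
    // callers can never reach the success path.
    func rollbackEtcdDataBefore(backupDir, dataDir string) error {
        err := copyDir(backupDir, dataDir)
        return fmt.Errorf("couldn't recover etcd database: %v", err)
    }

    // After: only an actual copy failure produces an error.
    func rollbackEtcdDataAfter(backupDir, dataDir string) error {
        if err := copyDir(backupDir, dataDir); err != nil {
            return fmt.Errorf("couldn't recover etcd database: %v", err)
        }
        return nil
    }

    func main() {
        fmt.Println(rollbackEtcdDataBefore("/backup", "/var/lib/etcd")) // always non-nil
        fmt.Println(rollbackEtcdDataAfter("/backup", "/var/lib/etcd"))  // nil on success
    }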
- Calculate `beforePodHashMap` before the etcd upgrade in anticipation of KubeAPIServer downtime
- Detect whether the pre-upgrade etcd static pod cluster has TLS disabled
  (`HasTLS()==false`) and, if so, switch on the etcd TLS upgrade path (a
  sketch of such a check follows this list). If it is a TLS upgrade:
  - Skip the L7 etcd check (a waiter for this could be implemented)
  - Skip the data rollback on etcd upgrade failure due to the lack of an L7
    check (the API server is already down and unable to serve new requests)
- On API server upgrade failure, also roll back the etcd manifest to maintain
  protocol compatibility
- Add logging
- Test HasTLS()
- Instrument throughout upgrade plan and apply
- Update plan_test and apply_test to use new fake Cluster interfaces
- Add descriptions to upgrade range test
- Support KubernetesDir and EtcdDataDir in upgrade tests
- Cover etcdUpgrade in upgrade tests
- Cover upcoming TLSUpgrade in upgrade tests
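A sketch of a `HasTLS()`-style check, assuming TLS is detected from the flags
on the etcd static pod command (the real flag set and interface may differ):

    package main

    import (
        "fmt"
        "strings"
    )

    // hasTLS reports whether the etcd command line carries the client
    // and peer TLS flags.
    func hasTLS(etcdArgs []string) bool {
        tlsFlags := []string{"--cert-file", "--key-file", "--peer-cert-file", "--peer-key-file"}
        seen := map[string]bool{}
        for _, arg := range etcdArgs {
            seen[strings.SplitN(arg, "=", 2)[0]] = true
        }
        for _, flag := range tlsFlags {
            if !seen[flag] {
                return false
            }
        }
        return true
    }

    func main() {
        insecure := []string{"etcd", "--listen-client-urls=http://127.0.0.1:2379"}
        secure := append(insecure,
            "--cert-file=/etc/kubernetes/pki/etcd/server.crt",
            "--key-file=/etc/kubernetes/pki/etcd/server.key",
            "--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
            "--peer-key-file=/etc/kubernetes/pki/etcd/peer.key")
        fmt.Println(hasTLS(insecure), hasTLS(secure)) // false true
    }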