Apply a small fix to ensure that the kubeconfig files
that kubeadm manages show a CA when printed in the table
of the "check expiration" command. "CAName" is the field used for that.
In practice kubeconfig files can contain multiple credentials
from different CAs, but this is not supported by kubeadm and there
is a single cluster CA that signs the single client cert/key
in kubeadm managed kubeconfigs.
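
Roughly, a row of the expiration table for a kubeconfig file can carry the
name of the single cluster CA alongside the expiry of the embedded client
certificate. The struct and helper below are an illustrative sketch, not the
actual kubeadm internals:

    package kubeconfsketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "time"

        "k8s.io/client-go/tools/clientcmd"
    )

    // kubeconfigExpiration is a hypothetical row of the "check expiration" table.
    type kubeconfigExpiration struct {
        Name     string    // e.g. "admin.conf"
        CAName   string    // kubeadm kubeconfigs are signed by the single cluster CA
        NotAfter time.Time // expiry of the embedded client certificate
    }

    func readKubeconfigExpiration(path, clusterCAName string) (*kubeconfigExpiration, error) {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return nil, err
        }
        // kubeadm supports exactly one client cert/key per kubeconfig, so the
        // first credential found determines the row.
        for _, authInfo := range cfg.AuthInfos {
            block, _ := pem.Decode(authInfo.ClientCertificateData)
            if block == nil {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                return nil, err
            }
            return &kubeconfigExpiration{Name: path, CAName: clusterCAName, NotAfter: cert.NotAfter}, nil
        }
        return nil, fmt.Errorf("no client certificate found in %s", path)
    }
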
In case stacked etcd is used, the code that does expiration checks
does not validate whether the etcd CA is "external" (missing key)
or whether the certificates signed by the etcd CA are valid.
Add a new function UsingExternalEtcdCA() similar to existing functions
for the cluster CA and front-proxy CA, that performs the checks for
missing etcd CA key and certificate validity.
This function only runs for stacked etcd, since if etcd is external
kubeadm does not track any certs signed by that etcd CA.
This fixes a bug where the etcd CA would be reported as local even
if etcd/ca.key is missing during "certs check-expiration".
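
A minimal sketch of what such a check could look like (function, helper and
file-layout assumptions below are illustrative; the real kubeadm
implementation differs):

    package etcdcasketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "path/filepath"
    )

    // usingExternalEtcdCA reports whether the etcd CA should be treated as
    // "external": the CA certificate is present and valid for verifying the
    // etcd serving certificate, but the CA key is missing from disk.
    func usingExternalEtcdCA(certsDir string) (bool, error) {
        caCert, err := loadCert(filepath.Join(certsDir, "etcd", "ca.crt"))
        if err != nil {
            return false, err
        }
        serverCert, err := loadCert(filepath.Join(certsDir, "etcd", "server.crt"))
        if err != nil {
            return false, err
        }

        // Validate that a certificate signed by the etcd CA actually chains to it.
        pool := x509.NewCertPool()
        pool.AddCert(caCert)
        opts := x509.VerifyOptions{Roots: pool, KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny}}
        if _, err := serverCert.Verify(opts); err != nil {
            return false, fmt.Errorf("etcd server certificate is not signed by etcd/ca.crt: %v", err)
        }

        // A missing etcd/ca.key means the etcd CA is external.
        if _, err := os.Stat(filepath.Join(certsDir, "etcd", "ca.key")); os.IsNotExist(err) {
            return true, nil
        }
        return false, nil
    }

    func loadCert(path string) (*x509.Certificate, error) {
        pemBytes, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return nil, fmt.Errorf("no PEM data found in %s", path)
        }
        return x509.ParseCertificate(block.Bytes)
    }
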
When the "kubeadm certs check-expiration" command is used and
the ca.key is not present, reads of regular on-disk certificates
pass fine, but reads of kubeconfig files fail. The reason for the
failure is that reading kubeconfig files currently
requires reading both the CA cert and key from disk. Reading the CA
is done to ensure that the CA cert in the kubeconfig is not out of date
during renewal.
Instead of requiring both a CA key and cert to be read, only read
the CA cert from disk, as only the cert is needed for kubeconfig files.
This fixes printing the cert expiration table even if the ca.key
is missing on a host (i.e. the CA is considered external).
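
The gist of the change as a standalone sketch: comparing the CA embedded in a
kubeconfig against the on-disk CA needs only ca.crt, never ca.key. The helper
below is illustrative and uses only client-go and the standard library, not
kubeadm's own pkiutil helpers:

    package kubeconfcasketch

    import (
        "bytes"
        "encoding/pem"
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/tools/clientcmd"
    )

    func kubeconfigCAIsCurrent(certsDir, kubeconfigPath, clusterName string) (bool, error) {
        // Read only ca.crt; ca.key may legitimately be absent (external CA).
        caPEM, err := os.ReadFile(filepath.Join(certsDir, "ca.crt"))
        if err != nil {
            return false, err
        }
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return false, err
        }
        cluster, ok := cfg.Clusters[clusterName]
        if !ok {
            return false, fmt.Errorf("cluster %q not found in %s", clusterName, kubeconfigPath)
        }
        // Compare the CA embedded in the kubeconfig with the on-disk CA cert.
        return bytes.Equal(normalizePEM(caPEM), normalizePEM(cluster.CertificateAuthorityData)), nil
    }

    // normalizePEM re-encodes PEM data so whitespace differences do not matter.
    func normalizePEM(data []byte) []byte {
        var out bytes.Buffer
        for {
            block, rest := pem.Decode(data)
            if block == nil {
                break
            }
            _ = pem.Encode(&out, &pem.Block{Type: "CERTIFICATE", Bytes: block.Bytes})
            data = rest
        }
        return out.Bytes()
    }
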
kubeam:node-proxier -> kubeadm:node-proxier
The typo causes e2e test failures:
"[area-kubeadm] proxy addon kube-proxy ServiceAccount should
be bound to the system:node-proxier cluster role"
in:
- kubeadm-kinder-latest
- kubeadm-kinder-latest-on-...
- other tests
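
For context, a sketch of the RBAC object involved, constructed here with the
rbac/v1 API (the actual kubeadm addon code differs): the binding named
"kubeadm:node-proxier" binds the kube-proxy ServiceAccount to the
system:node-proxier ClusterRole, which is what the failing e2e test asserts.

    package proxyrbacsketch

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nodeProxierClusterRoleBinding binds the kube-proxy ServiceAccount to the
    // system:node-proxier cluster role; the binding name is "kubeadm:node-proxier"
    // (previously misspelled "kubeam:node-proxier").
    func nodeProxierClusterRoleBinding() *rbacv1.ClusterRoleBinding {
        return &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "kubeadm:node-proxier"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: rbacv1.GroupName,
                Kind:     "ClusterRole",
                Name:     "system:node-proxier",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      rbacv1.ServiceAccountKind,
                Name:      "kube-proxy",
                Namespace: metav1.NamespaceSystem,
            }},
        }
    }
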
Before this commit, when setting bindAddress to 1.2.3.4, the warning was:
The recommended value for "bindAddress" in "KubeProxyConfiguration" is: 1.2.3.4; the provided value is: 0.0.0.0
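
The fix boils down to passing the values in the right order when emitting the
warning: the recommended (default) value is printed as the recommendation and
the user-supplied value as "provided", not the reverse. A simplified sketch
(the function name is made up; the real kubeadm helper differs):

    package proxywarnsketch

    import "k8s.io/klog/v2"

    // warnNonDefaultBindAddress warns when the user-provided bindAddress differs
    // from the kubeadm-recommended default, printing the values in the correct order.
    func warnNonDefaultBindAddress(provided, recommended string) {
        if provided != recommended {
            klog.Warningf("The recommended value for %q in %q is: %v; the provided value is: %v",
                "bindAddress", "KubeProxyConfiguration", recommended, provided)
        }
    }
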
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
Add the UnversionedKubeletConfigMap feature gate that can
be used to control legacy vs new behavior for naming the
default configmap used to store the KubeletConfiguration.
Update related unit tests.
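
A rough sketch of the naming switch the gate controls (constant and helper
names below are illustrative): the legacy behavior suffixes the ConfigMap
name with the version, the new behavior drops the suffix.

    package kubeletcmsketch

    import "fmt"

    const (
        unversionedConfigMapName  = "kubelet-config"
        legacyConfigMapNameFormat = "kubelet-config-%d.%d" // e.g. kubelet-config-1.23
    )

    // kubeletConfigMapName picks the ConfigMap name used to store the
    // KubeletConfiguration, depending on the UnversionedKubeletConfigMap gate.
    func kubeletConfigMapName(major, minor uint, unversionedEnabled bool) string {
        if unversionedEnabled {
            return unversionedConfigMapName
        }
        return fmt.Sprintf(legacyConfigMapNameFormat, major, minor)
    }
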
Instead of returning on each individual error, it's better to aggregate all
the errors so that we can fix them all at once.
Take the chance to fix some comments, since kubeadm is not checking that
the certs are equal across control plane nodes.
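
The pattern, sketched with apimachinery's error aggregation (the
per-certificate check here is a placeholder):

    package certchecksketch

    import (
        "fmt"

        utilerrors "k8s.io/apimachinery/pkg/util/errors"
    )

    // checkCertificates collects every failed check instead of returning on the
    // first error, so all problems can be reported and fixed at once.
    func checkCertificates(certNames []string) error {
        var errs []error
        for _, name := range certNames {
            if err := checkOneCertificate(name); err != nil {
                errs = append(errs, fmt.Errorf("%s: %v", name, err))
                continue // keep checking the remaining certs
            }
        }
        return utilerrors.NewAggregate(errs)
    }

    // checkOneCertificate is a stand-in for the real per-certificate validation.
    func checkOneCertificate(name string) error {
        return nil
    }
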
Signed-off-by: Dave Chen <dave.chen@arm.com>
The addition of output/v1alpha2 made the converter-gen require
an explicit converter for:
kubeadm/v1beta2.BootstrapToken -> bootstraptoken/v1.BootstrapToken.
Add this converter under kubeadm/v1beta2.
Use the converter in output/v1alpha1.
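
For illustration, this is the shape of a manual converter that conversion-gen
picks up instead of generating one itself. The types below are simplified
stand-ins, not the real kubeadm/v1beta2 and bootstraptoken/v1 structs:

    package conversionsketch

    import "k8s.io/apimachinery/pkg/conversion"

    // oldBootstrapToken stands in for kubeadm/v1beta2.BootstrapToken.
    type oldBootstrapToken struct {
        Token       string
        Description string
        Usages      []string
        Groups      []string
    }

    // newBootstrapToken stands in for bootstraptoken/v1.BootstrapToken.
    type newBootstrapToken struct {
        Token       string
        Description string
        Usages      []string
        Groups      []string
    }

    // Convert_oldBootstrapToken_To_newBootstrapToken follows the naming and
    // signature convention conversion-gen looks for in manual converters.
    func Convert_oldBootstrapToken_To_newBootstrapToken(in *oldBootstrapToken, out *newBootstrapToken, _ conversion.Scope) error {
        out.Token = in.Token
        out.Description = in.Description
        out.Usages = append([]string(nil), in.Usages...)
        out.Groups = append([]string(nil), in.Groups...)
        return nil
    }
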
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.
* Run update scripts
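
Simplified stand-in types (not the exact core/v1 definitions) that show the
shape of the split: only the probe side gains the gRPC action, and lifecycle
hooks are no longer forced to carry fields that make no sense for them.

    package handlersketch

    type execAction struct{ Command []string }

    type httpGetAction struct {
        Path string
        Host string
        Port int
    }

    type tcpSocketAction struct {
        Host string
        Port int
    }

    type grpcAction struct {
        Port    int32
        Service *string
    }

    // lifecycleHandler: actions that make sense as PostStart/PreStop "writes".
    type lifecycleHandler struct {
        Exec      *execAction
        HTTPGet   *httpGetAction
        TCPSocket *tcpSocketAction
    }

    // probeHandler: actions that make sense as health-check "reads"; only this
    // struct grows the gRPC health-check field.
    type probeHandler struct {
        Exec      *execAction
        HTTPGet   *httpGetAction
        TCPSocket *tcpSocketAction
        GRPC      *grpcAction
    }
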
The API is a copy of output/v1alpha1 with a minor difference
where output/v1alpha2.BootstrapToken embeds
bootstraptoken/v1.BootstrapToken instead of
kubeadm/v1beta2.BootstrapToken.
Embedding the latter is an undesired binding between the "kubeadm"
and "output" groups, preventing the eventual deprecation and removal
of the kubeadm.v1beta2 API.
This new output API version, unlike v1alpha1, does not include
defaulting, which is not needed.
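
Schematically (simplified stand-ins, not the generated API types):

    package outputsketch

    // bootstrapTokenV1 stands in for bootstraptoken/v1.BootstrapToken, the
    // version-independent token type.
    type bootstrapTokenV1 struct {
        Token       string
        Description string
        Usages      []string
        Groups      []string
    }

    // outputBootstrapToken stands in for output/v1alpha2.BootstrapToken. Unlike
    // output/v1alpha1, which embedded kubeadm/v1beta2.BootstrapToken, it embeds
    // the shared bootstraptoken/v1 type, so the "output" group no longer depends
    // on kubeadm/v1beta2.
    type outputBootstrapToken struct {
        bootstrapTokenV1
    }
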