Automatic merge from submit-queue
kubeadm: add `kubeadm phase addons` command
**What this PR does / why we need it**:
Adds the `addons` phase command to `kubeadm`
fixes: https://github.com/kubernetes/kubeadm/issues/418
/cc @luxas
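For context, kubeadm wires its phase subcommands up with cobra; the following is a minimal illustrative sketch of what such an `addons` subcommand can look like (names and wiring are hypothetical, not the actual kubeadm source):

```go
// Illustrative sketch of a cobra-based phase subcommand; names and wiring
// are hypothetical, not the actual kubeadm source.
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func newCmdAddons() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "addons",
		Short: "Install essential addons to a Kubernetes cluster",
	}
	cmd.AddCommand(&cobra.Command{
		Use:   "kube-dns",
		Short: "Install the kube-dns addon",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("[addons] Applied essential addon: kube-dns")
		},
	})
	return cmd
}

func main() {
	_ = newCmdAddons().Execute()
}
```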
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
**What this PR does / why we need it**:
To improve the UX when the kubelet is unhealthy or stopped, kubeadm now polls the kubelet's API after 40 and 60 seconds, and then performs an exponential backoff for a total of 155 seconds.
If the kubelet endpoint is not returning `ok` by then, kubeadm gives up and exits.
This will mitigate at least 60% of our "[apiclient] Created API client, waiting for control plane to come up" issues in the kubeadm issue tracker 🎉, as kubeadm now informs the user what's wrong instead of deadlocking like before.
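The polling approach can be sketched roughly like this using only the standard library; the endpoint matches the demo output below, but the helper itself is illustrative, not the exact kubeadm implementation:

```go
// Illustrative sketch of the polling described above: hit the kubelet's
// healthz endpoint with an exponential backoff and give up after a total
// timeout. Not the exact kubeadm implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForKubeletHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	interval := 5 * time.Second
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the kubelet answered "ok"
			}
		}
		fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy.")
		time.Sleep(interval)
		interval *= 2 // exponential backoff between attempts
	}
	return fmt.Errorf("timed out waiting for the kubelet at %s", url)
}

func main() {
	if err := waitForKubeletHealthz("http://localhost:10255/healthz", 155*time.Second); err != nil {
		fmt.Println(err)
	}
}
```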
Demo:
```
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.502199 seconds
[markmaster] Will mark node thegopher as master by adding a label and a taint
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5776d5.91e7ed14f9e274df
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 5776d5.91e7ed14f9e274df 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:6f301ce8c3f5f6558090b2c3599d26d6fc94ffa3c3565ffac952f4f0c7a9b2a9
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
lucas@THEGOPHER:~/luxas/kubernetes$ sudo systemctl stop kubelet
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
- gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
- gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```
In this demo, I'm first starting kubeadm normally and everything works as usual.
In the second case, I'm explicitly stopping the kubelet so it doesn't run, and skipping preflight checks, so that kubeadm doesn't even try to exec `systemctl start kubelet` like it usually does.
That obviously results in a non-working system, but now kubeadm tells the user what the problem is instead of waiting forever.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes: https://github.com/kubernetes/kubeadm/issues/377
**Special notes for your reviewer**:
**Release note**:
```release-note
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @pipejakob
cc @justinsb @kris-nova @lukemarsden as well as you wanted this feature :)
Automatic merge from submit-queue (batch tested with PRs 50832, 51119, 51636, 48921, 51712)
kubeadm: Add support for using an external CA whose key is never stored in the cluster
We allow a kubeadm user to use an external CA by checking whether ca.key is missing and, if it is, skipping the cert checks and kubeconfig generation. We also pass an empty arg --cluster-signing-key-file="" to the kube-controller-manager so that the CSR signer doesn't start.
**What this PR does / why we need it**:
This PR allows the kubeadm certs phase and kubeconfig phase to be skipped if the ca.key is missing but all other certs are present.
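The detection logic can be sketched like this (paths relative to the standard `/etc/kubernetes/pki` directory; the helper name is illustrative):

```go
// Illustrative sketch of the detection: ca.crt must exist; a missing ca.key
// signals an external CA, so cert and kubeconfig generation can be skipped.
package pkiutil

import (
	"fmt"
	"os"
	"path/filepath"
)

// UsingExternalCA reports whether the CA private key is held outside the cluster.
func UsingExternalCA(pkiDir string) (bool, error) {
	if _, err := os.Stat(filepath.Join(pkiDir, "ca.crt")); err != nil {
		return false, fmt.Errorf("ca.crt is required: %v", err)
	}
	if _, err := os.Stat(filepath.Join(pkiDir, "ca.key")); os.IsNotExist(err) {
		return true, nil // key never stored on disk: external CA
	}
	return false, nil
}
```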
**Which issue this PR fixes** :
Fixes kubernetes/kubeadm#280
**Special notes for your reviewer**:
@luxas @mikedanese @fabriziopandini
**Release note**:
```release-note
kubeadm: Add support for using an external CA whose key is never stored in the cluster
```
Automatic merge from submit-queue
kubeadm: Add node-cidr-mask-size to pass to kube-controller-manager for IPv6
Due to the increased size of subnets with IPv6, the node-cidr-mask-size needs to be passed to kube-controller-manager. If IPv4, it will be set to 24 as it was previously; if IPv6, it will be set to 64.
**What this PR does / why we need it**:
If the user specifies --pod-network-cidr with kubeadm init, the kube-controller-manager manifest gets the "--allocate-node-cidrs" and "--cluster-cidr" flags set. The --node-cidr-mask-size flag is not set and currently defaults to 24, which is fine for IPv4 but not appropriate for IPv6. This change passes a value for node-cidr-mask-size to the controller manager: it detects whether the pod CIDR is IPv4 or IPv6, and sets --node-cidr-mask-size to 24 for IPv4 as before, and to 64 for IPv6.
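A minimal sketch of the selection logic, mirroring the behavior described (not the exact kubeadm source):

```go
// Illustrative sketch: choose the node CIDR mask size from the pod network
// CIDR's address family.
package main

import (
	"fmt"
	"net"
)

func calcNodeCidrSize(podSubnet string) string {
	maskSize := "24" // IPv4 default, as before
	if ip, _, err := net.ParseCIDR(podSubnet); err == nil && ip.To4() == nil {
		maskSize = "64" // IPv6 gets a /64 per node
	}
	return maskSize
}

func main() {
	fmt.Println(calcNodeCidrSize("10.244.0.0/16")) // 24
	fmt.Println(calcNodeCidrSize("2001:db8::/48")) // 64
}
```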
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50469
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49861, 50933, 51380, 50688, 51305)
Add configurable groups to bootstrap tokens.
**What this PR does / why we need it**:
This change adds support for authenticating bootstrap tokens into a configurable set of extra groups in addition to `system:bootstrappers`. Previously, bootstrap tokens could only ever authenticate to the `system:bootstrappers` group.
Groups are specified as a comma-separated list in the `auth-extra-groups` key of the `bootstrap.kubernetes.io/token` Secret, and must begin with the prefix `system:bootstrappers:` (and match a validation regex that checks against our normal convention). Whether or not any extra groups are configured, `system:bootstrappers` will still be added.
This also adds a `--groups` flag for `kubeadm token create`, which sets the `auth-extra-groups` key on the resulting Secret. The default is to not set the key.
`kubeadm token list` is also updated to include an `EXTRA GROUPS` output column.
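A rough sketch of the validation, assuming a pattern anchored on the `system:bootstrappers:` prefix (the exact regex in the real code may differ):

```go
// Illustrative sketch of validating auth-extra-groups; the exact pattern in
// the real code may differ.
package bootstraptoken

import (
	"fmt"
	"regexp"
	"strings"
)

var bootstrapGroupRegexp = regexp.MustCompile(`\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\z`)

// ValidateExtraGroups splits a comma-separated group list and checks each entry.
func ValidateExtraGroups(commaSeparated string) ([]string, error) {
	groups := strings.Split(commaSeparated, ",")
	for _, group := range groups {
		if !bootstrapGroupRegexp.MatchString(group) {
			return nil, fmt.Errorf("bootstrap group %q is invalid", group)
		}
	}
	return groups, nil
}
```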
**Which issue this PR fixes**: fixes #49306
**Special notes for your reviewer**:
The use case for this is in https://github.com/kubernetes/kubernetes/issues/49306. Comments on the feature itself are probably better over there. It will be part of how HA/self-hosting kubeadm bootstraps new master nodes (post 1.8).
**Release note**:
```release-note
Add support for configurable groups for bootstrap token authentication.
```
cc @luxas @kubernetes/sig-cluster-lifecycle-api-reviews @kubernetes/sig-auth-api-reviews
/kind feature
Automatic merge from submit-queue
kubeadm: Rename FeatureFlags to FeatureGates
**What this PR does / why we need it**:
Automatic rename from `FeatureFlags` to `FeatureGates`, since `FeatureGates` is the established name for this concept. This is for consistency in the API and generally in the code.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @fabriziopandini @jamiehannaford
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)
Add host mountpath to controller-manager for flexvolume dir
The controller manager needs access to the Flexvolume plugin directory when using the attach/detach controller interface.
This PR adds the host mount path for the default directory of flexvolume plugins.
Fixes https://github.com/kubernetes/kubeadm/issues/410
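A sketch of the added mount, using the common default flexvolume directory (the real manifest-building code differs in detail):

```go
// Illustrative sketch of the added mount: a hostPath volume for the common
// default flexvolume plugin directory. The real manifest-building code differs.
package manifests

import v1 "k8s.io/api/core/v1"

const flexvolumeDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

// flexvolumeDirVolume returns the volume and mount to add to the
// kube-controller-manager static pod.
func flexvolumeDirVolume() (v1.Volume, v1.VolumeMount) {
	hostPathType := v1.HostPathDirectoryOrCreate
	vol := v1.Volume{
		Name: "flexvolume-dir",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: flexvolumeDir, Type: &hostPathType},
		},
	}
	mount := v1.VolumeMount{Name: "flexvolume-dir", MountPath: flexvolumeDir}
	return vol, mount
}
```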
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
update related manifest files to use hostpath type
**What this PR does / why we need it**:
Per [discussion in #46597](https://github.com/kubernetes/kubernetes/pull/46597#pullrequestreview-53568947)
Depends on #46597
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes: https://github.com/kubernetes/kubeadm/issues/298
**Special notes for your reviewer**:
/cc @euank @thockin @tallclair @Random-Liu
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 51039, 50512, 50546, 50965, 50467)
kubeadm: Get kube-dns based on the kubernetes version
**What this PR does / why we need it**:
Makes the kube-dns version used dependent on the Kubernetes version. This is required for upgrades, as we have to be able to handle a different kube-dns version per release branch.
Currently a no-op though, as both v1.7 and v1.8 seem to use 1.14.4
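A sketch of what such a version-keyed lookup can look like; the table mirrors the text above (both branches currently return 1.14.4) and is illustrative, not the real function:

```go
// Illustrative sketch of a version-keyed lookup; currently both branches
// return the same tag, matching the "no-op" note above.
package dns

import "k8s.io/apimachinery/pkg/util/version"

// GetKubeDNSVersion picks the kube-dns image tag for a kubernetes version.
func GetKubeDNSVersion(kubeVersion *version.Version) string {
	switch kubeVersion.Minor() {
	case 7:
		return "1.14.4"
	default: // v1.8 and later, until this table is extended
		return "1.14.4"
	}
}
```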
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Dependency for https://github.com/kubernetes/kubernetes/pull/48899 (kubeadm upgrades)
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
@kubernetes/dns-maintainers FYI; next time you bump DNS version, please update this func instead of the constant there...
Automatic merge from submit-queue (batch tested with PRs 50967, 50505, 50706, 51033, 51028)
kubeadm: Tell the user when a static pod is created
**What this PR does / why we need it**:
Prints a line to notify the user of the static pod creation in order to be consistent with the other phases (one line per phase and optionally per component).
Now the phase commands `controlplane all` and `etcd local` also actually output something.
Also renamed `[token]` to `[bootstraptoken]` to match the output below and `s/mode/modes/`
`kubeadm init` output now:
```console
$ ./kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[apiclient] All control plane components are healthy after 40.002026 seconds
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: cfe65e.d196614967c3ffe3
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token cfe65e.d196614967c3ffe3 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:eb3461b9b707eafc214577f36ae8c351bbc4d595ab928fc84caf1325b69cb192
$ ./kubeadm alpha phase controlplane all
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
$ ./kubeadm alpha phase etcd local
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @fabriziopandini
Due to the increased size of subnets with IPv6, the node-cidr-mask-size needs to be passed to kube-controller-manager. If the user passes an IPv6 CIDR, the node-cidr-mask-size will be set to 64; if IPv4, it will be set to 24 as it was previously.
Automatic merge from submit-queue (batch tested with PRs 46458, 50934, 50766, 50970, 47698)
kubeadm: Make the self-hosting with certificates in Secrets mode work again
**What this PR does / why we need it**:
This PR:
- makes the self-hosting with certificates in Secrets mode work
- makes the wait functions timeoutable (see the sketch after this list)
- fixes a race condition where the kubelet may be slow to remove the Static Pod
- cleans up some of the self-hosting logic
- makes self-hosting-with-secrets respect the feature flag
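As a rough illustration of the timeoutable waits, here is a minimal sketch using the apimachinery `wait` package; the condition (a pod disappearing) and the helper name are hypothetical, not the exact kubeadm code:

```go
// Illustrative sketch of a timeoutable wait using the apimachinery wait
// package; the condition (a pod disappearing) is hypothetical.
package selfhosting

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// WaitForPodGone polls exists() until it reports false or the timeout elapses.
func WaitForPodGone(exists func() bool, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		return !exists(), nil
	})
}
```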
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/405
**Special notes for your reviewer**:
This is work in progress. I'll add unit tests, rebase upon https://github.com/kubernetes/kubernetes/pull/50762 and maybe split out some of the functionality here into a separate PR
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 50693, 50831, 47506, 49119, 50871)
kubeadm: Implement support for using images from CI builds
**What this PR does / why we need it**: Implements support for CI images in kubeadm
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#337
**Special notes for your reviewer**:
**Release note**:
```release-note
- kubeadm now supports "ci/latest-1.8" or "ci-cross/latest-1.8" and similar labels.
```
Automatic merge from submit-queue (batch tested with PRs 47896, 50678, 50620, 50631, 51005)
kubeadm: Adds dry-run support for kubeadm using the `--dry-run` option
**What this PR does / why we need it**:
Adds dry-run support to kubeadm by creating a fake clientset that can either return totally fake values (as in the init case), or delegate GETs/LISTs to a real API server while discarding all mutating requests (POST/PUT/PATCH/DELETE).
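A stripped-down sketch of the write-discarding half of that idea, using the client-go fake clientset and reactors (delegating reads to a real API server is omitted here):

```go
// Illustrative sketch of the write-discarding side: a fake clientset whose
// reactors report success for mutating verbs without changing anything.
// Delegating GET/LIST to a real API server is omitted here.
package dryrun

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
	core "k8s.io/client-go/testing"
)

// NewDryRunClient returns a clientset that swallows all mutating calls.
func NewDryRunClient() kubernetes.Interface {
	client := fake.NewSimpleClientset()
	for _, verb := range []string{"create", "update", "patch", "delete"} {
		client.PrependReactor(verb, "*", func(action core.Action) (bool, runtime.Object, error) {
			return true, nil, nil // pretend success, change nothing
		})
	}
	return client
}
```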
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/389
**Special notes for your reviewer**:
This PR depends on https://github.com/kubernetes/kubernetes/pull/50626, first three commits are from there
This PR is a dependency for https://github.com/kubernetes/kubernetes/pull/48899 (kubeadm upgrades)
I have some small things to fix up and unit tests still to write, but PTAL if you think this is going in the right direction.
**Release note**:
```release-note
kubeadm: Adds dry-run support for kubeadm using the `--dry-run` option
```
cc @kubernetes/sig-cluster-lifecycle-pr-reviews @kubernetes/sig-api-machinery-pr-reviews
Previously, kubeadm would use <ip>:<port> to construct a master
endpoint. This works fine for IPv4 addresses, but not for IPv6.
IPv6 requires the ip to be encased in brackets when being joined
to a port with a colon.
This patch updates kubeadm to support wrapping a v6 address with
[] to form the master endpoint url. Since this functionality is
needed in multiple areas, a dedicated util function was created.
Fixes: https://github.com/kubernetes/kubernetes/issues/48227
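Worth noting: Go's standard library already implements the bracketing rule the commit describes, so a sketch of such a util function can simply build on `net.JoinHostPort`:

```go
// Go's standard library already implements the bracketing rule:
// net.JoinHostPort wraps IPv6 addresses in [] before appending the port.
package main

import (
	"fmt"
	"net"
	"strconv"
)

func masterEndpoint(ip string, port int) string {
	return "https://" + net.JoinHostPort(ip, strconv.Itoa(port))
}

func main() {
	fmt.Println(masterEndpoint("192.168.1.115", 6443)) // https://192.168.1.115:6443
	fmt.Println(masterEndpoint("2001:db8::1", 6443))   // https://[2001:db8::1]:6443
}
```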
Automatic merge from submit-queue (batch tested with PRs 41901, 50762, 50756)
Feature-gate self-hosted secrets
**What this PR does / why we need it**:
Feature gates now select whether secrets are used for TLS cert storage in self-hosted clusters.
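A minimal sketch of feature-gate parsing consistent with this; the gate names in the usage comment are hypothetical:

```go
// Illustrative sketch of feature-gate parsing; the gate names in the usage
// comment are hypothetical.
package features

import (
	"fmt"
	"strconv"
	"strings"
)

// ParseFeatureGates turns e.g. "SelfHosting=true,StoreCertsInSecrets=false"
// into a map[string]bool.
func ParseFeatureGates(value string) (map[string]bool, error) {
	gates := map[string]bool{}
	for _, pair := range strings.Split(value, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("invalid feature gate %q", pair)
		}
		enabled, err := strconv.ParseBool(kv[1])
		if err != nil {
			return nil, fmt.Errorf("invalid value for %s: %v", kv[0], err)
		}
		gates[strings.TrimSpace(kv[0])] = enabled
	}
	return gates, nil
}
```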
**Release note**:
```release-note
TLS cert storage for self-hosted clusters is now configurable. You can store them as secrets (alpha) or as usual host mounts.
```
/cc @luxas
Automatic merge from submit-queue (batch tested with PRs 49115, 47480)
Adds IPv6 test cases for kubeadm certs.
**What this PR does / why we need it**:
Adds IPv6 test cases in support of kubeadm certificate and validation functionality, to ensure the test cases cover IPv6-related networking scenarios.
**Which issue this PR fixes**
This PR is in support of Issue #1443
**Special notes for your reviewer**:
Additional PRs will follow to ensure kubeadm supports IPv6.
**Release note**:
```release-note
NONE
```