Use shared informers instead of creating local controllers/reflectors
for the proxy's endpoints and service configs. This allows downstream
integrators to pass in preexisting shared informers to save on memory
and CPU usage.
This also enables the cache mutation detector for kube-proxy for those
presubmit jobs that already turn it on.
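The shape of that wiring is roughly the following sketch, which assumes the client-go shared informer factory; the actual kube-proxy config plumbing may differ:

```go
// Sketch only: one shared informer factory feeds the proxy's service and
// endpoints configs (and any other consumer), so caches are not duplicated
// per controller. Exact kube-proxy constructor signatures may differ.
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stopCh := make(chan struct{})
	defer close(stopCh)

	// One shared factory; every consumer gets the same underlying cache.
	factory := informers.NewSharedInformerFactory(client, 15*time.Minute)
	serviceInformer := factory.Core().V1().Services()
	endpointsInformer := factory.Core().V1().Endpoints()

	// A downstream integrator would hand these informers to the proxy's
	// service/endpoints configs instead of letting it build its own reflectors.
	_ = serviceInformer
	_ = endpointsInformer

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```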
Kubelet flags are not necessarily appropriate for the KubeletConfiguration
object. For example, this PR also removes HostnameOverride and NodeIP
from KubeletConfiguration. This is a preliminary step to enabling Nodes
to share configurations, as part of the dynamic Kubelet configuration
feature (#29459). Fields that must be unique for each node inhibit
sharing, because their values, by definition, cannot be shared.
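As a rough illustration of the split (type and field names here are hypothetical, not the real kubelet packages), per-node values would live beside the flags while shareable settings stay in the configuration object:

```go
// Hypothetical illustration of the flags/config split described above; the
// struct names and fields are invented for the example.
package main

import "fmt"

// KubeletFlags holds values that are inherently unique to a single node and
// therefore cannot be part of a shared configuration.
type KubeletFlags struct {
	HostnameOverride string
	NodeIP           string
}

// KubeletConfiguration holds settings that are safe to share across nodes,
// e.g. via the dynamic Kubelet configuration feature.
type KubeletConfiguration struct {
	ImageGCHighThresholdPercent int
	MaxPods                     int
}

func main() {
	flags := KubeletFlags{HostnameOverride: "node-1", NodeIP: "10.0.0.11"}
	shared := KubeletConfiguration{ImageGCHighThresholdPercent: 85, MaxPods: 110}
	fmt.Printf("per-node: %+v\nshared: %+v\n", flags, shared)
}
```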
Automatic merge from submit-queue
kubelet: change image-gc-high-threshold below docker dm.min_free_space
docker dm.min_free_space defaults to 10%, which "specifies the min free space percent in a thin pool required for new device creation to succeed.... Whenever a new thin pool device is created (during docker pull or during container creation), the Engine checks if the minimum free space is available. If sufficient space is unavailable, then device creation fails and any relevant docker operation fails." [1]
This setting prevents storage usage from crossing the 90% limit. However, image GC is expected to kick in only beyond image-gc-high-threshold, which defaults to 90%, so GC never triggers. If image-gc-high-threshold is set to a value lower than (100 - dm.min_free_space)%, GC triggers.
xref https://bugzilla.redhat.com/show_bug.cgi?id=1408309
```release-note
changed kubelet default image-gc-high-threshold to 85% to resolve a conflict with default settings in docker that prevented image garbage collection from resolving low disk space situations when using devicemapper storage.
```
@derekwaynecarr @sdodson @rhvgoyal
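A tiny illustrative check of the arithmetic behind the new default (not code from the PR): GC can only trigger before devicemapper starts refusing writes if the threshold is below 100 minus dm.min_free_space.

```go
// Illustrative only: with dm.min_free_space at 10%, writes start failing at
// 90% usage, so an image GC high threshold of 90% never fires in time.
// A threshold of 85% satisfies threshold < 100 - minFreeSpacePercent.
package main

import "fmt"

func gcCanTrigger(highThresholdPercent, dmMinFreeSpacePercent int) bool {
	return highThresholdPercent < 100-dmMinFreeSpacePercent
}

func main() {
	fmt.Println(gcCanTrigger(90, 10)) // false: GC never fires before writes fail
	fmt.Println(gcCanTrigger(85, 10)) // true: GC fires at 85%, below the 90% ceiling
}
```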
Automatic merge from submit-queue
fix typo in kubeadm join -h
```
Flags:
      --config string                Path to kubeadm config file
      --discovery-file string        A file or url from which to load cluster information
      --discovery-token string       A token used to validate cluster information fetched from the master
      --skip-preflight-checks        skip preflight checks normally run before modifying the system
      --tls-bootstrap-token string   A token used for TLS bootstrapping
      --token string                 Use this token for both discovery-token and tls-bootstrap-token
```
Automatic merge from submit-queue
Make RBAC post-start hook conditional on RBAC authorizer being used
Makes the RBAC post-start hook (and reconciliation) conditional on the RBAC authorizer being used
Ensures we don't set up unnecessary objects.
```release-note
RBAC role and rolebinding auto-reconciliation is now performed only when the RBAC authorization mode is enabled.
```
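A minimal sketch of the conditional registration, assuming the k8s.io/apiserver post-start-hook API; the actual reconciliation body is omitted:

```go
// Sketch: register the RBAC reconciliation hook only when the RBAC
// authorization mode is in the configured list. The hook body is a
// placeholder, not the real reconciliation code.
package hooks

import (
	genericapiserver "k8s.io/apiserver/pkg/server"
)

func maybeAddRBACHook(s *genericapiserver.GenericAPIServer, authorizationModes []string) error {
	enabled := false
	for _, m := range authorizationModes {
		if m == "RBAC" {
			enabled = true
			break
		}
	}
	if !enabled {
		// RBAC authorizer not in use: skip creating the default roles/bindings.
		return nil
	}
	return s.AddPostStartHook("rbac/bootstrap-roles", func(ctx genericapiserver.PostStartHookContext) error {
		// ...reconcile default cluster roles and role bindings here...
		return nil
	})
}
```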
Automatic merge from submit-queue (batch tested with PRs 42360, 43109, 43737, 43853)
kubeadm: test-cmds for kubeadm completion
**What this PR does / why we need it**: Adding test-cmds for kubeadm completion.
Adding tests is a WIP from #34136
**Special notes for your reviewer**: /cc @luxas
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
don't wait for first kubelet to be ready and drop dummy deploy
Per https://github.com/kubernetes/kubernetes/issues/43815#issuecomment-290270198, I suggest that we drop both the node ready and the dummy deployment check altogether for 1.6 and move them to a validation phase for 1.7.
I really think we should drop these checks altogether. CreateClientAndWaitForAPI should create a client and wait for the API, not create dummy deployments and wait for nodes to register and be healthy. These are end to end validations and this is the wrong place to do this stuff. We need an explicit final validation phase for this.
```release-note
Fix a deadlock in kubeadm master initialization.
```
Fixes #43815
Automatic merge from submit-queue (batch tested with PRs 42617, 43247, 43509, 43644, 43820)
Bugfix: OpenAPI-gen was not generating extensions correctly
Fixes a bug in openapi-gen that generated invalid code if x-kubernetes extensions were defined in types.go. The location of the VendorExtensions was wrong.
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
remove legacy insecure port options from genericapiserver
The insecure port has been a source of problems and it will prevent proper aggregation into a cluster, so the genericapiserver has no need for it. In addition, there's no reason for it to be in the main kube-apiserver flow either. This pull removes it from genericapiserver and removes it from the shared kube-apiserver code. It's still wired up in the command, but it's no longer possible for someone to mess up and start using it in mainline code.
@kubernetes/sig-api-machinery-misc @ncdc
Automatic merge from submit-queue
proxy to IP instead of name, but still use host verification
I think I found a setting that lets us proxy to an IP and still do hostname verification on the certificate.
@liggitt @sttts Can you see if you agree that this knob does what I think it does? Last commit only, still needs tests.
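The knob is presumably crypto/tls's ServerName, which lets a client dial an IP while still verifying the certificate against the expected hostname; a minimal Go sketch:

```go
// Sketch: connect to an IP address while still verifying the serving
// certificate against the expected hostname via tls.Config.ServerName.
// The hostname and backend IP below are hypothetical.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	transport := &http.Transport{
		TLSClientConfig: &tls.Config{
			// Verify the certificate as if we had dialed this name, even
			// though the URL below targets an IP address directly.
			ServerName: "my-service.example.com",
		},
	}
	client := &http.Client{Transport: transport}

	resp, err := client.Get("https://10.0.0.5:6443/healthz")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```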
Automatic merge from submit-queue (batch tested with PRs 42900, 43044, 42896, 43308, 43621)
require codecfactory
The genericapiserver requires a codec to start. Help newcomers to the API by forcing them to set it when they create a new config.
Automatic merge from submit-queue (batch tested with PRs 43149, 41399, 43154, 43569, 42507)
kubeadm: only print stderr/stdout if failed test
**What this PR does / why we need it**: This PR changes when stdout/stderr are logged during a kubeadm test-cmd test. When a real failure occurs, it's useful to see only that failure rather than output that merely looks like one.
**Special notes for your reviewer**: /cc @luxas @marun
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
break kube-apiserver start into stages
This is a code shuffle which breaks the kube-apiserver start into
1. set defaults on the options
1. create the generic config from the options
1. create the master config from the generic config and the options
This makes apiserver composition easy/possible later on.
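A hypothetical outline of the staged flow (names invented for illustration, not the real kube-apiserver packages):

```go
// Hypothetical outline of the three stages; types and constructors are
// invented to show the flow, not the actual kube-apiserver code.
package startup

type Options struct{ /* flag-backed settings */ }
type GenericConfig struct{ /* generic API server config */ }
type MasterConfig struct{ Generic *GenericConfig }

// Stage 1: set defaults on the options.
func SetDefaults(o *Options) {}

// Stage 2: create the generic config from the options.
func NewGenericConfig(o *Options) *GenericConfig { return &GenericConfig{} }

// Stage 3: create the master config from the generic config and the options.
func NewMasterConfig(o *Options, g *GenericConfig) *MasterConfig {
	return &MasterConfig{Generic: g}
}

func Run(o *Options) *MasterConfig {
	SetDefaults(o)
	generic := NewGenericConfig(o)
	return NewMasterConfig(o, generic)
}
```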
Automatic merge from submit-queue (batch tested with PRs 43144, 42671, 43226, 43314, 43361)
don't start controllers against unhealthy master
Operating against an unhealthy apiserver is unpredictable. Some clients like `kubectl` need to be best effort in this regard so that you can debug broken apiservers. Controllers shouldn't run against unhealthy masters.
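A sketch of the guard, assuming a plain /healthz poll; the real controller-manager wiring differs:

```go
// Sketch: block until the apiserver's /healthz reports OK before starting
// controllers, so they never operate against an unhealthy master.
// (TLS configuration for the client is omitted for brevity.)
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthyAPIServer(healthzURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(healthzURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", healthzURL, timeout)
}

func main() {
	if err := waitForHealthyAPIServer("https://127.0.0.1:6443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	// ...start controllers only once the master is healthy...
}
```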
Automatic merge from submit-queue (batch tested with PRs 43144, 42671, 43226, 43314, 43361)
start informers as a post-start-hook
Switches the shared informer start to a post start hook to make future API server composition easier. PostStartHooks will have to be unioned for server composition and this ensures that we don't accidentally skip starting them.
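Roughly, with the generic API server's hook API (a sketch; exact hook-context fields vary between releases):

```go
// Sketch: start the shared informers from a post-start hook so that any
// composed server that unions hooks also starts them.
package composition

import (
	genericapiserver "k8s.io/apiserver/pkg/server"
	"k8s.io/client-go/informers"
)

func AddInformerStartHook(s *genericapiserver.GenericAPIServer, factory informers.SharedInformerFactory) error {
	return s.AddPostStartHook("start-shared-informers", func(ctx genericapiserver.PostStartHookContext) error {
		// ctx.StopCh closes when the server shuts down, stopping the informers too.
		factory.Start(ctx.StopCh)
		return nil
	})
}
```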
Automatic merge from submit-queue
Better messaging when GKE certificate signing fails.
**What this PR does / why we need it**:
On errors, the GKE signing API can respond with a JSON body that contains an error message explaining the failure. If we're able to extract it, use that message when reporting the error instead of the generic error returned by the webhook library. Also, always add an event to the CSR object on signing errors.
**Release note**:
```release-note
NONE
```
CC @mikedanese @jcbsmpsn
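A sketch of the message extraction, assuming a hypothetical JSON error-body shape; the real GKE response format may differ:

```go
// Sketch: prefer a human-readable message from a JSON error body, falling
// back to the generic webhook error. The body shape here is an assumption.
package signer

import (
	"encoding/json"
	"fmt"
)

type apiError struct {
	Error struct {
		Message string `json:"message"`
	} `json:"error"`
}

func signingError(body []byte, generic error) error {
	var parsed apiError
	if err := json.Unmarshal(body, &parsed); err == nil && parsed.Error.Message != "" {
		return fmt.Errorf("signing failed: %s", parsed.Error.Message)
	}
	return generic
}
```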
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)
Enable storage class support in Azure File volume
**What this PR does / why we need it**:
Support StorageClass in Azure file volume
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support StorageClass in Azure file volume
```
Automatic merge from submit-queue
kubeadm: Remove an outdated comment
Now that `AdvertiseAddress` is a `string` rather than
`AdvertiseAddresses`, a `[]string`, this comment is no longer
necessary.
@k8s-mirror-cluster-lifecycle-misc RFR
**What this PR does / why we need it**
Just a little house cleaning by removing an outdated comment.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Use realistic value for the memory example of kube-reserved and system-reserved
Currently, kubelet help shows the memory example for
kube-reserved and system-reserved as 150G. This is not a realistic
value and leads to misconfiguration or confusion. This patch changes
the example value to 500Mi.
Before(same with system-reserved):
```
--kube-reserved value A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G) pairs that describe resources reserved for kubernetes system components. Currently only cpu and memory are supported. See http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md for more detail. [default=none]
```
After(same with system-reserved):
```
--kube-reserved value A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi) pairs that describe resources reserved for kubernetes system components. Currently only cpu and memory are supported. See http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md for more detail. [default=none]
```
Automatic merge from submit-queue (batch tested with PRs 43355, 42827)
kubeadm: In-cluster DNS should be used when self-hosting
**What this PR does / why we need it**:
I noticed that the master components don't use the built-in cluster DNS, which they really should in order to discover other services inside the cluster (such as extension API servers like the service catalog).
This is a really small change that fixes a misconfiguration that had slipped through earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@jbeda @bowei @MrHohn
Automatic merge from submit-queue
kubeadm: Default to the stable v1.6.0 in offline scenarios beforehand
**What this PR does / why we need it**:
In offline scenarios, kubeadm will fall back to the latest well-known version.
This PR bumps that to v1.6. We can merge now, and in the small gap between the merge of this PR and the actual v1.6 release, kubeadm devs will have to explicitly set the k8s version.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@jbeda
Automatic merge from submit-queue (batch tested with PRs 43018, 42713)
kubeadm: Don't drain and remove the current node on kubeadm reset
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
In v1.5, `kubeadm reset` would drain your node and remove it from your cluster if you specified so, but now in v1.6 we can't do that due to the RBAC rules we have set up.
After conversations with @liggitt, I also agree this functionality was somewhat misplaced (though still very convenient to use), so we're removing it for v1.6.
It's the system administrator's duty to drain and remove nodes from the cluster, not the nodes' responsibility.
The current behavior is therefore a bug that needs to be fixed in v1.6.
**Release note**:
```release-note
kubeadm: `kubeadm reset` won't drain and remove the current node anymore
```
@liggitt @deads2k @jbeda @dmmcquay @pires @errordeveloper
Automatic merge from submit-queue
remove dgoodwin and add dmmcquay to kubeadm reviewers
@dgoodwin says he needs to work on other stuff right now. @dmmcquay says he wants to help with reviews.
Automatic merge from submit-queue (batch tested with PRs 42940, 42906, 42970, 42848)
Improve kubeadm init message
Now that we are locking down the insecure port, we should give clearer instructions on how to copy out the root-owned admin.conf file, chmod it, and use it.
Signed-off-by: Joe Beda <joe.github@bedafamily.com>
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42969, 42966)
kubeadm: update kubeadm banner to beta
**What this PR does / why we need it**: Updates the intro banner for kubeadm, which used to state that it is in alpha (we are now going to beta). This also updates the tagged GitHub group (one that no longer exists) to the sig-cluster-lifecycle-misc group.
**Special notes for your reviewer**: /cc @jbeda
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42728, 42278)
[Federation] Create integration test fixture for api
This PR factors a reusable fixture for the federation api server out of the existing integration test.
Targets #40705
cc: @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42762, 42739, 42425, 42778)
kubeadm: update docker version for CE and EE
**What this PR does / why we need it**: Update regex for docker version to also capture new CE and EE versions.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/189
**Special notes for your reviewer**: /cc @jbeda @luxas
**Release note**:
```release-note
NONE
```
This change allows validators to pass warnings as well as errors. This
was needed because of how support for docker 1.13+ and the new EE and CE
versions is currently being handled.
Automatic merge from submit-queue
kubeadm: Delete the dummy Deployment properly
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubeadm/issues/149
**Special notes for your reviewer**:
Earlier, the Pod created by the Deployment wasn't deleted. With this option it is.
As suggested by @deads2k, thank you!
This is a bug fix for v1.6
**Release note**:
```release-note
```
@mikedanese @jbeda @dmmcquay @pires @errordeveloper @deads2k @caesarxuchao
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)
DaemonSet: Respect ControllerRef
**What this PR does / why we need it**:
This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings DaemonSet into full compliance with ControllerRef. See the individual commit messages for details.
**Which issue this PR fixes**:
This ensures that DaemonSet does not fight with other controllers over control of Pods.
**Special notes for your reviewer**:
**Release note**:
```release-note
DaemonSet now respects ControllerRef to avoid fighting over Pods.
```
cc @erictune @kubernetes/sig-apps-pr-reviews
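At its core, respecting ControllerRef means only managing Pods whose controller owner reference points back at the DaemonSet; a minimal sketch using the metav1 helper:

```go
// Sketch: filter pods to those controlled by a given DaemonSet (identified
// here by its UID) via the controller owner reference.
package daemon

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func controlledPods(dsUID types.UID, pods []*v1.Pod) []*v1.Pod {
	var owned []*v1.Pod
	for _, pod := range pods {
		// Only pods whose controller reference points at this DaemonSet are
		// managed; pods owned by other controllers are left alone.
		if ref := metav1.GetControllerOf(pod); ref != nil && ref.UID == dsUID {
			owned = append(owned, pod)
		}
	}
	return owned
}
```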
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)
Add pprof trace support
Add support for `/debug/pprof/trace`
Can wait for master to reopen for 1.7.
cc @smarterclayton @wojtek-t @gmarek @timothysc @jeremyeder @kubernetes/sig-scalability-pr-reviews
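For reference, Go's net/http/pprof package is what serves the trace endpoint; a minimal standalone example:

```go
// Minimal example: importing net/http/pprof registers the /debug/pprof/
// handlers (including /debug/pprof/trace) on the default mux.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* including /debug/pprof/trace
)

func main() {
	// Fetch e.g. http://localhost:6060/debug/pprof/trace?seconds=5 and open
	// the result with `go tool trace`.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```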
Automatic merge from submit-queue
kubeadm: Make kube-apiserver's liveness probe match its bindport.
The `kube-apiserver` liveness probe port had previously been hardcoded, so if you used `--apiserver-bind-port` to override the default port (6443), then the health check for the pod would quickly fail and kubelet would continuously kill the apiserver.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/196
**Release note**:
```release-note
kubeadm: fix kube-apiserver liveness probe port when --apiserver-bind-port given
```
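A sketch of deriving the probe from the configured bind port instead of a hard-coded 6443, using the core v1 types (field names may differ across client library versions; the surrounding manifest generation is omitted):

```go
// Sketch: derive the liveness probe port from the configured bind port
// rather than hard-coding 6443.
package manifests

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func apiServerLivenessProbe(bindPort int32) *v1.Probe {
	return &v1.Probe{
		Handler: v1.Handler{
			HTTPGet: &v1.HTTPGetAction{
				Host:   "127.0.0.1",
				Path:   "/healthz",
				Port:   intstr.FromInt(int(bindPort)), // follows --apiserver-bind-port
				Scheme: v1.URISchemeHTTPS,
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      15,
	}
}
```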
Automatic merge from submit-queue (batch tested with PRs 41890, 42593, 42633, 42626, 42609)
make all controllers obey the disable flags
Fixes https://github.com/kubernetes/kubernetes/issues/42592
Some controllers couldn't be disabled. This fixes them so they obey our flags.
@ncdc
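A sketch of the gating pattern (names are illustrative; the real controller-manager keeps a map of controller init funcs):

```go
// Sketch: every controller consults the disable flags before starting.
package controllers

type InitFunc func(stopCh <-chan struct{}) error

func StartControllers(all map[string]InitFunc, disabled map[string]bool, stopCh <-chan struct{}) error {
	for name, start := range all {
		if disabled[name] {
			// Honor the disable flag: skip this controller entirely.
			continue
		}
		if err := start(stopCh); err != nil {
			return err
		}
	}
	return nil
}
```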
Automatic merge from submit-queue (batch tested with PRs 41826, 42405)
Add stubDomains and upstreamNameservers configuration to kube-dns
```release-note
Updates the dnsmasq cache/mux layer to be managed by dnsmasq-nanny.
dnsmasq-nanny manages dnsmasq based on values from the
kube-system:kube-dns configmap:
"stubDomains": {
"acme.local": ["1.2.3.4"]
},
is a map of domain to list of nameservers for the domain. This is used
to inject private DNS domains into the kube-dns namespace. In the above
example, any DNS requests for *.acme.local will be served by the
nameserver 1.2.3.4.
"upstreamNameservers": ["8.8.8.8", "8.8.4.4"]
is a list of upstreamNameservers to use, overriding the configuration
specified in /etc/resolv.conf.
```
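A sketch of how a consumer such as dnsmasq-nanny might parse those ConfigMap values (illustrative only, not the actual nanny code):

```go
// Sketch: parse the kube-dns ConfigMap values shown above. The data arrives
// as JSON strings under the "stubDomains" and "upstreamNameservers" keys.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	data := map[string]string{
		"stubDomains":         `{"acme.local": ["1.2.3.4"]}`,
		"upstreamNameservers": `["8.8.8.8", "8.8.4.4"]`,
	}

	stubDomains := map[string][]string{}
	if raw, ok := data["stubDomains"]; ok {
		if err := json.Unmarshal([]byte(raw), &stubDomains); err != nil {
			panic(err)
		}
	}

	var upstream []string
	if raw, ok := data["upstreamNameservers"]; ok {
		if err := json.Unmarshal([]byte(raw), &upstream); err != nil {
			panic(err)
		}
	}

	fmt.Println("stub domains:", stubDomains)
	fmt.Println("upstream nameservers:", upstream)
}
```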
Automatic merge from submit-queue
Fix Multizone pv creation on GCE
When Multizone is enabled, static PV creation on GCE
fails because the cloud provider configuration is not
available in admission plugins.
cc @derekwaynecarr @childsb
Automatic merge from submit-queue
kubeadm: Fix the nodeSelector and scheduler mounts when using the self-hosted mode
**What this PR does / why we need it**:
The self-hosted option in `kubeadm` was broken.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42528
**Special notes for your reviewer**:
**Release note**:
```release-note
```
/cc @luxas
Automatic merge from submit-queue
Remove the kube-discovery binary from the tree
**What this PR does / why we need it**:
kube-discovery was a temporary solution to implementing the proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md
However, this functionality is now going to be implemented in core for v1.6 and will fully replace kube-discovery:
- https://github.com/kubernetes/kubernetes/pull/36101
- https://github.com/kubernetes/kubernetes/pull/41281
- https://github.com/kubernetes/kubernetes/pull/41417
Since `kube-discovery` isn't used by any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
Automatic merge from submit-queue
kubeadm: Hook up kubeadm against the BootstrapSigner
**What this PR does / why we need it**:
This PR makes kubeadm able to use the BootstrapSigner.
Depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C
# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION
67a96d.02405a1773564431 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'.
# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo
# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C
# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```
**Release note**:
```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin