Commit Graph

814 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
6915fd5f20 Merge pull request #53146 from brendandburns/ignore
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add a label which prevents a node from being added to a cloud load balancer

There are a variety of reasons why you may not want a node in a cluster to participate in a cloud load balancer: for example, workload isolation for security, managing network throughput, or because the node is not in the appropriate virtual network (clusters that span environments).

This PR adds a label so that you can select which nodes participate.
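For illustration, a minimal sketch of how such a label would be applied; the node name and the label key shown here are assumptions (the exact key is defined in this PR):

```
# Hypothetical usage sketch: mark a node so the cloud controller skips it when
# populating load balancer backends. The label key below is assumed; the PR
# defines the exact key.
kubectl label node node-1 alpha.service-controller.kubernetes.io/exclude-balancer=true

# Remove the label to let the node participate again.
kubectl label node node-1 alpha.service-controller.kubernetes.io/exclude-balancer-
```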
2017-09-27 21:28:52 -07:00
Bowei Du
c122a7c54f Update kubeadm to 1.14.5 2017-09-27 11:37:21 -07:00
Brendan Burns
422f5e37b9 Add a label which prevents a node from being added to a cloud load balancer. 2017-09-27 10:13:02 -07:00
Kubernetes Submit Queue
df569a3b24 Merge pull request #53043 from kad/upgrade-ux
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Allow using version labels in the kubeadm upgrade apply command.

**What this PR does / why we need it**:

kubeadm upgrade apply is now able to handle all possible forms of the version argument, including labels (latest, stable-1.8, ci/latest-1.9) as well as specific builds (v1.8.0-rc.1, ci/v1.9.0-alpha.1.123_01234567889).

As a side effect, specifying an exact build to deploy from the CI area is now also possible in the kubeadm init command.
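A short sketch of the version arguments described above (the init usage assumes the version is passed via the existing `--kubernetes-version` flag):

```
# Labels are resolved to concrete versions before the upgrade is applied.
kubeadm upgrade apply stable-1.8        # latest patch release on the 1.8 branch
kubeadm upgrade apply ci/latest-1.9     # latest build from the CI area
kubeadm upgrade apply v1.8.0-rc.1       # a specific build

# Side effect: an exact CI build can also be requested at init time.
kubeadm init --kubernetes-version=ci/v1.9.0-alpha.1.123_01234567889
```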

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#451

**Special notes for your reviewer**:
cc @luxas 

**Release note**:
```release-note
- kubeadm init can now deploy an exact build from the CI area by specifying the ID with the "ci/" prefix. Example: "ci/v1.9.0-alpha.1.123+01234567889"
- kubeadm upgrade apply supports all standard ways of specifying the version via labels. Examples: stable-1.8, latest-1.8, ci/latest-1.9, and similar.
```
2017-09-27 09:13:25 -07:00
Kubernetes Submit Queue
86338ba153 Merge pull request #52913 from kad/kubeadm-issue-430
Automatic merge from submit-queue (batch tested with PRs 50685, 53050, 52899, 52913, 53067). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Detect major version mismatches between kubeadm and kubelet.

**What this PR does / why we need it**:
Kubeadm supports kubelets only one minor release back; thus, for 1.9 it requires a kubelet of at least 1.8.
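A quick manual check of the two versions involved in the new pre-flight rule (kubeadm runs the equivalent check automatically during init and join):

```
# Compare the versions by hand; kubeadm 1.9 will refuse to proceed if the
# kubelet reports a version lower than 1.8.0-alpha.
kubeadm version
kubelet --version
```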

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#430

**Special notes for your reviewer**:

**Release note**:
```release-note
- kubeadm 1.9 will detect the kubelet version and fail init or join pre-flight checks if it is lower than 1.8.0-alpha
```
2017-09-27 07:33:35 -07:00
Alexander Kanevskiy
09e59cfcaf Allow to use version labels in kubeadm upgrade apply
kubeadm upgrade apply is now able to handle all possible forms of the version argument, including labels (latest, stable-1.8, ci/latest-1.9) as well as specific builds (v1.8.0-rc.1, ci/v1.9.0-alpha.1.123_01234567889).

As a side effect, specifying an exact build to deploy from the CI area is now also possible in the kubeadm init command.

Fixes: kubernetes/kubeadm#451
2017-09-26 22:27:58 +03:00
madhukar32
ad8c9a3b8a Removes creation of CSR approval CR from kubeadm 2017-09-26 07:04:32 -07:00
Alexander Kanevskiy
521c84aa89 Detect major version mismatches between kubeadm and kubelet.
Kubeadm supports kubelets only one minor release back; thus, for 1.9 it requires a kubelet of at least 1.8.

Fixes: kubernetes/kubeadm#430
2017-09-26 12:56:03 +03:00
Kubernetes Submit Queue
20fd96a161 Merge pull request #52540 from sbezverk/kubeadm_issue_398
Automatic merge from submit-queue (batch tested with PRs 52251, 52540). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

kubeadm: Switching to rbac/v1

Fixes: https://github.com/kubernetes/kubeadm/issues/398
Fixes: https://github.com/kubernetes/kubeadm/issues/385
Fixes: https://github.com/kubernetes/kubeadm/issues/403
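A simple way to verify that the cluster serves the `rbac.authorization.k8s.io/v1` API group the manifests now target (a verification sketch, not part of the change itself):

```
# List the RBAC API versions the apiserver serves; v1 should appear alongside
# the older beta versions on a 1.8+ control plane.
kubectl api-versions | grep rbac.authorization.k8s.io
```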
2017-09-25 07:19:55 -07:00
Kubernetes Submit Queue
7fa13044bb Merge pull request #52251 from sbezverk/kubeadm_lint_cleanup
Automatic merge from submit-queue (batch tested with PRs 52251, 52540). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

kubeadm golint clean up

Cleaning up golint-discovered issues in kubeadm.

Fixes: https://github.com/kubernetes/kubeadm/issues/375
2017-09-25 07:19:53 -07:00
Serguei Bezverkhi
9d725da4c3 Switching to rbac/v1
Closes https://github.com/kubernetes/kubeadm/issues/398
2017-09-24 10:47:29 -04:00
Kubernetes Submit Queue
8e7f5d8c8b Merge pull request #52855 from NickrenREN/remove-rackspace
Automatic merge from submit-queue (batch tested with PRs 52880, 52855, 52761, 52885, 52929). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Remove cloud provider rackspace

**What this PR does / why we need it**:
For now, we have to implement functions in both the `rackspace` and `openstack` packages if we want to add functionality for Cinder, for example [resize for cinder](https://github.com/kubernetes/kubernetes/pull/51498). Since openstack has implemented all the functions rackspace has, and rackspace has been considered deprecated for a long time ([rackspace deprecated](https://github.com/rackspace/gophercloud/issues/592)), after talking with @mikedanese and @jamiehannaford offline, I sent this PR to remove `rackspace` in favor of `openstack`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52854

**Special notes for your reviewer**:

**Release note**:
```release-note
The Rackspace cloud provider has been removed after a long deprecation period. It was deprecated because it duplicates a lot of the OpenStack logic and can no longer be maintained. Please use the OpenStack cloud provider instead.
```
2017-09-24 04:30:04 -07:00
Kubernetes Submit Queue
7c9e614cbb Merge pull request #52873 from ixdy/bazel-cleanup
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

bazel: build/test almost everything

**What this PR does / why we need it**: Miscellaneous cleanups and bug fixes. The main motivating idea here was to make `bazel build //...` and `bazel test //...` mostly work. (There are a few reasons these still don't work, but we're a lot closer.)
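The two invocations this cleanup targets, as described above:

```
# Build and test (almost) every target in the repository with Bazel.
bazel build //...
bazel test //...
```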

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/assign @BenTheElder @mikedanese @spxtr
2017-09-24 00:04:36 -07:00
Kubernetes Submit Queue
30bb5153be Merge pull request #52542 from sbezverk/kubeadm_issue_390
Automatic merge from submit-queue (batch tested with PRs 50890, 52484, 52542, 52567, 50672). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

kubeadm: Switching to apps/v1beta2
2017-09-23 16:26:52 -07:00
Kubernetes Submit Queue
e8e617f31d Merge pull request #52538 from kad/no-ttl-warning
Automatic merge from submit-queue (batch tested with PRs 52445, 52380, 52516, 52531, 52538). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Remove warning about changes in default token TTLs

**What this PR does / why we need it**:
It was planned to remove that warning as part of the 1.9 cleanup, as the change was made a few release cycles ago and users should already be aware of it.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#346

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2017-09-23 14:33:19 -07:00
Kubernetes Submit Queue
32144cd775 Merge pull request #52109 from medinatiger/dev
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add more test coverage for kubeadm uploadconfig especially with idemp…

**What this PR does / why we need it**:
This PR adds more test cases for kubeadm uploadconfig, particularly to address some feedback in #51482.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#379

**Special notes for your reviewer**:

```release-note
NONE
```
2017-09-23 10:17:58 -07:00
Kubernetes Submit Queue
446daf02a5 Merge pull request #52240 from mattjmcnaughton/mattjmcnaughton/do-not-import-apimachinery-from-staging
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Modify `apimachinery` imports using `staging`

**What this PR does / why we need it**:

Currently some of the imports of `apimachinery` use
`k8s.io/kubernetes/staging/src/k8s.io/apimachinery...`. Replace
these with `k8s.io/apimachinery`, as is in use throughout the rest
of the code base.
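One way to locate the import paths this PR rewrites (a sketch; the change itself edits the Go files directly):

```
# Find Go files that still import apimachinery through the staging path.
grep -rl 'k8s.io/kubernetes/staging/src/k8s.io/apimachinery' --include='*.go' .
```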

Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>

**Release note**:
```release-note
NONE
```
2017-09-23 09:08:26 -07:00
Serguei Bezverkhi
42bd500134 kubeadm golint clean up
Closes #375
2017-09-23 08:07:55 -04:00
Kubernetes Submit Queue
1c0f22ea01 Merge pull request #43016 from liggitt/time-added-pointer
Automatic merge from submit-queue (batch tested with PRs 43016, 50503, 51281, 51518, 51582). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Omit timeAdded from taint when empty

Fixes omitempty portion of https://github.com/kubernetes/kubernetes/issues/42394
2017-09-22 23:35:52 -07:00
Lucas Käldström
81840b3782 Use the release-1.8 branch by default 2017-09-22 21:03:28 +03:00
Lucas Käldström
4360911d89 Fake out the kubernetes version in phase testing in order to avoid resolving things manually (which can lead to flakes) 2017-09-22 21:03:16 +03:00
NickrenREN
39c48d3605 remove rackspace related code 2017-09-22 18:06:50 +08:00
Jeff Grafton
02fb4200dc Use buildozer to delete licenses() rules 2017-09-21 15:53:22 -07:00
Jeff Grafton
532bd482df Use buildozer to remove deprecated automanaged tags 2017-09-21 15:53:22 -07:00
Serguei Bezverkhi
834a02e673 Switching to apps/v1beta2
Closes https://github.com/kubernetes/kubeadm/issues/390
2017-09-15 18:48:17 -04:00
fabriziopandini
d21040b8e6 fix addon error 2017-09-16 00:03:35 +02:00
Alexander Kanevskiy
d5098c1624 Remove warning about changes in default token TTLs
It was planned to remove that warning as part of the 1.9 cleanup, as the change was made a few release cycles ago and users should already be aware of it.

Closes: kubernetes/kubeadm#346
2017-09-15 16:50:58 +03:00
Kubernetes Submit Queue
e1b446f873 Merge pull request #52362 from fabriziopandini/kubeadm436
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)

fix kubeadm token create error

**What this PR does / why we need it**:
fix kubeadm token create error

**Which issue this PR fixes** 
[#436](https://github.com/kubernetes/kubeadm/issues/436) 

**Special notes for your reviewer**:
CC @luxas
2017-09-13 09:30:15 -07:00
Kubernetes Submit Queue
e36b4fdaa8 Merge pull request #52364 from fabriziopandini/kubeadm437
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)

fix Kubeadm phase addon error

**What this PR does / why we need it**:
fix Kubeadm phase addon error

**Which issue this PR fixes**
[#437](https://github.com/kubernetes/kubeadm/issues/437)

**Special notes for your reviewer**:
CC @luxas @andrewrynhard
2017-09-13 09:30:11 -07:00
Kubernetes Submit Queue
2ed6e53183 Merge pull request #52153 from lukemarsden/tweak-kubeadm-intro-text
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)

Improve kubeadm help text

* Replace 'misc' with the more specific at-mentions for bugs and feature-requests.
* Replace ReplicaSets with Deployments as the example, because ReplicaSets are dated.
* Generalize join example.

Before:

```
    ┌──────────────────────────────────────────────────────────┐
    │ KUBEADM IS BETA, DO NOT USE IT FOR PRODUCTION CLUSTERS!  │
    │                                                          │
    │ But, please try it out! Give us feedback at:             │
    │ https://github.com/kubernetes/kubeadm/issues             │
    │ and at-mention @kubernetes/sig-cluster-lifecycle-misc    │
    └──────────────────────────────────────────────────────────┘

Example usage:

    Create a two-machine cluster with one master (which controls the cluster),
    and one node (where your workloads, like Pods and ReplicaSets run).

    ┌──────────────────────────────────────────────────────────┐
    │ On the first machine                                     │
    ├──────────────────────────────────────────────────────────┤
    │ master# kubeadm init                                     │
    └──────────────────────────────────────────────────────────┘

    ┌──────────────────────────────────────────────────────────┐
    │ On the second machine                                    │
    ├──────────────────────────────────────────────────────────┤
    │ node# kubeadm join --token=<token> <ip-of-master>:<port> │
    └──────────────────────────────────────────────────────────┘

    You can then repeat the second step on as many other machines as you like.
```

After (changes highlighted with `<--`):

```
    ┌──────────────────────────────────────────────────────────┐
    │ KUBEADM IS BETA, DO NOT USE IT FOR PRODUCTION CLUSTERS!  │
    │                                                          │
    │ But, please try it out! Give us feedback at:             │
    │ https://github.com/kubernetes/kubeadm/issues             │
    │ and at-mention @kubernetes/sig-cluster-lifecycle-bugs    │ <--
    │ or @kubernetes/sig-cluster-lifecycle-feature-requests    │ <--
    └──────────────────────────────────────────────────────────┘

Example usage:

    Create a two-machine cluster with one master (which controls the cluster),
    and one node (where your workloads, like Pods and Deployments run).  <--

    ┌──────────────────────────────────────────────────────────┐
    │ On the first machine                                     │
    ├──────────────────────────────────────────────────────────┤
    │ master# kubeadm init                                     │
    └──────────────────────────────────────────────────────────┘

    ┌──────────────────────────────────────────────────────────┐
    │ On the second machine                                    │
    ├──────────────────────────────────────────────────────────┤
    │ node# kubeadm join <arguments-returned-from-init>        │ <--
    └──────────────────────────────────────────────────────────┘

    You can then repeat the second step on as many other machines as you like.

```

cc @luxas
2017-09-13 09:30:06 -07:00
fabriziopandini
56d830776b fix Kubeadm phase addon 2017-09-12 23:52:20 +02:00
fabriziopandini
36562db310 fix kubeadm token create error 2017-09-12 23:30:14 +02:00
Lucas Käldström
09d7e73ae5 kubeadm: Mark self-hosting alpha in v1.8 2017-09-11 22:12:53 +03:00
mattjmcnaughton
8323fb4b4f Modify apimachinery imports using staging
Currently some of the imports of `apimachinery` use
`k8s.io/kubernetes/staging/src/k8s.io/apimachinery...`. Replace
these with `k8s.io/apimachinery`, as is in use throughout the rest
of the code base.

Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
2017-09-10 10:19:30 -04:00
Feng Min
e5d205717b Add more test coverage for kubeadm uploadconfig especially with idempotent case. 2017-09-08 16:47:49 -07:00
Lucas Käldström
136d68b4d5 kubeadm: Perform TLS Bootstrapping in kubeadm join for v1.7 kubelets but not v1.8 ones 2017-09-08 22:29:11 +03:00
Lucas Käldström
976d5c3438 Revert commit 9dc3a661d7 2017-09-08 15:19:18 +03:00
Luke Marsden
60a16cfedd Replace 'misc' with more specific at-mentions bugs and feature-requests. Replace ReplicaSets with Deployments as example, because ReplicaSets are dated. Generalize join example. 2017-09-08 09:22:07 +01:00
Jordan Liggitt
3cf760c57e Change TimeAdded to pointer 2017-09-07 14:13:09 -04:00
Lucas Käldström
74954fdae9 kubeadm: Set the new BT auth group on the init token 2017-09-07 15:27:58 +03:00
Kubernetes Submit Queue
ea017719e5 Merge pull request #51171 from andrewrynhard/proxy-dns-phase
Automatic merge from submit-queue

kubeadm: add `kubeadm phase addons` command

**What this PR does / why we need it**:
Adds the `addons` phase command to `kubeadm`

fixes: https://github.com/kubernetes/kubeadm/issues/418
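To explore the new phase from the command line (a sketch; it assumes the phase commands are exposed under `kubeadm alpha phase`, as other phases are in this release, and the exact addon subcommand name may differ):

```
# List the available phase subcommands, including the newly added addons phase.
kubeadm alpha phase --help
```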

/cc @luxas
2017-09-07 00:03:15 -07:00
Andrew Rynhard
d55cea629f kubeadm: add addons command 2017-09-06 19:54:04 -07:00
Lucas Käldström
a455f995ac kubeadm: Upgrade Bootstrap Tokens to beta when upgrading to v1.8 2017-09-06 21:04:33 +03:00
Kubernetes Submit Queue
e528a6e785 Merge pull request #51369 from luxas/kubeadm_poll_kubelet
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)

kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy

**What this PR does / why we need it**:

In order to improve the UX when the kubelet is unhealthy or stopped, or whatever, kubeadm now polls the kubelet's API after 40 and 60 seconds, and then performs an exponential backoff for a total of 155 seconds.

If the kubelet endpoint is not returning `ok` by then, kubeadm gives up and exits.

This will mitigate at least 60% of our "[apiclient] Created API client, waiting for control plane to come up" issues in the kubeadm issue tracker 🎉, as kubeadm now informs the user what's wrong and also doesn't deadlock like before.
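The endpoints kubeadm polls are visible in the log output below; they can also be probed by hand while init is waiting:

```
# Probe the kubelet's healthz endpoints directly (same URLs kubeadm checks).
curl -sSL http://localhost:10255/healthz
curl -sSL http://localhost:10255/healthz/syncloop
```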

Demo:
```
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.502199 seconds
[markmaster] Will mark node thegopher as master by adding a label and a taint
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5776d5.91e7ed14f9e274df
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 5776d5.91e7ed14f9e274df 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:6f301ce8c3f5f6558090b2c3599d26d6fc94ffa3c3565ffac952f4f0c7a9b2a9

lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
lucas@THEGOPHER:~/luxas/kubernetes$ sudo systemctl stop kubelet
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by that:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	- There is no internet connection; so the kubelet can't pull the following control plane images:
		- gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
		- gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
		- gcr.io/google_containers/kube-scheduler-amd64:v1.7.4

You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```

In this demo, I'm first starting kubeadm normally and everything works as usual.
In the second case, I'm explicitly stopping the kubelet so it doesn't run, and skipping preflight checks, so that kubeadm doesn't even try to exec `systemctl start kubelet` as it usually does.
That obviously results in a non-working system, but now kubeadm tells the user what the problem is instead of waiting forever.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

Fixes: https://github.com/kubernetes/kubeadm/issues/377

**Special notes for your reviewer**:

**Release note**:

```release-note
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @pipejakob 

cc @justinsb @kris-nova @lukemarsden as well as you wanted this feature :)
2017-09-03 15:54:19 -07:00
Lucas Käldström
92c5997b8e kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy 2017-09-03 18:02:46 +03:00
Lucas Käldström
d3081ee23d autogenerated code 2017-09-03 17:26:02 +03:00
Lucas Käldström
b0a17d11e4 kubeadm: Add omitempty tags to nullable values and use metav1.Duration 2017-09-03 17:25:45 +03:00
Lucas Käldström
c575626988 autogenerated bazel 2017-09-03 12:29:03 +03:00
Lucas Käldström
94983530d4 Add unit tests for kubeadm upgrade 2017-09-03 12:26:10 +03:00
Lucas Käldström
c237ff5bc0 Fully implement the kubeadm upgrade functionality 2017-09-03 12:25:47 +03:00