Commit Graph

831 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
0b597b51d6 Merge pull request #55972 from rpothier/v6_proxy_bind_addr
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use kube-proxy ComponentConfig in kubeadm clusters

This change configures the kube-proxy bind address to be an
IPv6 address when the API server advertise address is IPv6.

It does this via the kube-proxy ComponentConfig API, available as of v1.9.
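
For illustration, a minimal Go sketch of the address-family check, with a hypothetical `getProxyBindAddress` helper (this is not the actual kubeadm code):

```go
package main

import (
	"fmt"
	"net"
)

// getProxyBindAddress returns the address kube-proxy should bind to,
// based on the address family of the API server advertise address.
// Hypothetical helper for illustration, not the kubeadm implementation.
func getProxyBindAddress(advertiseAddress string) string {
	ip := net.ParseIP(advertiseAddress)
	if ip != nil && ip.To4() == nil {
		// Advertise address is IPv6: bind kube-proxy to the IPv6 wildcard.
		return "::"
	}
	// Default to the IPv4 wildcard address.
	return "0.0.0.0"
}

func main() {
	fmt.Println(getProxyBindAddress("192.168.200.101")) // 0.0.0.0
	fmt.Println(getProxyBindAddress("2001:db8::1"))     // ::
}
```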

**What this PR does / why we need it**:
This PR sets the bind address for kube-proxy to be an IPv6 address. This is needed for IPv6 support.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #50927
Fixes https://github.com/kubernetes/kubeadm/issues/527

**Special notes for your reviewer**:

**Release note**:

```release-note
Adds kubeadm support for using ComponentConfig for the kube-proxy
```
2017-11-23 17:58:09 -08:00
Robert Pothier
ce8113d9a9 Update kubeadm config for setting kube-proxy bind address
This change configures the kube-proxy bind address to be an
IPv6 address when the API server advertise address is IPv6.
2017-11-23 00:48:20 -05:00
Kubernetes Submit Queue
b2a233b6d4 Merge pull request #56156 from sbezverk/kubeadm_upgrade_plan_etcd
Automatic merge from submit-queue (batch tested with PRs 55873, 56156). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adding etcd version for kubeadm upgrade plan

Adding etcd version display to kubeadm upgrade plan subcommand
```release-note
Adding etcd version display to kubeadm upgrade plan subcommand
```
Closes https://github.com/kubernetes/kubeadm/issues/531
2017-11-22 06:43:26 -08:00
Serguei Bezverkhi
a9ea1b881b Adding etcd version for kubeadm upgrade plan 2017-11-22 07:01:13 -05:00
wackxu
3592c1be18 Improve kubeadm apply error logging style 2017-11-20 20:40:14 +08:00
Kubernetes Submit Queue
f0ce7ca051 Merge pull request #55010 from sbezverk/kubeadm_etcd_upgrade_apply
Automatic merge from submit-queue (batch tested with PRs 51192, 55010). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adding etcd upgrade option to kubeadm upgrade apply 

This PR adds etcd upgrade functionality to kubeadm upgrade apply.
The first commit adds functions for dealing with a single control plane component rather than all three (apiserver, controller-manager, and scheduler) at once. That added granularity means the code can be reused.
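
As a sketch of the per-component granularity, with hypothetical function names (not the actual kubeadm API):

```go
package upgrade

import "fmt"

// upgradeComponent replaces the static pod manifest for a single control
// plane component. Hypothetical signature, for illustration only.
func upgradeComponent(component, manifestDir, newManifest string) error {
	fmt.Printf("[upgrade] writing new static pod manifest for %q to %s\n", component, manifestDir)
	// ... back up the old manifest, write the new one, then wait for the
	// kubelet to restart the pod and for the component to become healthy.
	return nil
}

// upgradeControlPlane can reuse the same logic for any subset of
// components, including etcd, instead of always upgrading all three.
func upgradeControlPlane(components []string, manifestDir string) error {
	for _, c := range components {
		if err := upgradeComponent(c, manifestDir, "" /* rendered manifest */); err != nil {
			return err
		}
	}
	return nil
}
```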

Closes: https://github.com/kubernetes/kubeadm/issues/490

```release-note
Adds a new **--etcd-upgrade** flag to **kubeadm upgrade apply**. When this flag is specified, etcd's static pod gets upgraded to the etcd version officially recommended for the target Kubernetes release.
```
2017-11-19 05:22:26 -08:00
Serguei Bezverkhi
1f20a8d022 Adding etcd upgrade to kubeadm upgrade apply
List of changes:
- Refactoring staticpod and waiter functions
2017-11-18 18:47:50 -05:00
xiangpengzhao
880648f3f1 Set defaults for KubeletConfiguration 2017-11-18 00:55:59 +08:00
xiangpengzhao
e8c58338a0 Auto generated files. 2017-11-17 16:57:23 +08:00
Serguei Bezverkhi
39830f3642 Refactoring staticpod and waiter functions 2017-11-12 19:36:56 -05:00
Alexander Kanevskiy
4bd692a3bf kubeadm: Utilize transport defaults from API machinery for http calls
The default Go HTTP transport does not allow CIDR notation in
NO_PROXY variables, so for certain HTTP calls made inside kubeadm
the user needs to list multiple IP addresses explicitly. For most
calls made via the API machinery this is solved by setting a
different proxy resolver. This patch allows CIDR notation in
NO_PROXY variables for all other HTTP calls made inside kubeadm.
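
To illustrate the idea (this is not the API machinery implementation itself), a CIDR-aware NO_PROXY check can be built from the standard library:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// hostInNoProxyCIDR reports whether host falls inside any CIDR listed in
// the NO_PROXY value. Illustrative only; the real logic lives in the
// API machinery transport defaults.
func hostInNoProxyCIDR(host, noProxy string) bool {
	ip := net.ParseIP(host)
	if ip == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		_, cidr, err := net.ParseCIDR(strings.TrimSpace(entry))
		if err != nil {
			continue // not a CIDR entry; plain hostname matching handles it
		}
		if cidr.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hostInNoProxyCIDR("10.0.1.5", "10.0.0.0/16,example.com")) // true
	fmt.Println(hostInNoProxyCIDR("192.0.2.1", "10.0.0.0/16"))            // false
}
```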
2017-11-10 14:05:58 +02:00
Daneyon Hansen
1d47893608 Adds Support for Configurable Kubeadm Probes. 2017-11-03 10:42:29 -07:00
Andrew Rynhard
5a64c049e6 Allow extra volumes to be defined 2017-10-31 21:44:45 -07:00
Lars Lehtonen
1884055329 cmd/kubeadm/app/util/apiclient: fix swallowed errors
cmd/kubeadm/app/phases/upgrade: fix swallowed error

cmd/kubeadm/app/phases/selfhosting: fix swallowed errors

cmd/kubeadm/app/phases/certs: fix swallowed errors

cmd/kubeadm/app/cmd: fix swallowed error

cmd/kubeadm/app/cmd: descriptive error returns

cmd/kubeadm/app/cmd: govet fixes

cmd/kubeadm: error formatting
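
The typical shape of such a fix, as a generic sketch against the 2017-era client-go API rather than the exact kubeadm diff:

```go
package apiclient

import (
	"fmt"

	"k8s.io/api/core/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createConfigMap shows the pattern of the fix: check and wrap the error
// instead of discarding it. Illustrative, not the exact kubeadm change.
func createConfigMap(client clientset.Interface, cm *v1.ConfigMap) error {
	// Before the fix, a call like this swallowed the error:
	//   client.CoreV1().ConfigMaps(cm.Namespace).Create(cm)
	if _, err := client.CoreV1().ConfigMaps(cm.Namespace).Create(cm); err != nil {
		return fmt.Errorf("unable to create ConfigMap %q: %v", cm.Name, err)
	}
	return nil
}
```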
2017-10-25 18:10:21 -07:00
Dr. Stefan Schimanski
cad0364e73 Update bazel 2017-10-18 17:24:04 +02:00
Dr. Stefan Schimanski
7773a30f67 pkg/api/legacyscheme: fixup imports 2017-10-18 17:23:55 +02:00
Jeff Grafton
aee5f457db update BUILD files 2017-10-15 18:18:13 -07:00
Kubernetes Submit Queue
5502e74b1c Merge pull request #52869 from medinatiger/dev2
Automatic merge from submit-queue (batch tested with PRs 50749, 52869, 53359). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Kubeadm: Change the marshal code to use ApiMachinery code.

**What this PR does / why we need it**:
This PR changes the k8s object marshaling to use API machinery code instead of plain yaml.Marshal, which is known to have side effects.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes kubernetes/kubeadm#453
 
**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-02 21:43:11 -07:00
Feng Min
3add91fd3c Kubeadm: Change the marshal code to use ApiMachinery code. 2017-09-28 13:36:36 -07:00
Alexander Kanevskiy
09e59cfcaf Allow using version labels in kubeadm upgrade apply
kubeadm upgrade apply is now able to use all possible forms
of the version argument, including labels (latest, stable-1.8, ci/latest-1.9)
as well as specific builds (v1.8.0-rc.1, ci/v1.9.0-alpha.1.123_01234567889)

As a side effect, specifying an exact build to deploy from the CI area is now
also possible with the kubeadm init command.
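
Roughly how a release label can be resolved, assuming the version files published under dl.k8s.io (the exact paths kubeadm uses may differ, and its real resolver also handles "ci/" prefixes and exact versions):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// resolveVersionLabel resolves a version label such as "stable-1.8" or
// "latest" by fetching the corresponding text file from dl.k8s.io.
// Sketch only, not the actual kubeadm resolver.
func resolveVersionLabel(label string) (string, error) {
	url := fmt.Sprintf("https://dl.k8s.io/release/%s.txt", label)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	v, err := resolveVersionLabel("stable-1.8")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("resolved to:", v) // e.g. a specific v1.8.x build
}
```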

Fixes: kubernetes/kubeadm#451
2017-09-26 22:27:58 +03:00
Kubernetes Submit Queue
20fd96a161 Merge pull request #52540 from sbezverk/kubeadm_issue_398
Automatic merge from submit-queue (batch tested with PRs 52251, 52540). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubeadm: Switching to rbac/v1

Fixes: https://github.com/kubernetes/kubeadm/issues/398
Fixes: https://github.com/kubernetes/kubeadm/issues/385
Fixes: https://github.com/kubernetes/kubeadm/issues/403
2017-09-25 07:19:55 -07:00
Kubernetes Submit Queue
7fa13044bb Merge pull request #52251 from sbezverk/kubeadm_lint_cleanup
Automatic merge from submit-queue (batch tested with PRs 52251, 52540). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubeadm golint clean up

Cleans up golint-discovered issues for kubeadm

Fixes: https://github.com/kubernetes/kubeadm/issues/375
2017-09-25 07:19:53 -07:00
Serguei Bezverkhi
9d725da4c3 Switching to rbac/v1
Closes https://github.com/kubernetes/kubeadm/issues/398
2017-09-24 10:47:29 -04:00
Kubernetes Submit Queue
7c9e614cbb Merge pull request #52873 from ixdy/bazel-cleanup
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

bazel: build/test almost everything

**What this PR does / why we need it**: Miscellaneous cleanups and bug fixes. The main motivating idea here was to make `bazel build //...` and `bazel test //...` mostly work. (There are a few reasons these still don't work, but we're a lot closer.)

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/assign @BenTheElder @mikedanese @spxtr
2017-09-24 00:04:36 -07:00
Serguei Bezverkhi
42bd500134 kubeadm golint clean up
Closes #375
2017-09-23 08:07:55 -04:00
Jeff Grafton
02fb4200dc Use buildozer to delete licenses() rules 2017-09-21 15:53:22 -07:00
Jeff Grafton
532bd482df Use buildozer to remove deprecated automanaged tags 2017-09-21 15:53:22 -07:00
Serguei Bezverkhi
834a02e673 Switching to apps/v1beta2
Closes https://github.com/kubernetes/kubeadm/issues/390
2017-09-15 18:48:17 -04:00
Lucas Käldström
92c5997b8e kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy 2017-09-03 18:02:46 +03:00
Lucas Käldström
c575626988 autogenerated bazel 2017-09-03 12:29:03 +03:00
Lucas Käldström
c237ff5bc0 Fully implement the kubeadm upgrade functionality 2017-09-03 12:25:47 +03:00
Kubernetes Submit Queue
39581ac9bf Merge pull request #51122 from luxas/kubeadm_impl_dryrun
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

kubeadm: Fully implement --dry-run

**What this PR does / why we need it**:

Finishes the work begun in #50631 
 - Implements dry-run functionality for the certs/kubeconfig/controlplane/etcd phases as well, by making the outDir configurable (see the sketch after this list)
 - Prints the control plane manifests to stdout, but not the certs/kubeconfig files due to their sensitive nature. However, kubeadm prints the directory to examine for those.
 - Fixes a small yaml marshal error where `apiVersion` and `kind` weren't printed earlier.
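
A minimal sketch of the configurable-outDir idea (hypothetical helper, not the actual kubeadm code): in dry-run mode files go to a temporary directory, and only non-sensitive content is echoed to stdout:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// writeFileMaybeDryRun writes a file under outDir. In dry-run mode, outDir
// is a temporary directory and non-sensitive content is also printed.
func writeFileMaybeDryRun(outDir, name string, content []byte, sensitive bool) error {
	path := filepath.Join(outDir, name)
	if err := ioutil.WriteFile(path, content, 0600); err != nil {
		return err
	}
	if !sensitive {
		fmt.Printf("[dryrun] Would write file %q with content:\n%s", path, content)
	}
	return nil
}

func main() {
	outDir, err := ioutil.TempDir("", "kubeadm-init-dryrun")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = writeFileMaybeDryRun(outDir, "kube-apiserver.yaml", []byte("apiVersion: v1\nkind: Pod\n"), false)
	_ = writeFileMaybeDryRun(outDir, "admin.conf", []byte("..."), true)
	fmt.Printf("[dryrun] Examine %q for details\n", outDir)
}
```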

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes: https://github.com/kubernetes/kubeadm/issues/389

**Special notes for your reviewer**:

Full `kubeadm init --dry-run` output:

```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-apiserver
	    tier: control-plane
	  name: kube-apiserver
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-apiserver
	    - --allow-privileged=true
	    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
	    - --requestheader-extra-headers-prefix=X-Remote-Extra-
	    - --service-cluster-ip-range=10.96.0.0/12
	    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
	    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
	    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
	    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
	    - --experimental-bootstrap-token-auth=true
	    - --client-ca-file=/etc/kubernetes/pki/ca.crt
	    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
	    - --secure-port=6443
	    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
	    - --insecure-port=0
	    - --requestheader-username-headers=X-Remote-User
	    - --requestheader-group-headers=X-Remote-Group
	    - --requestheader-allowed-names=front-proxy-client
	    - --advertise-address=192.168.200.101
	    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
	    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
	    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
	    - --authorization-mode=Node,RBAC
	    - --etcd-servers=http://127.0.0.1:2379
	    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 6443
	        scheme: HTTPS
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-apiserver
	    resources:
	      requests:
	        cpu: 250m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-controller-manager
	    tier: control-plane
	  name: kube-controller-manager
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-controller-manager
	    - --address=127.0.0.1
	    - --kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
	    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
	    - --leader-elect=true
	    - --use-service-account-credentials=true
	    - --controllers=*,bootstrapsigner,tokencleaner
	    - --root-ca-file=/etc/kubernetes/pki/ca.crt
	    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
	    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10252
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-controller-manager
	    resources:
	      requests:
	        cpu: 200m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/kubernetes/controller-manager.conf
	      name: kubeconfig
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/kubernetes/controller-manager.conf
	    name: kubeconfig
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-scheduler
	    tier: control-plane
	  name: kube-scheduler
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-scheduler
	    - --leader-elect=true
	    - --kubeconfig=/etc/kubernetes/scheduler.conf
	    - --address=127.0.0.1
	    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10251
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-scheduler
	    resources:
	      requests:
	        cpu: 100m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/scheduler.conf
	      name: kubeconfig
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/scheduler.conf
	    name: kubeconfig
	status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
	{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
	  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
	  token-id: OTZlZmQ2
	  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
	  usage-bootstrap-authentication: dHJ1ZQ==
	  usage-bootstrap-signing: dHJ1ZQ==
	kind: Secret
	metadata:
	  creationTimestamp: null
	  name: bootstrap-token-96efd6
	type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:kubelet-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-bootstrapper
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRole
	metadata:
	  creationTimestamp: null
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	rules:
	- apiGroups:
	  - certificates.k8s.io
	  resources:
	  - certificatesigningrequests/nodeclient
	  verbs:
	  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-autoapprove-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig: |
	    apiVersion: v1
	    clusters:
	    - cluster:
	        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
	        server: https://192.168.200.101:6443
	      name: ""
	    contexts: []
	    current-context: ""
	    kind: Config
	    preferences: {}
	    users: []
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: cluster-info
	  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: Role
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	rules:
	- apiGroups:
	  - ""
	  resourceNames:
	  - cluster-info
	  resources:
	  - configmaps
	  verbs:
	  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: RoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: Role
	  name: kubeadm:bootstrap-signer-clusterinfo
	subjects:
	- kind: User
	  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  MasterConfiguration: |
	    api:
	      advertiseAddress: 192.168.200.101
	      bindPort: 6443
	    apiServerCertSANs: []
	    apiServerExtraArgs: null
	    authorizationModes:
	    - Node
	    - RBAC
	    certificatesDir: /etc/kubernetes/pki
	    cloudProvider: ""
	    controllerManagerExtraArgs: null
	    etcd:
	      caFile: ""
	      certFile: ""
	      dataDir: /var/lib/etcd
	      endpoints: []
	      extraArgs: null
	      image: ""
	      keyFile: ""
	    featureFlags: null
	    imageRepository: gcr.io/google_containers
	    kubernetesVersion: v1.7.4
	    networking:
	      dnsDomain: cluster.local
	      podSubnet: ""
	      serviceSubnet: 10.96.0.0/12
	    nodeName: thegopher
	    schedulerExtraArgs: null
	    token: 96efd6.98bbb2f4603c026b
	    tokenTTL: 86400000000000
	    unifiedControlPlaneImage: ""
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: kubeadm-config
	  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-dns
	  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: Deployment
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	  name: kube-dns
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-dns
	  strategy:
	    rollingUpdate:
	      maxSurge: 10%
	      maxUnavailable: 0
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-dns
	    spec:
	      affinity:
	        nodeAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	            nodeSelectorTerms:
	            - matchExpressions:
	              - key: beta.kubernetes.io/arch
	                operator: In
	                values:
	                - amd64
	      containers:
	      - args:
	        - --domain=cluster.local.
	        - --dns-port=10053
	        - --config-dir=/kube-dns-config
	        - --v=2
	        env:
	        - name: PROMETHEUS_PORT
	          value: "10055"
	        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/kubedns
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: kubedns
	        ports:
	        - containerPort: 10053
	          name: dns-local
	          protocol: UDP
	        - containerPort: 10053
	          name: dns-tcp-local
	          protocol: TCP
	        - containerPort: 10055
	          name: metrics
	          protocol: TCP
	        readinessProbe:
	          httpGet:
	            path: /readiness
	            port: 8081
	            scheme: HTTP
	          initialDelaySeconds: 3
	          timeoutSeconds: 5
	        resources:
	          limits:
	            memory: 170Mi
	          requests:
	            cpu: 100m
	            memory: 70Mi
	        volumeMounts:
	        - mountPath: /kube-dns-config
	          name: kube-dns-config
	      - args:
	        - -v=2
	        - -logtostderr
	        - -configDir=/etc/k8s/dns/dnsmasq-nanny
	        - -restartDnsmasq=true
	        - --
	        - -k
	        - --cache-size=1000
	        - --log-facility=-
	        - --server=/cluster.local/127.0.0.1#10053
	        - --server=/in-addr.arpa/127.0.0.1#10053
	        - --server=/ip6.arpa/127.0.0.1#10053
	        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/dnsmasq
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: dnsmasq
	        ports:
	        - containerPort: 53
	          name: dns
	          protocol: UDP
	        - containerPort: 53
	          name: dns-tcp
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 150m
	            memory: 20Mi
	        volumeMounts:
	        - mountPath: /etc/k8s/dns/dnsmasq-nanny
	          name: kube-dns-config
	      - args:
	        - --v=2
	        - --logtostderr
	        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
	        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
	        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /metrics
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: sidecar
	        ports:
	        - containerPort: 10054
	          name: metrics
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 10m
	            memory: 20Mi
	      dnsPolicy: Default
	      serviceAccountName: kube-dns
	      tolerations:
	      - key: CriticalAddonsOnly
	        operator: Exists
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      volumes:
	      - configMap:
	          name: kube-dns
	          optional: true
	        name: kube-dns-config
	status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: Service
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	    kubernetes.io/cluster-service: "true"
	    kubernetes.io/name: KubeDNS
	  name: kube-dns
	  namespace: kube-system
	  resourceVersion: "0"
	spec:
	  clusterIP: 10.96.0.10
	  ports:
	  - name: dns
	    port: 53
	    protocol: UDP
	    targetPort: 53
	  - name: dns-tcp
	    port: 53
	    protocol: TCP
	    targetPort: 53
	  selector:
	    k8s-app: kube-dns
	status:
	  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig.conf: |
	    apiVersion: v1
	    kind: Config
	    clusters:
	    - cluster:
	        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
	        server: https://192.168.200.101:6443
	      name: default
	    contexts:
	    - context:
	        cluster: default
	        namespace: default
	        user: default
	      name: default
	    current-context: default
	    users:
	    - name: default
	      user:
	        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  labels:
	    app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: DaemonSet
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-proxy
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-proxy
	    spec:
	      containers:
	      - command:
	        - /usr/local/bin/kube-proxy
	        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
	        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
	        imagePullPolicy: IfNotPresent
	        name: kube-proxy
	        resources: {}
	        securityContext:
	          privileged: true
	        volumeMounts:
	        - mountPath: /var/lib/kube-proxy
	          name: kube-proxy
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	      hostNetwork: true
	      serviceAccountName: kube-proxy
	      tolerations:
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      - effect: NoSchedule
	        key: node.cloudprovider.kubernetes.io/uninitialized
	        value: "true"
	      volumes:
	      - configMap:
	          name: kube-proxy
	        name: kube-proxy
	      - hostPath:
	          path: /run/xtables.lock
	        name: xtables-lock
	  updateStrategy:
	    type: RollingUpdate
	status:
	  currentNumberScheduled: 0
	  desiredNumberScheduled: 0
	  numberMisscheduled: 0
	  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-proxier
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-proxier
	subjects:
	- kind: ServiceAccount
	  name: kube-proxy
	  namespace: kube-system
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1

```

**Release note**:

```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Lucas Käldström
2c71814344 kubeadm: Fully implement 'kubeadm init --dry-run' 2017-08-25 20:31:14 +03:00
Di Xu
01e4b960d8 update kubeadm to use hostpath type 2017-08-24 21:11:52 +08:00
Kubernetes Submit Queue
cb1220c114 Merge pull request #50963 from luxas/kubeadm_add_back_component_label2
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)

kubeadm: Add back labels for the Static Pod control plane (attempt 2)

**What this PR does / why we need it**:

Exactly the same PR as https://github.com/kubernetes/kubernetes/pull/50174, but that PR was apparently lost in a rebase/mis-merge or something, so resending this one.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
2017-08-22 05:31:10 -07:00
Kubernetes Submit Queue
b49a179ea4 Merge pull request #50766 from luxas/kubeadm_selfhosting_race_condition
Automatic merge from submit-queue (batch tested with PRs 46458, 50934, 50766, 50970, 47698)

kubeadm: Make the self-hosting with certificates in Secrets mode work again

**What this PR does / why we need it**:

This PR:
 - makes the self-hosting with certificates in Secrets mode work
 - makes the wait functions support timeouts (see the sketch after this list)
 - fixes a race condition where the kubelet may be slow to remove the Static Pod
 - cleans up some of the self-hosting logic
 - makes self-hosting-with-secrets respect the feature flag
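
A sketch of a timeout-able wait built on `wait.PollImmediate` from API machinery (the real kubeadm waiter wraps API calls, but the timeout mechanics look like this):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPodGone polls until the condition reports the static pod is gone,
// giving up after the supplied timeout. Hedged sketch, not the kubeadm waiter.
func waitForPodGone(timeout time.Duration, podGone func() (bool, error)) error {
	if err := wait.PollImmediate(500*time.Millisecond, timeout, podGone); err != nil {
		return fmt.Errorf("static pod was not removed within %v: %v", timeout, err)
	}
	return nil
}

func main() {
	start := time.Now()
	err := waitForPodGone(2*time.Second, func() (bool, error) {
		// Pretend the kubelet removes the pod after one second.
		return time.Since(start) > time.Second, nil
	})
	fmt.Println("result:", err)
}
```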

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes: https://github.com/kubernetes/kubeadm/issues/405

**Special notes for your reviewer**:

This is a work in progress. I'll add unit tests, rebase upon https://github.com/kubernetes/kubernetes/pull/50762 and maybe split out some of the functionality here into a separate PR

**Release note**:

```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
2017-08-21 18:11:22 -07:00
Kubernetes Submit Queue
f40d07480f Merge pull request #49119 from kad/n-addons-repo
Automatic merge from submit-queue (batch tested with PRs 50693, 50831, 47506, 49119, 50871)

kubeadm: Implement support for using images from CI builds

**What this PR does / why we need it**: Implements support for CI images in kubeadm

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes/kubeadm#337

**Special notes for your reviewer**:

**Release note**:
```release-note
- kubeadm now supports "ci/latest-1.8" or "ci-cross/latest-1.8" and similar labels.
```
2017-08-21 14:30:03 -07:00
Lucas Käldström
4a693337b6 kubeadm: Add back labels for the Static Pod control plane (attempt 2) 2017-08-19 19:59:59 +03:00
Lucas Käldström
d2e08fd739 autogenerated bazel 2017-08-19 00:46:38 +03:00
Lucas Käldström
21eeb5c925 kubeadm: Adding unit tests for newly added funcs 2017-08-19 00:45:49 +03:00
Lucas Käldström
d1acdf1627 kubeadm: Make the self-hosting with certificates in Secrets mode work again 2017-08-19 00:45:16 +03:00
Alexander Kanevskiy
2312920cbc Implemented support for using images from CI builds
Implements kubernetes/kubeadm#337
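
A sketch of the prefix handling; the CI image repository name here is an assumption for illustration, not necessarily what kubeadm uses:

```go
package images

import "strings"

// ciImageRepository is the repository assumed here for CI-built control
// plane images; the value kubeadm actually uses may differ.
const ciImageRepository = "gcr.io/kubernetes-ci-images"

// getImageRepository picks the image repository based on whether the
// requested version came from the CI area. Illustrative sketch.
func getImageRepository(defaultRepo, version string) (repo, tag string) {
	if strings.HasPrefix(version, "ci/") {
		return ciImageRepository, strings.TrimPrefix(version, "ci/")
	}
	return defaultRepo, version
}
```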
2017-08-18 17:02:18 +03:00
Lucas Käldström
0bf84aa182 kubeadm: Adds dry-run support for kubeadm using the '--dry-run' option 2017-08-18 16:05:12 +03:00
Daneyon Hansen
3390bc3cbc Updates Kubeadm Master Endpoint for IPv6
Previously, kubeadm would use <ip>:<port> to construct a master
endpoint. This works fine for IPv4 addresses, but not for IPv6,
which requires the IP to be enclosed in brackets when joined
to a port with a colon.

This patch updates kubeadm to support wrapping a v6 address with
[] to form the master endpoint URL. Since this functionality is
needed in multiple areas, a dedicated util function was created.
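
The standard library's `net.JoinHostPort` already adds the brackets; a sketch of such a util function (hypothetical name):

```go
package main

import (
	"fmt"
	"net"
)

// masterEndpoint joins an IP and port into a host:port endpoint,
// bracketing IPv6 addresses as required. Illustrative helper name.
func masterEndpoint(ip, port string) string {
	// net.JoinHostPort adds the [] brackets for IPv6 automatically.
	return "https://" + net.JoinHostPort(ip, port)
}

func main() {
	fmt.Println(masterEndpoint("192.168.200.101", "6443")) // https://192.168.200.101:6443
	fmt.Println(masterEndpoint("2001:db8::1", "6443"))     // https://[2001:db8::1]:6443
}
```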

Fixes: https://github.com/kubernetes/kubernetes/issues/48227
2017-08-17 10:57:54 -07:00
Lucas Käldström
c08091699c kubeadm: Fix self-hosting race condition 2017-08-17 16:07:04 +03:00
Kubernetes Submit Queue
d72fc055ee Merge pull request #50626 from luxas/kubeadm_separate_apiclient
Automatic merge from submit-queue (batch tested with PRs 50626, 50683, 50679, 50684, 50460)

kubeadm: Centralize client create-or-update logic in one package

**What this PR does / why we need it**:

Moves all create-or-update logic into one package instead of duplicating it throughout the codebase.
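
The create-or-update pattern being centralized looks roughly like this (a sketch against the 2017-era client-go API):

```go
package apiclient

import (
	"fmt"

	"k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	clientset "k8s.io/client-go/kubernetes"
)

// CreateOrUpdateConfigMap tries to create the ConfigMap and falls back to
// an update if it already exists. Sketch of the idea; the real package
// covers more resource types.
func CreateOrUpdateConfigMap(client clientset.Interface, cm *v1.ConfigMap) error {
	if _, err := client.CoreV1().ConfigMaps(cm.Namespace).Create(cm); err != nil {
		if !apierrors.IsAlreadyExists(err) {
			return fmt.Errorf("unable to create ConfigMap: %v", err)
		}
		if _, err := client.CoreV1().ConfigMaps(cm.Namespace).Update(cm); err != nil {
			return fmt.Errorf("unable to update ConfigMap: %v", err)
		}
	}
	return nil
}
```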

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

This PR depends on https://github.com/kubernetes/kubernetes/pull/50214.
Note that commit 2 is the only one that needs reviewing.
This PR is required for https://github.com/kubernetes/kubernetes/pull/48899 (kubeadm upgrade)

**Release note**:

```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @mattmoyer @fabriziopandini
2017-08-15 10:28:21 -07:00
Lucas Käldström
8c5c3ca197 autogenerated bazel 2017-08-15 15:52:49 +03:00
Lucas Käldström
d725fe2c2c kubeadm: Centralize client create-or-update logic in one package 2017-08-15 15:52:37 +03:00
fabriziopandini
8ab27c1fbe Autogenerated bazel etc. 2017-08-14 16:31:53 +02:00
fabriziopandini
4db581c8ee Move all staticpod utils to separate package 2017-08-14 16:30:31 +02:00