Jordan Liggitt
d4591ea324
Revert "Stop using API server's --insecure-port"
...
This reverts commit 5b64a98689.
2019-03-16 16:24:49 -04:00
ducnv
e11916da8e
kubeadm cleanup: master -> control-plane (cont.4)
2019-02-25 08:29:19 +07:00
vanduc95
ae1ec8826a
kubeadm cleanup: master -> control-plane (cont.2)
2019-02-21 10:02:24 +07:00
Rafael Fernández López
30dc43ff86
kubeadm: set priority class name to system-cluster-critical for all master components
...
Remove the deprecated `scheduler.alpha.kubernetes.io/critical-pod` pod annotation and use
the `priorityClassName` first class attribute instead, setting all master components to
`system-cluster-critical`.
2019-02-12 17:50:36 +01:00
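For reference, a minimal sketch of the kind of change described above, using the k8s.io/api core/v1 and apimachinery meta/v1 types; the helper name and image are illustrative, not the actual kubeadm code:
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildControlPlanePod (hypothetical) shows the critical-pod annotation being
// replaced by the first-class PriorityClassName field on the Pod spec.
func buildControlPlanePod(component string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      component,
			Namespace: "kube-system",
			// Previously: Annotations: map[string]string{
			//     "scheduler.alpha.kubernetes.io/critical-pod": "",
			// },
		},
		Spec: v1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []v1.Container{
				{Name: component, Image: "k8s.gcr.io/" + component},
			},
		},
	}
}

func main() {
	pod := buildControlPlanePod("kube-scheduler")
	fmt.Println(pod.Name, pod.Spec.PriorityClassName)
}
```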
Jordan Liggitt
89b0b0b84b
Clean up initializer-related comments, test data
2019-01-25 12:37:45 -05:00
RA489
5b64a98689
Stop using API server's --insecure-port
2019-01-22 17:31:39 +05:30
Ed Bartosh
d91861e883
kubeadm: add front-proxy CA certificate to selfhosting controller-manager
...
Selfhosting pivoting fails when using --store-certs-in-secrets
as controller-manager fails to start because of missing front-proxy CA
certificate:
unable to load client CA file: unable to load client CA file: open
/etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory
Added required certificate to fix this.
This should fix kubernetes/kubeadm#1281
2019-01-09 17:01:18 +02:00
Ed Bartosh
8148d95ac9
kubeadm selfhosting: fix pod spec mutation for controller-manager
...
Modified the command line options --authentication-kubeconfig and
--authorization-kubeconfig to point to the correct location
of controller-manager.conf.
This should fix this controller-manager crash:
failed to get delegated authentication kubeconfig: failed to get
delegated authentication kubeconfig: stat
/etc/kubernetes/controller-manager.conf: no such file or directory
Related issue: kubernetes/kubeadm#1281
2019-01-07 15:20:02 +02:00
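Purely as an illustration of the kind of mutation described (not the actual kubeadm mutator, and the target path is an assumption), rewriting kubeconfig-path flags on a pod command could look like:
```go
package main

import (
	"fmt"
	"strings"
)

// setFlagValue rewrites every "--<flag>=..." entry in a pod command so it
// points at the given value. Illustrative sketch only.
func setFlagValue(command []string, flag, value string) []string {
	out := make([]string, 0, len(command))
	for _, arg := range command {
		if strings.HasPrefix(arg, "--"+flag+"=") {
			arg = fmt.Sprintf("--%s=%s", flag, value)
		}
		out = append(out, arg)
	}
	return out
}

func main() {
	cmd := []string{
		"kube-controller-manager",
		"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf",
		"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf",
	}
	// Repoint both flags at the path the self-hosted pod actually mounts
	// (the path below is hypothetical).
	for _, f := range []string{"authentication-kubeconfig", "authorization-kubeconfig"} {
		cmd = setFlagValue(cmd, f, "/etc/kubernetes/kubeconfig/controller-manager.conf")
	}
	fmt.Println(strings.Join(cmd, " "))
}
```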
Ed Bartosh
442098bdec
kubeadm: use t.Run in selfhosting and update phases
...
Used the T.Run API for kubeadm tests in the app/phases/selfhosting and
app/phases/update directories.
This should improve test output and make it more visible
which test does what.
2019-01-03 19:23:54 +02:00
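The pattern adopted here is the standard table-driven subtest; a generic sketch (not the actual kubeadm test code):
```go
package selfhosting_test

import "testing"

func TestControlPlaneComponents(t *testing.T) {
	tests := []struct {
		name      string
		component string
	}{
		{name: "apiserver", component: "kube-apiserver"},
		{name: "controller-manager", component: "kube-controller-manager"},
		{name: "scheduler", component: "kube-scheduler"},
	}
	for _, tt := range tests {
		// t.Run names each case in `go test -v` output, so a failure
		// points at the exact case that broke.
		t.Run(tt.name, func(t *testing.T) {
			if tt.component == "" {
				t.Fatalf("case %q: component must not be empty", tt.name)
			}
		})
	}
}
```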
Ed Bartosh
7b058c4357
kubeadm: add required etcd certs to selfhosting api-server
...
Selfhosting pivoting fails when using --store-certs-in-secrets
as api-server fails to start because of missing etcd/ca and
apiserver-etcd-client certificates:
F1227 16:01:52.237352 1 storage_decorator.go:57] Unable to create storage backend:
config (&{ /registry [https://127.0.0.1:2379]
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/etcd/ca.crt true 0xc000884120 <nil> 5m0s 1m0s}),
err (open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory)
Added required certificates to fix this.
The secret name for the etcd/ca certificate has been converted to conform to RFC 1123 subdomain
naming conventions to prevent this TLS secret creation failure:
unable to create secret: Secret "etcd/ca" is invalid: metadata.name:
Invalid value: "etcd/ca": a DNS-1123 subdomain must consist of lower
case alphanumeric characters, '-' or '.', and must start and end with an
alphanumeric character (e.g. 'example.com', regex used for validation is
'[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Related issue: kubernetes/kubeadm#1281
2019-01-02 13:40:04 +02:00
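A small sketch of the name normalization described above, assuming the apimachinery validation helpers; the conversion rule shown ("/" to "-") is an illustration, not necessarily the exact one kubeadm uses:
```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/validation"
)

// secretNameFor turns a certificate identifier such as "etcd/ca" into a name
// that satisfies the DNS-1123 subdomain rules required for Secret names.
func secretNameFor(certName string) string {
	return strings.ToLower(strings.ReplaceAll(certName, "/", "-"))
}

func main() {
	name := secretNameFor("etcd/ca") // "etcd-ca"
	if errs := validation.IsDNS1123Subdomain(name); len(errs) != 0 {
		fmt.Println("invalid secret name:", errs)
		return
	}
	fmt.Println("valid secret name:", name)
}
```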
dmaiocchi
6148992056
Replace address with bind-address
2018-12-20 22:14:16 +01:00
Davanum Srinivas
954996e231
Move from glog to klog
...
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
2018-11-10 07:50:31 -05:00
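klog, unlike glog, needs its flags registered explicitly; a minimal sketch of the InitFlags pattern mentioned above (using the k8s.io/klog import path of that era):
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// glog registered its flags in an init(); klog leaves that to the caller.
	klog.InitFlags(nil) // register klog flags on flag.CommandLine
	flag.Set("v", "2")
	flag.Parse()

	klog.Info("replacement for glog.Info")
	klog.V(2).Infof("verbose message, emitted because -v=%d", 2)
	klog.Flush()
}
```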
Marek Counts
18dc529d05
Removed feature gates selfhosting, HA and store certs in secrets.
...
Added new alpha command to pivot to self-hosted
Removed selfhosting upgrade ability
Added warning message to self-hosted pivot
Added certs-in-secrets flag to new selfhosting command
2018-11-07 11:44:54 -05:00
yuexiao-wang
c0a9b4d04d
add BUILD
...
Signed-off-by: yuexiao-wang <wang.yuexiao@zte.com.cn>
2018-10-30 16:23:52 +08:00
yuexiao-wang
cc303c8774
[kubeadm/app/]switch to github.com/pkg/errors
...
Signed-off-by: yuexiao-wang <wang.yuexiao@zte.com.cn>
2018-10-30 16:23:24 +08:00
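The switch is to the wrapping helpers in github.com/pkg/errors; a generic before/after sketch (the file path is arbitrary):
```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/pkg/errors"
)

func loadConfig(path string) ([]byte, error) {
	b, err := ioutil.ReadFile(path)
	if err != nil {
		// Before: return nil, fmt.Errorf("failed to read config %s: %v", path, err)
		// After: errors.Wrapf keeps the original error available as a cause.
		return nil, errors.Wrapf(err, "failed to read config %q", path)
	}
	return b, nil
}

func main() {
	if _, err := loadConfig("/nonexistent"); err != nil {
		fmt.Println(err)
		fmt.Println("cause:", errors.Cause(err))
	}
}
```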
Christoph Blecker
97b2992dc1
Update gofmt for go1.11
2018-10-05 12:59:38 -07:00
Lucas Käldström
52f0591ad9
Automated rename from MasterConfiguration to InitConfiguration
2018-07-09 04:55:02 +03:00
Jeff Grafton
23ceebac22
Run hack/update-bazel.sh
2018-06-22 16:22:57 -07:00
Chuck Ha
125f5ac61a
Replace glog.Info{f,ln} with fmt.Print{f,ln}
...
This follows the pattern `kubectl` uses for logging.
There are two remaining glog.Infof calls that cannot be removed easily.
One comes from kubelet validation, which calls features.SetFromMap.
The other comes from test/e2e during kernel validation.
Mostly fixes kubernetes/kubeadm#852
Signed-off-by: Chuck Ha <ha.chuck@gmail.com>
2018-06-04 10:34:31 -04:00
Lucas Käldström
b48f23b786
kubeadm: Move .NodeName and .CRISocket to a common sub-struct
2018-05-29 17:51:39 +03:00
Rostislav M. Georgiev
a4e0659815
kubeadm: Use loadPodSpecFromFile instead of LoadPodFromFile
...
Implement and use loadPodSpecFromFile, which loads and returns a PodSpec object
from a YAML or JSON file. Use this function in the places where LoadPodFromFile
is used to load a Pod object and then return the PodSpec portion of it. This
also removes the dependency on //pkg/volume/util and its dependencies (thus
making kubeadm leaner).
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
2018-05-28 15:25:37 +03:00
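One possible shape of such a helper, sketched with sigs.k8s.io/yaml (which accepts both YAML and JSON); the actual kubeadm implementation may differ:
```go
package main

import (
	"fmt"
	"io/ioutil"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// loadPodSpecFromFile reads a file containing a bare PodSpec in YAML or JSON.
// Illustrative sketch only.
func loadPodSpecFromFile(path string) (*v1.PodSpec, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read file %q: %v", path, err)
	}
	spec := &v1.PodSpec{}
	if err := yaml.Unmarshal(data, spec); err != nil {
		return nil, fmt.Errorf("failed to parse pod spec from %q: %v", path, err)
	}
	return spec, nil
}

func main() {
	// Hypothetical path for the example.
	spec, err := loadPodSpecFromFile("/etc/kubernetes/extra-podspec.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containers:", len(spec.Containers))
}
```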
Malhar Vora
9c4706f519
Implement verbosity feature for kubeadm init
...
Fixes #340
Adds functionality to see logs with various levels of verbosity.
Currently there are two verbosity levels: 0 and 1.
2018-03-25 09:43:31 -07:00
Jeff Grafton
ef56a8d6bb
Autogenerated: hack/update-bazel.sh
2018-02-16 13:43:01 -08:00
Kubernetes Submit Queue
36f902d5d0
Merge pull request #59344 from cheyang/fix_kubeadm_typo
...
Automatic merge from submit-queue (batch tested with PRs 59344, 59595, 59598). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Fix kubeadm typo
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
2018-02-08 18:06:32 -08:00
Tim Hockin
3586986416
Switch to k8s.gcr.io vanity domain
...
This is the 2nd attempt. The previous was reverted while we figured out
the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest. To publish
an image, push to k8s-staging.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of google-owned projects (working on it), then we just
do a one-time sync, and change the google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
2018-02-07 21:14:19 -08:00
cheyang
4ca3903eab
fix typo in kubeadm
...
Signed-off-by: cheyang <cheyang@163.com>
2018-02-06 13:48:18 +08:00
andrewsykim
49e01b05e7
kubeadm: set kube-apiserver advertise address using downward API
2017-12-26 14:41:07 -05:00
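A hedged sketch of injecting a node address into a static pod via the downward API; the field path and variable name are assumptions for illustration, not necessarily what kubeadm sets:
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// hostIPEnvVar builds an env var whose value the kubelet fills in at runtime
// through the downward API; the container command can then reference it as
// $(HOST_IP), e.g. --advertise-address=$(HOST_IP).
func hostIPEnvVar(name string) v1.EnvVar {
	return v1.EnvVar{
		Name: name,
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "status.hostIP",
			},
		},
	}
}

func main() {
	env := hostIPEnvVar("HOST_IP")
	fmt.Printf("env %s from fieldPath %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}
```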
Jeff Grafton
efee0704c6
Autogenerate BUILD files
2017-12-23 13:12:11 -08:00
Tim Hockin
e9dd8a68f6
Revert k8s.gcr.io vanity domain
...
This reverts commit eba5b6092a.
Fixes https://github.com/kubernetes/kubernetes/issues/57526
2017-12-22 14:36:16 -08:00
Kubernetes Submit Queue
09b5e8f411
Merge pull request #57207 from cimomo/kubeadm-fixes
...
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Improve error messages and comments in KubeAdm.
**What this PR does / why we need it**:
Improve error messages and comments in KubeAdm.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
2017-12-22 06:56:13 -08:00
xiangpengzhao
88f609fe4d
Auto generate BUILD files.
2017-12-19 11:44:19 +08:00
xiangpengzhao
7d919fbd0c
Use apps/v1 API in kubeadm.
2017-12-19 11:44:19 +08:00
Tim Hockin
eba5b6092a
Use k8s.gcr.io vanity domain for container images
2017-12-18 09:18:34 -08:00
Kai Chen
67cf959a1d
Improve error messages and comments in KubeAdm.
2017-12-14 11:11:58 -08:00
Lucas Käldström
f7c494fe5b
kubeadm: Fix a couple of upgrade/downgrade-related bugs
2017-12-02 00:27:07 +02:00
Lucas Käldström
2a047211f4
kubeadm: Fix a small bug in the self-hosting code
2017-11-19 14:45:16 +02:00
xiangpengzhao
0faa96e7ff
Use volumeutil.LoadPodFromFile for pod spec
2017-11-09 18:57:24 +08:00
Lars Lehtonen
1884055329
cmd/kubeadm/app/util/apiclient: fix swallowed errors
...
cmd/kubeadm/app/phases/upgrade: fix swallowed error
cmd/kubeadm/app/phases/selfhosting: fix swallowed errors
cmd/kubeadm/app/phases/certs: fix swallowed errors
cmd/kubeadm/app/cmd: fix swallowed error
cmd/kubeadm/app/cmd: descriptive error returns
cmd/kubeadm/app/cmd: govet fixes
cmd/kubeadm: error formatting
2017-10-25 18:10:21 -07:00
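A generic sketch of the "swallowed error" anti-pattern these commits remove (the function and path are made up for the example):
```go
package main

import (
	"fmt"
	"io/ioutil"
)

// readManifest shows the fix: propagate the error with context instead of
// discarding it and returning success.
func readManifest(path string) ([]byte, error) {
	// Swallowed (before):
	//   data, _ := ioutil.ReadFile(path)
	//   return data, nil
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read manifest %q: %v", path, err)
	}
	return data, nil
}

func main() {
	if _, err := readManifest("/nonexistent"); err != nil {
		fmt.Println(err)
	}
}
```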
Dr. Stefan Schimanski
cad0364e73
Update bazel
2017-10-18 17:24:04 +02:00
Dr. Stefan Schimanski
7773a30f67
pkg/api/legacyscheme: fixup imports
2017-10-18 17:23:55 +02:00
Jeff Grafton
aee5f457db
update BUILD files
2017-10-15 18:18:13 -07:00
Feng Min
3add91fd3c
Kubeadm: Change the marshal code to use ApiMachinery code.
2017-09-28 13:36:36 -07:00
Serguei Bezverkhi
834a02e673
Switching to apps/v1beta2
...
Closes https://github.com/kubernetes/kubeadm/issues/390
2017-09-15 18:48:17 -04:00
Lucas Käldström
92c5997b8e
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
2017-09-03 18:02:46 +03:00
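As an illustration of what such a readiness check amounts to (not the actual kubeadm code; the kubelet healthz endpoint and timings shown are assumptions), polling until the kubelet reports healthy:
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthy polls the given healthz URL until it returns 200 OK
// or the timeout expires. Illustrative sketch only.
func waitForKubeletHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %v", timeout)
}

func main() {
	if err := waitForKubeletHealthy("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err)
	}
}
```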
Lucas Käldström
94983530d4
Add unit tests for kubeadm upgrade
2017-09-03 12:26:10 +03:00
Lucas Käldström
c237ff5bc0
Fully implement the kubeadm upgrade functionality
2017-09-03 12:25:47 +03:00
Lucas Käldström
b371acb60b
kubeadm: Rename FeatureFlags to FeatureGates
2017-08-27 12:52:42 +03:00
Kubernetes Submit Queue
39581ac9bf
Merge pull request #51122 from luxas/kubeadm_impl_dryrun
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
kubeadm: Fully implement --dry-run
**What this PR does / why we need it**:
Finishes the work begun in #50631
- Implements dry-run functionality for phases certs/kubeconfig/controlplane/etcd as well by making the outDir configurable
- Prints the controlplane manifests to stdout, but not the certs/kubeconfig files due to the sensitive nature. However, kubeadm outputs the directory to go and look in for those.
- Fixes a small yaml marshal error where `apiVersion` and `kind` weren't printed earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/389
**Special notes for your reviewer**:
Full `kubeadm init --dry-run` output:
```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --allow-privileged=true
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
- --experimental-bootstrap-token-auth=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --secure-port=6443
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --insecure-port=0
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-allowed-names=front-proxy-client
- --advertise-address=192.168.200.101
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --authorization-mode=Node,RBAC
- --etcd-servers=http://127.0.0.1:2379
image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: ca-certs-etc-pki
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/pki
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
name: ca-certs
- hostPath:
path: /etc/pki
name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --address=127.0.0.1
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --leader-elect=true
- --use-service-account-credentials=true
- --controllers=*,bootstrapsigner,tokencleaner
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /etc/pki
name: ca-certs-etc-pki
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/pki
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
name: ca-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
name: kubeconfig
- hostPath:
path: /etc/pki
name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --leader-elect=true
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --address=127.0.0.1
image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
name: kubeconfig
status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
token-id: OTZlZmQ2
token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
usage-bootstrap-authentication: dHJ1ZQ==
usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
creationTimestamp: null
name: bootstrap-token-96efd6
type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- kind: Group
name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
creationTimestamp: null
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups:
- certificates.k8s.io
resources:
- certificatesigningrequests/nodeclient
verbs:
- create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.200.101:6443
name: ""
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
kind: ConfigMap
metadata:
creationTimestamp: null
name: cluster-info
namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
creationTimestamp: null
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
rules:
- apiGroups:
- ""
resourceNames:
- cluster-info
resources:
- configmaps
verbs:
- get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
creationTimestamp: null
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- kind: User
name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
MasterConfiguration: |
api:
advertiseAddress: 192.168.200.101
bindPort: 6443
apiServerCertSANs: []
apiServerExtraArgs: null
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: ""
controllerManagerExtraArgs: null
etcd:
caFile: ""
certFile: ""
dataDir: /var/lib/etcd
endpoints: []
extraArgs: null
image: ""
keyFile: ""
featureFlags: null
imageRepository: gcr.io/google_containers
kubernetesVersion: v1.7.4
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
nodeName: thegopher
schedulerExtraArgs: null
token: 96efd6.98bbb2f4603c026b
tokenTTL: 86400000000000
unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
creationTimestamp: null
name: kubeadm-config
namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
name: kube-dns
namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
name: kube-dns
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
containers:
- args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
volumeMounts:
- mountPath: /kube-dns-config
name: kube-dns-config
- args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- mountPath: /etc/k8s/dns/dnsmasq-nanny
name: kube-dns-config
- args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: sidecar
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: Default
serviceAccountName: kube-dns
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
volumes:
- configMap:
name: kube-dns
optional: true
name: kube-dns-config
status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: KubeDNS
name: kube-dns
namespace: kube-system
resourceVersion: "0"
spec:
clusterIP: 10.96.0.10
ports:
- name: dns
port: 53
protocol: UDP
targetPort: 53
- name: dns-tcp
port: 53
protocol: TCP
targetPort: 53
selector:
k8s-app: kube-dns
status:
loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
name: kube-proxy
namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
kubeconfig.conf: |
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://192.168.200.101:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
- name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
app: kube-proxy
name: kube-proxy
namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
name: kube-proxy
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-proxy
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
spec:
containers:
- command:
- /usr/local/bin/kube-proxy
- --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
imagePullPolicy: IfNotPresent
name: kube-proxy
resources: {}
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/kube-proxy
name: kube-proxy
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
volumes:
- configMap:
name: kube-proxy
name: kube-proxy
- hostPath:
path: /run/xtables.lock
name: xtables-lock
updateStrategy:
type: RollingUpdate
status:
currentNumberScheduled: 0
desiredNumberScheduled: 0
numberMisscheduled: 0
numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-proxier
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier
subjects:
- kind: ServiceAccount
name: kube-proxy
namespace: kube-system
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1
```
**Release note**:
```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Lucas Käldström
2c71814344
kubeadm: Fully implement 'kubeadm init --dry-run'
2017-08-25 20:31:14 +03:00
Di Xu
01e4b960d8
update kubeadm to use hostpath type
2017-08-24 21:11:52 +08:00
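A short sketch of the hostPath "type" field this commit starts using; the volume name, path, and chosen HostPathType are illustrative:
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// newHostPathVolume builds a hostPath volume with an explicit type, so the
// kubelet validates (or creates) the path instead of accepting anything.
func newHostPathVolume(name, path string, pathType v1.HostPathType) v1.Volume {
	return v1.Volume{
		Name: name,
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				Path: path,
				Type: &pathType,
			},
		},
	}
}

func main() {
	vol := newHostPathVolume("k8s-certs", "/etc/kubernetes/pki", v1.HostPathDirectoryOrCreate)
	fmt.Println(vol.Name, vol.VolumeSource.HostPath.Path, *vol.VolumeSource.HostPath.Type)
}
```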