andrewsykim
49e01b05e7
kubeadm: set kube-apiserver advertise address using downward API
2017-12-26 14:41:07 -05:00
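For context on this commit subject: the downward API lets a manifest consume the node's IP at runtime instead of hard-coding it when the file is written. A minimal sketch of that pattern using the `k8s.io/api` types (the package, function, and env var name are illustrative, not the actual kubeadm code):

```go
package manifests

import v1 "k8s.io/api/core/v1"

// hostIPEnvVar builds an env var whose value the kubelet fills in from the
// pod's status.hostIP field via the downward API, so the advertise address
// does not need to be known when the static pod manifest is written.
func hostIPEnvVar(name string) v1.EnvVar {
	return v1.EnvVar{
		Name: name,
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}
}
```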
Jeff Grafton
efee0704c6
Autogenerate BUILD files
2017-12-23 13:12:11 -08:00
Tim Hockin
e9dd8a68f6
Revert k8s.gcr.io vanity domain
This reverts commit eba5b6092a.
Fixes https://github.com/kubernetes/kubernetes/issues/57526
2017-12-22 14:36:16 -08:00
Kubernetes Submit Queue
09b5e8f411
Merge pull request #57207 from cimomo/kubeadm-fixes
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Improve error messages and comments in KubeAdm.
**What this PR does / why we need it**:
Improve error messages and comments in KubeAdm.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
2017-12-22 06:56:13 -08:00
xiangpengzhao
88f609fe4d
Auto generate BUILD files.
2017-12-19 11:44:19 +08:00
xiangpengzhao
7d919fbd0c
Use apps/v1 API in kubeadm.
2017-12-19 11:44:19 +08:00
Tim Hockin
eba5b6092a
Use k8s.gcr.io vanity domain for container images
2017-12-18 09:18:34 -08:00
Kai Chen
67cf959a1d
Improve error messages and comments in KubeAdm.
2017-12-14 11:11:58 -08:00
Lucas Käldström
f7c494fe5b
kubeadm: Fix a couple of upgrade/downgrade-related bugs
2017-12-02 00:27:07 +02:00
Lucas Käldström
2a047211f4
kubeadm: Fix a small bug in the self-hosting code
2017-11-19 14:45:16 +02:00
xiangpengzhao
0faa96e7ff
Use volumeutil.LoadPodFromFile for pod spec
2017-11-09 18:57:24 +08:00
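For context on the helper this commit switches to: loading a pod spec from a manifest file boils down to reading the file and decoding it into a `v1.Pod`. A minimal sketch of that shape, assuming `sigs.k8s.io/yaml` for decoding (`loadPodFromFile` is a hypothetical stand-in, not the real `volumeutil` implementation):

```go
package podutil

import (
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// loadPodFromFile reads a manifest from disk and decodes it into a v1.Pod,
// instead of hand-rolling the parsing at each call site.
func loadPodFromFile(path string) (*v1.Pod, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read pod manifest %q: %v", path, err)
	}
	pod := &v1.Pod{}
	if err := yaml.Unmarshal(data, pod); err != nil {
		return nil, fmt.Errorf("failed to decode pod manifest %q: %v", path, err)
	}
	return pod, nil
}
```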
Lars Lehtonen
1884055329
cmd/kubeadm/app/util/apiclient: fix swallowed errors
cmd/kubeadm/app/phases/upgrade: fix swallowed error
cmd/kubeadm/app/phases/selfhosting: fix swallowed errors
cmd/kubeadm/app/phases/certs: fix swallowed errors
cmd/kubeadm/app/cmd: fix swallowed error
cmd/kubeadm/app/cmd: descriptive error returns
cmd/kubeadm/app/cmd: govet fixes
cmd/kubeadm: error formatting
2017-10-25 18:10:21 -07:00
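A "swallowed" error is one that is assigned or discarded but never checked, so failures pass silently. A minimal generic illustration of the before/after shape of these fixes (not kubeadm's actual code):

```go
package main

import "fmt"

func doWork() error { return fmt.Errorf("boom") }

// Before: the error is assigned and then ignored ("swallowed"), so the
// caller proceeds as if the operation succeeded.
func swallowed() {
	err := doWork()
	_ = err // bug: never checked
}

// After: the error is checked and propagated with context, which is the
// pattern these commits apply.
func propagated() error {
	if err := doWork(); err != nil {
		return fmt.Errorf("doWork failed: %v", err)
	}
	return nil
}

func main() {
	swallowed()
	fmt.Println(propagated())
}
```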
Dr. Stefan Schimanski
cad0364e73
Update bazel
2017-10-18 17:24:04 +02:00
Dr. Stefan Schimanski
7773a30f67
pkg/api/legacyscheme: fixup imports
2017-10-18 17:23:55 +02:00
Jeff Grafton
aee5f457db
update BUILD files
2017-10-15 18:18:13 -07:00
Feng Min
3add91fd3c
Kubeadm: Change the marshal code to use ApiMachinery code.
2017-09-28 13:36:36 -07:00
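Marshalling through the apimachinery codec machinery, rather than a plain YAML marshal, is what keeps `apiVersion` and `kind` populated from the scheme. A minimal sketch of that approach (the function name is illustrative; this is not the exact kubeadm helper):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer/json"
	"k8s.io/client-go/kubernetes/scheme"
)

// marshalToYAML encodes an API object with the apimachinery serializer, so
// apiVersion and kind are filled in from the scheme rather than left empty.
func marshalToYAML(obj runtime.Object) ([]byte, error) {
	s := json.NewYAMLSerializer(json.DefaultMetaFactory, scheme.Scheme, scheme.Scheme)
	encoder := scheme.Codecs.EncoderForVersion(s, v1.SchemeGroupVersion)
	return runtime.Encode(encoder, obj)
}

func main() {
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "demo"}}
	out, err := marshalToYAML(pod)
	fmt.Println(string(out), err)
}
```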
Serguei Bezverkhi
834a02e673
Switching to apps/v1beta2
Closes https://github.com/kubernetes/kubeadm/issues/390
2017-09-15 18:48:17 -04:00
Lucas Käldström
92c5997b8e
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
2017-09-03 18:02:46 +03:00
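Detecting kubelet readiness amounts to polling its healthz endpoint with a timeout instead of assuming it comes up. A minimal sketch, assuming the kubelet's read-only port 10255 that was the default at the time (the function is illustrative, not the actual kubeadm code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForKubeletHealthz polls the kubelet healthz endpoint and returns an
// error instead of hanging forever if the kubelet never becomes healthy.
func waitForKubeletHealthz(url string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			return false, nil // not up yet; keep polling
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}

func main() {
	// Port 10255 is an assumption (the kubelet read-only port of that era).
	if err := waitForKubeletHealthz("http://localhost:10255/healthz", 2*time.Minute); err != nil {
		fmt.Println("kubelet is unhealthy:", err)
	}
}
```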
Lucas Käldström
94983530d4
Add unit tests for kubeadm upgrade
2017-09-03 12:26:10 +03:00
Lucas Käldström
c237ff5bc0
Fully implement the kubeadm upgrade functionality
2017-09-03 12:25:47 +03:00
Lucas Käldström
b371acb60b
kubeadm: Rename FeatureFlags to FeatureGates
2017-08-27 12:52:42 +03:00
Kubernetes Submit Queue
39581ac9bf
Merge pull request #51122 from luxas/kubeadm_impl_dryrun
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
kubeadm: Fully implement --dry-run
**What this PR does / why we need it**:
Finishes the work begun in #50631
- Implements dry-run functionality for the certs/kubeconfig/controlplane/etcd phases as well by making the outDir configurable
- Prints the control plane manifests to stdout, but not the certs/kubeconfig files due to their sensitive nature. However, kubeadm outputs the directory to inspect for those files.
- Fixes a small YAML marshal error where `apiVersion` and `kind` weren't printed earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/389
**Special notes for your reviewer**:
Full `kubeadm init --dry-run` output:
```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --experimental-bootstrap-token-auth=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --secure-port=6443
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --insecure-port=0
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-allowed-names=front-proxy-client
    - --advertise-address=192.168.200.101
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --leader-elect=true
    - --use-service-account-credentials=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
    name: kubeconfig
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --leader-elect=true
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --address=127.0.0.1
    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
  token-id: OTZlZmQ2
  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: null
  name: bootstrap-token-96efd6
type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/nodeclient
  verbs:
  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://192.168.200.101:6443
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cluster-info
  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
rules:
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- kind: User
  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.200.101
      bindPort: 6443
    apiServerCertSANs: []
    apiServerExtraArgs: null
    authorizationModes:
    - Node
    - RBAC
    certificatesDir: /etc/kubernetes/pki
    cloudProvider: ""
    controllerManagerExtraArgs: null
    etcd:
      caFile: ""
      certFile: ""
      dataDir: /var/lib/etcd
      endpoints: []
      extraArgs: null
      image: ""
      keyFile: ""
    featureFlags: null
    imageRepository: gcr.io/google_containers
    kubernetesVersion: v1.7.4
    networking:
      dnsDomain: cluster.local
      podSubnet: ""
      serviceSubnet: 10.96.0.0/12
    nodeName: thegopher
    schedulerExtraArgs: null
    token: 96efd6.98bbb2f4603c026b
    tokenTTL: 86400000000000
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: kubeadm-config
  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-dns
  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - mountPath: /kube-dns-config
          name: kube-dns-config
      - args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - mountPath: /etc/k8s/dns/dnsmasq-nanny
          name: kube-dns-config
      - args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: sidecar
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
      serviceAccountName: kube-dns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          name: kube-dns
          optional: true
        name: kube-dns-config
status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "0"
spec:
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
status:
  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.200.101:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      serviceAccountName: kube-proxy
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
      volumes:
      - configMap:
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
        name: xtables-lock
  updateStrategy:
    type: RollingUpdate
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-proxier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1
```
**Release note**:
```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Lucas Käldström
2c71814344
kubeadm: Fully implement 'kubeadm init --dry-run'
2017-08-25 20:31:14 +03:00
Di Xu
01e4b960d8
update kubeadm to use hostpath type
2017-08-24 21:11:52 +08:00
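The `hostPath` type field referenced here lets a manifest declare what kind of path it expects (directory, file, socket, ...) so the kubelet can validate it instead of silently mounting whatever is there. A minimal sketch of building such a volume with the `k8s.io/api` types (the helper name is illustrative, not kubeadm's actual code):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// newHostPathVolume builds a hostPath volume that declares the expected
// path type, so the kubelet validates the path before mounting it.
func newHostPathVolume(name, path string, pathType v1.HostPathType) v1.Volume {
	return v1.Volume{
		Name: name,
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				Path: path,
				Type: &pathType,
			},
		},
	}
}

func main() {
	vol := newHostPathVolume("k8s-certs", "/etc/kubernetes/pki", v1.HostPathDirectoryOrCreate)
	fmt.Printf("%+v\n", vol)
}
```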
fabriziopandini
5ff994f17b
Move package app/cmd/features to app/features + bazel files
2017-08-23 09:55:47 +02:00
Lucas Käldström
d2e08fd739
autogenerated bazel
2017-08-19 00:46:38 +03:00
Lucas Käldström
21eeb5c925
kubeadm: Adding unit tests for newly added funcs
2017-08-19 00:45:49 +03:00
Lucas Käldström
d1acdf1627
kubeadm: Make the self-hosting with certificates in Secrets mode work again
2017-08-19 00:45:16 +03:00
Lucas Käldström
c08091699c
kubeadm: Fix self-hosting race condition
2017-08-17 16:07:04 +03:00
Jamie Hannaford
abedc49b71
Feature-gate self-hosted secrets
2017-08-16 20:01:01 +02:00
Lucas Käldström
8c5c3ca197
autogenerated bazel
2017-08-15 15:52:49 +03:00
Lucas Käldström
d725fe2c2c
kubeadm: Centralize client create-or-update logic in one package
2017-08-15 15:52:37 +03:00
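The centralized create-or-update helpers make client calls idempotent: try to create, and fall back to an update when the object already exists. A minimal sketch of that shape with modern client-go signatures (not the exact kubeadm helper):

```go
package apiclient

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createOrUpdateConfigMap creates the ConfigMap and falls back to an update
// when it already exists, so callers can apply the same object repeatedly.
func createOrUpdateConfigMap(client clientset.Interface, cm *v1.ConfigMap) error {
	_, err := client.CoreV1().ConfigMaps(cm.Namespace).Create(context.TODO(), cm, metav1.CreateOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsAlreadyExists(err) {
		return err
	}
	_, err = client.CoreV1().ConfigMaps(cm.Namespace).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}
```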
Jeff Grafton
a7f49c906d
Use buildozer to delete licenses() rules except under third_party/
2017-08-11 09:32:39 -07:00
Jeff Grafton
33276f06be
Use buildozer to remove deprecated automanaged tags
2017-08-11 09:31:50 -07:00
Lucas Käldström
9a6ef9677a
kubeadm: Centralize commonly used paths/constants to the constants pkg
2017-08-08 15:07:21 +03:00
Lucas Käldström
0734b63dc0
kubeadm: Replace *clientset.Clientset with clientset.Interface
2017-08-04 21:14:50 +03:00
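Accepting `clientset.Interface` rather than the concrete `*clientset.Clientset` is what makes such functions unit-testable, because the fake clientset satisfies the same interface. A minimal sketch with modern client-go signatures (`countNodes` is a hypothetical example, not kubeadm code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// countNodes takes the clientset.Interface abstraction, so both the real
// clientset and the fake one used in tests can be passed in.
func countNodes(client clientset.Interface) (int, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(nodes.Items), nil
}

func main() {
	// The fake clientset satisfies the same interface, which is the point
	// of the commit: code written against the interface is unit-testable.
	n, err := countNodes(fake.NewSimpleClientset())
	fmt.Println(n, err)
}
```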
Lucas Käldström
66328996f2
kubeadm: Remove the old KubernetesDir envparam
2017-07-17 14:40:53 +03:00
Andrew Rynhard
38c6e83033
Use Secrets for files that self-hosted pods depend on
2017-07-06 20:36:18 -07:00
Lucas Käldström
9f1c5a6f0f
kubeadm self-hosting: unit tests and bazel
2017-07-06 20:54:47 +03:00
Lucas Käldström
d14478f27a
kubeadm: Make self-hosting work and split out to a phase
2017-07-06 20:54:15 +03:00