Merge pull request #11424 from lavalamp/mungePreformatted

Munge preformatted
This commit is contained in:
Abhi Shah
2015-07-17 13:32:38 -07:00
95 changed files with 629 additions and 23 deletions

View File

@@ -52,6 +52,8 @@ Getting started on AWS EC2
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
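If you still need to create these, a minimal sketch using the AWS CLI might look like the following (the role and profile names, and the trust-policy file, are illustrative, not part of this guide):
```bash
# Hedged sketch: names and ec2-trust-policy.json are illustrative assumptions.
aws iam create-role --role-name kubernetes-ec2 \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name kubernetes-ec2 \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam create-instance-profile --instance-profile-name kubernetes-ec2
aws iam add-role-to-instance-profile --instance-profile-name kubernetes-ec2 \
    --role-name kubernetes-ec2
```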
## Cluster turnup
### Supported procedure: `get-kube`
```bash
# Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
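# Or, using curl instead of wget (a hedged alternative; assumes curl is installed):
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash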

View File

@@ -33,6 +33,7 @@ Documentation for other releases can be found at
# Install and configure kubectl
## Download the kubectl CLI tool
```bash
### Darwin
wget https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/darwin/amd64/kubectl
@@ -42,12 +43,14 @@ wget https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux
```
### Copy kubectl to your path
```bash
chmod +x kubectl
mv kubectl /usr/local/bin/
```
### Create a secure tunnel for API communication
```bash
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
```
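With the tunnel in place, kubectl on your workstation can reach the apiserver through the forwarded port; for example (a hedged check, using the explicit `-s` flag):
```bash
kubectl -s http://localhost:8080 get nodes
```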

View File

@@ -100,6 +100,7 @@ See [a simple nginx example](../user-guide/simple-nginx.md) to try out your new
For more complete applications, please look in the [examples directory](../../examples/).
## Tearing down the cluster
```
cluster/kube-down.sh
```

View File

@@ -50,6 +50,7 @@ The kubernetes package provides a few services: kube-apiserver, kube-scheduler,
**System Information:**
Hosts:
```
centos-master = 192.168.121.9
centos-minion = 192.168.121.65

View File

@@ -54,6 +54,7 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
## Let's go!
To get started, you need to checkout the code:
```
git clone https://github.com/GoogleCloudPlatform/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
@@ -89,12 +90,15 @@ azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.ym
```
Let's login to the master node like so:
```
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
```
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
@@ -105,6 +109,7 @@ kube-02 environment=production Ready
## Deploying the workload
Let's follow the Guestbook example now:
```
cd guestbook-example
kubectl create -f examples/guestbook/redis-master-controller.yaml
@@ -116,12 +121,15 @@ kubectl create -f examples/guestbook/frontend-service.yaml
```
You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`.
```
kubectl get pods --watch
```
> Note: most of the time will be spent downloading the Docker container images on each of the nodes.
Eventually you should see:
```
NAME READY STATUS RESTARTS AGE
frontend-8anh8 1/1 Running 0 1m
@@ -139,10 +147,13 @@ Two single-core nodes are certainly not enough for a production system of today,
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`).
First, let's set the size of the new VMs:
```
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
@@ -158,9 +169,11 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
@@ -181,14 +194,18 @@ frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=f
redis-master master redis name=redis-master 1
redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS

View File

@@ -50,6 +50,7 @@ Docker containers themselves. To achieve this, we need a separate "bootstrap" i
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
@@ -61,6 +62,7 @@ across reboots and failures.
### Startup etcd for flannel and the API server to use
Run:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
@@ -97,6 +99,7 @@ or it may be something else.
#### Run flannel
Now run flanneld itself:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
```
@@ -104,6 +107,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privile
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
@@ -114,6 +118,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi
This may be in ```/etc/default/docker``` or ```/etc/systemd/system/docker.service``` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
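On a Debian-style system where Docker reads `/etc/default/docker`, the edit might look like the following hedged sketch (the subnet and MTU values are placeholders; use the values from `subnet.env` above):
```sh
# Hedged sketch: the values below are placeholders from a typical flannel subnet.env.
echo 'DOCKER_OPTS="--bip=10.1.15.1/24 --mtu=1472"' | sudo tee -a /etc/default/docker
```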
@@ -136,6 +141,7 @@ sudo /etc/init.d/docker start
```
it may be:
```sh
systemctl start docker
```
@@ -148,6 +154,7 @@ sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.
```
### Also run the service proxy
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
@@ -166,6 +173,7 @@ kubectl get nodes
```
This should print:
```
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready

View File

@@ -39,6 +39,7 @@ kubectl get nodes
```
That should show something like:
```
NAME LABELS STATUS
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
@@ -49,6 +50,7 @@ If the status of any node is ```Unknown``` or ```NotReady``` your cluster is bro
[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.
### Run an application
```sh
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
@@ -56,17 +58,20 @@ kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```sh
kubectl expose rc nginx --port=80
```
This should print:
```
NAME LABELS SELECTOR IP PORT(S)
nginx <none> run=nginx <ip-addr> 80/TCP
```
Hit the webserver:
```sh
curl <insert-ip-from-above-here>
```

View File

@@ -55,6 +55,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
@@ -83,6 +84,7 @@ or it may be something else.
#### Run flannel
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
```
@@ -90,6 +92,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privile
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
@@ -101,6 +104,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi
This may be in ```/etc/default/docker``` or ```/etc/systemd/system/docker.service``` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
@@ -123,6 +127,7 @@ sudo /etc/init.d/docker start
```
it may be:
```sh
systemctl start docker
```

View File

@@ -56,11 +56,13 @@ Here's a diagram of what the final result will look like:
1. You need to have docker installed on one machine.
### Step One: Run etcd
```sh
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
### Step Two: Run the master
```sh
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```
@@ -69,6 +71,7 @@ This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md
### Step Three: Run the service proxy
*Note: this could be combined with the master above, but it requires --privileged for iptables manipulation.*
```sh
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
@@ -81,6 +84,7 @@ binary
*Note:*
On OS X you will need to set up port forwarding via ssh:
```sh
boot2docker ssh -L8080:localhost:8080
```
@@ -92,6 +96,7 @@ kubectl get nodes
```
This should print:
```
NAME LABELS STATUS
127.0.0.1 <none> Ready
@@ -100,6 +105,7 @@ NAME LABELS STATUS
If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster.
### Run an application
```sh
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
```
@@ -107,17 +113,20 @@ kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```sh
kubectl expose rc nginx --port=80
```
This should print:
```
NAME LABELS SELECTOR IP PORT(S)
nginx <none> run=nginx <ip-addr> 80/TCP
```
Hit the webserver:
```sh
curl <insert-ip-from-above-here>
```

View File

@@ -130,6 +130,7 @@ ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
**Push your ssh public key to every machine**
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
```
ansible-playbook -i inventory keys.yml
```
@@ -161,6 +162,7 @@ Flannel is a cleaner mechanism to use, and is the recommended choice.
- If you are using flannel, you should check the kubernetes-ansible repository above.
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
```
ansible-playbook -i inventory flannel.yml
```

View File

@@ -52,6 +52,7 @@ The kubernetes package provides a few services: kube-apiserver, kube-scheduler,
**System Information:**
Hosts:
```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
@@ -66,6 +67,7 @@ fed-node = 192.168.121.65
```
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```
@@ -121,6 +123,7 @@ KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses instead of only 127.0.0.1; otherwise, you will get errors like "connection refused"
```
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
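After editing, restart etcd so the new listen address takes effect (a hedged follow-up; the unit name assumes the packaged etcd service):
```
systemctl restart etcd
```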
@@ -210,6 +213,7 @@ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your kubernetes cluster, run the following on fed-master (please do not actually do it; it is just for information):
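For illustration, such a deletion might look like this (a hedged sketch, assuming the node is registered as `fed-node`):
```
kubectl delete node fed-node
```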

View File

@@ -64,6 +64,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
* Add the configuration to the etcd server on fed-master.
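For example, a hedged sketch (assuming the JSON above was saved as `flannel-config.json`; `/coreos.com/network/config` is flannel's default etcd key):
```
etcdctl set /coreos.com/network/config "$(cat flannel-config.json)"
```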

View File

@@ -96,6 +96,7 @@ Alternately, you can download and install the latest Kubernetes release from [th
cd kubernetes
cluster/kube-up.sh
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
@@ -154,12 +155,14 @@ kube-system monitoring-heapster kubernetes.io/cluster-service=true,kubernete
kube-system monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.210.156 8083/TCP
8086/TCP
```
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
You can do this via the
```shell
$ kubectl get --all-namespaces pods
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):

View File

@@ -67,6 +67,7 @@ Getting started with libvirt CoreOS
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
You can test it with the following command:
```
virsh -c qemu:///system pool-list
```
@@ -176,11 +177,13 @@ The IP to connect to the master is 192.168.10.1.
The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
```
ssh core@192.168.10.1
```
Connect to `kubernetes_minion-01`:
```
ssh core@192.168.10.2
```
@@ -212,6 +215,7 @@ cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
```
KUBE_PUSH=local cluster/kube-push.sh
```

View File

@@ -38,6 +38,7 @@ started page. Here we describe how to set up a cluster to ingest logs into Elast
alternative to Google Cloud Logging.
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
```
KUBE_LOGGING_DESTINATION=elasticsearch
```
@@ -160,6 +161,7 @@ status page for Elasticsearch.
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
from your local machine using `curl` but first you need to know what your bearer token is:
```
$ kubectl config view --minify
apiVersion: v1
@@ -185,6 +187,7 @@ users:
```
Now you can issue requests to Elasticsearch:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
{
@@ -202,7 +205,9 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
{

View File

@@ -56,6 +56,7 @@ This diagram shows four nodes created on a Google Compute Engine cluster with th
[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
To help explain how cluster-level logging works, let's start off with a synthetic log generator pod specification, [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
```
apiVersion: v1
kind: Pod
@@ -69,6 +70,7 @@ To help explain how cluster level logging works lets start off with a synthet
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second, and runs indefinitely. Let's create the pod in the default
namespace.
@@ -78,11 +80,13 @@ namespace.
```
We can observe the running pod:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 5m
```
This step may take a few minutes to download the ubuntu:14.04 image, during which the pod status will be shown as `Pending`.
One of the nodes is now running the counter pod:
@@ -127,6 +131,7 @@ Now lets restart the counter.
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
```
Let's wait for the container to restart and get the log lines again.
```

View File

@@ -108,23 +108,31 @@ $ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it:
```bash
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
```
If connectivity is OK, you will see a listing of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Update your PATH to more easily run the Kubernetes-Mesos binaries:
```bash
$ export PATH="$(pwd)/_output/local/go/bin:$PATH"
```
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos_master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
```bash
$ export MESOS_MASTER=<host:port or zk:// url>
```
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
```bash
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
@@ -166,6 +174,7 @@ Disown your background jobs so that they'll stay running if you log out.
```bash
$ disown -a
```
#### Validate KM Services
Add the appropriate binary folder to your ```PATH``` to access kubectl:
@@ -312,6 +321,7 @@ kubectl exec busybox -- nslookup kubernetes
```
If everything works fine, you will get this output:
```
Server: 10.10.10.10
Address 1: 10.10.10.10

View File

@@ -47,20 +47,24 @@ We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernete
More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
To start the `rkt metadata service`, you can simply run:
```shell
$ sudo rkt metadata-service
```
If you want the service to be running as a systemd service, then:
```shell
$ sudo systemd-run rkt metadata-service
```
Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service.
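For example, a hedged sketch assuming the unit files have been installed under `/etc/systemd/system/`:
```shell
$ sudo systemctl start rkt-metadata.socket
$ sudo systemctl status rkt-metadata.socket
```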
### Local cluster
To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
```shell
$ export CONTAINER_RUNTIME=rkt
$ hack/local-up-cluster.sh
@@ -69,6 +73,7 @@ $ hack/local-up-cluster.sh
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
@@ -77,11 +82,13 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.5.6
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
@@ -91,6 +98,7 @@ Note that we are still working on making all containerized the master components
### CoreOS cluster on AWS
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
```shell
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
@@ -98,16 +106,19 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.5.6
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```shell
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```

View File

@@ -298,6 +298,7 @@ many distinct files to make:
You can make the files by copying `$HOME/.kube/config`, by following the code
in `cluster/gce/configure-vm.sh`, or by using the following template:
```
apiVersion: v1
kind: Config
@@ -316,6 +317,7 @@ contexts:
name: service-account-context
current-context: service-account-context
```
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
`/var/lib/kubelet/kubeconfig`.
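A hedged sketch of copying them into place (the node hostname and source file names are illustrative):
```
scp kubelet.kubeconfig node-1:/var/lib/kubelet/kubeconfig
scp kube-proxy.kubeconfig node-1:/var/lib/kube-proxy/kubeconfig
```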
@@ -342,6 +344,7 @@ The minimum required Docker version will vary as the kubelet version changes. T
If you previously had Docker installed on a node without setting Kubernetes-specific
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
as follows before proceeding to configure Docker for Kubernetes.
```
iptables -t nat -F
ifconfig docker0 down
@@ -615,13 +618,17 @@ Place the completed pod template into the kubelet config dir
`/etc/kubernetes/manifests`).
Next, verify that kubelet has started a container for the apiserver:
```
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695 ```
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```
Then try to connect to the apiserver:
```
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
@@ -631,6 +638,7 @@ $ curl -s http://localhost:8080/api
"v1"
]
}
```
If you have selected the `--register-node=true` option for kubelets, they will now be self-registering with the apiserver.
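Nodes that are not self-registering can be created manually; a minimal sketch (the node name `node-1` is illustrative):
```
cat <<EOF > node.json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "node-1"
  }
}
EOF
kubectl create -f node.json
```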
@@ -640,7 +648,9 @@ Otherwise, you will need to manually create node objects.
### Scheduler
Complete this template for the scheduler pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -670,7 +680,9 @@ Complete this template for the scheduler pod:
]
}
}
```
Optionally, you may want to mount `/var/log` as well and redirect output there.
Start as described for apiserver.
@@ -688,11 +700,13 @@ Flags to consider using with controller manager.
- `--allocate-node-cidrs=`
- *TODO*: explain when you want the controller to do this and when you want to do it another way.
- `--cloud-provider=` and `--cloud-config` as described in apiserver section.
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../service-accounts.md) feature.
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../user-guide/service-accounts.md) feature.
- `--master=127.0.0.1:8080`
Template for controller manager pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -748,6 +762,7 @@ Template for controller manager pod:
]
}
}
```

View File

@@ -172,6 +172,7 @@ DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must be within the service_cluster_ip_range.
`DNS_REPLICAS` describes how many DNS pods run in the cluster.
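For example (illustrative values, assuming a service cluster IP range of 192.168.3.0/24):
```
DNS_SERVER_IP="192.168.3.10"
DNS_REPLICAS=1
```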
@@ -213,6 +214,7 @@ Please try:
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries
2. Check `/etc/default/etcd`; since we do not have much input validation, a correct config should look like:
```
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
```

View File

@@ -131,6 +131,7 @@ vagrant ssh master
```
To view the services on any of the nodes:
```sh
vagrant ssh minion-1
[vagrant@kubernetes-master ~] $ sudo su
@@ -147,17 +148,20 @@ vagrant ssh minion-1
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push updates to new Kubernetes code after making source changes:
```sh
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```sh
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```sh
vagrant destroy
```