Remove all docs which are moving to http://kubernetes.github.io

All .md files are now only a pointer to where they likely are on the new site. All other files are untouched.

@@ -32,41 +32,8 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->
## Deploy DNS on `docker` and `docker-multinode`

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/deployDNS/

### Get the template file

First of all, download the DNS template:

[skydns template](skydns.yaml.in)
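If you are working from a checkout of the Kubernetes repository, the template lives alongside this guide; a minimal sketch of copying it into your working directory (the repository-relative path is assumed from this guide's location):

```console
$ cp docs/getting-started-guides/docker-multinode/skydns.yaml.in .
```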
### Set environment variables

Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN` and `DNS_SERVER_IP` environment variables:
```console
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local  # specify in startup parameter `--cluster-domain` for containerized kubelet
$ export DNS_SERVER_IP=10.0.0.10   # specify in startup parameter `--cluster-dns` for containerized kubelet
```
### Replace the corresponding value in the template and create the pod

```console
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml

# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create -f ./kube-system.yaml

$ kubectl create -f ./skydns.yaml
```
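The `kube-system.yaml` manifest referenced above is not shown in this guide; a minimal sketch of what such a namespace manifest could contain (this is the standard `kube-system` namespace definition, assumed here rather than copied from the original file):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
```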
### Test if DNS works

Follow [this link](../../../cluster/addons/dns/#how-do-i-test-if-it-is-working) to check it out.
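In short, the linked instructions have you resolve a service name from inside a pod; a minimal sketch along those lines, assuming a pod named `busybox` (running the `busybox` image) already exists in the cluster:

```console
$ kubectl exec busybox -- nslookup kubernetes.default
```

A successful lookup resolves `kubernetes.default` to the cluster IP of the `kubernetes` service via the DNS server at `DNS_SERVER_IP`.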
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,247 +32,7 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->
## Installing a Kubernetes Master Node via Docker

We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.7".

Environment variables used:
```sh
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
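For example, a filled-in set of values might look like this (the IP address is an illustrative placeholder; the versions are the examples given above):

```sh
export MASTER_IP=192.168.1.10
export K8S_VERSION=1.2.0-alpha.7
export ETCD_VERSION=2.2.1
export FLANNEL_VERSION=0.5.5
export FLANNEL_IFACE=eth0
export FLANNEL_IPMASQ=true
```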
There are two main phases to installing the master:
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)

## Setting up flanneld and etcd

_Note_:
This guide expects **Docker 1.7.1 or higher**.

### Setup Docker Bootstrap

We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd, on which it relies) will run inside Docker containers. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with `--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.

Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```

_If you have Docker 1.8.0 or higher, run this instead:_

```sh
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
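To confirm the bootstrap daemon is responding before moving on, point the Docker client at its socket (a quick sanity check, not a step from the original guide); an empty container list is expected at this stage:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock ps
```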
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted across reboots and failures.
### Startup etcd for flannel and the API server to use

Run:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
    --net=host \
    gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
    /usr/local/bin/etcd \
    --listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
    --advertise-client-urls=http://${MASTER_IP}:4001 \
    --data-dir=/var/etcd/data
```
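As a quick check that etcd is reachable on its client port, you can query the standard etcd 2.x version endpoint (a sanity check added here, not part of the original guide):

```sh
curl http://127.0.0.1:4001/version
```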
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
    --net=host \
    gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
    etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
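You can read the key back to verify that the flannel configuration was stored (again a check added for convenience, not an original step):

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
    --net=host \
    gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
    etcdctl get /coreos.com/network/config
```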
### Set up Flannel on the master node

Flannel is a network abstraction layer built by CoreOS. We will use it to provide simplified networking between our pods of containers.

Flannel re-configures the bridge that Docker uses for networking. As a result, we need to stop Docker, reconfigure its networking, and then restart Docker.

#### Bring down Docker

To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.

Shutting down Docker is system-dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```

or

```sh
sudo systemctl stop docker
```

or

```sh
sudo service docker stop
```

or it may be something else.
#### Run flannel

Now run flanneld itself:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
    --net=host \
    --privileged \
    -v /dev/net:/dev/net \
    quay.io/coreos/flannel:${FLANNEL_VERSION} \
    --ip-masq=${FLANNEL_IPMASQ} \
    --iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash, the container ID. Copy this hash.

Now get the subnet settings from flannel:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
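The output is a small environment file that will look roughly like the following; the subnet and MTU values here are illustrative and will differ on your machine:

```console
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
```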
#### Edit the docker configuration

You now need to edit the Docker configuration to activate the new flags. Again, this is system-specific.

This may be in `/etc/default/docker` or `/etc/systemd/system/docker.service`, or it may be elsewhere.

Regardless, you need to add the following to the docker command line:

```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
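For example, on a system that reads `/etc/default/docker`, the edit might look like this, assuming the illustrative subnet values above (substitute the values flannel actually printed for you):

```sh
DOCKER_OPTS="--bip=10.1.42.1/24 --mtu=1472"
```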
#### Remove the existing Docker bridge

Docker creates a bridge named `docker0` by default. You need to remove this:

```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```

You may need to install the `bridge-utils` package for the `brctl` binary.
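If your distribution no longer ships `ifconfig` or `brctl`, the same result can be achieved with `iproute2` (an equivalent alternative, not a step from the original guide):

```sh
sudo ip link set docker0 down
sudo ip link delete docker0
```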
#### Restart Docker

Again, this is system-dependent; it may be:

```sh
sudo /etc/init.d/docker start
```

or it may be:

```sh
systemctl start docker
```
## Starting the Kubernetes Master

Ok, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.

```sh
sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
    /hyperkube kubelet \
        --allow-privileged=true \
        --api-servers=http://localhost:8080 \
        --v=2 \
        --address=0.0.0.0 \
        --enable-server \
        --hostname-override=127.0.0.1 \
        --config=/etc/kubernetes/manifests-multi \
        --containerized \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local
```
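It can take a minute or two for the hyperkube image to be pulled and for the master components to come up. One way to watch progress (a sanity check added here, not part of the original guide) is to list the running containers and then probe the API server's health endpoint, which should eventually return `ok`:

```sh
sudo docker ps
curl http://localhost:8080/healthz
```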
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to discard them if DNS is not needed.
### Test it out

At this point, you should have a functioning 1-node cluster. Let's test it out!

Download the kubectl binary for `${K8S_VERSION}` (look at the URL in the following links) and make it available by editing your PATH environment variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/arm/kubectl))

For example, OS X:

```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```

Linux:

```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Now you can list the nodes:

```sh
kubectl get nodes
```

This should print something like:

```console
NAME        LABELS                             STATUS
127.0.0.1   kubernetes.io/hostname=127.0.0.1   Ready
```

If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are successfully running.
If all else fails, ask questions on [Slack](../../troubleshooting.md#slack).

### Next steps

Move on to [adding one or more workers](worker.md) or [deploy a dns](deployDNS.md).

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/master/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,73 +32,7 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->

## Testing your Kubernetes cluster.

To validate that your node(s) have been added, run:
```sh
kubectl get nodes
```

That should show something like:

```console
NAME           LABELS                                STATUS
10.240.99.26   kubernetes.io/hostname=10.240.99.26   Ready
127.0.0.1      kubernetes.io/hostname=127.0.0.1      Ready
```

If the status of any node is `Unknown` or `NotReady`, your cluster is broken. Double-check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.md#slack).
### Run an application

```sh
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```

Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service

```sh
kubectl expose rc nginx --port=80
```

Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.

```sh
kubectl get svc nginx
```
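The output will look roughly like the following; column names and values vary by Kubernetes version, and the cluster IP shown is illustrative:

```console
NAME      CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR    AGE
nginx     10.0.0.42    <none>        80/TCP    run=nginx   1m
```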
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:

```sh
kubectl get svc nginx --template={{.spec.clusterIP}}
```

Hit the webserver with the first IP (CLUSTER_IP):

```sh
curl <insert-cluster-ip-here>
```

Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### Scaling

Now try to scale up the nginx you created before:

```sh
kubectl scale rc nginx --replicas=3
```

And list the pods:

```sh
kubectl get pods
```

You should see pods landing on the newly added machine.
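To see which node each pod landed on, the wide output format adds a node column (a suggestion added here; the exact columns vary by Kubernetes version):

```sh
kubectl get pods -o wide
```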
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/testing/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,184 +32,7 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->
## Adding a Kubernetes worker node via Docker.

These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master.md). We'll need to run several versioned Kubernetes components, so we'll assume that the version we want to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.6".

Environment variables used:
```sh
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
For each worker node, there are three steps:
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)

### Set up Flanneld on the worker node

As before, the Flannel daemon is going to provide network connectivity.

_Note_:
This guide expects **Docker 1.7.1 or higher**.

#### Set up a bootstrap docker

As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.

Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```

_If you have Docker 1.8.0 or higher, run this instead:_

```sh
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```

_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted across reboots and failures.
#### Bring down Docker

To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.

Shutting down Docker is system-dependent; it may be:

```sh
sudo /etc/init.d/docker stop
```

or

```sh
sudo systemctl stop docker
```

or it may be something else.
#### Run flannel

Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
    --net=host \
    --privileged \
    -v /dev/net:/dev/net \
    quay.io/coreos/flannel:${FLANNEL_VERSION} \
    /opt/bin/flanneld \
        --ip-masq=${FLANNEL_IPMASQ} \
        --etcd-endpoints=http://${MASTER_IP}:4001 \
        --iface=${FLANNEL_IFACE}
```

The previous command should have printed a really long hash, the container ID. Copy this hash.

Now get the subnet settings from flannel:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration

You now need to edit the Docker configuration to activate the new flags. Again, this is system-specific.

This may be in `/etc/default/docker` or `/etc/systemd/system/docker.service`, or it may be elsewhere.

Regardless, you need to add the following to the docker command line:

```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge

Docker creates a bridge named `docker0` by default. You need to remove this:

```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```

You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker

Again, this is system-dependent; it may be:

```sh
sudo /etc/init.d/docker start
```

or it may be:

```sh
systemctl start docker
```
### Start Kubernetes on the worker node

#### Run the kubelet

Again, this is similar to the above, but the `--api-servers` flag now points to the master we set up in the beginning.

```sh
sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
    /hyperkube kubelet \
        --allow-privileged=true \
        --api-servers=http://${MASTER_IP}:8080 \
        --v=2 \
        --address=0.0.0.0 \
        --enable-server \
        --containerized \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local
```
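Once the kubelet is up, the new node should register itself with the master within a minute or so. A quick way to confirm (a check added here, not a step from the original guide) is to list the nodes against the master's API server:

```sh
kubectl -s http://${MASTER_IP}:8080 get nodes
```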
#### Run the service proxy

The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.

```sh
sudo docker run -d \
    --net=host \
    --privileged \
    gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
    /hyperkube proxy \
        --master=http://${MASTER_IP}:8080 \
        --v=2
```
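At this point both the kubelet and the proxy should appear in the main Docker daemon's container list; a quick check added here for convenience:

```sh
sudo docker ps | grep hyperkube
```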
### Next steps

Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker).

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/worker/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->