Merge remote-tracking branch 'upstream/master'

Author: Vincenzo D'Amore
Date:   2015-09-14 15:36:35 +02:00
296 changed files with 7389 additions and 2082 deletions


@@ -108,7 +108,7 @@ Once the playbook has finished, it will print out the IP of the Kubernetes master
SSH to it using the key that was created and using the _core_ user and you can list the machines in your cluster:
-$ ssh -i ~/.ssh/id_rsa_k8s core@<maste IP>
+$ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
$ fleetctl list-machines
MACHINE IP METADATA
a017c422... <node #1 IP> role=node


@@ -42,7 +42,7 @@ Running Kubernetes locally via Docker
- [Step Three: Run the service proxy](#step-three-run-the-service-proxy)
- [Test it out](#test-it-out)
- [Run an application](#run-an-application)
-- [Expose it as a service:](#expose-it-as-a-service)
+- [Expose it as a service](#expose-it-as-a-service)
- [A note on turning down your cluster](#a-note-on-turning-down-your-cluster)
### Overview
@@ -128,7 +128,7 @@ On OS/X you will need to set up port forwarding via ssh:
boot2docker ssh -L8080:localhost:8080
```
-List the nodes in your cluster by running::
+List the nodes in your cluster by running:
```sh
kubectl get nodes
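# A hypothetical sample of the output (names and labels will differ):
# NAME        LABELS    STATUS
# 127.0.0.1   <none>    Ready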
@@ -149,7 +149,7 @@ If you are running different Kubernetes clusters, you may need to specify `-s ht
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
-now run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
+Now if you run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
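For reference, a hypothetical excerpt of the `docker ps` output once the image has been pulled (IDs and names will differ):

```console
CONTAINER ID   IMAGE   STATUS         NAMES
a1b2c3d4e5f6   nginx   Up 2 minutes   k8s_nginx...
```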
### Expose it as a service
@@ -164,7 +164,7 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
nginx 10.0.93.211 <none> 80/TCP run=nginx 1h
```
-If `CLUSTER_IP` is blank run the following command to obtain it. Know issue #10836
+If `CLUSTER_IP` is blank, run the following command to obtain it. Known issue [#10836](https://github.com/kubernetes/kubernetes/issues/10836).
```sh
kubectl get svc nginx
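# Hypothetical output, mirroring the table above (your CLUSTER_IP will differ):
# NAME    CLUSTER_IP    EXTERNAL_IP   PORT(S)   SELECTOR    AGE
# nginx   10.0.93.211   <none>        80/TCP    run=nginx   1h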


@@ -123,7 +123,7 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS=""
```
-* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused"
+* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; otherwise you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001).
```sh
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
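# With the etcd 2.0 shipped in Fedora 22 the client port is 2379, so a
# hypothetical equivalent that also keeps the legacy port would be:
# ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"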


@@ -132,6 +132,24 @@ However the gcloud bundled kubectl version may be older than the one downloaded
get.k8s.io install script. We recommend you use the downloaded binary to avoid
potential issues with client/server version skew.
#### Enabling bash completion of the Kubernetes command line tools
You may find it useful to enable `kubectl` bash completion:
```
$ source ./contrib/completions/bash/kubectl
```
**Note**: This will last for the duration of your bash session. If you want to make this permanent, you need to add this line to your bash profile.
Alternatively, on most Linux distributions you can also move the completions file into your bash_completion.d directory like this:
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
but then you have to update it when you update kubectl.
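For instance, a minimal sketch of the bash profile approach, assuming the repository is checked out at `~/kubernetes` (the path is an assumption):

```
$ echo 'source ~/kubernetes/contrib/completions/bash/kubectl' >> ~/.bashrc
```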
### Getting started with your cluster
#### Inspect your cluster


@@ -38,36 +38,31 @@ We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the expe
### **Prerequisite**
-- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on your machine and should be enabled. The minimum version required at this moment (2015/05/28) is [215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
+- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on the machine and should be enabled. The minimum version required at this moment (2015/09/01) is 219.
*(Note that systemd is not required by rkt itself; we are using it here to monitor and manage the pods launched by kubelet.)*
- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
-The minimum version required for now is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
-- Make sure the `rkt metadata service` is running because it is necessary for running pod in private network mode.
-More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
-To start the `rkt metadata service`, you can simply run:
-```console
-$ sudo rkt metadata-service
-```
-If you want the service to be running as a systemd service, then:
-```console
-$ sudo systemd-run rkt metadata-service
-```
-Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service.
+The minimum version required for now is [v0.8.0](https://github.com/coreos/rkt/releases/tag/v0.8.0).
+- Note that for rkt versions later than v0.7.0, the `metadata service` is not required for running pods in private networks, so rkt pods will no longer register with the metadata service by default.
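As a quick sanity check of these prerequisites, something like the following should work (a sketch, assuming both binaries are on your `PATH`):

```console
$ systemctl --version   # expect systemd 219 or newer
$ rkt version           # expect rkt 0.8.0 or newer
```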
### Local cluster
-To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
+To use rkt as the container runtime, we need to supply `--container-runtime=rkt` and `--rkt-path=$PATH_TO_RKT_BINARY` to kubelet. Additionally we can provide the `--rkt-stage1-image` flag
+to select which [stage1 image](https://github.com/coreos/rkt/blob/master/Documentation/running-lkvm-stage1.md) we want to use.
+If you are using the [hack/local-up-cluster.sh](../../../hack/local-up-cluster.sh) script to launch the local cluster, then you can set the environment variables `CONTAINER_RUNTIME`, `RKT_PATH`, and `RKT_STAGE1_IMAGE` to
+set these flags:
+```console
+$ export CONTAINER_RUNTIME=rkt
+$ export RKT_PATH=$PATH_TO_RKT_BINARY
+$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
+```
Then we can launch the local cluster using the script:
```console
$ hack/local-up-cluster.sh
```
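For reference, if you run kubelet directly rather than through the script, the corresponding flags might look like this (a sketch; the binary and image paths are assumptions, and the other flags kubelet requires are omitted):

```console
$ sudo kubelet --container-runtime=rkt \
    --rkt-path=/usr/local/bin/rkt \
    --rkt-stage1-image=/usr/local/share/rkt/stage1.aci
```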
@@ -85,7 +80,7 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```console
-$ export KUBE_RKT_VERSION=0.5.6
+$ export KUBE_RKT_VERSION=0.8.0
```
Then you can launch the cluster by:
@@ -109,7 +104,7 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```console
-$ export KUBE_RKT_VERSION=0.5.6
+$ export KUBE_RKT_VERSION=0.8.0
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
@@ -134,6 +129,46 @@ See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try ou
For more complete applications, please look in the [examples directory](../../../examples/).
### Debugging
Here are several tips for when you run into issues.
##### Check logs
By default, the log verbosity level is 2. In order to see more logs related to rkt, we can set the verbosity level to 4.
For a local cluster, we can set the environment variable `LOG_LEVEL=4`.
If the cluster is using Salt, we can edit [logging.sls](../../../cluster/saltbase/pillar/logging.sls) in the saltbase.
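For example, a minimal sketch for a local cluster:

```console
$ LOG_LEVEL=4 hack/local-up-cluster.sh
```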
##### Check rkt pod status
To check the pods' status, we can use rkt commands such as `rkt list`, `rkt status`, `rkt image list`, etc.
More information about the rkt command line can be found [here](https://github.com/coreos/rkt/blob/master/Documentation/commands.md).
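For instance, a short sketch of inspecting a pod (`$UUID` is a placeholder for a pod UUID taken from the list):

```console
$ rkt list --full    # list pods with their full UUIDs
$ rkt status $UUID   # show the detailed state of one pod
```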
##### Check journal logs
As we use systemd to launch rkt pods (by creating service files which will run `rkt run-prepared`), we can check the pods' logs
using `journalctl`:
- Check the running state of the systemd service:
```console
$ sudo journalctl -u $SERVICE_FILE
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod
```console
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
##### Check Kubernetes events and logs
Besides the above tricks, Kubernetes also provides handy tools for debugging pods. More information can be found [here](../../../docs/user-guide/application-troubleshooting.md).
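For example, two of those tools in a brief sketch (`$POD_NAME` is a placeholder):

```console
$ kubectl describe pod $POD_NAME   # recent events and status for the pod
$ kubectl logs $POD_NAME           # container logs fetched via the API server
```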
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rkt/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->


@@ -246,7 +246,8 @@ kubernetes/cluster/ubuntu/build.sh
sudo cp -f binaries/minion/* /usr/bin
# Get the iptables-based kube-proxy recommended for this demo
-sudo wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy -P /usr/bin/
+wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy
+sudo cp kube-proxy /usr/bin/
sudo chmod +x /usr/bin/kube-proxy
```
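To confirm the binary is in place, a quick check might be the following (an assumption that this kube-proxy build keeps the standard `--version` flag):

```
$ /usr/bin/kube-proxy --version
```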