Run gendocs

This commit is contained in:
Tim Hockin
2015-07-17 15:35:41 -07:00
parent aacc4c864c
commit 33f1862830
210 changed files with 599 additions and 27 deletions

View File

@@ -55,6 +55,7 @@ they vary from step-by-step instructions to general advice for setting up
a kubernetes cluster from scratch.
### Local-machine Solutions
Local-machine solutions create a single cluster with one or more kubernetes nodes on a single
physical machine. Setup is completely automated and doesn't require a cloud provider account.
But their size and availability are limited to that of a single machine.
@@ -66,10 +67,12 @@ The local-machine solutions are:
### Hosted Solutions
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.
### Turn-key Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
few commands, and have active community support.
- [GCE](gce.md)
@@ -90,6 +93,7 @@ If you are interested in supporting Kubernetes on a new platform, check out our
writing a new solution](../../docs/devel/writing-a-getting-started-guide.md).
#### Cloud
These solutions are combinations of cloud provider and OS not covered by the above solutions.
- [AWS + CoreOS](coreos.md)
- [GCE + CoreOS](coreos.md)
@@ -98,6 +102,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [Rackspace + CoreOS](rackspace.md)
#### On-Premises VMs
- [Vagrant](coreos.md) (uses CoreOS and flannel)
- [CloudStack](cloudstack.md) (uses Ansible, CoreOS and flannel)
- [VMware](vsphere.md) (uses Debian)
@@ -109,6 +114,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [KVM](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel)
#### Bare Metal
- [Offline](coreos/bare_metal_offline.md) (no internet required. Uses CoreOS and Flannel)
- [Fedora with Ansible](fedora/fedora_ansible_config.md)
- [Fedora single node](fedora/fedora_manual_config.md)
@@ -118,9 +124,11 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [Docker Multi Node](docker-multinode.md)
#### Integrations
- [Kubernetes on Mesos](mesos.md) (Uses GCE)
## Table of Solutions
Here are all the solutions mentioned above in table form.
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Getting started on Amazon EC2 with CoreOS
The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master.

View File

@@ -52,6 +52,7 @@ Getting started on AWS EC2
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
## Cluster turnup
### Supported procedure: `get-kube`
```bash
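# A sketch of the turn-up invocation (the get.k8s.io installer is the same one
# referenced elsewhere in these guides):
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash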
@@ -89,11 +90,14 @@ If these already exist, make sure you want them to be used here.
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` environment variable to point to your private key.
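For example, a minimal sketch (the key path is illustrative):

```bash
# Point AWS_SSH_KEY at the private half of your existing "kubernetes" keypair.
export AWS_SSH_KEY=$HOME/.ssh/kubernetes_id_rsa
```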
### Alternatives
A contributed [example](aws-coreos.md) allows you to set up a Kubernetes cluster based on [CoreOS](http://www.coreos.com), either using
AWS CloudFormation or EC2 with user data (cloud-config).
## Getting started with your cluster
### Command line administration tool: `kubectl`
The cluster startup script will leave you with a ```kubernetes``` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases).
@@ -113,6 +117,7 @@ By default, `kubectl` will use the `kubeconfig` file generated during the cluste
For more information, please read [kubeconfig files](../../docs/user-guide/kubeconfig-file.md)
### Examples
See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](../../examples/guestbook/)
@@ -120,6 +125,7 @@ The "Guestbook" application is another popular example to get started with Kuber
For more complete applications, please look in the [examples directory](../../examples/)
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
@@ -128,6 +134,7 @@ cluster/kube-down.sh
```
## Further reading
Please see the [Kubernetes docs](../../docs/) for more details on administering
and using a Kubernetes cluster.

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Install and configure kubectl
## Download the kubectl CLI tool

View File

@@ -58,7 +58,9 @@ installed](https://docs.docker.com/installation/). On Mac OS X you can use
[boot2docker](http://boot2docker.io/).
## Setup
###Starting a cluster
### Starting a cluster
The cluster setup scripts can set up Kubernetes for multiple targets. First modify `cluster/kube-env.sh` to specify azure:
KUBERNETES_PROVIDER="azure"
@@ -83,6 +85,7 @@ The script above will start (by default) a single master VM along with 4 worker
can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
### Adding the kubernetes command line tools to PATH
The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.
@@ -95,6 +98,7 @@ Add the appropriate binary folder to your ```PATH``` to access kubectl:
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
## Getting started with your cluster
See [a simple nginx example](../user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples/).

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting a Binary Release
You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.

View File

@@ -37,10 +37,13 @@ Getting started on [CentOS](http://centos.org)
- [Prerequisites](#prerequisites)
- [Starting a cluster](#starting-a-cluster)
## Prerequisites
You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration, so you understand all the underlying packages, services, ports, etc.
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.

View File

@@ -52,7 +52,7 @@ This is a completely automated, a single playbook deploys Kubernetes based on th
This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
###Prerequisites
### Prerequisites
$ sudo apt-get install -y python-pip
$ sudo pip install ansible
@@ -74,14 +74,14 @@ Or create a `~/.cloudstack.ini` file:
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
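A sketch of what that file might look like (field names follow the ansible-cloudstack conventions; all values are placeholders):

```bash
cat > ~/.cloudstack.ini <<'EOF'
[cloudstack]
endpoint = https://<your-cloudstack-endpoint>/client/api
key = <your api key>
secret = <your secret key>
method = post
EOF
```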
###Clone the playbook
### Clone the playbook
$ git clone --recursive https://github.com/runseb/ansible-kubernetes.git
$ cd ansible-kubernetes
The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is set up in this repository as a submodule, hence the `--recursive`.
###Create a Kubernetes cluster
### Create a Kubernetes cluster
You simply need to run the playbook.

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting started on [CoreOS](http://coreos.com)
There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com):

View File

@@ -49,6 +49,7 @@ Kubernetes on Azure with CoreOS and [Weave](http://weave.works)
In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking in a transparent yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
1. You need an Azure account.
## Let's go!

View File

@@ -53,10 +53,12 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to
## Prerequisites
1. *CentOS 6* installed for the PXE server
2. At least two bare metal nodes to work with
## High Level Design
1. Manage the tftp directory
* /tftpboot/(coreos)(centos)(RHEL)
* /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
@@ -67,6 +69,7 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to
6. Installing the CoreOS slaves to become Kubernetes nodes.
## This Guide's Variables
| Node Description | MAC | IP |
| :---------------------------- | :---------------: | :---------: |
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
@@ -75,6 +78,7 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to
## Setup PXELINUX CentOS
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
1. Install packages needed on CentOS
@@ -121,6 +125,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
## Adding CoreOS to PXE
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
1. Find or create the TFTP root directory that everything will be based on.
@@ -168,6 +173,7 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
This configuration file will now boot from the local drive but have the option to PXE-image CoreOS.
## DHCP configuration
This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
1. Add the ```filename``` to the _host_ or _subnet_ sections.
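As a hedged sketch, an ISC dhcpd _host_ entry might look like this (MAC and IP taken from the table above; the rest is illustrative):

```bash
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
host coreos_master {
    hardware ethernet d0:00:67:13:0d:00;
    fixed-address 10.20.30.40;
    filename "/pxelinux.0";
}
EOF
```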
@@ -210,6 +216,7 @@ This section covers configuring the DHCP server to hand out our new images. In t
We will be specifying the node configuration later in the guide.
## Kubernetes
To deploy our configuration we need to create an ```etcd``` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. We have two options here:
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
2. Run a service discovery protocol in our stack to do auto discovery.
@@ -243,6 +250,7 @@ This sets up our binaries we need to run Kubernetes. This would need to be enhan
Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml)
@@ -256,6 +264,7 @@ To make the setup work, you need to replace a few placeholders:
- Add your own SSH public key(s) to the cloud config at the end
### master.yml
On the PXE server, create the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```.
@@ -476,6 +485,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-
### node.yml
On the PXE server, create the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```.
#cloud-config
@@ -610,6 +620,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-
## New pxelinux.cfg file
Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave```
default coreos
@@ -637,6 +648,7 @@ And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master``
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
## Specify the pxelinux targets
Now that we have our new targets set up for master and slave, we want to configure the specific hosts to those targets. We will do this by using the pxelinux mechanism of mapping a specific MAC address to a specific pxelinux.cfg file.
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
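As a sketch, PXELINUX resolves per-host configs by the name `01-` plus the MAC address, lowercase and dash-separated, so mapping the master MAC from the table looks like:

```bash
ln -s /tftpboot/pxelinux.cfg/coreos-node-master /tftpboot/pxelinux.cfg/01-d0-00-67-13-0d-00
# Repeat for each slave MAC, pointing at coreos-node-slave instead.
```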
@@ -650,6 +662,7 @@ Refer to the MAC address table in the beginning of this guide. Documentation for
Reboot these servers to get the images PXEd and ready for running containers!
## Creating test pod
Now that CoreOS with Kubernetes is up and running, let's spin up some Kubernetes pods to demonstrate the system.
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
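For a quick smoke test first, something like the following should work (flags per the kubectl CLI of this era):

```bash
kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl get pods
```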

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# CoreOS Multinode Cluster
Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.

View File

@@ -51,9 +51,11 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
- [Testing your cluster](#testing-your-cluster)
## Prerequisites
1. You need a machine with docker installed.
## Overview
This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
@@ -62,6 +64,7 @@ Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-docker.png)
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd```
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
@@ -71,6 +74,7 @@ all of the Docker containers created by Kubernetes. To achieve this, it must ru
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
## Master Node
The first step in the process is to initialize the master node.
See [here](docker-multinode/master.md) for detailed instructions.

View File

@@ -30,7 +30,9 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Installing a Kubernetes Master Node via Docker
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}```
There are two main phases to installing the master:
@@ -45,6 +47,7 @@ There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0
Please install Docker 1.6.2 or wait for Docker 1.7.1.
### Setup Docker-Bootstrap
We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd, on which it relies) will run inside
Docker containers. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
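A sketch of what starting that bootstrap daemon can look like (socket, pid, and log paths are conventions, and `docker -d` is the daemon flag for Docker 1.6/1.7):

```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
    --bridge=none --graph=/var/lib/docker-bootstrap \
    2> /var/log/docker-bootstrap.log 1> /dev/null &'
```

The separate `--graph` directory keeps the bootstrap daemon's images and state out of the main daemon's way.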
@@ -61,6 +64,7 @@ across reboots and failures.
### Start etcd for flannel and the API server to use
Run:
```
@@ -75,11 +79,13 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/googl
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
Flannel re-configures the bridge that Docker uses for networking. As a result, we need to stop Docker, reconfigure its networking, and then restart Docker.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Bringing down Docker is system dependent; it may be:
@@ -113,6 +119,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from
```
#### Edit the docker configuration
You now need to edit the Docker configuration to activate the new flags. Again, this is system specific.
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
@@ -124,6 +131,7 @@ Regardless, you need to add the following to the docker command line:
```
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
@@ -134,6 +142,7 @@ sudo brctl delbr docker0
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
#### Restart Docker
Again, this is system dependent; it may be:
```sh
@@ -147,6 +156,7 @@ systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
```sh
@@ -160,6 +170,7 @@ sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0
```
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary
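For example, on Linux amd64 (the URL pattern follows the release links used elsewhere in these guides; substitute your platform and release):

```sh
wget https://storage.googleapis.com/kubernetes-release/release/v0.21.2/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl get nodes
```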
@@ -184,6 +195,7 @@ If all else fails, ask questions on IRC at [#google-containers](http://webchat.f
### Next steps
Move on to [adding one or more workers](worker.md)

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Testing your Kubernetes cluster.
To validate that your node(s) have been added, run:
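A sketch of that check, assuming `kubectl` is configured as in the earlier steps:

```sh
kubectl get nodes
```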

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Adding a Kubernetes worker node via Docker.
@@ -44,6 +45,7 @@ For each worker node, there are three steps:
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
_Note_:
@@ -52,6 +54,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
#### Set up a bootstrap docker
As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
@@ -65,6 +68,7 @@ If you are running this on a long running system, rather than experimenting, you
across reboots and failures.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Bringing down Docker is system dependent; it may be:
@@ -99,6 +103,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from
#### Edit the docker configuration
You now need to edit the Docker configuration to activate the new flags. Again, this is system specific.
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
@@ -110,6 +115,7 @@ Regardless, you need to add the following to the docker command line:
```
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
@@ -120,6 +126,7 @@ sudo brctl delbr docker0
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
#### Restart Docker
Again, this is system dependent; it may be:
```sh
@@ -133,7 +140,9 @@ systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
Again, this is similar to the above, but ```--api_servers``` now points to the master we set up at the beginning.
```sh
@@ -141,6 +150,7 @@ sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```.
```sh
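# A sketch of the proxy invocation; the image tag and master address are
# assumed to match the master setup earlier in this guide.
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2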

View File

@@ -53,6 +53,7 @@ Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
### Prerequisites
1. You need to have docker installed on one machine.
### Step One: Run etcd
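A sketch of that step (image tag and flags are assumptions from this era of the guide; match the versions to your release):

```sh
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd \
    --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```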
@@ -70,6 +71,7 @@ docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/go
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md) that contains the other master components.
### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
```sh
@@ -77,6 +79,7 @@ docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2
```
### Test it out
At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl
binary
([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl))
@@ -134,6 +137,7 @@ curl <insert-ip-from-above-here>
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### A note on turning down your cluster
Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
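A hedged sketch of that teardown order (container IDs come from `docker ps`):

```sh
docker ps                             # note the kubelet container ID
docker kill <kubelet-container-id>    # stop the supervisor first
docker kill $(docker ps -q)           # then sweep up the remaining containers
```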

View File

@@ -44,7 +44,7 @@ Configuring kubernetes on Fedora via Ansible offers a simple way to quickly crea
- [Setting up the cluster](#setting-up-the-cluster)
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
##Prerequisites
## Prerequisites
1. A host able to run Ansible and to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible)
2. A Fedora 20+ or RHEL7 host to act as cluster master

View File

@@ -39,6 +39,7 @@ Getting started on [Fedora](http://fedoraproject.org)
- [Instructions](#instructions)
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Instructions

View File

@@ -46,6 +46,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](fedora_manual_config.md) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on the Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to set up a unique class-C container network.
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Master Setup
@@ -124,7 +125,7 @@ FLANNEL_OPTIONS=""
***
##**Test the cluster and flannel configuration**
## **Test the cluster and flannel configuration**
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the IP addresses of the docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
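For example, you can compare the two interfaces directly; both should carry addresses from the flannel range:

```bash
ip -4 addr show flannel.1
ip -4 addr show docker0
```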

View File

@@ -188,6 +188,7 @@ Then, see [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try
For more complete applications, please look in the [examples directory](../../examples/). The [guestbook example](../../examples/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
```bash
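# From the root of your kubernetes checkout:
cluster/kube-down.sh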

View File

@@ -160,6 +160,7 @@ hack/local-up-cluster.sh
One or more of the kubernetes daemons might've crashed. Tail the logs of each in /tmp.
#### The pods fail to connect to the services by host names
The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it).

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Level Logging with Elasticsearch and Kibana
On the Google Compute Engine (GCE) platform the default cluster level logging support targets

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Level Logging to Google Cloud Logging
A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application, which is composed of many pods that may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.

View File

@@ -46,6 +46,7 @@ Getting started with Kubernetes on Mesos
- [Test Guestbook App](#test-guestbook-app)
## About Kubernetes on Mesos
<!-- TODO: Update, clean up. -->
Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3].
@@ -97,6 +98,7 @@ $ export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
```
### Deploy etcd
Start etcd and verify that it is running:
```bash
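# One way to run etcd under Docker (image and tag are assumptions; the flags
# are standard etcd 2.x options):
sudo docker run -d --name etcd -p 4001:4001 -p 7001:7001 \
    quay.io/coreos/etcd:v2.0.12 \
    --listen-client-urls http://0.0.0.0:4001 \
    --advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001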
@@ -118,6 +120,7 @@ curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
If connectivity is OK, you will see an output of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Update your PATH to more easily run the Kubernetes-Mesos binaries:
```bash
@@ -176,6 +179,7 @@ $ disown -a
```
#### Validate KM Services
Add the appropriate binary folder to your ```PATH``` to access kubectl:
```bash
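# Illustrative path; point this at wherever the Kubernetes-Mesos binaries live.
export PATH="$HOME/kubernetes-mesos/bin:$PATH"
kubectl get services   # a quick check that kubectl can reach the API server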

View File

@@ -58,23 +58,26 @@ The current cluster design is inspired by:
- [Angus Lees](https://github.com/anguslees/kube-openstack)
## Prerequisites
1. Python 2.7
2. You need to have both `nova` and `swiftly` installed. It's recommended to install these packages into a Python virtualenv.
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details.
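For instance, a hedged sketch of the usual variables (exact names depend on your client versions; see the Rackspace documentation linked above):

```bash
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_USERNAME=<your-rackspace-username>
export OS_PASSWORD=<your-api-key>
export OS_REGION_NAME=<region, e.g. IAD>
```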
##Provider: Rackspace
## Provider: Rackspace
- To build your own released version from source, use `export KUBERNETES_PROVIDER=rackspace` and run `bash hack/dev-build-and-up.sh`
- Note: The get.k8s.io install method is not working yet for our scripts.
- To install the latest released version of Kubernetes, use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
## Build
1. The kubernetes binaries will be built via the common build scripts in `build/`.
2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
3. A Cloud Files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted.
## Cluster
There is a specific `cluster/rackspace` directory with the scripts for the following steps:
1. A cloud network will be created and all instances will be attached to this network.
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
@@ -83,6 +86,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
4. We then boot as many nodes as defined via `$NUM_MINIONS`.
## Some notes
- The scripts expect `eth2` to be the cloud network that the containers will communicate across.
- A number of the items in `config-default.sh` are overridable via environment variables.
- For older versions please either:
@@ -92,6 +96,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
* Download a [snapshot of `v0.3`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.3.tar.gz)
## Network Design
- eth0 - Public Interface used for servers/containers to reach the internet
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Run Kubernetes with rkt
This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime.
@@ -127,6 +128,7 @@ Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
### Getting started with your cluster
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples/).

View File

@@ -72,6 +72,7 @@ steps that existing cluster setup scripts are making.
## Designing and Preparing
### Learning
1. You should be familiar with using Kubernetes already. We suggest you set
up a temporary cluster by following one of the other Getting Started Guides.
This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../user-guide/pods.md), [services](../user-guide/services.md), etc.) first.
@@ -79,6 +80,7 @@ steps that existing cluster setup scripts are making.
effect of completing one of the other Getting Started Guides.
### Cloud Provider
Kubernetes has the concept of a Cloud Provider, which is a module that provides
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
@@ -87,6 +89,7 @@ bare-metal), and not all parts of the interface need to be implemented, dependin
on how flags are set on various components.
### Nodes
- You can use virtual or physical machines.
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
need at least 4 nodes.
@@ -100,6 +103,7 @@ on how flags are set on various components.
have identical configurations.
### Network
Kubernetes has a distinctive [networking model](../admin/networking.md).
Kubernetes allocates an IP address to each pod. When creating a cluster, you
@@ -167,6 +171,7 @@ region of the world, etc.
need to distinguish which resources each created. Call this `CLUSTERNAME`.
### Software Binaries
You will need binaries for:
- etcd
- A container runner, one of:
@@ -180,6 +185,7 @@ You will need binaries for:
- kube-scheduler
#### Downloading and Extracting Kubernetes Binaries
A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
[Developer Documentation](../devel/README.md). Only using a binary release is covered in this guide.
@@ -190,6 +196,7 @@ Then, within the second set of unzipped files, locate `./kubernetes/server/bin`,
all the necessary binaries.
#### Selecting Images
You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
we recommend that you run these as containers, so you need an image to be built.
@@ -238,6 +245,7 @@ There are two main options for security:
If following the HTTPS approach, you will need to prepare certs and credentials.
#### Preparing Certs
You need to prepare several certs:
- The master needs a cert to act as an HTTPS server.
- The kubelets optionally need certs to identify themselves as clients of the master, and when
@@ -262,6 +270,7 @@ You will end up with the following files (we will use these variables later on)
- optional
#### Preparing Credentials
The admin user (and any users) need:
- a token or a password to identify them.
- tokens are just long alphanumeric strings, e.g. 32 chars. See
@@ -339,6 +348,7 @@ Started Guide. After getting a cluster running, you can then copy the init.d s
cluster, and then modify them for use on your custom cluster.
### Docker
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
If you previously had Docker installed on a node without setting Kubernetes-specific
@@ -422,6 +432,7 @@ Arguments to consider:
- `--api-servers=http://$MASTER_IP`
### Networking
Each node needs to be allocated its own CIDR range for pod networking.
Call this `NODE_X_POD_CIDR`.
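For example (all values illustrative), carving per-node /24 ranges out of a /16 cluster range:

```sh
CLUSTER_CIDR=10.244.0.0/16        # cluster-wide pod address space
NODE_1_POD_CIDR=10.244.1.0/24     # one disjoint /24 per node
NODE_2_POD_CIDR=10.244.2.0/24
```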
@@ -462,6 +473,7 @@ any masquerading at all. Others, such as GCE, will not allow pod IPs to send
traffic to the internet, but have no problem with them inside your GCE Project.
### Other
- Enable auto-upgrades for your OS package manager, if desired.
- Configure log rotation for all node components (e.g. using [logrotate](http://linux.die.net/man/8/logrotate)).
- Set up liveness-monitoring (e.g. using [monit](http://linux.die.net/man/1/monit)).
@@ -470,6 +482,7 @@ traffic to the internet, but have no problem with them inside your GCE Project.
volumes.
### Using Configuration Management
The previous steps all involved "conventional" system administration techniques for setting up
machines. You may want to use a Configuration Management system to automate the node configuration
process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
@@ -485,6 +498,7 @@ all configured and managed *by Kubernetes*:
- they are kept running by Kubernetes rather than by init.
### etcd
You will need to run one or more instances of etcd.
- Recommended approach: run one etcd instance, with its log written to a directory backed
by durable storage (RAID, GCE PD)
@@ -613,6 +627,7 @@ node disk.
Optionally, you may want to mount `/var/log` as well and redirect output there.
#### Starting Apiserver
Place the completed pod template into the kubelet config dir
(whatever the `--config=` argument of kubelet is set to, typically
`/etc/kubernetes/manifests`).
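A sketch, assuming the default directory and a hypothetical completed template named `apiserver.manifest`:

```sh
sudo cp apiserver.manifest /etc/kubernetes/manifests/   # kubelet picks it up automatically
```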
@@ -688,6 +703,7 @@ Optionally, you may want to mount `/var/log` as well and redirect output there.
Start as described for apiserver.
### Controller Manager
To run the controller manager:
- select the correct flags for your cluster
- write a pod spec for the controller manager using the provided template
@@ -803,6 +819,7 @@ The nodes must be able to connect to each other using their private IP. Verify t
pinging or SSH-ing from one node to another.
### Getting Help
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode.

View File

@@ -48,6 +48,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
## Prerequisites
*1. The nodes have Docker version 1.2+ and bridge-utils installed to manipulate the Linux bridge.*
*2. All machines can communicate with each other; no Internet connection is required (use a private Docker registry in that case).*
@@ -60,6 +61,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku
### Starting a Cluster
#### Make *kubernetes*, *etcd* and *flanneld* binaries
First, clone the Kubernetes GitHub repo: `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
@@ -74,6 +76,7 @@ Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `ku
> We use flannel here because we want an overlay network, but it is not the only choice, nor is it a required dependency of Kubernetes. You can build a Kubernetes cluster natively, or use flannel, Open vSwitch or any other SDN tool you like; we chose flannel here as an example.
#### Configure and start the kubernetes cluster
An example cluster is listed below:
| IP Address|Role |

View File

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting started with Vagrant
Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
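The short version looks like this (the `get.k8s.io` installer is the same one referenced elsewhere in these guides):

```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```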
@@ -53,6 +54,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
- [I want vagrant to sync folders via nfs!](#i-want-vagrant-to-sync-folders-via-nfs)
### Prerequisites
1. Install the latest version (>= 1.6.2) of Vagrant from http://www.vagrantup.com/downloads.html
2. Install one of:
   1. Version 4.3.28 of VirtualBox from https://www.virtualbox.org/wiki/Download_Old_Builds_4_3
@@ -366,6 +368,7 @@ export KUBERNETES_MINION_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
```vagrant suspend``` seems to mess up the network. This is not supported at this time.
#### I want vagrant to sync folders via nfs!