Fix capitalization of Kubernetes in the documentation.
@@ -84,7 +84,7 @@ You can download and install the latest Kubernetes release from [this page](http
 The script above will start (by default) a single master VM along with 4 worker VMs. You
 can tweak some of these parameters by editing `cluster/azure/config-default.sh`.

-### Adding the kubernetes command line tools to PATH
+### Adding the Kubernetes command line tools to PATH

 The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
 You will use it to look at your new cluster and bring up example apps.
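The kubectl workflow the hunk above points to comes down to a handful of commands. A minimal sketch, assuming `kubectl` is already on your PATH and pointed at the new cluster (the manifest name is a placeholder):

```sh
# Read-only inspection of the cluster and its components.
kubectl get nodes
kubectl get pods
kubectl get services

# Create, inspect, and delete a component from a manifest (my-app.yaml is hypothetical).
kubectl create -f my-app.yaml
kubectl describe pod my-app
kubectl delete -f my-app.yaml
```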
@@ -46,9 +46,9 @@ You need two machines with CentOS installed on them.

 This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional Kubernetes configuration requirements should be obvious.

-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
+The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.

 **System Information:**

@@ -70,7 +70,7 @@ baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
 gpgcheck=0
 ```

-* Install kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
+* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.

 ```sh
 yum -y install --enablerepo=virt7-testing kubernetes
@@ -123,7 +123,7 @@ systemctl disable iptables-services firewalld
 systemctl stop iptables-services firewalld
 ```

-**Configure the kubernetes services on the master.**
+**Configure the Kubernetes services on the master.**

 * Edit /etc/kubernetes/apiserver to appear as such:

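The apiserver file itself sits outside this hunk. As a rough sketch of the kind of key=value settings `/etc/kubernetes/apiserver` holds on CentOS — the values below are illustrative assumptions, not the guide's exact contents:

```sh
# /etc/kubernetes/apiserver -- illustrative values, adjust to your hosts.
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS=""
```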
@@ -157,7 +157,7 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 done
 ```

-**Configure the kubernetes services on the node.**
+**Configure the Kubernetes services on the node.**

 ***We need to configure the kubelet and start the kubelet and proxy***

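For the node services named above, the enable-and-start loop presumably mirrors the master-side loop shown in the previous hunk; a sketch with the node's service list assumed from the text:

```sh
# Start kube-proxy, kubelet, and docker on centos-minion and enable them at boot.
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```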
@@ -258,7 +258,7 @@ These are based on the work found here: [master.yml](cloud-configs/master.yaml),
 To make the setup work, you need to replace a few placeholders:

 - Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
-- Replace `<MASTER_SERVER_IP>` with the kubernetes master ip address (e.g. 10.20.30.40)
+- Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
 - If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
 - If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
 - Add your own SSH public key(s) to the cloud config at the end
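One hedged way to apply the substitutions listed above in a single pass, assuming the cloud-config YAML files sit in the current directory and using the example addresses from the list:

```sh
# Fill in the PXE and master addresses across all cloud-configs (IPs are the examples above).
sed -i -e 's/<PXE_SERVER_IP>/10.20.30.242/g' \
       -e 's/<MASTER_SERVER_IP>/10.20.30.40/g' \
       *.yaml
```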
@@ -56,7 +56,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.

 ## Overview

-This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
+This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
 and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
 times to create larger clusters.

@@ -41,7 +41,7 @@ We will assume that the IP address of this node is `${NODE_IP}` and you have the

 For each worker node, there are three steps:
 * [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
-* [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
+* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
 * [Add the worker to the cluster](#add-the-node-to-the-cluster)

 ### Set up Flanneld on the worker node
@@ -30,7 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->

 <!-- END MUNGE: UNVERSIONED_WARNING -->
-Running kubernetes locally via Docker
+Running Kubernetes locally via Docker
 -------------------------------------

 **Table of Contents**
@@ -47,7 +47,7 @@ Running kubernetes locally via Docker

 ### Overview

-The following instructions show you how to set up a simple, single node kubernetes cluster using Docker.
+The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.

 Here's a diagram of what the final result will look like:
 
@@ -80,7 +80,7 @@ docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2

 ### Test it out

-At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl
+At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
 binary
 ([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl))
 ([linux](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl))
@@ -105,7 +105,7 @@ NAME LABELS STATUS
 127.0.0.1 <none> Ready
 ```

-If you are running different kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
+If you are running different Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.

 ### Run an application

@@ -30,10 +30,10 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->

 <!-- END MUNGE: UNVERSIONED_WARNING -->
-Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
+Configuring Kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
 -------------------------------------------------------------------------------------------------------

-Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
+Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.

 **Table of Contents**

@@ -73,7 +73,7 @@ If not
 yum install -y ansible git python-netaddr
 ```

-**Now clone down the kubernetes repository**
+**Now clone down the Kubernetes repository**

 ```sh
 git clone https://github.com/GoogleCloudPlatform/kubernetes.git
@@ -134,7 +134,7 @@ edit: ~/kubernetes/contrib/ansible/group_vars/all.yml

 **Configure the IP addresses used for services**

-Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
+Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.

 ```yaml
 kube_service_addresses: 10.254.0.0/16
@@ -167,7 +167,7 @@ dns_setup: true

 **Tell ansible to get to work!**

-This will finally setup your whole kubernetes cluster for you.
+This will finally setup your whole Kubernetes cluster for you.

 ```sh
 cd ~/kubernetes/contrib/ansible/
@@ -177,7 +177,7 @@ cd ~/kubernetes/contrib/ansible/

 ## Testing and using your new cluster

-That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
+That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.

 **Show kubernets nodes**

@@ -46,9 +46,9 @@ Getting started on [Fedora](http://fedoraproject.org)

 This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious.

-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
+The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.

 **System Information:**

@@ -61,7 +61,7 @@ fed-node = 192.168.121.65

 **Prepare the hosts:**

-* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
+* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
 * The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
 * If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.

@@ -105,7 +105,7 @@ systemctl disable iptables-services firewalld
 systemctl stop iptables-services firewalld
 ```

-**Configure the kubernetes services on the master.**
+**Configure the Kubernetes services on the master.**

 * Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.

@@ -141,7 +141,7 @@ done

 * Addition of nodes:

-* Create following node.json file on kubernetes master node:
+* Create following node.json file on Kubernetes master node:

 ```json
 {
@@ -157,7 +157,7 @@ done
 }
 ```

-Now create a node object internally in your kubernetes cluster by running:
+Now create a node object internally in your Kubernetes cluster by running:

 ```console
 $ kubectl create -f ./node.json
@@ -170,10 +170,10 @@ fed-node name=fed-node-label Unknown
 Please note that in the above, it only creates a representation for the node
 _fed-node_ internally. It does not provision the actual _fed-node_. Also, it
 is assumed that _fed-node_ (as specified in `name`) can be resolved and is
-reachable from kubernetes master node. This guide will discuss how to provision
-a kubernetes node (fed-node) below.
+reachable from Kubernetes master node. This guide will discuss how to provision
+a Kubernetes node (fed-node) below.

-**Configure the kubernetes services on the node.**
+**Configure the Kubernetes services on the node.**

 ***We need to configure the kubelet on the node.***

@@ -181,7 +181,7 @@ a kubernetes node (fed-node) below.

 ```sh
 ###
-# kubernetes kubelet (node) config
+# Kubernetes kubelet (node) config

 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS="--address=0.0.0.0"
@@ -216,7 +216,7 @@ fed-node name=fed-node-label Ready

 * Deletion of nodes:

-To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
+To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):

 ```sh
 kubectl delete -f ./node.json
@@ -43,7 +43,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora

 ## Introduction

-This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
+This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.

 ## Prerequisites

@@ -51,7 +51,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m

 ## Master Setup

-**Perform following commands on the kubernetes master**
+**Perform following commands on the Kubernetes master**

 * Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:

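The JSON itself falls outside the hunk; a hedged sketch of a vxlan-backed `flannel-config.json`, using an illustrative network consistent with the 18.16.x.x subnets shown later in this guide, and loaded under the etcd key quoted in the next hunk:

```sh
# Write an example flannel configuration (network range is illustrative).
cat <<EOF > flannel-config.json
{
    "Network": "18.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}
EOF

# Load it into etcd on fed-master under the key read back in the next hunk.
etcdctl set /coreos.com/network/config < flannel-config.json
```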
@@ -82,7 +82,7 @@ etcdctl get /coreos.com/network/config

 ## Node Setup

-**Perform following commands on all kubernetes nodes**
+**Perform following commands on all Kubernetes nodes**

 * Edit the flannel configuration file /etc/sysconfig/flanneld as follows:

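A hedged sketch of the flanneld settings that the bullet above refers to, assuming etcd runs on fed-master and the key used earlier in this guide:

```sh
# /etc/sysconfig/flanneld -- illustrative values.
FLANNEL_ETCD="http://fed-master:4001"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS=""
```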
@@ -127,7 +127,7 @@ systemctl start docker

 ## **Test the cluster and flannel configuration**

-* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this:
+* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:

 ```console
 # ip -4 a|grep inet
@@ -172,7 +172,7 @@ FLANNEL_MTU=1450
 FLANNEL_IPMASQ=false
 ```

-* At this point, we have etcd running on the kubernetes master, and flannel / docker running on kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
+* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.

 * Issue the following commands on any 2 nodes:

@@ -211,7 +211,7 @@ PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
 64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
 ```

-* Now kubernetes multi-node cluster is set up with overlay networking set up by flannel.
+* Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel.


 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -38,7 +38,7 @@ Getting started on Google Compute Engine
 - [Before you start](#before-you-start)
 - [Prerequisites](#prerequisites)
 - [Starting a cluster](#starting-a-cluster)
-- [Installing the kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
+- [Installing the Kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
 - [Getting started with your cluster](#getting-started-with-your-cluster)
 - [Inspect your cluster](#inspect-your-cluster)
 - [Run some examples](#run-some-examples)
@@ -109,7 +109,7 @@ The next few steps will show you:
 1. how to delete the cluster
 1. how to start clusters with non-default options (like larger clusters)

-### Installing the kubernetes command line tools on your workstation
+### Installing the Kubernetes command line tools on your workstation

 The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
 The next step is to make sure the `kubectl` tool is in your path.
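Putting `kubectl` on the path, as described above, is a one-line export. A sketch assuming the release was unpacked into `~/kubernetes` on a Linux/amd64 workstation (adjust the platform directory for your OS and architecture):

```sh
# Make the bundled kubectl available in this shell and confirm it runs.
export PATH=$PATH:$HOME/kubernetes/platforms/linux/amd64
kubectl version
```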
@@ -103,7 +103,7 @@ the required predependencies to get started with Juju, additionally it will
 launch a curses based configuration utility allowing you to select your cloud
 provider and enter the proper access credentials.

-Next it will deploy the kubernetes master, etcd, 2 nodes with flannel based
+Next it will deploy the Kubernetes master, etcd, 2 nodes with flannel based
 Software Defined Networking.


@@ -129,7 +129,7 @@ You can use `juju ssh` to access any of the units:

 ## Run some containers!

-`kubectl` is available on the kubernetes master node. We'll ssh in to
+`kubectl` is available on the Kubernetes master node. We'll ssh in to
 launch some containers, but one could use kubectl locally setting
 KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`.

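A hedged sketch of the two approaches described above: run kubectl on the master unit over `juju ssh`, or run it locally against the master's address (the address itself is a placeholder here):

```sh
# Option 1: ssh to the master unit and use the kubectl installed there.
juju ssh kubernetes-master/0
kubectl get pods

# Option 2: point a local kubectl at the master (substitute the unit's real address).
export KUBERNETES_MASTER=http://<kubernetes-master-ip>:8080
kubectl get pods
```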
@@ -103,7 +103,7 @@ $ usermod -a -G libvirtd $USER

 #### ² Qemu will run with a specific user. It must have access to the VMs drives

-All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
+All the disk drive resources needed by the VM (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.

 As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.

@@ -128,7 +128,7 @@ setfacl -m g:kvm:--x ~

 ### Setup

-By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
+By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.

 To start your local cluster, open a shell and run:

@@ -143,7 +143,7 @@ The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster

 The `NUM_MINIONS` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.

-The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are:
+The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:

 * `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
 * `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
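A hedged example combining the variables described above when bringing the cluster up (the values are illustrative):

```sh
# Start a libvirt-coreos cluster with 2 nodes, pushing locally built binaries.
export KUBERNETES_PROVIDER=libvirt-coreos
export NUM_MINIONS=2
export KUBE_PUSH=local
cluster/kube-up.sh
```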
@@ -160,7 +160,7 @@ $ virsh -c qemu:///system list
 18 kubernetes_minion-03 running
 ```

-You can check that the kubernetes cluster is working with:
+You can check that the Kubernetes cluster is working with:

 ```console
 $ kubectl get nodes
@@ -60,7 +60,7 @@ Not running Linux? Consider running Linux in a local virtual machine with [Vagra

 At least [Docker](https://docs.docker.com/installation/#installation)
 1.3+. Ensure the Docker daemon is running and can be contacted (try `docker
-ps`). Some of the kubernetes components need to run as root, which normally
+ps`). Some of the Kubernetes components need to run as root, which normally
 works fine with docker.

 #### etcd
@@ -73,7 +73,7 @@ You need [go](https://golang.org/doc/install) at least 1.3+ in your path, please

 ### Starting the cluster

-In a separate tab of your terminal, run the following (since one needs sudo access to start/stop kubernetes daemons, it is easier to run the entire script as root):
+In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):

 ```sh
 cd kubernetes
@@ -108,7 +108,7 @@ cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
 exit
 ## end wait

-## introspect kubernetes!
+## introspect Kubernetes!
 cluster/kubectl.sh get pods
 cluster/kubectl.sh get services
 cluster/kubectl.sh get replicationcontrollers
@@ -118,7 +118,7 @@ cluster/kubectl.sh get replicationcontrollers
 ### Running a user defined pod

 Note the difference between a [container](../user-guide/containers.md)
-and a [pod](../user-guide/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
 However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).

 You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
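A hedged sketch of the `docker exec` check suggested above; the container ID is whatever `docker ps` reports for the nginx container, and it assumes a `curl` binary is available inside that image:

```sh
# Locate the nginx container started for the wrapper pod, then curl it from inside.
docker ps | grep nginx
docker exec <nginx-container-id> curl -s http://localhost:80
```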
@@ -157,7 +157,7 @@ hack/local-up-cluster.sh

 #### kubectl claims to start a container but `get pods` and `docker ps` don't show it.

-One or more of the kubernetes daemons might've crashed. Tail the logs of each in /tmp.
+One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp.

 #### The pods fail to connect to the services by host names

@@ -46,12 +46,12 @@ oVirt is a virtual datacenter manager that delivers powerful management of multi

 ## oVirt Cloud Provider Deployment

-The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your kubernetes cluster.
-At the moment there are no community-supported or pre-loaded VM images including kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes kubernetes may work as well.
+The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
+At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.

-It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to kubernetes.
+It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.

-Once the kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
+Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.

 [import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
 [install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines
@@ -67,13 +67,13 @@ The oVirt Cloud Provider requires access to the oVirt REST-API to gather the pro
 username = admin@internal
 password = admin

-In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to kubernetes:
+In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:

 [filters]
 # Search query used to find nodes
 vms = tag=kubernetes

-In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to kubernetes.
+In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.

 The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:

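The flag line itself sits outside this hunk; a hedged sketch of how a cloud-provider config is typically handed to kube-controller-manager (the paths are illustrative):

```sh
# Point the controller manager at the oVirt cloud provider and its config file.
kube-controller-manager \
  --cloud-provider=ovirt \
  --cloud-config=/etc/kubernetes/ovirt-cloud.conf \
  --master=127.0.0.1:8080
```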
@@ -81,7 +81,7 @@ The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:

 ## oVirt Cloud Provider Screencast

-This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster.
+This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.

 [](http://www.youtube.com/watch?v=JyyST4ZKne8)

@@ -67,11 +67,11 @@ The current cluster design is inspired by:

 - To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run the `bash hack/dev-build-and-up.sh`
 - Note: The get.k8s.io install method is not working yet for our scripts.
-* To install the latest released version of kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
+* To install the latest released version of Kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`

 ## Build

-1. The kubernetes binaries will be built via the common build scripts in `build/`.
+1. The Kubernetes binaries will be built via the common build scripts in `build/`.
 2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
 2. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
 3. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted.
@@ -136,7 +136,7 @@ accomplished in two ways:
 - Harder to setup from scratch.
 - Google Compute Engine ([GCE](gce.md)) and [AWS](aws.md) guides use this approach.
 - Need to make the Pod IPs routable by programming routers, switches, etc.
-- Can be configured external to kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
+- Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
 - Generally highest performance.
 - Create an Overlay network
 - Easier to setup
@@ -241,7 +241,7 @@ For etcd, you can:
 - Build your own image
 - You can do: `cd kubernetes/cluster/images/etcd; make`

-We recommend that you use the etcd version which is provided in the kubernetes binary distribution. The kubernetes binaries in the release
+We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
 were tested extensively with this version of etcd and not with any other version.
 The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.

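A quick way to read that recommended version straight out of the Makefile named above, assuming you are inside a kubernetes checkout:

```sh
grep ETCD_VERSION cluster/images/etcd/Makefile
```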
@@ -353,7 +353,7 @@ guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and

 ## Configuring and Installing Base Software on Nodes

-This section discusses how to configure machines to be kubernetes nodes.
+This section discusses how to configure machines to be Kubernetes nodes.

 You should run three daemons on every node:
 - docker or rkt
@@ -37,13 +37,13 @@ Kubernetes Deployment On Bare-metal Ubuntu Nodes
 - [Prerequisites](#prerequisites)
 - [Starting a Cluster](#starting-a-cluster)
 - [Make *kubernetes* , *etcd* and *flanneld* binaries](#make-kubernetes--etcd-and-flanneld-binaries)
-- [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
+- [Configure and start the Kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
 - [Deploy addons](#deploy-addons)
 - [Trouble Shooting](#trouble-shooting)

 ## Introduction

-This document describes how to deploy kubernetes on ubuntu nodes, including 1 kubernetes master and 3 kubernetes nodes, and people uses this approach can scale to **any number of nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merge into this document.
+This document describes how to deploy Kubernetes on ubuntu nodes, including 1 Kubernetes master and 3 Kubernetes nodes, and people uses this approach can scale to **any number of nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merge into this document.

 [Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.

@@ -64,7 +64,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku

 #### Make *kubernetes* , *etcd* and *flanneld* binaries

-First clone the kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
+First clone the Kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
 then `$ cd kubernetes/cluster/ubuntu`.

 Then run `$ ./build.sh`, this will download all the needed binaries into `./binaries`.
@@ -75,7 +75,7 @@ Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `ku

 > We used flannel here because we want to use overlay network, but please remember it is not the only choice, and it is also not a k8s' necessary dependence. Actually you can just build up k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like, we just choose flannel here as a example.

-#### Configure and start the kubernetes cluster
+#### Configure and start the Kubernetes cluster

 An example cluster is listed as below:

@@ -105,7 +105,7 @@ Then the `roles ` variable defines the role of above machine in the same order,

 The `NUM_MINIONS` variable defines the total number of nodes.

-The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range according to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range.
+The `SERVICE_CLUSTER_IP_RANGE` variable defines the Kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range according to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range.

 10.0.0.0 - 10.255.255.255 (10/8 prefix)

@@ -148,7 +148,7 @@ NAME LABELS STATUS
 10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
 ```

-Also you can run kubernetes [guest-example](../../examples/guestbook/) to build a redis backend cluster on the k8s.
+Also you can run Kubernetes [guest-example](../../examples/guestbook/) to build a redis backend cluster on the k8s.


 #### Deploy addons
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 ## Getting started with Vagrant

-Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
+Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).

 **Table of Contents**
