Fix capitalization of Kubernetes in the documentation.

Author: Alex Robinson
Date: 2015-07-20 13:45:36 -07:00
Parent: 7536db6d53
Commit: acd1bed70e
61 changed files with 149 additions and 149 deletions


@@ -30,10 +30,10 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
-Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
+Configuring Kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
-------------------------------------------------------------------------------------------------------
-Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
+Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
**Table of Contents**
@@ -73,7 +73,7 @@ If not
yum install -y ansible git python-netaddr
```
-**Now clone down the kubernetes repository**
+**Now clone down the Kubernetes repository**
```sh
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
@@ -134,7 +134,7 @@ edit: ~/kubernetes/contrib/ansible/group_vars/all.yml
**Configure the IP addresses used for services**
-Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
+Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
@@ -167,7 +167,7 @@ dns_setup: true
**Tell ansible to get to work!**
-This will finally setup your whole kubernetes cluster for you.
+This will finally setup your whole Kubernetes cluster for you.
```sh
cd ~/kubernetes/contrib/ansible/
@@ -177,7 +177,7 @@ cd ~/kubernetes/contrib/ansible/
## Testing and using your new cluster
-That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
+That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
**Show Kubernetes nodes**
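The command under this heading falls outside the hunk's context window; as a hedged sketch, verifying the cluster from the master would use the standard client, e.g.:

```console
$ kubectl get nodes
```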


@@ -46,9 +46,9 @@ Getting started on [Fedora](http://fedoraproject.org)
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious.
-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
+The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
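To make that split concrete, here is a sketch of managing those services per host with systemd; the unit names are assumed from the services listed above, and the loop mirrors the guide's own pattern (note the `done` context line in a later hunk):

```sh
# On fed-master: etcd plus the control-plane services
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done

# On fed-node: the worker services
for SERVICE in kube-proxy kubelet docker; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done
```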
**System Information:**
@@ -61,7 +61,7 @@ fed-node = 192.168.121.65
**Prepare the hosts:**
-* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
+* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
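The yum command these bullets refer to sits outside the hunks shown; a sketch, assuming the Fedora repo id is `updates-testing` and that etcd ships in the standard repos:

```sh
# On all hosts (fed-master and fed-node); pulls in docker as a dependency
yum -y install --enablerepo=updates-testing kubernetes

# On fed-master only
yum -y install etcd
```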
@@ -105,7 +105,7 @@ systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
-**Configure the kubernetes services on the master.**
+**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
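The file contents are cut off by the hunk boundary. A sketch of the general shape, with variable names assumed from the Fedora kubernetes packaging of this era (check your installed /etc/kubernetes/apiserver for the authoritative set):

```sh
###
# kubernetes apiserver config (sketch; values are illustrative)

# Listen on all interfaces so the nodes can reach the API server
KUBE_API_ADDRESS="--address=0.0.0.0"

# etcd runs alongside the master in this guide
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

# An unused, unrouted block, per the note above
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Any extra flags
KUBE_API_ARGS=""
```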
@@ -141,7 +141,7 @@ done
* Addition of nodes:
-* Create following node.json file on kubernetes master node:
+* Create following node.json file on Kubernetes master node:
```json
{
@@ -157,7 +157,7 @@ done
}
```
-Now create a node object internally in your kubernetes cluster by running:
+Now create a node object internally in your Kubernetes cluster by running:
```console
$ kubectl create -f ./node.json
@@ -170,10 +170,10 @@ fed-node name=fed-node-label Unknown
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
-reachable from kubernetes master node. This guide will discuss how to provision
-a kubernetes node (fed-node) below.
+reachable from Kubernetes master node. This guide will discuss how to provision
+a Kubernetes node (fed-node) below.
-**Configure the kubernetes services on the node.**
+**Configure the Kubernetes services on the node.**
***We need to configure the kubelet on the node.***
@@ -181,7 +181,7 @@ a kubernetes node (fed-node) below.
```sh
###
-# kubernetes kubelet (node) config
+# Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
@@ -216,7 +216,7 @@ fed-node name=fed-node-label Ready
* Deletion of nodes:
-To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
+To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
```sh
kubectl delete -f ./node.json


@@ -43,7 +43,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora
## Introduction
-This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
+This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
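Installing flannel itself is not shown in any hunk here; on Fedora it is presumably just the packaged RPM:

```sh
# On every node (assumption: the stock Fedora flannel package)
yum -y install flannel
```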
## Prerequisites
@@ -51,7 +51,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
## Master Setup
-**Perform following commands on the kubernetes master**
+**Perform following commands on the Kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
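The JSON body falls outside this hunk. A sketch using flannel's `Network`/`SubnetLen`/`Backend` keys, sized to match the 18.16.x.x addresses in the sample output later in this guide:

```json
{
    "Network": "18.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}
```

The `etcdctl get /coreos.com/network/config` context line in the following hunk suggests the config is then loaded into etcd on fed-master, e.g. `etcdctl set /coreos.com/network/config < flannel-config.json`.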
@@ -82,7 +82,7 @@ etcdctl get /coreos.com/network/config
## Node Setup
-**Perform following commands on all kubernetes nodes**
+**Perform following commands on all Kubernetes nodes**
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
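The edit itself is cut off here. The two settings that matter, assuming the stock /etc/sysconfig/flanneld variable names, point flannel at the etcd instance on fed-master and at the key written above:

```sh
# etcd endpoint (fed-master runs etcd in these guides)
FLANNEL_ETCD="http://fed-master:4001"

# etcd key prefix under which the network config was stored
FLANNEL_ETCD_KEY="/coreos.com/network"
```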
@@ -127,7 +127,7 @@ systemctl start docker
## **Test the cluster and flannel configuration**
-* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this:
+* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```console
# ip -4 a|grep inet
@@ -172,7 +172,7 @@ FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
-* At this point, we have etcd running on the kubernetes master, and flannel / docker running on kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
+* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
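The commands are outside the hunk, but the ping output below implies the usual cross-host test: start a container on each node, note its flannel-assigned address, and ping across. A sketch, with `fedora:latest` as an assumed image (install iputils/iproute inside if missing):

```sh
# On each of the two nodes, start a throwaway container
docker run -it fedora:latest bash

# Inside each container, find its address on the docker0 subnet
ip -4 a | grep inet

# From the container on one node, ping the other container's address
# (18.16.90.4 matches the sample output below)
ping 18.16.90.4
```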
@@ -211,7 +211,7 @@ PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
-* Now kubernetes multi-node cluster is set up with overlay networking set up by flannel.
+* Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->