Replace ``` with ` when emphasizing something inline in docs/
@@ -98,10 +98,10 @@ AWS CloudFormation or EC2 with user data (cloud-config).
 ### Command line administration tool: `kubectl`

-The cluster startup script will leave you with a ```kubernetes``` directory on your workstation.
+The cluster startup script will leave you with a `kubernetes` directory on your workstation.
 Alternately, you can download the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases).

-Next, add the appropriate binary folder to your ```PATH``` to access kubectl:
+Next, add the appropriate binary folder to your `PATH` to access kubectl:

 ```bash
 # OS X
@@ -89,7 +89,7 @@ can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
 The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
 You will use it to look at your new cluster and bring up example apps.

-Add the appropriate binary folder to your ```PATH``` to access kubectl:
+Add the appropriate binary folder to your `PATH` to access kubectl:

 # OS X
 export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
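After updating `PATH`, a quick sanity check confirms the right binary resolves and can reach the new cluster (a minimal sketch; output varies by release):

```sh
# Verify kubectl resolves from the new PATH entry and can talk to the cluster.
kubectl version
kubectl get nodes
```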
@@ -85,7 +85,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc

         sudo yum install tftp-server dhcp syslinux

-2. ```vi /etc/xinetd.d/tftp``` to enable the tftp service and change disable to 'no'
+2. `vi /etc/xinetd.d/tftp` to enable the tftp service and change disable to 'no'
         disable = no

 3. Copy over the syslinux images we will need.
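Concretely, steps 2 and 3 usually come down to something like the following sketch, assuming a systemd-based CentOS 7 host; the syslinux source paths are distribution-dependent:

```sh
# Restart xinetd so the tftp service picks up disable = no.
sudo systemctl restart xinetd
# Copy the PXELINUX boot loader and menu module into the TFTP root.
sudo cp /usr/share/syslinux/pxelinux.0 /tftpboot/
sudo cp /usr/share/syslinux/menu.c32 /tftpboot/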
@@ -108,7 +108,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc
         mkdir /tftpboot/pxelinux.cfg
         touch /tftpboot/pxelinux.cfg/default

-5. Edit the menu ```vi /tftpboot/pxelinux.cfg/default```
+5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`

         default menu.c32
         prompt 0
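Before moving on, it is worth verifying that the TFTP server actually hands out files; a hedged check using the standard tftp-hpa client:

```sh
# Fetch pxelinux.0 over TFTP from the local server; if this works, PXE clients can too.
tftp 127.0.0.1 -c get pxelinux.0
```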
@@ -129,7 +129,7 @@ Now you should have a working PXELINUX setup to image CoreOS nodes. You can veri
 This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.

 1. Find or create the TFTP root directory that everything will be based off of.
-    * For this document we will assume ```/tftpboot/``` is our root directory.
+    * For this document we will assume `/tftpboot/` is our root directory.
 2. Once we know our tftp root directory, we will create a new directory structure for our CoreOS images.
 3. Download the CoreOS PXE files provided by the CoreOS team.
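Steps 2 and 3 look roughly like this; the URLs follow the CoreOS release-server layout of the time and should be double-checked against the linked CoreOS documentation:

```sh
# Create the image directory referenced later by the pxelinux menu entries.
mkdir -p /tftpboot/images/coreos
cd /tftpboot/images/coreos
# Kernel, initrd, and their detached signatures.
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
```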
@@ -143,7 +143,7 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
         gpg --verify coreos_production_pxe.vmlinuz.sig
         gpg --verify coreos_production_pxe_image.cpio.gz.sig

-4. Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` again
+4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again

         default menu.c32
         prompt 0
@@ -176,7 +176,7 @@ This configuration file will now boot from local drive but have the option to PX
 This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.

-1. Add the ```filename``` to the _host_ or _subnet_ sections.
+1. Add the `filename` to the _host_ or _subnet_ sections.

         filename "/tftpboot/pxelinux.0";
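In ISC dhcpd terms the result might look like the fragment below; the addresses are placeholders, and `next-server` must point at the TFTP host:

```
subnet 10.20.30.0 netmask 255.255.255.0 {
    range 10.20.30.40 10.20.30.99;
    next-server 10.20.30.242;          # the PXE/TFTP host
    filename "/tftpboot/pxelinux.0";
}
```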
@@ -217,17 +217,17 @@ We will be specifying the node configuration later in the guide.
 ## Kubernetes

-To deploy our configuration we need to create an ```etcd``` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. There are two options here:
+To deploy our configuration we need to create an `etcd` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. There are two options here:
 1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
 2. Run a service-discovery protocol in our stack to do auto-discovery.

-For this demo we just make a single static ```etcd``` server to host our Kubernetes and ```etcd``` master servers.
+For this demo we just make a single static `etcd` server to host our Kubernetes and `etcd` master servers.

 Since we are offline, most of the helper processes in CoreOS and Kubernetes are limited. For our setup we will have to download and serve up our Kubernetes binaries in the local environment.

 An easy solution is to host a small web server on the DHCP/TFTP host to make all our binaries available to the local CoreOS PXE machines.

-To get this up and running we are going to set up a simple ```apache``` server to serve the binaries needed to bootstrap Kubernetes.
+To get this up and running we are going to set up a simple `apache` server to serve the binaries needed to bootstrap Kubernetes.

 This is on the PXE server from the previous section:
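A minimal sketch of that web server, assuming CentOS with systemd; the directory matches the `/var/www/html/coreos/` paths used by the cloud-configs below:

```sh
# Install and start Apache on the PXE/DHCP host.
sudo yum install -y httpd
sudo systemctl enable httpd
sudo systemctl start httpd
# Stage the Kubernetes and etcd binaries here for the nodes to fetch over HTTP.
sudo mkdir -p /var/www/html/coreos
```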
@@ -265,7 +265,7 @@ To make the setup work, you need to replace a few placeholders:
 ### master.yml

-On the PXE server, create the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```.
+On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.

     #cloud-config
@@ -486,7 +486,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-
 ### node.yml

-On the PXE server, create the file and fill in the variables: ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```.
+On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.

     #cloud-config
     ---
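Before PXE-booting anything, it is worth checking that both files are reachable and syntactically valid. A hedged sketch; `<pxe-host-ip>` stays a placeholder, and `coreos-cloudinit -validate` is available on CoreOS hosts:

```sh
# Confirm the web server actually serves the cloud-configs.
curl -s http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml | head
curl -s http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml | head
# On a CoreOS machine, validate the syntax of a cloud-config file.
coreos-cloudinit -validate -from-file=pxe-cloud-config-master.yml
```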
@@ -621,7 +621,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-
 ## New pxelinux.cfg file

-Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave```
+Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`

         default coreos
         prompt 1
@@ -634,7 +634,7 @@ Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/c
         kernel images/coreos/coreos_production_pxe.vmlinuz
         append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0

-And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master```
+And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`

         default coreos
         prompt 1
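PXELINUX selects a per-machine config by looking for a file named after the client's MAC address (prefixed with `01-` for Ethernet) before falling back to `default`, so the two target files are usually wired up with symlinks. A sketch with made-up MACs:

```sh
cd /tftpboot/pxelinux.cfg
# Boot aa:bb:cc:dd:ee:f0 as the master and aa:bb:cc:dd:ee:f1 as a slave.
ln -s coreos-node-master 01-aa-bb-cc-dd-ee-f0
ln -s coreos-node-slave  01-aa-bb-cc-dd-ee-f1
```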
@@ -48,7 +48,7 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
 ### AWS

-*Attention:* Replace ```<ami_image_id>``` below with a [suitable version of the CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
+*Attention:* Replace `<ami_image_id>` below with a [suitable version of the CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).

 #### Provision the Master
@@ -94,7 +94,7 @@ aws ec2 run-instances \
 ### Google Compute Engine (GCE)

-*Attention:* Replace ```<gce_image_id>``` below with a [suitable version of the CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
+*Attention:* Replace `<gce_image_id>` below with a [suitable version of the CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).

 #### Provision the Master
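For orientation, the GCE provisioning step that follows generally looks like this sketch; the instance name and zone are invented, and `<gce_image_id>` remains a placeholder you must fill in:

```sh
gcloud compute instances create kube-master \
    --image <gce_image_id> \
    --metadata-from-file user-data=cloud-configs/master.yaml \
    --zone us-central1-a
```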
@@ -66,10 +66,10 @@ Here's a diagram of what the final result will look like:
 ### Bootstrap Docker

 This guide also uses a pattern of running two instances of the Docker daemon:
-1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd```
+1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
 2) A _main_ Docker instance which is used for the Kubernetes infrastructure and the user's scheduled containers

-This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects
+This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
 all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
 it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
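Starting that _bootstrap_ daemon typically looks like the sketch below; the socket, pid, and log paths follow this guide's conventions elsewhere, and `docker -d` is the pre-1.8 daemon flag, so verify against your Docker version:

```sh
# A second Docker daemon on its own socket, with no bridge and no iptables rules,
# so flannel can later own the network for the main daemon.
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
    --bridge=none --graph=/var/lib/docker-bootstrap \
    2> /var/log/docker-bootstrap.log 1> /dev/null &'
```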
@@ -33,10 +33,10 @@ Documentation for other releases can be found at
 ## Installing a Kubernetes Master Node via Docker

-We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}```.
+We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is `${MASTER_IP}`.

 There are two main phases to installing the master:
-   * [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd)
+   * [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
    * [Starting the Kubernetes master components](#starting-the-kubernetes-master)
@@ -48,9 +48,9 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
 ### Setup Docker-Bootstrap

-We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
+We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
 Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
-```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
+`--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.

 Run:
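The command itself is cut off in this hunk. Once the bootstrap daemon from the previous section is listening, the guide's pattern is to run `etcd` (and later `flannel`) against its socket, roughly as follows; the image tag is era-specific:

```sh
# Run etcd under the bootstrap daemon with host networking.
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d \
    gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd \
    --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```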
@@ -122,7 +122,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from
 You now need to edit the docker configuration to activate new flags. Again, this is system specific.

-This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
+This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.

 Regardless, you need to add the following to the docker command line:
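The flags themselves are elided from this hunk; in this guide's pattern they come from the environment file flannel writes, so a hedged sketch is:

```sh
# Read the subnet flannel allocated for this host...
source /run/flannel/subnet.env
# ...and hand it to the main Docker daemon on its command line:
#   --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```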
@@ -132,14 +132,14 @@ Regardless, you need to add the following to the docker command line:
 #### Remove the existing Docker bridge

-Docker creates a bridge named ```docker0``` by default. You need to remove this:
+Docker creates a bridge named `docker0` by default. You need to remove this:

 ```sh
 sudo /sbin/ifconfig docker0 down
 sudo brctl delbr docker0
 ```

-You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
+You may need to install the `bridge-utils` package for the `brctl` binary.

 #### Restart Docker
@@ -190,7 +190,7 @@ NAME LABELS STATUS
 127.0.0.1     kubernetes.io/hostname=127.0.0.1     Ready
 ```

-If the status of the node is ```NotReady``` or ```Unknown```, please check that all of the containers you created are successfully running.
+If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are successfully running.
 If all else fails, ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers).
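Checking the containers is a matter of listing everything, including exited ones, and reading the logs of anything that died; a minimal sketch:

```sh
# List all containers, then inspect anything that exited unexpectedly.
sudo docker ps -a
sudo docker logs <container-id>
```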
@@ -47,8 +47,8 @@ NAME LABELS STATUS
 127.0.0.1     kubernetes.io/hostname=127.0.0.1     Ready
 ```

-If the status of any node is ```Unknown``` or ```NotReady```, your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on IRC at
-[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.
+If the status of any node is `Unknown` or `NotReady`, your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on IRC at
+[`#google-containers`](http://webchat.freenode.net/?channels=google-containers) for advice.

 ### Run an application
@@ -56,7 +56,7 @@ If the status of any node is ```Unknown``` or ```NotReady``` your cluster is bro
 kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
 ```

-Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
+Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.

 ### Expose it as a service
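The expose step that follows is along these lines in v1.0-era kubectl syntax; the assigned cluster IP comes from `kubectl get svc`, not from this sketch:

```sh
kubectl -s http://localhost:8080 expose rc nginx --port=80
# Find the assigned cluster IP, then curl it from the node.
kubectl -s http://localhost:8080 get svc nginx
```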
@@ -37,10 +37,10 @@ Documentation for other releases can be found at
 These instructions are very similar to the master set-up above, but they are duplicated for clarity.
 You need to repeat these instructions for each node you want to join the cluster.
-We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md).
+We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master.md).

 For each worker node, there are three steps:
-   * [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node)
+   * [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
    * [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
    * [Add the worker to the cluster](#add-the-node-to-the-cluster)
@@ -106,7 +106,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from
 You now need to edit the docker configuration to activate new flags. Again, this is system specific.

-This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
+This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.

 Regardless, you need to add the following to the docker command line:
@@ -116,14 +116,14 @@ Regardless, you need to add the following to the docker command line:
 #### Remove the existing Docker bridge

-Docker creates a bridge named ```docker0``` by default. You need to remove this:
+Docker creates a bridge named `docker0` by default. You need to remove this:

 ```sh
 sudo /sbin/ifconfig docker0 down
 sudo brctl delbr docker0
 ```

-You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
+You may need to install the `bridge-utils` package for the `brctl` binary.

 #### Restart Docker
@@ -143,7 +143,7 @@ systemctl start docker
 #### Run the kubelet

-Again, this is similar to the above, but ```--api_servers``` now points to the master we set up in the beginning.
+Again, this is similar to the above, but `--api_servers` now points to the master we set up in the beginning.

 ```sh
 sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
@@ -151,7 +151,7 @@ sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.
 #### Run the service proxy

-The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```.
+The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.

 ```sh
 sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
@@ -105,7 +105,7 @@ NAME LABELS STATUS
 127.0.0.1     <none>     Ready
 ```

-If you are running multiple Kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster.
+If you are running multiple Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.

 ### Run an application
@@ -113,7 +113,7 @@ If you are running different kubernetes clusters, you may need to specify ```-s
 kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
 ```

-Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
+Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.

 ### Expose it as a service
@@ -138,10 +138,10 @@ Note that you will need run this curl command on your boot2docker VM if you are
 ### A note on turning down your cluster

-Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
+Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
 the cluster, you need to first kill the kubelet container, and then any other containers.

-You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```; note this kills _all_ containers running under Docker, so use with caution.
+You may use `docker ps -a | awk '{print $1}' | xargs docker kill`; note this kills _all_ containers running under Docker, so use with caution.

 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
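A slightly gentler sequence that follows the stated order, killing the kubelet first and then sweeping the rest; the `awk` pattern is an assumption about your container names:

```sh
# Stop the kubelet so it cannot restart the others, then kill everything else.
sudo docker kill $(sudo docker ps | awk '/kubelet/ {print $1}')
sudo docker ps -a | awk '{print $1}' | xargs sudo docker kill
```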
@@ -111,13 +111,13 @@ The next few steps will show you:
 ### Installing the kubernetes command line tools on your workstation

-The cluster startup script will leave you with a running cluster and a ```kubernetes``` directory on your workstation.
+The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
 The next step is to make sure the `kubectl` tool is in your path.

 The [kubectl](../user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
 You will use it to look at your new cluster and bring up example apps.

-Add the appropriate binary folder to your ```PATH``` to access kubectl:
+Add the appropriate binary folder to your `PATH` to access kubectl:

 ```bash
 # OS X
@@ -127,7 +127,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
 export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
 ```

-**Note**: gcloud also ships with ```kubectl```, which by default is added to your path.
+**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
 However, the gcloud-bundled kubectl version may be older than the one downloaded by the
 get.k8s.io install script. We recommend you use the downloaded binary to avoid
 potential issues with client/server version skew.
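To see which binary wins and whether client and server agree, a quick hedged check:

```sh
# Shows whether the gcloud-bundled or the downloaded kubectl is first on PATH.
which kubectl
# Compare the reported Client Version with the Server Version.
kubectl version
```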
@@ -180,7 +180,7 @@ disown -a
 #### Validate KM Services

-Add the appropriate binary folder to your ```PATH``` to access kubectl:
+Add the appropriate binary folder to your `PATH` to access kubectl:

 ```bash
 export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
@@ -61,7 +61,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
-5. libvirt with KVM and hardware virtualisation support enabled. [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so you can use ```yum install vagrant-libvirt```
+5. libvirt with KVM and hardware virtualisation support enabled. [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so you can use `yum install vagrant-libvirt`

 ### Setup
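If you take the libvirt route, bringing the cluster up looks roughly like this; `VAGRANT_DEFAULT_PROVIDER` is standard Vagrant, and the rest follows this repo's scripts:

```sh
sudo yum install -y vagrant-libvirt
export KUBERNETES_PROVIDER=vagrant
export VAGRANT_DEFAULT_PROVIDER=libvirt
./cluster/kube-up.sh
```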
@@ -170,7 +170,7 @@ vagrant destroy
 Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.

-You may need to build the binaries first; you can do this with ```make```.
+You may need to build the binaries first; you can do this with `make`.

 ```console
 $ ./cluster/kubectl.sh get nodes
@@ -375,7 +375,7 @@ export KUBERNETES_MINION_MEMORY=2048
 #### I ran vagrant suspend and nothing works!

-```vagrant suspend``` seems to mess up the network. This is not supported at this time.
+`vagrant suspend` seems to mess up the network. This is not supported at this time.
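A workaround that usually recovers networking, offered as an assumption rather than a supported path, is to halt and re-up instead of suspending:

```sh
vagrant halt
vagrant up   # re-provisions and recreates the private network
```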

 #### I want vagrant to sync folders via nfs!