Use example syncer tags instead of hard-coded examples in doc

Janet Kuo
2015-07-20 15:46:20 -07:00
parent 2bd53119b1
commit 180798cfa4
22 changed files with 306 additions and 104 deletions

@@ -52,44 +52,57 @@ This is a somewhat long tutorial. If you want to jump straight to the "do it no
In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
In this simple case, we define a single container running Cassandra for our pod:
<!-- BEGIN MUNGE: EXAMPLE cassandra-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
      - command:
        - /run.sh
        resources:
          limits:
            cpu: 0.1
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: gcr.io/google_containers/cassandra:v6
        name: cassandra
        ports:
        - containerPort: 9042
          name: cql
        - containerPort: 9160
          name: thrift
        volumeMounts:
        - mountPath: /cassandra_data
          name: data
      volumes:
      - name: data
        emptyDir: {}
```
[Download example](cassandra-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However, it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API Server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZE``` and ```HEAP_NEWSIZE```) and adding information about the [namespace](../../docs/user-guide/namespaces.md). We also tell Kubernetes that the container exposes both the ```CQL``` and ```Thrift``` API ports. Finally, we tell the cluster manager that we need 0.1 cpu (a tenth of a core), matching the resource limit in the example above.
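As an aside (not part of the original tutorial), the discovery step is easy to observe by hand: once the `cassandra` service described in the next section exists, the node list the seed provider works from is essentially the service's endpoints, which you can dump with:

```console
# assumes the cassandra service from the next section has been created
$ kubectl get endpoints cassandra -o yaml
```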
@@ -102,6 +115,8 @@ In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set o
Here is the service description:
<!-- BEGIN MUNGE: EXAMPLE cassandra-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -116,6 +131,9 @@ spec:
name: cassandra
```
[Download example](cassandra-service.yaml)
<!-- END MUNGE: EXAMPLE -->
The important thing to note here is the ```selector```. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case, the selector is ```name=cassandra```. If you look back at the pod template above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
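As a quick sanity check (an aside, not in the original walkthrough), you can run the same label query yourself to see which pods the service will select:

```console
# same label query the service uses
$ kubectl get pods -l name=cassandra
```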
Create this service as follows:
@@ -175,6 +193,8 @@ In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-cont
Replication controllers will "adopt" existing pods that match their selector query, so let's create a replication controller with a single replica to adopt our existing Cassandra pod.
<!-- BEGIN MUNGE: EXAMPLE cassandra-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
      - command:
        - /run.sh
        resources:
          limits:
            cpu: 0.1
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: gcr.io/google_containers/cassandra:v6
        name: cassandra
        ports:
        - containerPort: 9042
          name: cql
        - containerPort: 9160
          name: thrift
        volumeMounts:
        - mountPath: /cassandra_data
          name: data
      volumes:
      - name: data
        emptyDir: {}
```
[Download example](cassandra-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute, which contains the controller's selector query, and the ```replicas``` attribute, which specifies the desired number of replicas, in this case 1.
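After you create it (next step), the controller can be inspected like any other resource; a minimal sketch, assuming the `cassandra` name from the example above:

```console
# list the controller and show its selector, replica count, and events
$ kubectl get rc cassandra
$ kubectl describe rc cassandra
```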
Create this controller:

@@ -64,7 +64,7 @@ You should already have turned up a Kubernetes cluster. To get the most of this
The Celery task queue will need to communicate with the RabbitMQ broker. RabbitMQ will eventually appear on a separate pod, but since pods are ephemeral we need a service that can transparently route requests to RabbitMQ.
Use the file [`examples/celery-rabbitmq/rabbitmq-service.yaml`](rabbitmq-service.yaml):
<!-- BEGIN MUNGE: EXAMPLE rabbitmq-service.yaml -->
```yaml
apiVersion: v1
@@ -81,6 +81,9 @@ spec:
component: rabbitmq
```
[Download example](rabbitmq-service.yaml)
<!-- END MUNGE: EXAMPLE -->
To start the service, run:
```sh
@@ -94,6 +97,8 @@ This service allows other pods to connect to the rabbitmq. To them, it will be s
A RabbitMQ broker can be turned up using the file [`examples/celery-rabbitmq/rabbitmq-controller.yaml`](rabbitmq-controller.yaml):
<!-- BEGIN MUNGE: EXAMPLE rabbitmq-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -121,6 +126,9 @@ spec:
cpu: 100m
```
[Download example](rabbitmq-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
Running `$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml` brings up a replication controller that ensures one pod running a RabbitMQ instance exists.
Note that bringing up the pod includes pulling down a Docker image, which may take a few moments. This applies to all other pods in this example.
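If you want to confirm the broker is up before moving on (an aside; the label is taken from the `component: rabbitmq` selector shown above), list the matching pods and wait for the `Running` state:

```console
$ kubectl get pods -l component=rabbitmq
```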
@@ -130,6 +138,8 @@ Note that bringing up the pod includes pulling down a docker image, which may ta
Bringing up the celery worker is done by running `$ kubectl create -f examples/celery-rabbitmq/celery-controller.yaml`, which contains this:
<!-- BEGIN MUNGE: EXAMPLE celery-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -157,6 +167,9 @@ spec:
cpu: 100m
```
[Download example](celery-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
There are several things to point out here...
Like the RabbitMQ controller, this controller ensures that there is always a pod running a Celery worker instance. The celery-app-add Docker image is an extension of the standard Celery image. This is the Dockerfile:
@@ -207,6 +220,8 @@ Flower is a web-based tool for monitoring and administrating Celery clusters. By
First, start the flower service with `$ kubectl create -f examples/celery-rabbitmq/flower-service.yaml`. The service is defined as below:
<!-- BEGIN MUNGE: EXAMPLE flower-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -223,6 +238,9 @@ spec:
type: LoadBalancer
```
[Download example](flower-service.yaml)
<!-- END MUNGE: EXAMPLE -->
It is marked as external (`type: LoadBalancer`). However, on many platforms you will have to add an explicit firewall rule to open port 5555.
On GCE this can be done with:
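The exact command is not shown in this hunk; as a rough sketch (the rule name and target tag below are placeholders, not from the original), a GCE firewall rule opening port 5555 looks something like:

```console
# placeholder rule name and target tag -- adjust to your cluster
$ gcloud compute firewall-rules create flower-5555 --allow=tcp:5555 --target-tags=kubernetes-minion
```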
@@ -234,6 +252,8 @@ Please remember to delete the rule after you are done with the example (on GCE:
To bring up the pods, run this command `$ kubectl create -f examples/celery-rabbitmq/flower-controller.yaml`. This controller is defined as so:
<!-- BEGIN MUNGE: EXAMPLE flower-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -259,6 +279,9 @@ spec:
cpu: 100m
```
[Download example](flower-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
This will bring up a new pod with Flower installed and port 5555 (Flower's default port) exposed through the service endpoint. This image uses the following command to start Flower:
```sh

@@ -46,8 +46,9 @@ server to get a list of matching Elasticsearch pods. To enable authenticated
communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
with the basic authentication username and password.
Here is an example replication controller specification that creates 4 instances of Elasticsearch.
<!-- BEGIN MUNGE: EXAMPLE music-rc.yaml -->
```yaml
apiVersion: v1
@@ -91,6 +92,9 @@ spec:
secretName: apiserver-secret
```
[Download example](music-rc.yaml)
<!-- END MUNGE: EXAMPLE -->
The `CLUSTER_NAME` variable gives a name to the cluster and allows multiple separate clusters to
exist in the same namespace.
The `SELECTOR` variable should be set to a label query that identifies the Elasticsearch
@@ -101,7 +105,9 @@ to be used to search for Elasticsearch pods and this should be the same as the n
for the replication controller (in this case `mytunes`).
Before creating pods with the replication controller, a secret containing the bearer authentication token should be set up.
<!-- BEGIN MUNGE: EXAMPLE apiserver-secret.yaml -->
```yaml
apiVersion: v1
@@ -113,6 +119,9 @@ data:
token: "TOKEN"
```
[Download example](apiserver-secret.yaml)
<!-- END MUNGE: EXAMPLE -->
Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the base64-encoded version of the bearer token reported by `kubectl config view`, e.g.
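The worked example is elided in this hunk; as a sketch, the encoding itself can be done with `base64` (the token string below is a placeholder for whatever `kubectl config view` reports):

```console
# <bearer-token> is a placeholder
$ echo -n "<bearer-token>" | base64
```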
@@ -154,7 +163,9 @@ replicationcontrollers/music-db
```
It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch cluster.
<!-- BEGIN MUNGE: EXAMPLE music-service.yaml -->
```yaml
apiVersion: v1
@@ -174,6 +185,9 @@ spec:
type: LoadBalancer
```
[Download example](music-service.yaml)
<!-- END MUNGE: EXAMPLE -->
Let's create the service with an external load balancer:
```console

@@ -75,7 +75,7 @@ To start the redis master, use the file `examples/guestbook/redis-master-control
Although we have a single instance of our redis master, we are using a [replication controller](../../docs/user-guide/replication-controller.md) to enforce that exactly one pod keeps running. For example, if the node were to go down, the replication controller would ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)
Here is `redis-master-controller.yaml`:
<!-- BEGIN MUNGE: EXAMPLE redis-master-controller.yaml -->
```yaml
apiVersion: v1
@@ -100,6 +100,9 @@ spec:
- containerPort: 6379
```
[Download example](redis-master-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
Change to the `<kubernetes>/examples/guestbook` directory if you're not already there. Create the redis master pod in your Kubernetes cluster by running:
```console
@@ -200,6 +203,8 @@ The selector field of the service description determines which pods will receive
The file `examples/guestbook/redis-master-service.yaml` defines the redis master service:
<!-- BEGIN MUNGE: EXAMPLE redis-master-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -216,6 +221,9 @@ spec:
name: redis-master
```
[Download example](redis-master-service.yaml)
<!-- END MUNGE: EXAMPLE -->
Create the service by running:
```console
@@ -262,6 +270,8 @@ In Kubernetes, a replication controller is responsible for managing multiple ins
To create the replicated pod, use the file `examples/guestbook/redis-slave-controller.yaml`, which looks like this:
<!-- BEGIN MUNGE: EXAMPLE redis-slave-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -285,6 +295,9 @@ spec:
- containerPort: 6379
```
[Download example](redis-slave-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
and create the replication controller by running:
```console
@@ -316,6 +329,8 @@ Just like the master, we want to have a service to proxy connections to the redi
The service specification for the slaves is in `examples/guestbook/redis-slave-service.yaml`:
<!-- BEGIN MUNGE: EXAMPLE redis-slave-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -331,6 +346,9 @@ spec:
name: redis-slave
```
[Download example](redis-slave-service.yaml)
<!-- END MUNGE: EXAMPLE -->
This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on the service itself, as we've done here, to make it easy to locate it with the `kubectl get services -l "label=value"` command.
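For instance, once created, assuming the service itself carries a `name: redis-slave` label as described, it can be located with:

```console
$ kubectl get services -l name=redis-slave
```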
Now that you have created the service specification, create it in your cluster by running:
@@ -354,6 +372,8 @@ Again we'll create a set of replicated frontend pods instantiated by a replicati
The pod is described in the file `examples/guestbook/frontend-controller.yaml`:
<!-- BEGIN MUNGE: EXAMPLE frontend-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -377,6 +397,9 @@ spec:
- containerPort: 80
```
[Download example](frontend-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
Using this file, you can turn up your frontend with:
```console
@@ -457,6 +480,8 @@ Note the use of the `redis-master` and `redis-slave` host names-- we're finding
As with the other pods, we now want to create a service to group your frontend pods.
The service is described in the file `frontend-service.yaml`:
<!-- BEGIN MUNGE: EXAMPLE frontend-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -470,11 +495,14 @@ spec:
  # type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend
```
[Download example](frontend-service.yaml)
<!-- END MUNGE: EXAMPLE -->
#### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)
For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can specify to use an external load balancer

@@ -67,20 +67,25 @@ In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set o
Here is the service description:
<!-- BEGIN MUNGE: EXAMPLE hazelcast-service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: hazelcast
  name: hazelcast
spec:
  ports:
  - port: 5701
  selector:
    name: hazelcast
```
[Download example](hazelcast-service.yaml)
<!-- END MUNGE: EXAMPLE -->
The important thing to note here is the `selector`. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case, the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
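One way to see the selector in action (an aside, not part of the original doc) is to describe the service once it and the pods exist; the `Endpoints` field lists the pods it selected:

```console
$ kubectl describe services hazelcast
```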
Create this service as follows:
@@ -97,6 +102,8 @@ In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-cont
Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Hazelcast Pod.
<!-- BEGIN MUNGE: EXAMPLE hazelcast-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
@@ -131,6 +138,9 @@ spec:
name: hazelcast
```
[Download example](hazelcast-controller.yaml)
<!-- END MUNGE: EXAMPLE -->
There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However, it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps a Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 cpu core.

@@ -95,6 +95,8 @@ Now that the persistent disks are defined, the Kubernetes pods can be launched.
First, **edit [`mysql.yaml`](mysql.yaml)**, the mysql pod definition, to use a database password that you specify.
`mysql.yaml` looks like this:
<!-- BEGIN MUNGE: EXAMPLE mysql.yaml -->
```yaml
apiVersion: v1
kind: Pod
@@ -127,9 +129,11 @@ spec:
# This GCE PD must already exist.
pdName: mysql-disk
fsType: ext4
```
[Download example](mysql.yaml)
<!-- END MUNGE: EXAMPLE -->
Note that we've defined a volume mount for `/var/lib/mysql`, and specified a volume that uses the persistent disk (`mysql-disk`) that you created.
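If you want to double-check that the referenced disk exists before launching the pod (an optional aside; depending on your configuration you may need to pass `--zone`), something like this works on GCE:

```console
# disk name taken from the volume definition above
$ gcloud compute disks describe mysql-disk
```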
Once you've edited the file to set your database password, create the pod as follows, where `<kubernetes>` is the path to your Kubernetes installation:
@@ -164,6 +168,8 @@ So if we label our Kubernetes mysql service `mysql`, the wordpress pod will be a
The [`mysql-service.yaml`](mysql-service.yaml) file looks like this:
<!-- BEGIN MUNGE: EXAMPLE mysql-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -180,6 +186,9 @@ spec:
name: mysql
```
[Download example](mysql-service.yaml)
<!-- END MUNGE: EXAMPLE -->
Start the service like this:
```sh
@@ -199,6 +208,8 @@ Once the mysql service is up, start the wordpress pod, specified in
[`wordpress.yaml`](wordpress.yaml). Before you start it, **edit `wordpress.yaml`** and **set the database password to be the same as you used in `mysql.yaml`**.
Note that this config file also defines a volume, this one using the `wordpress-disk` persistent disk that you created.
<!-- BEGIN MUNGE: EXAMPLE wordpress.yaml -->
```yaml
apiVersion: v1
kind: Pod
@@ -230,6 +241,9 @@ spec:
fsType: ext4
```
[Download example](wordpress.yaml)
<!-- END MUNGE: EXAMPLE -->
Create the pod:
```sh
@@ -249,6 +263,8 @@ Once the wordpress pod is running, start its service, specified by [`wordpress-s
The service config file looks like this:
<!-- BEGIN MUNGE: EXAMPLE wordpress-service.yaml -->
```yaml
apiVersion: v1
kind: Service
@@ -266,6 +282,9 @@ spec:
type: LoadBalancer
```
[Download example](wordpress-service.yaml)
<!-- END MUNGE: EXAMPLE -->
Note the `type: LoadBalancer` setting. This will set up the wordpress service behind an external IP.
Note also that we've set the service port to 80. We'll return to that shortly.
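Once the service is created, the external IP assigned by the load balancer appears in the service description after a short delay (a hedged aside; the `wordpress` name is assumed from `wordpress-service.yaml`):

```console
# look for the LoadBalancer Ingress / external IP field
$ kubectl describe services wordpress
```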

@@ -56,6 +56,8 @@ In the remaining part of this example we will assume that your instance is named
To start the Phabricator server, use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json), which describes a [replication controller](../../docs/user-guide/replication-controller.md) with a single [pod](../../docs/user-guide/pods.md) running an Apache server with the Phabricator PHP source:
<!-- BEGIN MUNGE: EXAMPLE phabricator-controller.json -->
```json
{
"kind": "ReplicationController",
@@ -96,6 +98,9 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont
}
```
[Download example](phabricator-controller.json)
<!-- END MUNGE: EXAMPLE -->
Create the phabricator pod in your Kubernetes cluster by running:
```sh
@@ -147,6 +152,8 @@ gcloud sql instances patch phabricator-db --authorized-networks 130.211.141.151
To automate this process and make sure that a proper host is authorized even if the pod is rescheduled to a new machine, we need a separate pod that periodically lists pods and authorizes hosts. Use the file [`examples/phabricator/authenticator-controller.json`](authenticator-controller.json):
<!-- BEGIN MUNGE: EXAMPLE authenticator-controller.json -->
```json
{
"kind": "ReplicationController",
@@ -172,7 +179,7 @@ To automate this process and make sure that a proper host is authorized even if
"containers": [
{
"name": "authenticator",
"image": "gcr.io.google_containers/cloudsql-authenticator:v1"
"image": "gcr.io/google_containers/cloudsql-authenticator:v1"
}
]
}
@@ -181,6 +188,9 @@ To automate this process and make sure that a proper host is authorized even if
}
```
[Download example](authenticator-controller.json)
<!-- END MUNGE: EXAMPLE -->
To create the pod run:
```sh
@@ -203,6 +213,8 @@ phabricator us-central1 107.178.210.6 RESERVED
Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json):
<!-- BEGIN MUNGE: EXAMPLE phabricator-service.json -->
```json
{
"kind": "Service",
@@ -225,6 +237,9 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi
}
```
[Download example](phabricator-service.json)
<!-- END MUNGE: EXAMPLE -->
To create the service run:
```sh