Fix trailing whitespace in all docs
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
## Introduction
Celery is an asynchronous task queue based on distributed message passing. It is used to create execution units (i.e. tasks) which are then executed on one or more worker nodes, either synchronously or asynchronously.

Celery is implemented in Python.
@@ -249,7 +249,7 @@ On GCE this can be done with:
```
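The create command itself falls outside this hunk; on GCE it would have looked roughly like the sketch below. The port and target tag are inferred from the delete command further down and from Flower's default port, so treat them as assumptions.

```console
$ gcloud compute firewall-rules create kubernetes-minion-5555 --allow=tcp:5555 --target-tags=kubernetes-minion
```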
Please remember to delete the rule after you are done with the example (on GCE: `$ gcloud compute firewall-rules delete kubernetes-minion-5555`)
To bring up the pods, run this command: `$ kubectl create -f examples/celery-rabbitmq/flower-controller.yaml`. This controller is defined as follows:
<!-- BEGIN MUNGE: EXAMPLE flower-controller.yaml -->
@@ -34,7 +34,7 @@ Documentation for other releases can be found at
# Elasticsearch for Kubernetes
This directory contains the source for a Docker image that creates an instance of [Elasticsearch](https://www.elastic.co/products/elasticsearch) 1.5.2 which can be used to automatically form clusters when used with [replication controllers](../../docs/user-guide/replication-controller.md). This will not work with the library Elasticsearch image because multicast discovery will not find the other pod IPs needed to form a cluster. This
@@ -102,10 +102,10 @@ nodes that should participate in this cluster. For our example we specify `name=
match all pods that have the label `name` set to the value `music-db`. The `NAMESPACE` variable identifies the namespace to be used to search for Elasticsearch pods, and this should be the same as the namespace specified for the replication controller (in this case `mytunes`).
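If the `mytunes` namespace does not already exist in your cluster, a minimal way to create it is shown below; this is only a sketch, and the namespace name is simply the one used throughout this example.

```console
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: mytunes
EOF
```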
Before creating pods with the replication controller, a secret containing the bearer authentication token should be set up.
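The token goes into the secret's `data` field, and Kubernetes `Secret` data values are base64-encoded, so producing that value is a one-liner (substitute your cluster's bearer token for the placeholder):

```console
$ echo -n "<your-bearer-token>" | base64
```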
<!-- BEGIN MUNGE: EXAMPLE apiserver-secret.yaml -->
@@ -163,7 +163,7 @@ replicationcontrollers/music-db
```
It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch cluster.
<!-- BEGIN MUNGE: EXAMPLE music-service.yaml -->
@@ -59,7 +59,7 @@ Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
```
The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
|
||||
The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
|
||||
|
||||
Create the endpoints,
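which in this repository is just a `kubectl create` against the endpoints file quoted above (the `examples/glusterfs/` path assumes the file sits next to this README):

```console
$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
```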
@@ -90,11 +90,11 @@ The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustra
}
```
The parameters are explained as follows.

- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storms; it will randomly pick one host from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
Create a pod that has a container using Glusterfs volume,
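which, as with the endpoints, comes down to a couple of `kubectl` commands (again assuming the example files live under `examples/glusterfs/`):

```console
$ kubectl create -f examples/glusterfs/glusterfs-pod.json
$ kubectl get pods
```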
@@ -37,16 +37,16 @@ This example shows how to build a simple multi-tier web application using Kubern
If you are running a cluster in Google Container Engine (GKE), instead see the [Guestbook Example for Google Container Engine](https://cloud.google.com/container-engine/docs/tutorials/guestbook).
##### Table of Contents

* [Step Zero: Prerequisites](#step-zero)
* [Step One: Create the Redis master pod](#step-one)
* [Step Two: Create the Redis master service](#step-two)
* [Step Three: Create the Redis slave pods](#step-three)
* [Step Four: Create the Redis slave service](#step-four)
* [Step Five: Create the guestbook pods](#step-five)
* [Step Six: Create the guestbook service](#step-six)
* [Step Seven: View the guestbook](#step-seven)
* [Step Eight: Cleanup](#step-eight)
### Step Zero: Prerequisites <a id="step-zero"></a>
@@ -64,7 +64,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
```console
$ kubectl create -f examples/guestbook-go/redis-master-controller.json
replicationcontrollers/redis-master
```
<nop>2. To verify that the redis-master-controller is up, list all the replication controllers in the cluster with the `kubectl get rc` command:
@@ -102,7 +102,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
### Step Two: Create the Redis master service <a id="step-two"></a>
A Kubernetes '[service](../../docs/user-guide/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS.

Services find the containers to load balance based on pod labels. The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.
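A quick way to see which pods such a selector would match is to query by those labels directly; this only reads cluster state and creates nothing:

```console
$ kubectl get pods -l app=redis,role=master
```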
@@ -179,7 +179,7 @@ Just like the master, we want to have a service to proxy connections to the read
services/redis-slave
```
<nop>2. To verify that the redis-slave service is up, list all the services in the cluster with the `kubectl get services` command:
```console
$ kubectl get services
@@ -189,7 +189,7 @@ Just like the master, we want to have a service to proxy connections to the read
...
```
Result: The service is created with labels `app=redis` and `role=slave` to identify that the pods are running the Redis slaves.

Tip: It is helpful to set labels on your services themselves--as we've done here--to make it easy to locate them later.
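For instance, with the labels used in this guide you can pull up every Redis-related service in one query (purely illustrative, and assuming the labels above were applied to the services):

```console
$ kubectl get services -l app=redis
```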
@@ -264,7 +264,7 @@ You can now play with the guestbook that you just created by opening it in a bro
If you are running Kubernetes locally, to view the guestbook, navigate to `http://localhost:3000` in your browser.
* **Remote Host:**
1. To view the guestbook on a remote host, locate the external IP of the load balancer in the **IP** column of the `kubectl get services` output. In our example, the internal IP address is `10.0.217.218` and the external IP address is `146.148.81.8` (*Note: you might need to scroll to see the IP column*).
2. Append port `3000` to the IP address (for example `http://146.148.81.8:3000`), and then navigate to that address in your browser.
@@ -56,7 +56,7 @@ Source is freely available at:
### Simple Single Pod Hazelcast Node
In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.
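For orientation, a service definition of that kind looks roughly like the sketch below. The name, labels, and selector here are assumptions (the port is Hazelcast's default cluster port, 5701), so defer to the service definition used by this example.

```console
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hazelcast
  labels:
    name: hazelcast
spec:
  ports:
    - port: 5701
  selector:
    name: hazelcast
EOF
```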
@@ -38,8 +38,8 @@ Documentation for other releases can be found at
If you use Fedora 21 on a Kubernetes node, then first install the iSCSI initiator on the node:
    # yum -y install iscsi-initiator-utils
then edit */etc/iscsi/initiatorname.iscsi* and */etc/iscsi/iscsid.conf* to match your iSCSI target configuration.
I mostly followed these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi) to set up the iSCSI target, and these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2) to set up the iSCSI initiator.
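Once the initiator side is configured, a quick sanity check that it can actually reach the target is a discovery run (substitute your own target portal address):

```console
# iscsiadm -m discovery -t sendtargets -p <target-ip>:3260
```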
@@ -50,7 +50,7 @@ GCE does not provide preconfigured Fedora 21 image, so I set up the iSCSI target
## Step 2. Creating the pod with iSCSI persistent storage
Once you have installed the iSCSI initiator and new Kubernetes, you can create a pod based on my example *iscsi.json*. In the pod JSON, you need to provide *targetPortal* (the iSCSI target's **IP** address and *port* if not the default port 3260), the target's *iqn*, *lun*, the type of the filesystem that has been created on the LUN, and the *readOnly* boolean.
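With those fields filled in, creating and checking the pod is the usual pair of commands; the path below is an assumption about where this example lives in the repository:

```console
$ kubectl create -f examples/iscsi/iscsi.json
$ kubectl get pods
```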
**Note:** If you have followed the instructions in the links above, you may have partitioned the device; the iSCSI volume plugin does not
@@ -43,9 +43,9 @@ This is a follow up to the [Guestbook Example](../guestbook/README.md)'s [Go imp
This application will run a web server which returns Redis records for a petstore application. It is meant to simulate and test high load on Kubernetes or any other Docker-based system.
If you are new to Kubernetes and you haven't run the guestbook yet, you might want to stop here, go back, and run the guestbook app first.

The guestbook tutorial will teach you a lot about the basics of Kubernetes, and we've tried not to be redundant here.
@@ -61,15 +61,15 @@ This project depends on three docker images which you can build for yourself and
in your dockerhub "dockerhub-name".
Since these images are already published under other parties like redis, jayunit100, and so on, you don't need to build the images to run the app.

If you do want to build the images, you will need to build and push the images in this repository.

For a list of those images, see the `build-and-push` shell script - it builds and pushes all the images for you; just modify the dockerhub user name in it accordingly.
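Done by hand, each image is the usual build-and-push pair; the image name below is only a placeholder, and the real names are the ones used in the `build-and-push` script:

```console
$ docker build -t <dockerhub-name>/web-server .
$ docker push <dockerhub-name>/web-server
```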
## Get started with the WEBAPP
The web app is written in Go, and borrowed from the original Guestbook example by Brendan Burns.
@@ -87,13 +87,13 @@ If that is all working, you can finally run `k8petstore.sh` in any Kubernetes cl
The web front end provides users an interface for watching pet store transactions in real time as they occur.
To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But that's not nearly as fun, and it's not a good test of a real-world scenario where payloads scale and have lots of information content.
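For what it's worth, such a throwaway load loop is a one-liner against the REST path mentioned above, run from wherever port 3000 of the web server is reachable:

```console
$ while true; do curl -s "http://localhost:3000/k8petstore/rpush/$(date +%s)" > /dev/null; done
```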
Similarly, you can locally run and test the data generator code, which is Java-based; you can pull it down directly from Apache Bigtop.
@@ -101,13 +101,13 @@ Directions for that are here : https://github.com/apache/bigtop/tree/master/bigt
You will likely want to check out the branch 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb, which is the exact commit that the current k8petstore was tested against.
## Now what?
Once you have done the above 3 steps, you have a working, from-source, locally runnable version of the k8petstore app. Now we can try to run it in Kubernetes.
## Hacking, testing, benchmarking
Once the app is running, you can access the app in your browser. You should see a chart and the k8petstore title page, as well as an indicator of transaction throughput, and so on.
@@ -117,7 +117,7 @@ You can modify the HTML pages, add new REST paths to the Go app, and so on.
Now that you are done hacking around on the app, you can run it in Kubernetes. To do this, you will want to rebuild the Docker images (most likely just the Go web-server app; the other images are less likely to need changes), and then push those images to Docker Hub.
Now, how to run the entire application in Kubernetes?
To simplify running this application, we have a single file, k8petstore.sh, which writes out JSON files onto disk. This allows us to have dynamic parameters, without needing to worry about managing multiple JSON files.
@@ -127,13 +127,13 @@ So, to run this app in Kubernetes, simply run [The all in one k8petstore.sh shel
Note that at the top of the script there are a few self-explanatory parameters to set, among which the Public IPs parameter is where you can check out the web UI (at $PUBLIC_IP:3000), which will show a plot and readouts of transaction throughput.
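In practice that boils down to something like the following; the script path is an assumption about where this example sits in the repository, and `$PUBLIC_IP` is whichever address you configured at the top of the script:

```console
$ ./examples/k8petstore/k8petstore.sh
$ curl -s "http://$PUBLIC_IP:3000" | head
```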
In the meantime, because the public IP will be deprecated in Kubernetes v1, we provide two other scripts, k8petstore-loadbalancer.sh and k8petstore-nodeport.sh. As the names suggest, they rely on LoadBalancer and NodePort respectively. More details can be found [here](../../docs/user-guide/services.md#external-services).
## Future
In the future, we plan to add Cassandra support. Redis is a fabulous in-memory data store, but it is not meant for truly available and resilient storage.
Thus we plan to add another tier of queueing, which empties the Redis transactions into a Cassandra store that persists them.
## Questions
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
This container is maintained as part of the Apache Bigtop project.
To create it, simply
`git clone https://github.com/apache/bigtop`
@@ -43,7 +43,7 @@ and checkout the last exact version (will be updated periodically).
`git checkout -b aNewBranch 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb`
then, cd to bigtop-bigpetstore/bigpetstore-transaction-queue, and build the image from the Dockerfile, i.e.
`docker build -t jayunit100/bps-transaction-queue .`.
@@ -36,16 +36,16 @@ Documentation for other releases can be found at
Install Ceph on the Kubernetes host. For example, on Fedora 21
    # yum -y install ceph
If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/rootfs/docker-ceph).
Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
Once you have installed Ceph and new Kubernetes, you can create a pod based on my examples [rbd.json](rbd.json) and [rbd-with-secret.json](rbd-with-secret.json). In the pod JSON, you need to provide the following information.
- *monitors*: Ceph monitors.
- *pool*: The name of the RADOS pool. If not provided, the default *rbd* pool is used.
- *image*: The image name that rbd has created.
- *user*: The RADOS user name. If not provided, the default *admin* is used.
- *keyring*: The path to the keyring file. If not provided, the default */etc/ceph/keyring* is used.
- *secretName*: The name of the authentication secrets. If provided, *secretName* overrides *keyring*. Note, see below about how to create a secret.
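With those fields filled in, creating and checking the pod is the usual pair of commands; the `examples/rbd/` path is inferred from the secret example further down, so treat it as an assumption:

```console
$ kubectl create -f examples/rbd/rbd.json
$ kubectl get pods
```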
@@ -58,7 +58,7 @@ If Ceph authentication secret is provided, the secret should be first be base64
```console
# kubectl create -f examples/rbd/secret/ceph-secret.yaml
```
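If you need to produce the base64 value that goes into *ceph-secret.yaml* yourself, one way is to pull the key out of the keyring copied earlier; this is a sketch that assumes the usual Ceph keyring layout with a `key = ...` line:

```console
# grep key /etc/ceph/keyring | awk '{printf "%s", $NF}' | base64
```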
# Get started
@@ -130,7 +130,7 @@ We request for an external load balancer in the [admin-service.yaml](admin-servi
type: LoadBalancer
```
The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.
Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
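The exact command sits outside this hunk; on GCE it would be along the lines of the sketch below, where the rule name, port, and target tag are all assumptions to adapt to your own cluster:

```console
$ gcloud compute firewall-rules create admin-ui --allow=tcp:8080 --target-tags=kubernetes-minion
```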