Copy edits for typos

Ed Costello
2015-08-09 14:18:06 -04:00
parent 2bfa9a1f98
commit 35a5eda585
33 changed files with 42 additions and 42 deletions


@@ -244,7 +244,7 @@ spec:
 [Download example](cassandra-controller.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->
-Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the resplication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.
 Create this controller:
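As a sketch of the step above (assuming kubectl is configured for the target cluster), the controller can be created from the linked example file and the single replica confirmed:

```sh
# Create the replication controller from the downloaded definition.
kubectl create -f cassandra-controller.yaml

# List replication controllers; the replica count should read 1.
kubectl get rc
```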


@@ -40,7 +40,7 @@ with [replication controllers](../../docs/user-guide/replication-controller.md).
 because multicast discovery will not find the other pod IPs needed to form a cluster. This
 image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running in a specified [namespace](../../docs/user-guide/namespaces.md) with a given
 label selector. The detected instances are used to form a list of peer hosts which
-are used as part of the unicast discovery mechansim for Elasticsearch. The detection
+are used as part of the unicast discovery mechanism for Elasticsearch. The detection
 of the peer nodes is done by a program which communicates with the Kubernetes API
 server to get a list of matching Elasticsearch pods. To enable authenticated
 communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
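The detection described above boils down to a label-selector query against the API server; a rough equivalent by hand, where the namespace and the `component=elasticsearch` label are assumptions for illustration:

```sh
# List Elasticsearch pods in a namespace by label selector;
# -o wide includes the pod IPs that become the unicast peer list.
kubectl get pods --namespace=default -l component=elasticsearch -o wide
```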


@@ -280,7 +280,7 @@ You can now play with the guestbook that you just created by opening it in a bro
 ### Step Eight: Cleanup <a id="step-eight"></a>
-After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kuberentes replication controllers and services.
+After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.
 Delete all the resources by running the following `kubectl delete -f` *`filename`* command:
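For illustration, a hypothetical form of that command; substitute the manifest the guestbook resources were actually created from (the filename below is an assumption):

```sh
# Remove the services and replication controllers defined in the manifest.
kubectl delete -f guestbook.yaml
```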


@@ -141,7 +141,7 @@ spec:
 [Download example](hazelcast-controller.yaml)
 <!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->
-There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingle. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
+There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
 You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 cpu core.
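A brief sketch of creating the controller from the linked example and inspecting the resulting pod; the `name=hazelcast` label is an assumption for illustration:

```sh
# Create the Hazelcast replication controller.
kubectl create -f hazelcast-controller.yaml

# Describe the pod to see the exposed hazelcast port and the 1-CPU request.
kubectl describe pods -l name=hazelcast
```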


@@ -89,7 +89,7 @@ The web front end provides users an interface for watching pet store transaction
 To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a
-shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But thats not nearly
+shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But that's not nearly
 as fun, and its not a good test of a real world scenario where payloads scale and have lots of information content.
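That throwaway script would amount to little more than a loop around the quoted curl call (assuming the front end is reachable on localhost:3000):

```sh
# Push the same payload to the k8petstore endpoint over and over again.
while true; do
  curl localhost:3000/k8petstore/rpush/blahblahblah
done
```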


@@ -141,7 +141,7 @@ your cluster. Edit [`meteor-controller.json`](meteor-controller.json)
 and make sure the `image:` points to the container you just pushed to
 the Docker Hub or GCR.
-We will need to provide MongoDB a persistent Kuberetes volume to
+We will need to provide MongoDB a persistent Kubernetes volume to
 store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for
 options. We're going to use Google Compute Engine persistent
 disks. Create the MongoDB disk by running:
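The disk-creation step would look roughly like the following; the disk name, size, and zone are assumptions for illustration:

```sh
# Create a GCE persistent disk to back the MongoDB volume.
gcloud compute disks create mongo-disk --size=200GB --zone=us-central1-b
```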


@@ -98,7 +98,7 @@ $ cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${
 The output from this command will contain a single file that has all the required information needed to connect to your Kubernetes cluster that you previously provisioned. This file should be considered sensitive, so do not share this file with untrusted parties.
-We will later use this file to tell OpenShift how to bootstap its own configuration.
+We will later use this file to tell OpenShift how to bootstrap its own configuration.
 ### Step 2: Create an External Load Balancer to Route Traffic to OpenShift
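To illustrate the Step 1 note above that the exported file is both sensitive and sufficient to reach the cluster, a hypothetical check (assumes the command's output was redirected to a local file named `kubeconfig`):

```sh
# Restrict access to the exported credentials.
chmod 600 kubeconfig

# Verify the file alone is enough to talk to the cluster.
kubectl --kubeconfig=./kubeconfig get nodes
```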