Copy edits for typos

Ed Costello
2015-08-09 14:18:06 -04:00
parent 2bfa9a1f98
commit 35a5eda585
33 changed files with 42 additions and 42 deletions


@@ -35,7 +35,7 @@ Documentation for other releases can be found at
This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
upgrading your cluster's
-master and worker nodes, performing node maintainence (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
+master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
running cluster.
## Creating and configuring a Cluster
@@ -132,7 +132,7 @@ For pods with a replication controller, the pod will eventually be replaced by a
For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
-Perform maintainence work on the node.
+Perform maintenance work on the node.
Make the node schedulable again:
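The command elided here is presumably of this shape (a sketch; `$NODENAME` is a placeholder, and newer kubectl releases also offer `kubectl uncordon` for this step):

```sh
# Clear the unschedulable flag so new pods can land on the node again
kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": false}}'
```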


@@ -41,7 +41,7 @@ objects.
Access Control: give *only* kube-apiserver read/write access to etcd. You do not
want apiserver's etcd exposed to every node in your cluster (or worse, to the
-internet at large), because access to etcd is equivilent to root in your
+internet at large), because access to etcd is equivalent to root in your
cluster.
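As a sketch of what "only kube-apiserver" access can look like at the etcd layer (flag names are from etcd 2.1+; the address and certificate paths below are placeholders — network-level firewalling of the etcd port achieves a similar effect):

```sh
# Serve clients over TLS and require a client certificate signed by our CA,
# so only the apiserver (which holds such a cert) can read or write.
etcd --listen-client-urls=https://10.0.0.1:2379 \
     --advertise-client-urls=https://10.0.0.1:2379 \
     --client-cert-auth \
     --trusted-ca-file=/srv/kubernetes/ca.crt \
     --cert-file=/srv/kubernetes/etcd-server.crt \
     --key-file=/srv/kubernetes/etcd-server.key
```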
Data Reliability: for reasonable safety, either etcd needs to be run as a


@@ -41,7 +41,7 @@ Documentation for other releases can be found at
The kubelet is the primary "node agent" that runs on each
node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
that describes a pod. The kubelet takes a set of PodSpecs that are provided through
-various echanisms (primarily through the apiserver) and ensures that the containers
+various mechanisms (primarily through the apiserver) and ensures that the containers
described in those PodSpecs are running and healthy.
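One of those "various mechanisms" is a manifest file on local disk; a minimal sketch (the directory is whatever the kubelet's `--config` flag points at in this era's releases, `/etc/kubernetes/manifests` being a common choice):

```sh
# The kubelet watches this directory and keeps the described pod running
cat <<EOF > /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
```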
Other than from an PodSpec from the apiserver, there are three ways that a container


@@ -84,7 +84,7 @@ TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
-- observes secret deleteion and removes a reference from the corresponding ServiceAccount if needed
+- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed
#### To create additional API tokens
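The usual flow, sketched below, is to create a Secret of the service-account-token type that references the ServiceAccount; the TokenController then populates the token field (the secret name and the `default` account here are examples):

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: default-token-extra
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
```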


@@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP
git remote set-url --push upstream no_push
```
-### Commiting changes to your fork
+### Committing changes to your fork
```sh
git commit
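# After committing, push the branch to your fork; this assumes 'origin'
# points at your fork and 'my-feature' is the branch you committed to
# (names illustrative).
git push origin my-feature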


@@ -223,7 +223,7 @@ frontend-z9oxo 1/1 Running 0 41s
## Exposing the app to the outside world
-There is no native Azure load-ballancer support in Kubernets 1.0, however here is how you can expose the Guestbook app to the Internet.
+There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.
```
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf


@@ -87,7 +87,7 @@ cd kubernetes/cluster/docker-multinode
`Master done!`
-See [here](docker-multinode/master.md) for detailed instructions explaination.
+See [here](docker-multinode/master.md) for detailed instructions explanation.
## Adding a worker node
@@ -104,7 +104,7 @@ cd kubernetes/cluster/docker-multinode
`Worker done!`
-See [here](docker-multinode/worker.md) for detailed instructions explaination.
+See [here](docker-multinode/worker.md) for detailed instructions explanation.
## Testing your cluster


@@ -74,7 +74,7 @@ parameters as follows:
```
NOTE: The above is specifically for GRUB2.
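For GRUB2, the usual route (sketched for Debian/Ubuntu-style layouts; adjust for your distro) is to append the parameters to `GRUB_CMDLINE_LINUX` in `/etc/default/grub` and regenerate the config:

```sh
# After editing GRUB_CMDLINE_LINUX in /etc/default/grub:
sudo update-grub                                    # Debian/Ubuntu
# or: sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/RHEL-style
```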
-You can check the command line parameters passed to your kenel by looking at the
+You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```console


@@ -187,7 +187,7 @@ cd ~/kubernetes/contrib/ansible/
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
-**Show kubernets nodes**
+**Show kubernetes nodes**
Run the following on the kube-master:
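Given the heading, the elided command is presumably just:

```sh
kubectl get nodes
```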


@@ -657,7 +657,7 @@ This pod mounts several node file system directories using the `hostPath` volum
authenticate external services, such as a cloud provider.
- This is not required if you do not use a cloud provider (e.g. bare-metal).
- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
-node disk. These could instead be stored on a persistend disk, such as a GCE PD, or baked into the image.
+node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
- Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.
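For reference, the hostPath pieces these bullets describe would look roughly like this in the (elided) pod template; a sketch with placeholder names:

```sh
# Illustrative fragment only; splice into the pod template's spec
cat <<'EOF' > /tmp/apiserver-hostpath-fragment.yaml
volumeMounts:
- name: srvkube
  mountPath: /srv/kubernetes
  readOnly: true
- name: varlog
  mountPath: /var/log
volumes:
- name: srvkube
  hostPath:
    path: /srv/kubernetes
- name: varlog
  hostPath:
    path: /var/log
EOF
```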


@@ -67,14 +67,14 @@ When a client sends a watch request to apiserver, instead of redirecting it to
etcd, it will cause:
- registering a handler to receive all new changes coming from etcd
-- iteratiting though a watch window, starting at the requested resourceVersion
-to the head and sending filetered changes directory to the client, blocking
+- iterating though a watch window, starting at the requested resourceVersion
+to the head and sending filtered changes directory to the client, blocking
the above until this iteration has caught up
This will be done be creating a go-routine per watcher that will be responsible
for performing the above.
-The following section describes the proposal in more details, analizes some
+The following section describes the proposal in more details, analyzes some
corner cases and divides the whole design in more fine-grained steps.
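To ground the terms above: from a client's perspective, a watch pinned to a resourceVersion looks roughly like this (the insecure local apiserver port and the version number are examples):

```sh
# Replays changes from resourceVersion 10245 up to the head, then streams
# new events as they arrive.
curl -s "http://127.0.0.1:8080/api/v1/pods?watch=true&resourceVersion=10245"
```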


@@ -238,8 +238,8 @@ Address 1: 10.0.116.146
## Securing the Service
Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
-* Self signed certificates for https (unless you already have an identitiy certificate)
-* An nginx server configured to use the cretificates
+* Self signed certificates for https (unless you already have an identity certificate)
+* An nginx server configured to use the certificates
* A [secret](secrets.md) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
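The elided steps are roughly of the following shape (a sketch, not the example's exact commands; the CN and secret name are placeholders, and `kubectl create secret` assumes a reasonably recent kubectl):

```sh
# Self-signed cert for the service, then a secret wrapping key and cert
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc"
kubectl create secret generic nginxsecret \
  --from-file=/tmp/nginx.key --from-file=/tmp/nginx.crt
```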


@@ -214,7 +214,7 @@ $ kubectl logs -f nginx-app-zibvs
```
-Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invokation is separate. To see the output from a prevoius run in Kubernetes, do this:
+Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```console
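# Sketch of the elided command; --previous (-p) selects the prior run
$ kubectl logs --previous nginx-app-zibvs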


@@ -58,7 +58,7 @@ A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1
* `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0.
* `TCPSocketAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open.
-* `HTTPGetAction`: performs an HTTP Get againsts the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.
+* `HTTPGetAction`: performs an HTTP Get against the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.
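As an illustrative fragment (path, port, and timings are examples), an `HTTPGetAction` wired up as a container's liveness probe looks like:

```sh
cat <<'EOF' > /tmp/liveness-probe-snippet.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
EOF
```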
Each probe will have one of three results:


@@ -61,7 +61,7 @@ Here are some key points:
* **Application-centric management**:
Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. This provides the simplicity of PaaS with the flexibility of IaaS and enables you to run much more than just [12-factor apps](http://12factor.net/).
* **Dev and Ops separation of concerns**:
-Provides separatation of build and deployment; therefore, decoupling applications from infrastructure.
+Provides separation of build and deployment; therefore, decoupling applications from infrastructure.
* **Agile application creation and deployment**:
Increased ease and efficiency of container image creation compared to VM image use.
* **Continuous development, integration, and deployment**: