Copy edits for typos

Ed Costello
2015-10-29 14:36:29 -04:00
parent d20ab89bd6
commit f968c593e3
21 changed files with 30 additions and 30 deletions

@@ -243,7 +243,7 @@ __Default Backends__: An Ingress with no rules, like the one shown in the previo
### Loadbalancing
-An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distil loadbalancing patterns that are applicable cross platform into the Ingress resource.
+An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
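
Since readiness probes are called out above as the health-check mechanism, a minimal sketch of one may help; the pod name, image, path, and port are illustrative assumptions rather than anything taken from this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod              # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                 # any HTTP backend would do
    readinessProbe:              # the pod receives traffic only while this probe succeeds
      httpGet:
        path: /healthz           # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```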

@@ -161,7 +161,7 @@ the same schema as a [pod](pods.md), except it is nested and does not have an `a
`kind`.
In addition to required fields for a Pod, a pod template in a job must specify appropriate
-lables (see [pod selector](#pod-selector) and an appropriate restart policy.
+labels (see [pod selector](#pod-selector) and an appropriate restart policy.
Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` are allowed.
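
As a rough sketch of what this hunk describes, the Job below puts matching labels on its pod template and restricts the restart policy to `Never`; the apiVersion, names, and image are assumptions made for illustration:

```yaml
apiVersion: extensions/v1beta1     # Job API group of that era; may differ between releases
kind: Job
metadata:
  name: example-job                # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-job             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: example-job
    spec:
      containers:
      - name: worker
        image: busybox             # placeholder image
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never         # only Never or OnFailure are allowed
```
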
@@ -171,7 +171,7 @@ The `.spec.selector` field is a label query over a set of pods.
The `spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](replication-controller.md)
-* `matchExpressions` - allows to build more sophisticated selectors by specyfing key,
+* `matchExpressions` - allows to build more sophisticated selectors by specifying key,
list of values and an operator that relates the key and values.
When the two are specified the result is ANDed.
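
A small sketch of the two selector styles side by side, using made-up keys and values; a pod must satisfy every clause for the selector to match:

```yaml
selector:
  matchLabels:
    app: example-job                                     # equality-style requirement
  matchExpressions:                                      # set-style requirements, ANDed with the above
  - {key: tier, operator: In, values: [frontend]}
  - {key: environment, operator: NotIn, values: [dev]}
```
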
@@ -215,7 +215,7 @@ restarted locally, or else specify `.spec.template.containers[].restartPolicy =
See [pods-states](pod-states.md) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
-(node is upgraded, rebooted, delelted, etc.), or if a container of the Pod fails and the
+(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.containers[].restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like

@@ -207,7 +207,7 @@ mister-red,mister-red,2
Also, since we have other users who validate using **other** mechanisms, the api-server would have probably been launched with other authentication options (there are many such options, make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
-- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in succesfully, because we are providigin the green-user's client credentials.
+- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All this information would be handled for us by the
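
A hedged sketch of such a kubeconfig, with one certificate-based user and one token-based user; the server address, file paths, and token value are placeholders:

```yaml
apiVersion: v1
kind: Config
current-context: green-context          # switch to blue-context to operate as blue-user
clusters:
- name: my-cluster
  cluster:
    server: https://1.2.3.4             # placeholder api-server address
    certificate-authority: /path/to/ca.crt
users:
- name: green-user
  user:
    client-certificate: /path/to/green-user.crt   # logs in with client certificates
    client-key: /path/to/green-user.key
- name: blue-user
  user:
    token: blue-user-bearer-token                 # logs in with a bearer token
contexts:
- name: green-context
  context:
    cluster: my-cluster
    user: green-user
- name: blue-context
  context:
    cluster: my-cluster
    user: blue-user
```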

@@ -146,7 +146,7 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
* _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
* _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
-Both label selector styles can be used to list or watch resources via a REST client. For example targetting `apiserver` with `kubectl` and using _equality-based_ one may write:
+Both label selector styles can be used to list or watch resources via a REST client. For example targeting `apiserver` with `kubectl` and using _equality-based_ one may write:
```console
$ kubectl get pods -l environment=production,tier=frontend
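# For comparison, a set-based selector can be passed the same way; this is an
# illustrative sketch mirroring the URL-encoded example above:
$ kubectl get pods -l 'environment in (production, qa),tier in (frontend)'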

@@ -210,7 +210,7 @@ By default, all deletes are graceful within 30 seconds. The `kubectl delete` com
## Privileged mode for pod containers
-From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as seperate pods that don't need to be compiled into the kubelet.
+From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate pods that don't need to be compiled into the kubelet.
If the master is running kubernetes v1.1 or higher, and the nodes are running a version lower than v1.1, then new privileged pods will be accepted by api-server, but will not be launched. They will be pending state.
If user calls `kubectl describe pod FooPodName`, user can see the reason why the pod is in pending state. The events table in the describe command output will say:
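
A minimal sketch of the flag described above; the pod name, image, and command are illustrative, and it assumes a v1.1+ node as noted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example        # hypothetical name
spec:
  containers:
  - name: net-admin
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      privileged: true            # grants near host-level privileges to this container
```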

@@ -138,7 +138,7 @@ volume is safe across container crashes.
Some uses for an `emptyDir` are:
-* scratch space, such as for a disk-based mergesortcw
+* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver
container serves the data
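
As a rough illustration of the scratch-space case, a pod mounting an `emptyDir` volume into a single container; the names, image, and paths are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod               # hypothetical name
spec:
  containers:
  - name: worker
    image: busybox                # placeholder image
    command: ["sh", "-c", "echo scratch > /scratch/out && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch         # scratch space lives here for the pod's lifetime
  volumes:
  - name: scratch
    emptyDir: {}                  # erased when the pod is removed from the node
```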