Fix trailing whitespace in all docs
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.md) for an
overview of authentication.

Authorization applies to all HTTP accesses on the main (secure) apiserver port.
@@ -60,8 +60,8 @@ The following implementations are available, and are selected by flag:
A request has 4 attributes that can be considered for authorization:
  - user (the user-string which a user was authenticated as).
  - whether the request is readonly (GETs are readonly)
  - what resource is being accessed
    - applies only to the API endpoints, such as
      `/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
      resource is the empty string.
  - the namespace of the object being accessed, or the empty string if the
@@ -95,7 +95,7 @@ interface.
A request has attributes which correspond to the properties of a policy object.

When a request is received, the attributes are determined. Unknown attributes
are set to the zero value of their type (e.g. empty string, 0, false).

An unset property will match any value of the corresponding
attribute. An unset attribute will match any value of the corresponding property.
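As a rough illustration of this matching behavior, the ABAC policy file this doc describes is a series of one-JSON-object-per-line entries; the user names, resource, and namespace below are invented:

```json
{"user":"alice"}
{"user":"bob", "resource": "pods", "readonly": true, "namespace": "projectCaribou"}
```

The first line leaves every property unset, so it matches (and therefore allows) any request made by `alice`; the second allows `bob` only read-only access to pods in the `projectCaribou` namespace.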
@@ -36,7 +36,7 @@ Documentation for other releases can be found at
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit the [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster
@@ -73,7 +73,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
Root causes:
- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, e.g. misconfigured Kubernetes software or application software
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
[etcd](https://coreos.com/etcd/docs/2.0.12/) is a highly-available key value
store which Kubernetes uses for persistent storage of all of its REST API
objects.

## Configuration: high-level goals
@@ -102,7 +102,7 @@ to make sure that each automatically restarts when it fails. To achieve this, w
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.

If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
@@ -90,7 +90,7 @@ project](salt.md).
## Multi-tenant support

* **Resource Quota** ([resource-quota.md](resource-quota.md))

## Security
@@ -73,13 +73,13 @@ load and growth.
To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
Call the number of regions to be in `R`.

Second, decide how many clusters should be able to be unavailable at the same time, while still being available. Call
the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.

If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
you need `R + U` clusters. If it is not (e.g. you want to ensure low latency for all users in the event of a
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
@@ -52,7 +52,7 @@ Each user community has its own:
A cluster operator may create a Namespace for each unique user community.
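For illustration only, a minimal manifest for such a Namespace might look like the sketch below (the name is invented):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-caribou   # illustrative name for one user community
```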

The Namespace provides a unique scope for:

1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
@@ -234,7 +234,7 @@ capacity when adding a node.
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the limits of containers on the node is no greater than the node capacity. It
includes all containers started by kubelet, but not containers started directly by docker, nor
processes not in containers.

If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:
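The template itself is not shown in this hunk; a minimal sketch of such a placeholder pod, with an illustrative name, image, and resource values, might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver   # illustrative name
spec:
  containers:
  - name: sleep-forever
    # Any image that simply idles works; a small "pause"-style image keeps overhead low.
    image: gcr.io/google_containers/pause:0.8.0   # illustrative image
    resources:
      limits:
        cpu: 100m       # CPU set aside for non-Pod processes
        memory: 100Mi   # memory set aside for non-Pod processes
```

Because the scheduler counts this pod's limits against node capacity, that amount is effectively withheld from other pods.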
@@ -160,14 +160,14 @@ Sometimes more complex policies may be desired, such as:
Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.

Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.

## Example

See a [detailed example for how to use resource quota](../user-guide/resourcequota/).
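For quick reference, a minimal ResourceQuota object might look like the following sketch (the namespace and the hard limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota    # illustrative name
  namespace: myspace     # illustrative namespace
spec:
  hard:
    cpu: "20"     # aggregate CPU allowed across the namespace
    memory: 1Gi   # aggregate memory allowed across the namespace
    pods: "10"    # maximum number of pods in the namespace
```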
## Read More
@@ -56,7 +56,7 @@ for a number of reasons:
- Auditing considerations for humans and service accounts may differ.
- A config bundle for a complex system may include definitions of various service
accounts for components of that system. Because service accounts can be created
ad-hoc and have namespaced names, such config is portable.
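For instance, one entry in such a bundle might be a ServiceAccount object along the lines of this sketch (the name is invented):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot   # illustrative name; the namespace comes from wherever the bundle is applied
```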

## Service account automation