Fix trailing whitespace in all docs

Eric Paris
2015-07-24 17:52:18 -04:00
parent 3c95bd4ee3
commit 024208e39f
81 changed files with 310 additions and 310 deletions


@@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Kubernetes Design Overview
-Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
+Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.


@@ -104,7 +104,7 @@ type ResourceQuotaList struct {
## AdmissionControl plugin: ResourceQuota
-The **ResourceQuota** plug-in introspects all incoming admission requests.
+The **ResourceQuota** plug-in introspects all incoming admission requests.
It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request
namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied.
@@ -125,7 +125,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming
This means the resource must have a fully-qualified name (e.g. mycompany.org/shinynewresource).
If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a
-**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
+**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
**ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally)
into the system.
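To make the check-then-conditionally-update cycle above concrete, here is a minimal Go sketch; the types and field layout are hypothetical stand-ins for the API objects, not the real plug-in code:

```go
package quota

import "fmt"

// Hypothetical stand-ins for the API objects discussed above.
type ResourceQuota struct {
	ResourceVersion string           // used for the conditional update
	Hard            map[string]int64 // ResourceQuota.Status.Hard, in base units
	Used            map[string]int64 // observed usage
}

// admit denies a request that would push usage past a hard limit;
// otherwise it returns the new usage to post as a ResourceQuotaUsage
// document, conditioned on the previously read ResourceVersion.
func admit(q *ResourceQuota, resource string, delta int64) (map[string]int64, error) {
	hard, limited := q.Hard[resource]
	if limited && q.Used[resource]+delta > hard {
		return nil, fmt.Errorf("hard limit exceeded for %s", resource)
	}
	updated := make(map[string]int64, len(q.Used))
	for k, v := range q.Used {
		updated[k] = v
	}
	updated[resource] += delta
	return updated, nil
}
```

If the conditional write fails because another admission raced ahead, the plug-in re-reads the quota and retries, which is the intentional bottleneck the text describes.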
@@ -184,7 +184,7 @@ resourcequotas 1 1
services 3 5
```
-## More information
+## More information
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.


@@ -47,7 +47,7 @@ Each node runs Docker, of course. Docker takes care of the details of downloadi
### Kubelet
-The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
+The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
### Kube-Proxy


@@ -49,7 +49,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst
## Design
Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
-* `FirstTimestamp util.Time`
+* `FirstTimestamp util.Time`
  * The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
  * The date/time of the most recent occurrence of the event.
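A short Go sketch of how these fields support compression: a repeat occurrence bumps `Count` and `LastTimestamp` rather than creating a new object (field names condensed; `time.Time` stands in for `util.Time`):

```go
package events

import "time"

// Condensed Event fields; the real object uses util.Time.
type Event struct {
	Source, Object, Reason, Message string
	FirstTimestamp, LastTimestamp   time.Time
	Count                           int
}

// key groups identical events so repeat occurrences compress.
func key(e Event) string {
	return e.Source + "/" + e.Object + "/" + e.Reason + "/" + e.Message
}

// record creates a new entry on first occurrence; on a repeat it
// bumps Count and LastTimestamp instead of writing a new object.
func record(store map[string]*Event, e Event, now time.Time) {
	if prev, ok := store[key(e)]; ok {
		prev.Count++
		prev.LastTimestamp = now
		return
	}
	e.FirstTimestamp, e.LastTimestamp, e.Count = now, now, 1
	store[key(e)] = &e
}
```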


@@ -87,7 +87,7 @@ available to subsequent expansions.
### Use Case: Variable expansion in command
-Users frequently need to pass the values of environment variables to a container's command.
+Users frequently need to pass the values of environment variables to a container's command.
Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
shell in the container's command and have the shell perform the substitution, or to write a wrapper
script that sets up the environment and runs the command. This has a number of drawbacks:
@@ -130,7 +130,7 @@ The exact syntax for variable expansion has a large impact on how users perceive
feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This
syntax is an attractive option on some level, because many people are familiar with it. However,
this syntax also has a large number of lesser known features such as the ability to provide
-default values for unset variables, perform inline substitution, etc.
+default values for unset variables, perform inline substitution, etc.
In the interest of preventing conflation of the expansion feature in Kubernetes with the shell
feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not
@@ -239,7 +239,7 @@ The necessary changes to implement this functionality are:
`ObjectReference` and an `EventRecorder`
2. Introduce `third_party/golang/expansion` package that provides:
    1. An `Expand(string, func(string) string) string` function
-    2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
+    2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
3. Make the kubelet expand environment correctly
4. Make the kubelet expand command correctly
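A minimal sketch of the `Expand` function, assuming the signature quoted above; the real `third_party/golang/expansion` package also handles escaping and unmatched references, which this omits:

```go
package expansion

import "strings"

// Expand replaces $(var) references in input using mapping. A sketch
// only: escaping and malformed references are not handled here.
func Expand(input string, mapping func(string) string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] == '$' && i+1 < len(input) && input[i+1] == '(' {
			if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
				out.WriteString(mapping(input[i+2 : i+2+end]))
				i += 2 + end // the loop increment then steps past ')'
				continue
			}
		}
		out.WriteByte(input[i])
	}
	return out.String()
}
```

With a mapping built from the container's environment, `Expand("echo $(MESSAGE)", mapping)` would yield `echo hello` when `MESSAGE=hello`.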
@@ -311,7 +311,7 @@ func Expand(input string, mapping func(string) string) string {
#### Kubelet changes
-The Kubelet should be made to correctly expand variable references in a container's environment,
+The Kubelet should be made to correctly expand variable references in a container's environment,
command, and args. Changes will need to be made to:
1. The `makeEnvironmentVariables` function in the kubelet; this is used by


@@ -52,7 +52,7 @@ Each user community has its own:
A cluster operator may create a Namespace for each unique user community.
-The Namespace provides a unique scope for:
+The Namespace provides a unique scope for:
1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
@@ -142,7 +142,7 @@ type NamespaceSpec struct {
A *FinalizerName* is a qualified name.
-The API Server enforces that a *Namespace* can be deleted from storage if and only if
+The API Server enforces that a *Namespace* can be deleted from storage if and only if
its *Namespace.Spec.Finalizers* is empty.
A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
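A Go sketch of the deletion invariant and the *finalize* operation as described; the types are simplified stand-ins for the real API objects:

```go
package registry

import "errors"

// Simplified stand-in for the Namespace API object.
type Namespace struct {
	Finalizers []string // Namespace.Spec.Finalizers, e.g. "kubernetes"
}

// canDelete enforces the invariant above: a Namespace may leave
// storage if and only if its finalizer list is empty.
func canDelete(ns Namespace) error {
	if len(ns.Finalizers) != 0 {
		return errors.New("namespace has pending finalizers")
	}
	return nil
}

// finalize is the only post-creation mutation of the list: a controller
// removes its token once it has purged the content it owns.
func finalize(ns *Namespace, token string) {
	kept := ns.Finalizers[:0]
	for _, f := range ns.Finalizers {
		if f != token {
			kept = append(kept, f)
		}
	}
	ns.Finalizers = kept
}
```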
@@ -189,12 +189,12 @@ are known to the cluster.
The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one.
Admission control blocks creation of new resources in that namespace in order to prevent a race-condition
-where the controller could believe all of a given resource type had been deleted from the namespace,
+where the controller could believe all of a given resource type had been deleted from the namespace,
when in fact some other rogue client agent had created new objects. Using admission control in this
scenario allows each registry implementation for the individual objects to avoid having to account for Namespace life-cycle.
Once all objects known to the *namespace controller* have been deleted, the *namespace controller*
-executes a *finalize* operation on the namespace that removes the *kubernetes* value from
+executes a *finalize* operation on the namespace that removes the *kubernetes* value from
the *Namespace.Spec.Finalizers* list.
If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and
@@ -245,13 +245,13 @@ In etcd, we want to continue to still support efficient WATCH across namespaces.
Resources that persist content in etcd will have storage paths as follows:
-    /{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}
+    /{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}
This enables consumers to WATCH /registry/{resourceType} for changes across namespaces of a particular {resourceType}.
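A one-function Go sketch of the storage key layout (a hypothetical helper, shown only to illustrate the path scheme):

```go
package storage

import "path"

// storageKey illustrates the layout above: one sub-tree per resource
// type, so WATCH /registry/{resourceType} spans every namespace.
func storageKey(prefix, resourceType, namespace, name string) string {
	return path.Join("/", prefix, resourceType, namespace, name)
}

// storageKey("registry", "pods", "default", "web-1")
// => "/registry/pods/default/web-1"
```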
### Kubelet
-The kubelet will register pods it sources from a file or http source with a namespace associated with the
+The kubelet will register pods it sources from a file or http source with a namespace associated with the
*cluster-id*
### Example: OpenShift Origin managing a Kubernetes Namespace
@@ -362,7 +362,7 @@ This results in the following state:
At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all
-content associated with that namespace has been purged. It performs a final DELETE action
+content associated with that namespace has been purged. It performs a final DELETE action
to remove that Namespace from the storage.
At this point, all content associated with that Namespace, and the Namespace itself, are gone.


@@ -41,11 +41,11 @@ Two new API kinds:
A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.
-A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
+A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
One new system component:
-`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.
+`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.
One new volume:
@@ -69,7 +69,7 @@ Cluster administrators use the API to manage *PersistentVolumes*. A custom stor
PVs are system objects and, thus, have no namespace.
-Many means of dynamic provisioning will eventually be implemented for various storage types.
+Many means of dynamic provisioning will eventually be implemented for various storage types.
##### PersistentVolume API
@@ -116,7 +116,7 @@ TBD
#### Events
-The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event that communicates the same information unnecessary.
+The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event that communicates the same information unnecessary.
Events that communicate the state of a mounted volume are left to the volume plugins.
@@ -232,9 +232,9 @@ When a claim holder is finished with their data, they can delete their claim.
$ kubectl delete pvc myclaim-1
```
-The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.
+The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.
-Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.
+Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.
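A hedged Go sketch of the reconciliation step just described: drop the claim reference and mark the volume `Released` (condensed, hypothetical types):

```go
package binder

// Condensed PersistentVolume fields for this sketch.
type PersistentVolume struct {
	ClaimRef string // name of the bound claim; "" when unbound
	Status   string // "Available", "Bound", or "Released"
}

// onClaimDeleted mirrors the reconciliation described above: drop the
// claim reference and mark the volume Released so an admin script or a
// future dynamic provisioner can recycle it.
func onClaimDeleted(pv *PersistentVolume) {
	pv.ClaimRef = ""
	pv.Status = "Released"
}
```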
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Design Principles
-Principles to follow when extending Kubernetes.
+Principles to follow when extending Kubernetes.
## API
@@ -44,14 +44,14 @@ See also the [API conventions](../devel/api-conventions.md).
* The control plane should be transparent -- there are no hidden internal APIs.
* The cost of API operations should be proportional to the number of objects intentionally operated upon. Therefore, common filtered lookups must be indexed. Beware of patterns of multiple API calls that would incur quadratic behavior.
* Object status must be 100% reconstructable by observation. Any history kept must be just an optimization and not required for correct operation.
-* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components; that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation.
+* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components; that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation.
* Low-level APIs should be designed for control by higher-level systems. Higher-level APIs should be intent-oriented (think SLOs) rather than implementation-oriented (think control knobs).
## Control logic
* Functionality must be *level-based*, meaning the system must operate correctly given the desired state and the current/observed state, regardless of how many intermediate state updates may have been missed. Edge-triggered behavior must be just an optimization.
* Assume an open world: continually verify assumptions and gracefully adapt to external events and/or actors. Example: we allow users to kill pods under control of a replication controller; it just replaces them.
-* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation.
+* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation.
* Don't assume a component's decisions will not be overridden or rejected, nor expect the component to always understand why. For example, etcd may reject writes. Kubelet may reject pods. The scheduler may not be able to schedule pods. Retry, but back off and/or make alternative decisions.
* Components should be self-healing. For example, if you must keep some state (e.g., a cache), the content needs to be periodically refreshed so that if an item does get erroneously stored or a deletion event is missed, it will soon be fixed, ideally on timescales shorter than what will attract attention from humans.
* Component behavior should degrade gracefully. Prioritize actions so that the most important activities can continue to function even when overloaded and/or in states of partial failure.
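For illustration, a minimal Go sketch of *level-based* control: each pass compares desired and observed state, so any number of missed intermediate updates is harmless (a hypothetical helper, not from the doc):

```go
package controller

// reconcile is level-based: it acts on desired vs. observed state each
// pass, so missed intermediate updates (edges) never leave the system
// permanently wrong.
func reconcile(desired, observed int, create, remove func()) {
	for ; observed < desired; observed++ {
		create()
	}
	for ; observed > desired; observed-- {
		remove()
	}
}
```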
@@ -61,7 +61,7 @@ See also the [API conventions](../devel/api-conventions.md).
* Only the apiserver should communicate with etcd/store, and not other components (scheduler, kubelet, etc.).
* Compromising a single node shouldn't compromise the cluster.
* Components should continue to do what they were last told in the absence of new instructions (e.g., due to network partition or component outage).
-* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store, other components should write through to the apiserver, and they should watch for updates made by other clients.
+* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store, other components should write through to the apiserver, and they should watch for updates made by other clients.
* Watch is preferred over polling.
## Extensibility


@@ -51,7 +51,7 @@ The resource model aims to be:
A Kubernetes _resource_ is something that can be requested by, allocated to, or consumed by a pod or container. Examples include memory (RAM), CPU, disk-time, and network bandwidth.
-Once resources on a node have been allocated to one pod, they should not be allocated to another until that pod is removed or exits. This means that Kubernetes schedulers should ensure that the sum of the resources allocated (requested and granted) to its pods never exceeds the usable capacity of the node. Testing whether a pod will fit on a node is called _feasibility checking_.
+Once resources on a node have been allocated to one pod, they should not be allocated to another until that pod is removed or exits. This means that Kubernetes schedulers should ensure that the sum of the resources allocated (requested and granted) to its pods never exceeds the usable capacity of the node. Testing whether a pod will fit on a node is called _feasibility checking_.
Note that the resource model currently prohibits over-committing resources; we will want to relax that restriction later.
@@ -70,7 +70,7 @@ For future reference, note that some resources, such as CPU and network bandwidt
### Resource quantities
-Initially, all Kubernetes resource types are _quantitative_, and have an associated _unit_ for quantities of the associated resource (e.g., bytes for memory, bytes per second for bandwidth, instances for software licences). The units will always be a resource type's natural base units (e.g., bytes, not MB), to avoid confusion between binary and decimal multipliers and the underlying unit multiplier (e.g., is memory measured in MiB, MB, or GB?).
+Initially, all Kubernetes resource types are _quantitative_, and have an associated _unit_ for quantities of the associated resource (e.g., bytes for memory, bytes per second for bandwidth, instances for software licences). The units will always be a resource type's natural base units (e.g., bytes, not MB), to avoid confusion between binary and decimal multipliers and the underlying unit multiplier (e.g., is memory measured in MiB, MB, or GB?).
Resource quantities can be added and subtracted: for example, a node has a fixed quantity of each resource type that can be allocated to pods/containers; once such an allocation has been made, the allocated resources cannot be made available to other pods/containers without over-committing the resources.
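A small Go sketch of feasibility checking with base-unit quantities, under the no-over-commit rule above (hypothetical types):

```go
package resources

// Quantity is a count of a resource type's natural base units
// (bytes, not MB), so arithmetic is plain integer math.
type Quantity int64

// allocate performs the feasibility check described above: a request
// fits only if it does not over-commit the node's usable capacity.
func allocate(capacity, allocated, request Quantity) (Quantity, bool) {
	if allocated+request > capacity {
		return allocated, false // infeasible: the pod does not fit
	}
	return allocated + request, true
}
```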
@@ -110,7 +110,7 @@ resourceCapacitySpec: [
```
Where:
-* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the resources of the sum of inner scopes.
+* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the resources of the sum of inner scopes.
#### Notes
@@ -194,7 +194,7 @@ The following are planned future extensions to the resource model, included here
Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../user-guide/pods.md) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.
-Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
+Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
```yaml
resourceStatus: [
@@ -223,7 +223,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
```
All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
-and predicted
+and predicted
## Future resource types


@@ -34,7 +34,7 @@ Documentation for other releases can be found at
## Abstract
A proposal for the distribution of [secrets](../user-guide/secrets.md) (passwords, keys, etc.) to the Kubelet and to
-containers inside Kubernetes using a custom [volume](../user-guide/volumes.md#secrets) type. See the [secrets example](../user-guide/secrets/) for more information.
+containers inside Kubernetes using a custom [volume](../user-guide/volumes.md#secrets) type. See the [secrets example](../user-guide/secrets/) for more information.
## Motivation
@@ -117,7 +117,7 @@ which consumes this type of secret, the Kubelet may take a number of actions:
1. Expose the secret in a `.kubernetes_auth` file in a well-known location in the container's
file system
-2. Configure that node's `kube-proxy` to decorate HTTP requests from that pod to the
+2. Configure that node's `kube-proxy` to decorate HTTP requests from that pod to the
`kubernetes-master` service with the auth token, e.g. by adding a header to the request
(see the [LOAS Daemon](https://github.com/GoogleCloudPlatform/kubernetes/issues/2209) proposal)
@@ -146,7 +146,7 @@ We should consider what the best way to allow this is; there are a few different
export MY_SECRET_ENV=MY_SECRET_VALUE
The user could `source` the file at `/etc/secrets/my-secret` prior to executing the command for
-the image either inline in the command or in an init script,
+the image either inline in the command or in an init script,
2. Give secrets an attribute that allows users to express the intent that the platform should
generate the above syntax in the file used to present a secret. The user could consume these


@@ -48,55 +48,55 @@ The problem of securing containers in Kubernetes has come up [before](https://gi
### Container isolation
-In order to improve container isolation from host and other containers running on the host, containers should only be
-granted the access they need to perform their work. To this end it should be possible to take advantage of Docker
-features such as the ability to [add or remove capabilities](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) and [assign MCS labels](https://docs.docker.com/reference/run/#security-configuration)
+In order to improve container isolation from host and other containers running on the host, containers should only be
+granted the access they need to perform their work. To this end it should be possible to take advantage of Docker
+features such as the ability to [add or remove capabilities](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) and [assign MCS labels](https://docs.docker.com/reference/run/#security-configuration)
to the container process.
Support for user namespaces has recently been [merged](https://github.com/docker/libcontainer/pull/304) into Docker's libcontainer project and should soon surface in Docker itself. It will make it possible to assign a range of unprivileged uids and gids from the host to each container, improving the isolation between host and container and between containers.
### External integration with shared storage
-In order to support external integration with shared storage, processes running in a Kubernetes cluster
-should be able to be uniquely identified by their Unix UID, such that a chain of ownership can be established.
+In order to support external integration with shared storage, processes running in a Kubernetes cluster
+should be able to be uniquely identified by their Unix UID, such that a chain of ownership can be established.
Processes in pods will need to have consistent UID/GID/SELinux category labels in order to access shared disks.
## Constraints and Assumptions
-* It is out of the scope of this document to prescribe a specific set
+* It is out of the scope of this document to prescribe a specific set
of constraints to isolate containers from their host. Different use cases need different
settings.
-* The concept of a security context should not be tied to a particular security mechanism or platform
+* The concept of a security context should not be tied to a particular security mechanism or platform
(e.g. SELinux, AppArmor)
* Applying a different security context to a scope (namespace or pod) requires a solution such as the one proposed for
[service accounts](service_accounts.md).
## Use Cases
-In order of increasing complexity, the following are example use cases that would
+In order of increasing complexity, the following are example use cases that would
be addressed with security contexts:
1. Kubernetes is used to run a single cloud application. In order to protect
nodes from containers:
* All containers run as a single non-root user
* Privileged containers are disabled
-* All containers run with a particular MCS label
+* All containers run with a particular MCS label
* Kernel capabilities like CHOWN and MKNOD are removed from containers
2. Just like case #1, except that I have more than one application running on
the Kubernetes cluster.
* Each application is run in its own namespace to avoid name collisions
* For each application a different uid and MCS label is used
-3. Kubernetes is used as the base for a PAAS with
-multiple projects, each project represented by a namespace.
+3. Kubernetes is used as the base for a PAAS with
+multiple projects, each project represented by a namespace.
* Each namespace is associated with a range of uids/gids on the node that
-are mapped to uids/gids on containers using Linux user namespaces.
+are mapped to uids/gids on containers using Linux user namespaces.
* Certain pods in each namespace have special privileges to perform system
actions such as talking back to the server for deployment, running docker
builds, etc.
* External NFS storage is assigned to each namespace and permissions set
-using the range of uids/gids assigned to that namespace.
+using the range of uids/gids assigned to that namespace.
## Proposed Design
@@ -109,7 +109,7 @@ to mutate Docker API calls in order to apply the security context.
It is recommended that this design be implemented in two phases:
-1. Implement the security context provider extension point in the Kubelet
+1. Implement the security context provider extension point in the Kubelet
so that a default security context can be applied on container run and creation.
2. Implement a security context structure that is part of a service account. The
default context provider can then be used to apply a security context based
@@ -137,7 +137,7 @@ type SecurityContextProvider interface {
}
```
-If the value of the SecurityContextProvider field on the Kubelet is nil, the kubelet will create and run the container as it does today.
+If the value of the SecurityContextProvider field on the Kubelet is nil, the kubelet will create and run the container as it does today.
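A minimal Go sketch of how such a provider hook could be wired in; the method name and config types are hypothetical, since the interface body is elided in this hunk:

```go
package kubelet

// Hypothetical stand-ins; the real interface body is elided above.
type Pod struct{ Name string }
type ContainerConfig struct{ User string }

// SecurityContextProvider lets a plug-in mutate container creation
// parameters (users, capabilities, labels) before the container runs.
type SecurityContextProvider interface {
	ModifyContainerConfig(pod *Pod, config *ContainerConfig)
}

// createContainer applies the provider when one is set; a nil provider
// preserves today's behavior, as the design states.
func createContainer(p SecurityContextProvider, pod *Pod, cfg *ContainerConfig) {
	if p != nil {
		p.ModifyContainerConfig(pod, cfg)
	}
	// ... create and run the container as usual ...
}
```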
### Security Context


@@ -33,9 +33,9 @@ Documentation for other releases can be found at
## Simple rolling update
-This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in `kubectl`.
+This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in `kubectl`.
-Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.
+Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.
### Lightweight rollout