Fix trailing whitespace in all docs
@@ -150,7 +150,7 @@ There are [client libraries](../devel/client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](http://releases.k8s.io/HEAD/pkg/client/)
client library can use the same [kubeconfig file](kubeconfig-file.md)
as the kubectl CLI does to locate and authenticate to the apiserver.

See documentation for other libraries for how they authenticate.
@@ -241,7 +241,7 @@ at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticse

#### Manually constructing apiserver proxy URLs

As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:

`http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`*

<!--- TODO: update this part of doc because it doesn't seem to be valid. What
about namespaces? 'proxy' verb? -->
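
For example, to query an Elasticsearch service like the one shown above, you might append an endpoint and parameter to its proxy URL (a sketch; the master address, service name, endpoint, and query are illustrative):

```console
$ curl https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy
```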
@@ -297,7 +297,7 @@ There are several different proxies you may encounter when using Kubernetes:
    - can be used to reach a Node, Pod, or Service
    - does load balancing when used to reach a Service
1. The [kube proxy](services.md#ips-and-vips):
    - runs on each node
    - proxies UDP and TCP
    - does not understand HTTP
    - provides load balancing
@@ -87,7 +87,7 @@ there are insufficient resources of one type or another that prevent scheduling
your pod. Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster. In this case
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See the [Compute Resources document](compute-resources.md#my-pods-are-pending-with-event-message-failedscheduling) for more information.

* **You are using `hostPort`**: When you bind a Pod to a `hostPort`, there are a limited number of places that pod can be
scheduled. In most cases, `hostPort` is unnecessary; try using a Service object to expose your Pod. If you do require
@@ -100,7 +100,7 @@ If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node

Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct.
* Check that you have pushed the image to the repository.
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.

#### My pod is crashing or otherwise unhealthy
@@ -139,7 +139,7 @@ feature request on GitHub describing your use case and why these tools are insufficient

### Debugging Replication Controllers

Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.

You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication
controller.
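
For example (the controller name is illustrative):

```console
# Inspect recent events for a replication controller named my-nginx
$ kubectl describe rc my-nginx
```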
@@ -199,11 +199,11 @@ check:

* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.

#### More information

If none of the above solves your problem, follow the instructions in the [Debugging Services document](debugging-services.md) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; that you have DNS working and iptables rules installed; and that kube-proxy is not misbehaving.

You may also visit the [troubleshooting document](../troubleshooting.md) for more information.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -133,7 +133,7 @@ When using Docker:

**TODO: document behavior for rkt**

If a container exceeds its memory limit, it may be terminated. If it is restartable, it will be
restarted by the kubelet, as with any other type of runtime failure.

A container may or may not be allowed to exceed its CPU limit for extended periods of time.
However, it will not be killed for excessive CPU usage.
@@ -178,7 +178,7 @@ The [resource quota](../admin/resource-quota.md) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.

### My container is terminated

Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
on the pod you are interested in:
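
A sketch of the check (the pod name is illustrative); a container killed at its memory limit typically shows a last state of `Terminated` with an out-of-memory reason and exit code 137:

```console
# Look in the output for the container's last state and termination reason
$ kubectl describe pod simmemleak-hra99
```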
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

This document is meant to highlight and consolidate in one place configuration best practices that are introduced throughout the user-guide and getting-started documentation and examples. This is a living document, so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.

1. When writing configuration, use the latest stable API version (currently v1).
1. Configuration should be stored in version control before being pushed to the cluster. This allows configuration to be quickly rolled back if needed and will aid with cluster re-creation and restoration if the worst were to happen.
1. Use YAML rather than JSON. They can be used interchangeably in almost all scenarios, but YAML tends to be more user-friendly for config.
1. Group related objects together in a single file. This is often better than separate files.
@@ -73,7 +73,7 @@ spec: # specification of the pod’s contents

The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.

`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.

The [`command`](containers.md#containers-and-commands) overrides the Docker container’s `Entrypoint`. Command arguments (corresponding to Docker’s `Cmd`) may be specified using `args`, as follows:
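
A minimal sketch of a spec using `command` and `args` (the image and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  restartPolicy: Never        # run once, then terminate the pod
  containers:
    - name: hello
      image: "ubuntu:14.04"
      command: ["/bin/echo"]  # overrides the image's Entrypoint
      args: ["hello", "world"]
```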
@@ -142,7 +142,7 @@ However, a shell isn’t necessary just to expand environment variables. Kubernetes

## Viewing pod status

You can see the pod you created (actually all of your cluster's pods) using the `get` command.

If you’re quick, it will look as follows:
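
A sketch of that output, using the column layout shown elsewhere in these docs (values are illustrative):

```console
$ kubectl get pods
NAME          READY     REASON    RESTARTS   AGE
hello-world   1/1       Running   0          5s
```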
@@ -199,7 +199,7 @@ $ kubectl delete pods/hello-world
pods/hello-world
```

Terminated pods aren’t currently automatically deleted, so that you can observe their final status; be sure to clean up your dead pods.

On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
@@ -52,10 +52,10 @@ Documentation for other releases can be found at

## Overview

This document describes the environment for Kubelet-managed containers on a Kubernetes node (kNode). In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.

This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However, these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.

Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images.md) and one or more [volumes](volumes.md).
@@ -89,7 +89,7 @@ Services have a dedicated IP address, and are also surfaced to the container via DNS

*NB: Container hooks are under active development; we anticipate adding additional hooks as the Kubernetes container management system evolves.*

Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the life-cycle of the container. They are different from the events provided by Docker and other systems, which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.

### Hook Details
@@ -48,7 +48,7 @@ we can use:

Docker images have metadata associated with them that is used to store information about the image.
The image author may use this to define defaults for the command and arguments to run a container
when the user does not supply values. Docker calls the fields for commands and arguments
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
describe here, mostly due to the fact that the docker API allows users to specify both of these
fields as either a string array or a string, and there are subtle differences in how those cases are
handled. We encourage the curious to check out [docker's documentation]() for this feature.
@@ -69,10 +69,10 @@ Here are examples for these rules in table format

| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run      |
|--------------------|-------------|---------------------|------------------|------------------|
| `[/ep-1]`          | `[foo bar]` | <not set>           | <not set>        | `[ep-1 foo bar]` |
| `[/ep-1]`          | `[foo bar]` | `[/ep-2]`           | <not set>        | `[ep-2]`         |
| `[/ep-1]`          | `[foo bar]` | <not set>           | `[zoo boo]`      | `[ep-1 zoo boo]` |
| `[/ep-1]`          | `[foo bar]` | `[/ep-2]`           | `[zoo boo]`      | `[ep-2 zoo boo]` |
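
As a sketch, a pod spec that sets both fields, corresponding to the last row above (the image and names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: example.com/demo   # assume its Entrypoint is [/ep-1] and Cmd is [foo bar]
      command: ["/ep-2"]        # overrides the image Entrypoint
      args: ["zoo", "boo"]      # overrides the image Cmd; the container runs: /ep-2 zoo boo
```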
## Capabilities
@@ -552,7 +552,7 @@ Contact us on

## More information

Visit the [troubleshooting document](../troubleshooting.md) for more information.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -53,7 +53,7 @@ NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
```

then we can check the environment variables of the pod,

```console
$ kubectl exec redis-master-ft9ex env
@@ -68,7 +68,7 @@ Credentials can be provided in several ways:
- Per-cluster
    - automatically configured on Google Compute Engine or Google Container Engine
    - all pods can read the project's private registry
- Configuring Nodes to Authenticate to a Private Registry
    - all pods can read any configured private registries
    - requires node configuration by cluster administrator
- Pre-pulling Images
@@ -77,7 +77,7 @@ Credentials can be provided in several ways:
- Specifying ImagePullSecrets on a Pod
    - only pods which provide their own keys can access the private registry

Each option is described in more detail below.


### Using Google Container Registry
@@ -101,7 +101,7 @@ with credentials for Google Container Registry. You cannot use this approach.

**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.

Docker stores keys for private registries in the `$HOME/.dockercfg` file. If you put this
in the `$HOME` of `root` on a kubelet, then docker will use it.
@@ -109,7 +109,7 @@ Here are the recommended steps to configuring your nodes to use a private registry. In this
example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use.
1. View `$HOME/.dockercfg` in an editor to ensure it contains just the credentials you want to use.
1. Get a list of your nodes:
    - for example: `nodes=$(kubectl get nodes -o template --template='{{range.items}}{{.metadata.name}} {{end}}')`
1. Copy your local `.dockercfg` to the home directory of root on each node.
    - for example: `for n in $nodes; do scp ~/.dockercfg root@$n:/root/.dockercfg; done`
@@ -218,7 +218,7 @@ secrets/myregistrykey
$
```

If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...`, it means
the data was successfully un-base64 encoded, but could not be parsed as a dockercfg file.
@@ -138,7 +138,7 @@ Lastly, you see a log of recent events related to your Pod. The system compresses

## Example: debugging Pending Pods

A common scenario that you can detect using events is when you’ve created a Pod that won’t fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn’t match any nodes. Let’s say we created the previous Replication Controller with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)

```console
$ kubectl get pods
@@ -157,7 +157,7 @@ my-nginx-svc app=nginx app=nginx 10.0.152.174 80/TCP

## Using labels effectively

The examples we’ve used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.

For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](../../examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
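
A sketch of what those labels might look like (values follow the guestbook example):

```yaml
labels:
  app: guestbook
  tier: frontend
```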
@@ -279,7 +279,7 @@ my-nginx-o0ef1 1/1 Running 0 1h

At some point, you’ll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.

To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.
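
For example, a sketch of such an update (the controller name and target image are illustrative):

```console
# Replace the pods managed by my-nginx, one at a time, with pods running the new image
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
```
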
Let’s say you were running version 1.7.9 of nginx:
@@ -59,7 +59,7 @@ The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages

### InfluxDB and Grafana

A Grafana setup with InfluxDB is a very popular combination for monitoring in the open-source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service, which is how Heapster discovers it.

The Grafana container serves Grafana’s UI, which provides an easy-to-configure dashboard interface. The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).
@@ -88,7 +88,7 @@ Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide

Now that you’ve learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/GoogleCloudPlatform/heapster) is available on GitHub. It contains detailed instructions to set up Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues. Heapster and Kubernetes developers hang out in the [#google-containers](http://webchat.freenode.net/?channels=google-containers) IRC channel on freenode.net. You can also reach us on the [google-containers Google Groups mailing list](https://groups.google.com/forum/#!forum/google-containers).

***
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
*This article was originally posted in the [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. Kubernetes is intended to make deploying containerized/microservice-based applications easy but powerful.

Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions. An operations user should be able to launch a micro-service, letting the scheduler find the right placement. We also want to improve the tools and experience for how users can roll out applications through patterns like canary deployments.

Kubernetes supports [Docker](http://www.docker.io) and [Rocket](https://coreos.com/blog/rocket/) containers, and other container image formats and container runtimes will be supported in the future.
@@ -45,7 +45,7 @@ In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single

Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller.md). It defines a pod in terms of a template, which the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.

Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).

Kubernetes supports a unique [networking model](../admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
@@ -65,7 +65,7 @@ Managing storage is a distinct problem from managing compute. The `PersistentVolume`

A `PersistentVolume` (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific sizes and access modes (e.g., can be mounted once read/write or many times read-only).

Please see the [detailed walkthrough with working examples](persistent-volumes/).
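
A minimal sketch of such a claim (the name, size, and access mode are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce   # mountable once read/write
  resources:
    requests:
      storage: 3Gi    # ask for at least 3Gi of storage
```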
@@ -113,7 +113,7 @@ A `PersistentVolume's` reclaim policy tells the cluster what to do with the volume

## Persistent Volumes

Each PV contains a spec and status, which is the specification and status of the volume.

```yaml
@@ -38,7 +38,7 @@ nginx serving content from your persistent volume.

This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.

See [Persistent Storage design document](../../design/persistent-storage.md) for more information.

## Provisioning
@@ -51,7 +51,7 @@ for ease of development and testing. You'll create a local `HostPath` for this

> IMPORTANT! For `HostPath` to work, you will need to run a single-node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.

```console
# This will be nginx's webroot
@@ -70,7 +70,7 @@ pv0001 type=local 10737418240 RWO Available

## Requesting storage

Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned.
They just know they can rely on their claim to storage and can manage its lifecycle independently from the many pods that may use it.

Claims must be created in the same namespace as the pods that use them.
@@ -114,7 +114,7 @@ kubernetes component=apiserver,provider=kubernetes <none>

## Next steps

You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you
need to disable SELinux (`setenforce 0`).

```console
@@ -93,22 +93,22 @@ That approach would provide co-location, but would not provide most of the benefits

## Durability of pods (or lack thereof)

Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.

In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](replication-controller.md)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.

The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).

Pod is exposed as a primitive in order to facilitate:

* scheduler and controller pluggability
* support for pod-level operations without the need to "proxy" them via controller APIs
* decoupling of pod lifetime from controller lifetime, such as for bootstrapping
* decoupling of controllers and services — the endpoint controller just watches pods
* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller"
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](https://github.com/GoogleCloudPlatform/kubernetes/issues/3949)

The current best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. If you find this cumbersome, please comment on [issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260).

## API Object
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Kubernetes User Guide: Managing Applications: Prerequisites

To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](kubectl/kubectl.md). It lets you inspect your cluster resources; create, delete, and update components; and much more. You will use it to look at your new cluster and bring up example apps.

## Installing kubectl
@@ -90,7 +90,7 @@ In addition to the local disk storage provided by `emptyDir`, Kubernetes supports

## Distributing credentials

Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.

Kubernetes provides a mechanism, called [*secrets*](secrets.md), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
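
A sketch of such a secret (the name and values are illustrative; values under `data` are base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  username: YWRtaW4=          # base64 for "admin"
  password: MWYyZDFlMmU2N2Rm  # base64 for "1f2d1e2e67df"
```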
@@ -245,7 +245,7 @@ More examples can be found in our [blog article](http://blog.kubernetes.io/2015/

## Resource management

Kubernetes’s scheduler will place applications only where they have adequate CPU and memory, but it can only do so if it knows how many [resources they require](compute-resources.md). The consequence of specifying too little CPU is that the containers could be starved of CPU if too many other containers were scheduled onto the same node. Similarly, containers could die unpredictably due to running out of memory if no memory were requested, which can be especially likely for large-memory applications.

If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](limitrange/) for the default [Namespace](namespaces.md). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
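
A sketch of such a spec fragment on a container (the names and amounts are illustrative):

```yaml
containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: 100m      # one tenth of a CPU core
        memory: 50Mi
```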
@@ -318,7 +318,7 @@ For more details (e.g., how to specify command-based probes), see the [example in

Of course, nodes and applications may fail at any time, but many applications benefit from clean shutdown, such as to complete in-flight requests, when the termination of the application is deliberate. To support such cases, Kubernetes supports two kinds of notifications:

* Kubernetes will send SIGTERM to applications, which can be handled in order to effect graceful termination. SIGKILL is sent 10 seconds later if the application does not terminate sooner.
* Kubernetes supports the (optional) specification of a [*pre-stop lifecycle hook*](container-environment.md#container-hooks), which will execute prior to sending SIGTERM.

The specification of a pre-stop hook is similar to that of probes, but without the timing-related parameters. For example:
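
A sketch of such a hook on a container (the image and command are illustrative; the command runs inside the container before SIGTERM is sent):

```yaml
containers:
  - name: nginx
    image: nginx
    lifecycle:
      preStop:
        exec:
          # drain nginx gracefully before the container receives SIGTERM
          command: ["/usr/sbin/nginx", "-s", "quit"]
```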
@@ -36,7 +36,7 @@ Documentation for other releases can be found at

Objects of type `secret` are intended to hold sensitive information, such as
passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
is safer and more flexible than putting it verbatim in a `pod` definition or in
a docker image. See [Secrets design document](../design/secrets.md) for more information.

**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Secrets example

Following this example, you will create a [secret](../secrets.md) and a [pod](../pods.md) that consumes that secret in a [volume](../volumes.md). See [Secrets design document](../../design/secrets.md) for more information.

## Step Zero: Prerequisites
@@ -83,7 +83,7 @@ $ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
```

This pod runs a binary that displays the content of one of the pieces of secret data in the secret
volume:

```console
$ kubectl logs secret-test-pod
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

A service account provides an identity for processes that run in a Pod.

*This is a user introduction to Service Accounts. See also the
[Cluster Admin Guide to Service Accounts](../admin/service-accounts-admin.md).*

*Note: This document describes how service accounts behave in a cluster set up
@@ -111,7 +111,7 @@ field of a pod to the name of the service account you wish to use.

The service account has to exist at the time the pod is created, or it will be rejected.

You cannot update the service account of an already created pod.

You can clean up the service account from this example like this:
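
For example (the service account name is illustrative):

```console
$ kubectl delete serviceaccount/build-robot
```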
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Kubernetes User Interface

Kubernetes has a web-based user interface that displays the current cluster state graphically.

## Accessing the UI
@@ -50,34 +50,34 @@ Normally, this should be taken care of automatically by the [`kube-addons.sh`](h

## Using the UI

The Kubernetes UI can be used to introspect your current cluster, such as checking how resources are used, or looking at error messages. You cannot, however, use the UI to modify your cluster.

### Node Resource Usage

After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file system usage.

![Kubernetes UI home page](k8s-ui-overview.png)

### Dashboard Views

Click on the "Views" button in the top-right of the page to see other views available, which include: Explore, Pods, Nodes, Replication Controllers, Services, and Events.

#### Explore View

The "Explore" view allows you to see the pods, replication controllers, and services in the current cluster easily.

![Kubernetes UI Explore View](k8s-ui-explore.png)

The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.

![Kubernetes UI Explore View - Group by](k8s-ui-explore-groupby.png)

You can also create filters by clicking on the down triangle of any listed resource instance and choosing which filters you want to add.

![Kubernetes UI Explore View - Filter](k8s-ui-explore-filter.png)

To see more details of each resource instance, simply click on it.

![Kubernetes UI - Pod](k8s-ui-explore-poddetail.png)

### Other Views

Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.

![Kubernetes UI - Nodes](k8s-ui-nodes.png)

## More Information

For more information, see the [Kubernetes UI development document](http://releases.k8s.io/HEAD/www/README.md) in the www directory.
@@ -49,7 +49,7 @@ limitations under the License.

# Rolling update example

This example demonstrates the usage of Kubernetes to perform a [rolling update](../kubectl/kubectl_rolling-update.md) on a running group of [pods](../../../docs/user-guide/pods.md). See [here](../managing-deployments.md#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](../../design/simple-rolling-update.md) for more information.

### Step Zero: Prerequisites
@@ -36,7 +36,7 @@ Documentation for other releases can be found at

*This document is aimed at users who have worked through some of the examples,
and who want to learn more about using kubectl to manage resources such
as pods and services. Users who want to access the REST API directly,
and developers who want to extend the Kubernetes API should
refer to the [api conventions](../devel/api-conventions.md) and
the [api document](../api.md).*
@@ -68,7 +68,7 @@ $ wc -l /tmp/original.yaml /tmp/current.yaml
60 total
```

The resource we posted had only 9 lines, but the one we got back had 51 lines.
If you `diff -u /tmp/original.yaml /tmp/current.yaml`, you can see the fields added to the pod.
The system adds fields in several ways:
- Some fields are added synchronously with creation of the resource and some are set asynchronously.