Remove all docs which are moving to http://kubernetes.github.io

All .md files now are only a pointer to where they likely are on the new
site.

All other files are untouched.
This commit is contained in:
Eric Paris
2016-03-04 12:49:17 -05:00
parent a20258efae
commit f334fc4179
134 changed files with 136 additions and 23433 deletions

View File

@@ -32,103 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications](#kubernetes-user-guide-managing-applications)
- [Quick walkthrough](#quick-walkthrough)
- [Thorough walkthrough](#thorough-walkthrough)
- [Concept guide](#concept-guide)
- [Further reading](#further-reading)
<!-- END MUNGE: GENERATED_TOC -->
The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](../../docs/admin/README.md). The [Developer Guide](../../docs/devel/README.md) is for anyone wanting to either write code which directly accesses the Kubernetes API, or to contribute directly to the Kubernetes project.
Please ensure you have completed the [prerequisites for running examples from the user guide](prereqs.md).
## Quick walkthrough
1. [Kubernetes 101](walkthrough/README.md)
1. [Kubernetes 201](walkthrough/k8s201.md)
## Thorough walkthrough
If you don't have much familiarity with Kubernetes, we recommend you read the following sections in order:
1. [Quick start: launch and expose an application](quick-start.md)
1. [Configuring and launching containers: configuring common container parameters](configuring-containers.md)
1. [Deploying continuously running applications](deploying-applications.md)
1. [Connecting applications: exposing applications to clients and users](connecting-applications.md)
1. [Working with containers in production](production-pods.md)
1. [Managing deployments](managing-deployments.md)
1. [Application introspection and debugging](introspection-and-debugging.md)
1. [Using the Kubernetes web user interface](ui.md)
1. [Logging](logging.md)
1. [Monitoring](monitoring.md)
1. [Getting into containers via `exec`](getting-into-containers.md)
1. [Connecting to containers via proxies](connecting-to-applications-proxy.md)
1. [Connecting to containers via port forwarding](connecting-to-applications-port-forward.md)
## Concept guide
[**Overview**](overview.md)
: A brief overview of Kubernetes concepts.
[**Cluster**](../admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.
[**Node**](../admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
[**Pod**](pods.md)
: A pod is a co-located group of containers and volumes.
[**Label**](labels.md)
: A label is a key/value pair that is attached to a resource, such as a pod, to convey a user-defined identifying attribute. Labels can be used to organize and to select subsets of resources.
[**Selector**](labels.md#label-selectors)
: A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.
[**Replication Controller**](replication-controller.md)
: A replication controller ensures that a specified number of pod replicas are running at any one time. It both allows for easy scaling of replicated systems and handles re-creation of a pod when the machine it is on reboots or otherwise fails.
[**Service**](services.md)
: A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.
[**Volume**](volumes.md)
: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the volume directory and/or device.
[**Secret**](secrets.md)
: A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.
[**Name**](identifiers.md)
: A user- or client-provided name for a resource.
[**Namespace**](namespaces.md)
: A namespace is like a prefix to the name of a resource. Namespaces help different projects, teams, or customers to share a cluster, such as by preventing name collisions between unrelated teams.
[**Annotation**](annotations.md)
: A key/value pair that can hold larger (compared to a label), and possibly not human-readable, data, intended to store non-identifying auxiliary data, especially data manipulated by tools and system extensions. Efficient filtering by annotation values is not supported.
## Further reading
* API resources
* [Working with resources](working-with-resources.md)
* Pods and containers
* [Pod lifecycle and restart policies](pod-states.md)
* [Lifecycle hooks](container-environment.md)
* [Compute resources, such as cpu and memory](compute-resources.md)
* [Specifying commands and requesting capabilities](containers.md)
* [Downward API: accessing system configuration from a pod](downward-api.md)
* [Images and registries](images.md)
* [Migrating from docker-cli to kubectl](docker-cli-to-kubectl.md)
* [Configuration Best Practices and Tips](config-best-practices.md)
* [Assign pods to selected nodes](node-selection/)
* [Perform a rolling update on a running group of pods](update-demo/)
This file has moved to: http://kubernetes.github.io/docs/user-guide/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,298 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# User Guide to Accessing the Cluster
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [User Guide to Accessing the Cluster](#user-guide-to-accessing-the-cluster)
- [Accessing the cluster API](#accessing-the-cluster-api)
- [Accessing for the first time with kubectl](#accessing-for-the-first-time-with-kubectl)
- [Directly accessing the REST API](#directly-accessing-the-rest-api)
- [Using kubectl proxy](#using-kubectl-proxy)
- [Without kubectl proxy](#without-kubectl-proxy)
- [Programmatic access to the API](#programmatic-access-to-the-api)
- [Accessing the API from a Pod](#accessing-the-api-from-a-pod)
- [Accessing services running on the cluster](#accessing-services-running-on-the-cluster)
- [Ways to connect](#ways-to-connect)
- [Discovering builtin services](#discovering-builtin-services)
- [Manually constructing apiserver proxy URLs](#manually-constructing-apiserver-proxy-urls)
- [Examples](#examples)
- [Using web browsers to access services running on the cluster](#using-web-browsers-to-access-services-running-on-the-cluster)
- [Requesting redirects](#requesting-redirects)
- [So Many Proxies](#so-many-proxies)
<!-- END MUNGE: GENERATED_TOC -->
## Accessing the cluster API
### Accessing for the first time with kubectl
When accessing the Kubernetes API for the first time, we suggest using the
Kubernetes CLI, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set up when you work through
a [Getting started guide](../getting-started-guides/README.md),
or someone else set up the cluster and provided you with credentials and a location.
Check the location and credentials that kubectl knows about with this command:
```console
$ kubectl config view
```
Many of the [examples](../../examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl.md).
### Directly accessing the REST API
Kubectl handles locating and authenticating to the apiserver.
If you want to directly access the REST API with an http client like
curl or wget, or a browser, there are several ways to locate and authenticate:
- Run kubectl in proxy mode.
- Recommended approach.
- Uses stored apiserver location.
- Verifies identity of apiserver using self-signed cert. No MITM possible.
- Authenticates to apiserver.
- In future, may do intelligent client-side load-balancing and failover.
- Provide the location and credentials directly to the http client.
- Alternate approach.
- Works with some types of client code that are confused by using a proxy.
- Need to import a root cert into your browser to protect against MITM.
#### Using kubectl proxy
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
locating the apiserver and authenticating.
Run it like this:
```console
$ kubectl proxy --port=8080 &
```
See [kubectl proxy](kubectl/kubectl_proxy.md) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
```console
$ curl http://localhost:8080/api/
{
"versions": [
"v1"
]
}
```
#### Without kubectl proxy
It is also possible to avoid using kubectl proxy by passing an authentication token
directly to the apiserver, like this:
```console
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
"versions": [
"v1"
]
}
```
The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
may take special configuration to get your http client to use the root
certificate.
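Rather than disabling verification, you can point curl at the cluster's root certificate instead. A minimal sketch, assuming you have saved that certificate locally as `ca.crt` (a hypothetical path; kubectl's own copy lives under `~/.kube`):
```console
# Sketch only: ca.crt is a hypothetical local copy of the cluster's root certificate.
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --cacert ca.crt
```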
On some clusters, the apiserver does not require authentication; it may serve
on localhost, or be protected by a firewall. There is not a standard
for this. [Configuring Access to the API](../admin/accessing-the-api.md)
describes how a cluster admin can configure this. Such approaches may conflict
with future high-availability support.
### Programmatic access to the API
There are [client libraries](../devel/client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](http://releases.k8s.io/HEAD/pkg/client/)
client library can use the same [kubeconfig file](kubeconfig-file.md)
as the kubectl CLI does to locate and authenticate to the apiserver.
See documentation for other libraries for how they authenticate.
### Accessing the API from a Pod
When accessing the API from a pod, locating and authenticating
to the api server are somewhat different.
The recommended way to locate the apiserver within the pod is with
the `kubernetes` DNS name, which resolves to a Service IP which in turn
will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a
[service account](service-accounts.md) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the apiserver.
Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
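As an illustration (a sketch, not part of the original walkthrough), a process in a container could combine these pieces with curl, assuming curl is present in the image:
```console
# Sketch: read the service account token and use the bundled CA cert to verify the apiserver.
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    --header "Authorization: Bearer $TOKEN" \
    https://kubernetes/api
```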
From within a pod, the recommended ways to connect to the API are:
- run a kubectl proxy as one of the containers in the pod, or as a background
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](../../examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
In each case, the credentials of the pod are used to communicate securely with the apiserver.
## Accessing services running on the cluster
The previous section was about connecting to the Kubernetes API server. This section is about
connecting to other services running on a Kubernetes cluster. In Kubernetes, the
[nodes](../admin/node.md), [pods](pods.md) and [services](services.md) all have
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
routable, so they will not be reachable from a machine outside the cluster,
such as your desktop machine.
### Ways to connect
You have several options for connecting to nodes, pods and services from outside the cluster:
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](services.md) and
[kubectl expose](kubectl/kubectl_expose.md) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
place a unique label on the pod and create a new service which selects this label.
- In most cases, it should not be necessary for an application developer to directly access
nodes via their nodeIPs.
- Access services, nodes, or pods using the Proxy Verb.
- Does apiserver authentication and authorization prior to accessing the remote service.
Use this if the services are not secure enough to expose to the internet, or to gain
access to ports on the node IP, or for debugging.
- Proxies may cause problems for some web applications.
- Only works for HTTP/HTTPS.
- Described [here](#discovering-builtin-services).
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](kubectl/kubectl_exec.md).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
### Discovering builtin services
Typically, there are several services which are started on a cluster by default, in the kube-system namespace. Get a list of these
with the `kubectl cluster-info` command:
```console
$ kubectl cluster-info
Kubernetes master is running at https://104.197.5.247
elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kibana-logging
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
```
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
`http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/`.
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/proxy/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
##### Examples
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept basic auth.
- Some web apps may not work, particularly those with client-side JavaScript that constructs URLs in a
way that is unaware of the proxy path prefix.
## Requesting redirects
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
## So Many Proxies
There are several different proxies you may encounter when using Kubernetes:
1. The [kubectl proxy](#directly-accessing-the-rest-api):
- runs on a user's desktop or in a pod
- proxies from a localhost address to the Kubernetes apiserver
- client to proxy uses HTTP
- proxy to apiserver uses HTTPS
- locates apiserver
- adds authentication headers
1. The [apiserver proxy](#discovering-builtin-services):
- is a bastion built into the apiserver
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
- runs in the apiserver processes
- client to proxy uses HTTPS (or http if apiserver so configured)
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
- can be used to reach a Node, Pod, or Service
- does load balancing when used to reach a Service
1. The [kube proxy](services.md#ips-and-vips):
- runs on each node
- proxies UDP and TCP
- does not understand HTTP
- provides load balancing
- is just used to reach services
1. A Proxy/Load-balancer in front of apiserver(s):
- existence and implementation varies from cluster to cluster (e.g. nginx)
- sits between all clients and one or more apiservers
- acts as load balancer if there are several apiservers.
1. Cloud Load Balancers on external services:
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
- use UDP/TCP only
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are set up correctly.
This file has moved to: http://kubernetes.github.io/docs/user-guide/accessing-the-cluster/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,32 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Annotations
We have [labels](labels.md) for identifying metadata.
It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels.
Like labels, annotations are key-value maps.
```json
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
```
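For instance, annotations live under an object's `metadata`; a minimal sketch (the pod name and annotation keys are hypothetical):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                            # hypothetical name
  annotations:
    example.com/build: "build-1234"       # hypothetical build/release info
    example.com/owner-pager: "555-0100"   # hypothetical contact info
spec:
  containers:
  - name: app
    image: nginx
```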
Possible information that could be recorded in annotations:
* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.)
* pointers to logging/monitoring/analytics/audit repos
* client library/tool information (e.g. for debugging purposes -- name, version, build info)
* other user and/or tool/system provenance info, such as URLs of related objects from other ecosystem components
* lightweight rollout tool metadata (config and/or checkpoints)
* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website
Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc.
This file has moved to: http://kubernetes.github.io/docs/user-guide/annotations/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,210 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Application Troubleshooting
This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
This is *not* a guide for people who want to debug their cluster. For that you should check out
[this guide](../admin/cluster-troubleshooting.md)
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Application Troubleshooting](#application-troubleshooting)
- [FAQ](#faq)
- [Diagnosing the problem](#diagnosing-the-problem)
- [Debugging Pods](#debugging-pods)
- [My pod stays pending](#my-pod-stays-pending)
- [My pod stays waiting](#my-pod-stays-waiting)
- [My pod is crashing or otherwise unhealthy](#my-pod-is-crashing-or-otherwise-unhealthy)
- [My pod is running but not doing what I told it to do](#my-pod-is-running-but-not-doing-what-i-told-it-to-do)
- [Debugging Replication Controllers](#debugging-replication-controllers)
- [Debugging Services](#debugging-services)
- [My service is missing endpoints](#my-service-is-missing-endpoints)
- [Network traffic is not forwarded](#network-traffic-is-not-forwarded)
- [More information](#more-information)
<!-- END MUNGE: GENERATED_TOC -->
## FAQ
Users are highly encouraged to check out our [FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)
## Diagnosing the problem
The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
your Service?
* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
* [Debugging Services](#debugging-services)
### Debugging Pods
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
```console
$ kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
Continue debugging depending on the state of the pods.
#### My pod stays pending
If a Pod is stuck in `Pending` it means that it cannot be scheduled onto a node. Generally this is because
there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
`kubectl describe ...` command above. There should be messages from the scheduler about why it cannot schedule
your pod. Reasons include:
* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](compute-resources.md#my-pods-are-pending-with-event-message-failedscheduling) for more information.
* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
scheduled. In most cases, `hostPort` is unnecessary; try using a Service object to expose your Pod. If you do require
`hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
#### My pod stays waiting
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct
* Have you pushed the image to the repository?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.
#### My pod is crashing or otherwise unhealthy
First, take a look at the logs of
the current container:
```console
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```
If your container has previously crashed, you can access the previous container's crash log with:
```console
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
Alternately, you can run commands inside that container with `exec`:
```console
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run
```console
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a
feature request on GitHub describing your use case and why these tools are insufficient.
#### My pod is running but not doing what I told it to do
If your pod is not behaving as you expected, it may be that there was an error in your
pod description (e.g. `mypod.yaml` file on your local machine), and that the error
was silently ignored when you created the pod. Often a section of the pod description
is nested incorrectly, or a key name is typed incorrectly, and so the key is ignored.
For example, if you misspelled `command` as `commnd` then the pod will be created but
will not use the command line you intended it to use.
The first thing to do is to delete your pod and try creating it again with the `--validate` option.
For example, run `kubectl create --validate -f mypod.yaml`.
If you misspelled `command` as `commnd`, then the command will give an error like this:
```
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
```
<!-- TODO: Now that #11914 is merged, this advice may need to be updated -->
The next thing to check is whether the pod on the apiserver
matches the pod you meant to create (e.g. in a yaml file on your local machine).
For example, run `kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` and then
manually compare the original pod description, `mypod.yaml` with the one you got
back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
lines on the "apiserver" version that are not on the original version. This is
expected. However, if there are lines on the original that are not on the apiserver
version, then this may indicate a problem with your pod spec.
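For example, a plain `diff` makes the comparison quick (a sketch; the file names follow the example above):
```console
$ kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml
$ diff mypod.yaml mypod-on-apiserver.yaml
```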
### Debugging Replication Controllers
Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.
You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication
controller.
### Debugging Services
Services provide load balancing across a set of pods. There are several common problems that can make Services
not work properly. The following instructions should help debug Service problems.
First, verify that there are endpoints for the service. For every Service object, the apiserver makes an `endpoints` resource available.
You can view this resource with:
```console
$ kubectl get endpoints ${SERVICE_NAME}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
IP addresses in the Service's endpoints.
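As a sketch, a healthy three-replica nginx Service might show something like this (the service name and IP addresses are made up for illustration):
```console
$ kubectl get endpoints my-nginx
NAME       ENDPOINTS
my-nginx   10.244.0.5:80,10.244.1.7:80,10.244.2.3:80
```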
#### My service is missing endpoints
If you are missing endpoints, try listing pods using the labels that the Service uses. Imagine that you have
a Service where the labels are:
```yaml
...
spec:
  - selector:
      name: nginx
      type: frontend
```
You can use:
```console
$ kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
have the right ports exposed. If your Service has a `targetPort` specified, but the Pods that are
selected don't have that port listed, then they won't be added to the endpoints list.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`.
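As a sketch, the relevant fields should line up like this (names and port numbers are illustrative):
```yaml
# Service (illustrative)
spec:
  selector:
    name: nginx
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # must match the containerPort below
---
# Pod (illustrative)
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
```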
#### Network traffic is not forwarded
If you can connect to the service, but the connection is immediately dropped, and there are endpoints
in the endpoints list, it's likely that the proxy can't contact your pods.
There are three things to check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
#### More information
If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services.md) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
You may also visit [troubleshooting document](../troubleshooting.md) for more information.
This file has moved to: http://kubernetes.github.io/docs/user-guide/application-troubleshooting/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,265 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Compute Resources
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Compute Resources](#compute-resources)
- [Resource Requests and Limits of Pod and Container](#resource-requests-and-limits-of-pod-and-container)
- [How Pods with Resource Requests are Scheduled](#how-pods-with-resource-requests-are-scheduled)
- [How Pods with Resource Limits are Run](#how-pods-with-resource-limits-are-run)
- [Monitoring Compute Resource Usage](#monitoring-compute-resource-usage)
- [Troubleshooting](#troubleshooting)
- [My pods are pending with event message failedScheduling](#my-pods-are-pending-with-event-message-failedscheduling)
- [My container is terminated](#my-container-is-terminated)
- [Planned Improvements](#planned-improvements)
<!-- END MUNGE: GENERATED_TOC -->
When specifying a [pod](pods.md), you can optionally specify how much CPU and memory (RAM) each
container needs. When containers have their resource requests specified, the scheduler is
able to make better decisions about which nodes to place pods on; and when containers have their
limits specified, contention for resources on a node can be handled in a specified manner. For
more details about the difference between requests and limits, please refer to
[Resource QoS](../proposals/resource-qos.md).
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
in units of cores. Memory is specified in units of bytes.
CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
resources are measureable quantities which can be requested, allocated, and consumed. They are
distinct from [API resources](working-with-resources.md). API resources, such as pods and
[services](services.md) are objects that can be written to and retrieved from the Kubernetes API
server.
## Resource Requests and Limits of Pod and Container
Each container of a Pod can optionally specify `spec.container[].resources.limits.cpu` and/or
`spec.container[].resources.limits.memory` and/or `spec.container[].resources.requests.cpu`
and/or `spec.container[].resources.requests.memory`.
Specifying resource requests and/or limits is optional. In some clusters, unset limits or requests
may be replaced with default values when a pod is created or updated. The default value depends on
how the cluster is configured. If the value of requests is not specified, it defaults to the value of
limits. Please note that resource limits must be greater than or equal to resource
requests.
Although requests/limits can only be specified on individual containers, it is convenient to talk
about pod resource requests/limits. A *pod resource request/limit* for a particular resource
type is the sum of the resource requests/limits of that type for each container in the pod, with
unset values treated as zero (or equal to default values in some cluster configurations).
The following pod has two containers. Each has a request of 0.25 core of cpu and 64MiB
(2<sup>26</sup> bytes) of memory and a limit of 0.5 core of cpu and 128MiB of memory. The pod can
be said to have a request of 0.5 core and 128 MiB of memory and a limit of 1 core and 256MiB of
memory.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
## How Pods with Resource Requests are Scheduled
When a pod is created, the Kubernetes scheduler selects a node for the pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for pods. The scheduler ensures that,
for each resource type (CPU and memory), the sum of the resource requests of the
containers scheduled to the node is less than the capacity of the node. Note
that although actual memory or CPU resource usage on nodes is very low, the
scheduler will still refuse to place pods onto nodes if the capacity check
fails. This protects against a resource shortage on a node when resource usage
later increases, such as due to a daily peak in request rate.
## How Pods with Resource Limits are Run
When kubelet starts a container of a pod, it passes the CPU and memory limits to the container
runner (Docker or rkt).
When using Docker:
- The `spec.container[].resources.limits.cpu` is multiplied by 1024, converted to an integer, and
used as the value of the [`--cpu-shares`](
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
command.
- The `spec.container[].resources.limits.memory` is converted to an integer, and used as the value
of the [`--memory`](https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag
to the `docker run` command.
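As a worked example (a sketch, not from the original page), the `cpu: 500m` and `memory: 128Mi` limits used earlier translate roughly into the following `docker run` flags:
```console
# 500m CPU   -> 0.5 * 1024 = 512 CPU shares
# 128Mi RAM  -> 128 * 1024 * 1024 = 134217728 bytes
$ docker run --cpu-shares=512 --memory=134217728 mysql   # illustrative equivalent only
```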
**TODO: document behavior for rkt**
If a container exceeds its memory limit, it may be terminated. If it is restartable, it will be
restarted by kubelet, as with any other type of runtime failure.
A container may or may not be allowed to exceed its CPU limit for extended periods of time.
However, it will not be killed for excessive CPU usage.
To determine if a container cannot be scheduled or is being killed due to resource limits, see the
"Troubleshooting" section below.
## Monitoring Compute Resource Usage
The resource usage of a pod is reported as part of the Pod status.
If [optional monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
then pod resource usage can be retrieved from the monitoring system.
## Troubleshooting
### My pods are pending with event message failedScheduling
If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled
until a place can be found. An event will be produced each time the scheduler fails to find a
place for the pod, like this:
```console
$ kubectl describe pod frontend | grep -A 3 Events
Events:
FirstSeen   LastSeen   Count   From          SubobjectPath   Reason             Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
memory (PodExceedsFreeMemory). In general, if a pod or pods are pending with this message and
alike, then there are several things to try:
- Add more nodes to the cluster.
- Terminate unneeded pods to make room for pending pods.
- Check that the pod is not larger than all the nodes. For example, if all the nodes
have a capacity of `cpu: 1`, then a pod with a limit of `cpu: 1.1` will never be scheduled.
You can check node capacities and amounts allocated with the `kubectl describe nodes` command.
For example:
```console
$ kubectl describe nodes gke-cluster-4-386701dd-node-ww4p
Name: gke-cluster-4-386701dd-node-ww4p
[ ... lines removed for clarity ...]
Capacity:
  cpu:          1
  memory:       464Mi
  pods:         40
Allocated resources (total requests):
  cpu:          910m
  memory:       2370Mi
  pods:         4
[ ... lines removed for clarity ...]
Pods:  (4 in total)
  Namespace    Name                                                      CPU(milliCPU)       Memory(bytes)
  frontend     webserver-ffj8j                                           500 (50% of total)  2097152000 (50% of total)
  kube-system  fluentd-cloud-logging-gke-cluster-4-386701dd-node-ww4p    100 (10% of total)  209715200 (5% of total)
  kube-system  kube-dns-v8-qopgw                                         310 (31% of total)  178257920 (4% of total)
TotalResourceLimits:
  CPU(milliCPU):   910 (91% of total)
  Memory(bytes):   2485125120 (59% of total)
[ ... lines removed for clarity ...]
```
Here you can see from the `Allocated resources` section that a pod that asks for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
Looking at the `Pods` section, you can see which pods are taking up space on the node.
The [resource quota](../admin/resource-quota.md) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
### My container is terminated
Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
on the pod you are interested in:
```console
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
Name: simmemleak-hra99
Namespace: default
Image(s): saadali/simmemleak
Node: kubernetes-node-tf0f/10.240.216.66
Labels: name=simmemleak
Status: Running
Reason:
Message:
IP: 10.244.2.75
Replication Controllers: simmemleak (1/1 replicas created)
Containers:
  simmemleak:
    Image:  saadali/simmemleak
    Limits:
      cpu:                      100m
      memory:                   50Mi
    State:                      Running
      Started:                  Tue, 07 Jul 2015 12:54:41 -0700
    Last Termination State:     Terminated
      Exit Code:                1
      Started:                  Fri, 07 Jul 2015 12:54:30 -0700
      Finished:                 Fri, 07 Jul 2015 12:54:33 -0700
    Ready:                      False
    Restart Count:              5
Conditions:
  Type      Status
  Ready     False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
```console
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```
We can see that this container was terminated because of `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
## Planned Improvements
The current system only allows resource quantities to be specified on a container.
It is planned to improve accounting for resources which are shared by all containers in a pod,
such as [EmptyDir volumes](volumes.md#emptydir).
The current system only supports container requests and limits for CPU and Memory.
It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom [resource types](../design/resources.md#resource-types).
Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168).
Currently, one unit of CPU means different things on different cloud providers, and on different
machine types within the same cloud providers. For example, on AWS, the capacity of a node
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more consistency
across providers and platforms.
This file has moved to: http://kubernetes.github.io/docs/user-guide/compute-resources/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,131 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Configuration Best Practices and Tips
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Configuration Best Practices and Tips](#configuration-best-practices-and-tips)
- [General Config Tips](#general-config-tips)
- ["Naked" Pods vs Replication Controllers and Jobs](#naked-pods-vs-replication-controllers-and-jobs)
- [Services](#services)
- [Using Labels](#using-labels)
- [Container Images](#container-images)
- [Using kubectl](#using-kubectl)
<!-- END MUNGE: GENERATED_TOC -->
This document is meant to highlight and consolidate in one place configuration best practices that are introduced throughout the user-guide and getting-started documentation and examples. This is a living document, so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
## General Config Tips
- When defining configurations, specify the latest stable API version (currently v1).
- Configuration files should be stored in version control before being pushed to the cluster. This allows a configuration to be quickly rolled back if needed, and will aid with cluster re-creation and restoration if necessary.
- Write your configuration files using YAML rather than JSON. They can be used interchangeably in almost all scenarios, but YAML tends to be more user-friendly for config.
- Group related objects together in a single file where this makes sense. This format is often easier to manage than separate files. See the [guestbook-all-in-one.yaml](../../examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
(Note also that many `kubectl` commands can be called on a directory, and so you can also call
`kubectl create` on a directory of config files— see below for more detail).
- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to
reduce error. For example, omit the selector and labels in a `ReplicationController` if you want
them to be the same as the labels in its `podTemplate`, since those fields are populated from the
`podTemplate` labels by default. See the [guestbook app's](../../examples/guestbook/) .yaml files for some [examples](../../examples/guestbook/frontend-controller.yaml) of this.
- Put an object description in an annotation to allow better introspection.
## "Naked" Pods vs Replication Controllers and Jobs
- If there is a viable alternative to naked pods (i.e., pods not bound to a [replication controller
](replication-controller.md)), go with the alternative. Naked pods will not be rescheduled in the
event of node failure.
Replication controllers are almost always preferable to creating pods, except for some explicit
[`restartPolicy: Never`](pod-states.md#restartpolicy) scenarios. A
[Job](jobs.md) object (currently in Beta), may also be appropriate.
## Services
- It's typically best to create a [service](services.md) before corresponding [replication
controllers](replication-controller.md), so that the scheduler can spread the pods comprising the
service. You can also create a replication controller without specifying replicas (this will set
replicas=1), create a service, then scale up the replication controller. This can be useful in
ensuring that one replica works before creating lots of them.
- Don't use `hostPort` (which specifies the port number to expose on the host) unless absolutely
necessary, e.g., for a node daemon. When you bind a Pod to a `hostPort`, there are a limited
number of places that pod can be scheduled, due to port conflicts— you can only schedule as many
such Pods as there are nodes in your Kubernetes cluster.
If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](connecting-to-applications-proxy.md) or [kubectl port-forward](connecting-to-applications-port-forward.md).
You can use a [Service](services.md) object for external service access.
If you do need to expose a pod's port on the host machine, consider using a [NodePort](services.md#type-nodeport) service before resorting to `hostPort`.
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing.
See [headless services](services.md#headless-services).
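A headless service is declared by setting `clusterIP: None`; a minimal sketch (the name, selector, and port are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # headless: no virtual IP, no kube-proxy load balancing
  selector:
    app: myapp                # illustrative selector
  ports:
  - port: 80
```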
## Using Labels
- Define and use [labels](labels.md) that identify __semantic attributes__ of your application or
deployment. For example, instead of attaching a label to a set of pods to explicitly represent
some service (e.g., `service: myservice`), or explicitly representing the replication
controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify
semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This
will let you select the object groups appropriate to the context— e.g., a service for all "tier:
frontend" pods, or all "test" phase components of app "myapp". See the
[guestbook](../../examples/guestbook/) app for an example of this approach.
A service can be made to span multiple deployments, such as is done across [rolling updates](kubectl/kubectl_rolling-update.md), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.
- To facilitate rolling updates, include version info in replication controller names, e.g. as a
suffix to the name. It is useful to set a 'version' label as well. The rolling update creates a
new controller as opposed to modifying the existing controller. So, there will be issues with
version-agnostic controller names. See the [documentation](kubectl/kubectl_rolling-update.md) on
the rolling-update command for more detail.
Note that the [Deployment](deployments.md) object obviates the need to manage replication
controller 'version names'. A desired state of an object is described by a Deployment, and if
changes to that spec are _applied_, the deployment controller changes the actual state to the
desired state at a controlled rate. (Deployment objects are currently part of the [`extensions`
API Group](../api.md#api-groups), and are not enabled by default.)
- You can manipulate labels for debugging. Because Kubernetes replication controllers and services
match to pods using labels, this allows you to remove a pod from being considered by a
controller, or served traffic by a service, by removing the relevant selector labels. If you
remove the labels of an existing pod, its controller will create a new pod to take its place.
This is a useful way to debug a previously "live" pod in a quarantine environment. See the
[`kubectl label`](kubectl/kubectl_label.md) command.
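For instance, removing a selector label quarantines a pod without deleting it (a sketch; the pod name and label key are hypothetical):
```console
# Remove the "tier" label: the replication controller starts a replacement,
# the service stops sending traffic to this pod, and it stays around for debugging.
$ kubectl label pods my-frontend-pod-abc12 tier-
```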
## Container Images
- The [default container image pull policy](images.md) is `IfNotPresent`, which causes the
[Kubelet](../admin/kubelet.md) to not pull an image if it already exists. If you would like to
always force a pull, you must specify a pull image policy of `Always` in your .yaml file
(`imagePullPolicy: Always`) or specify a `:latest` tag on your image.
That is, if you're specifying an image with a tag other than `:latest`, e.g. `myimage:v1`, and
there is an image update to that same tag, the Kubelet won't pull the updated image. You can
address this by ensuring that any updates to an image bump the image tag as well (e.g.
`myimage:v2`), and ensuring that your configs point to the correct version.
## Using kubectl
- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.
- Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated.
- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](labels.md#label-selectors) and [using labels effectively](managing-deployments.md#using-labels-effectively).
- Use `kubectl run` and `expose` to quickly create and expose single container replication controllers. See the [quick start guide](quick-start.md) for an example.
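For example (a sketch; the names are illustrative, and the kind of controller that `kubectl run` creates depends on your client version):
```console
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
# If your kubectl run created a Deployment rather than a replication controller,
# expose that resource instead.
```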
This file has moved to: http://kubernetes.github.io/docs/user-guide/config-best-practices/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -27,566 +27,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# ConfigMap
This file has moved to: http://kubernetes.github.io/docs/user-guide/configmap/
Many applications require configuration via some combination of config files, command line
arguments, and environment variables. These configuration artifacts should be decoupled from image
content in order to keep containerized applications portable. The ConfigMap API resource provides
mechanisms to inject containers with configuration data while keeping containers agnostic of
Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or
coarse-grained information like entire config files or JSON blobs.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [ConfigMap](#configmap)
- [Overview of ConfigMap](#overview-of-configmap)
- [Creating ConfigMaps](#creating-configmaps)
- [Creating from directories](#creating-from-directories)
- [Creating from files](#creating-from-files)
- [Creating from literal values](#creating-from-literal-values)
- [Consuming ConfigMap in pods](#consuming-configmap-in-pods)
- [Use-Case: Consume ConfigMap in environment variables](#use-case-consume-configmap-in-environment-variables)
- [Use-Case: Set command-line arguments with ConfigMap](#use-case-set-command-line-arguments-with-configmap)
- [Use-Case: Consume ConfigMap via volume plugin](#use-case-consume-configmap-via-volume-plugin)
- [Real World Example: Configuring Redis](#real-world-example-configuring-redis)
- [Restrictions](#restrictions)
<!-- END MUNGE: GENERATED_TOC -->
## Overview of ConfigMap
The ConfigMap API resource holds key-value pairs of configuration data that can be consumed in pods
or used to store configuration data for system components such as controllers. ConfigMap is similar
to [Secrets](secrets.md), but designed to more conveniently support working with strings that do not
contain sensitive information.
Let's look at a made-up example:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: example-config
  namespace: default
data:
  example.property.1: hello
  example.property.2: world
  example.property.file: |-
    property.1=value-1
    property.2=value-2
    property.3=value-3
```
The `data` field contains the configuration data. As you can see, ConfigMaps can be used to hold
fine-grained information like individual properties or coarse-grained information like the contents
of configuration files.
Configuration data can be consumed in pods in a variety of ways. ConfigMaps can be used to:
1. Populate the value of environment variables
2. Set command-line arguments in a container
3. Populate config files in a volume
Both users and system components may store configuration data in a ConfigMap.
## Creating ConfigMaps
You can use the `kubectl create configmap` command to create configmaps easily from literal values,
files, or directories.
Let's take a look at some different ways to create a ConfigMap:
### Creating from directories
Say that we have a directory with some files that already contain the data we want to populate a ConfigMap with:
```console
$ ls docs/user-guide/configmap/kubectl/
game.properties
ui.properties
$ cat docs/user-guide/configmap/kubectl/game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
$ cat docs/user-guide/configmap/kubectl/ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
```
The `kubectl create configmap` command can be used to create a ConfigMap holding the content of each
file in this directory:
```console
$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
```
When `--from-file` points to a directory, each file directly in that directory is used to populate a
key in the ConfigMap, where the name of the key is the filename, and the value of the key is the
content of the file.
Let's take a look at the ConfigMap that this command created:
```console
$ cluster/kubectl.sh describe configmaps game-config
Name: game-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties: 121 bytes
ui.properties: 83 bytes
```
You can see the two keys in the map are created from the filenames in the directory we pointed
kubectl to. Since the content of those keys may be large, in the output of `kubectl describe`,
you'll see only the names of the keys and their sizes.
If we want to see the values of the keys, we can simply `kubectl get` the resource:
```console
$ kubectl get configmaps game-config -o yaml
apiVersion: v1
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T18:34:05Z
  name: game-config
  namespace: default
  resourceVersion: "407"
  selfLink: /api/v1/namespaces/default/configmaps/game-config
  uid: 30944725-d66e-11e5-8cd0-68f728db1985
```
### Creating from files
We can also pass `--from-file` a specific file, and pass it multiple times to kubectl. The
following command yields equivalent results to the above example:
```console
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
$ cluster/kubectl.sh get configmaps game-config-2 -o yaml
apiVersion: v1
data:
game.properties: |-
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:52:05Z
name: game-config-2
namespace: default
resourceVersion: "516"
selfLink: /api/v1/namespaces/default/configmaps/game-config-2
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
```
We can also set the key to use for an individual file with `--from-file` by passing an expression
of the form `key=file-path`, for example `--from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties`:
```console
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
$ kubectl get configmaps game-config-3 -o yaml
apiVersion: v1
data:
game-special-key: |-
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:54:22Z
name: game-config-3
namespace: default
resourceVersion: "530"
selfLink: /api/v1/namespaces/default/configmaps/game-config-3
uid: 05f8da22-d671-11e5-8cd0-68f728db1985
```
### Creating from literal values
It is also possible to supply literal values for ConfigMaps using `kubectl create configmap`. The
`--from-literal` option takes a `key=value` syntax that allows literal values to be supplied
directly on the command line:
```console
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
$ kubectl get configmaps special-config -o yaml
apiVersion: v1
data:
special.how: very
special.type: charm
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: special-config
namespace: default
resourceVersion: "651"
selfLink: /api/v1/namespaces/default/configmaps/special-config
uid: dadce046-d673-11e5-8cd0-68f728db1985
```
## Consuming ConfigMap in pods
### Use-Case: Consume ConfigMap in environment variables
ConfigMaps can be used to populate the value of environment variables. As an example, consider
the following ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
We can consume the keys of this ConfigMap in a pod like so:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
restartPolicy: Never
```
When this pod is run, its output will include the lines:
```console
SPECIAL_LEVEL_KEY=very
SPECIAL_TYPE_KEY=charm
```
### Use-Case: Set command-line arguments with ConfigMap
ConfigMaps can also be used to set the value of the command or arguments in a container. This is
accomplished using the Kubernetes substitution syntax `$(VAR_NAME)`. Consider the ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
In order to inject values into the command line, we must consume the keys we want to use as
environment variables, as in the last example. Then we can refer to them in a container's command
using the `$(VAR_NAME)` syntax.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
restartPolicy: Never
```
When this pod is run, the output from the `test-container` container will be:
```console
very charm
```
### Use-Case: Consume ConfigMap via volume plugin
ConfigMaps can also be consumed in volumes. Returning again to our example ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
We have a couple different options for consuming this ConfigMap in a volume. The most basic
way is to populate the volume with files where the key is the filename and the content of the file
is the value of the key:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "cat", "/etc/config/special.how" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
restartPolicy: Never
```
When this pod is run, the output will be:
```console
very
```
We can also control the paths within the volume where ConfigMap keys are projected:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "cat", "/etc/config/path/to/special-key" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: path/to/special-key
restartPolicy: Never
```
When this pod is run, the output will be:
```console
very
```
## Real World Example: Configuring Redis
Let's take a look at a real-world example: configuring redis using a ConfigMap. Say we want to inject
redis with the recommended configuration for running redis as a cache. The redis config file
should contain:
```
maxmemory 2mb
maxmemory-policy allkeys-lru
```
Such a file is in `docs/user-guide/configmap/redis`; we can use the following command to create a
ConfigMap instance with it:
```console
$ kubectl create configmap example-redis-config --from-file=docs/user-guide/configmap/redis/redis-config
$ kubectl get configmap example-redis-config -o json
{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "example-redis-config",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/configmaps/example-redis-config",
"uid": "07fd0419-d97b-11e5-b443-68f728db1985",
"resourceVersion": "15",
"creationTimestamp": "2016-02-22T15:43:34Z"
},
"data": {
"redis-config": "maxmemory 2mb\nmaxmemory-policy allkeys-lru\n"
}
}
```
Now, let's create a pod that uses this config:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
```
Notice that this pod has a ConfigMap volume that places the `redis-config` key of the
`example-redis-config` ConfigMap into a file called `redis.conf`. This volume is mounted into the
`/redis-master` directory in the redis container, placing our config file at
`/redis-master/redis.conf`, which is where the image looks for the redis config file for the master.
```console
$ kubectl create -f docs/user-guide/configmap/redis/redis-pod.yaml
```
If we `kubectl exec` into this pod and run the `redis-cli` tool, we can check that our config was
applied correctly:
```console
$ kubectl exec -it redis redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
```
## Restrictions
ConfigMaps must be created before they are consumed in pods. Controllers may be written to tolerate
missing configuration data; consult individual components configured via ConfigMap on a case-by-case
basis.
ConfigMaps reside in a namespace. They can only be referenced by pods in the same namespace.
Quota for ConfigMap size is a planned feature.
Kubelet only supports use of ConfigMap for pods it gets from the API server. This includes any pods
created using kubectl, or indirectly via a replication controller. It does not include pods created
via the Kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common
ways to create pods).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/configmap.md?pixel)]()
View File
@@ -27,108 +27,9 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# ConfigMap example
This file has moved to: http://kubernetes.github.io/docs/user-guide/configmap/README/
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started guides](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the ConfigMap
A ConfigMap contains a set of named strings.
Use the [`docs/user-guide/configmap/configmap.yaml`](configmap.yaml) file to create a ConfigMap:
```console
$ kubectl create -f docs/user-guide/configmap/configmap.yaml
```
You can use `kubectl` to see information about the ConfigMap:
```console
$ kubectl get configmap
NAME DATA
test-configmap    2
$ kubectl describe configmap test-configmap
Name: test-configmap
Labels: <none>
Annotations: <none>
Data
====
data-1: 7 bytes
data-2: 7 bytes
```
View the values of the keys with `kubectl get`:
```console
$ kubectl get configmaps test-configmap -o yaml
apiVersion: v1
data:
data-1: value-1
data-2: value-2
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T20:28:50Z
name: test-configmap
namespace: default
resourceVersion: "1090"
selfLink: /api/v1/namespaces/default/configmaps/test-configmap
uid: 384bd365-d67e-11e5-8cd0-68f728db1985
```
## Step Two: Create a pod that consumes a configMap in environment variables
Use the [`docs/user-guide/configmap/env-pod.yaml`](env-pod.yaml) file to create a Pod that consumes the
ConfigMap in environment variables.
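The exact manifest lives in the repository; a minimal sketch of such a pod, consistent with the output shown below (the pod and container names here are illustrative, not necessarily those used by the file), might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-env-test-pod   # illustrative name; the file in the repo may differ
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    # Print the container's environment so the injected variables show up in the logs.
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: KUBE_CONFIG_1
      valueFrom:
        configMapKeyRef:
          name: test-configmap
          key: data-1
    - name: KUBE_CONFIG_2
      valueFrom:
        configMapKeyRef:
          name: test-configmap
          key: data-2
```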
```console
$ kubectl create -f docs/user-guide/configmap/env-pod.yaml
```
This pod runs the `env` command to display the environment of the container:
```console
$ kubectl logs secret-test-pod
KUBE_CONFIG_1=value-1
KUBE_CONFIG_2=value-2
```
## Step Three: Create a pod that sets the command line using ConfigMap
Use the [`docs/user-guide/configmap/command-pod.yaml`](command-pod.yaml) file to create a Pod with a container
whose command is injected with the keys of a ConfigMap:
```console
$ kubectl create -f docs/user-guide/configmap/command-pod.yaml
```
This pod runs an `echo` command to display the values of the keys:
```console
value-1 value-2
```
## Step Four: Create a pod that consumes a configMap in a volume
Pods can also consume ConfigMaps in volumes. Use the [`docs/user-guide/configmap/volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consumes the ConfigMap in a volume.
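A minimal sketch of such a pod (names are illustrative; see the file in the repository for the exact manifest) could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-volume-test-pod   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    # Print the file projected from the data-1 key of the ConfigMap.
    command: [ "/bin/sh", "-c", "cat /etc/config/data-1" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: test-configmap
```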
```console
$ kubectl create -f docs/user-guide/configmap/volume-pod.yaml
```
This pod runs a `cat` command to print the value of one of the keys in the volume:
```console
value-1
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/configmap/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,175 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Configuring and launching containers
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Configuring and launching containers](#kubernetes-user-guide-managing-applications-configuring-and-launching-containers)
- [Configuration in Kubernetes](#configuration-in-kubernetes)
- [Launching a container using a configuration file](#launching-a-container-using-a-configuration-file)
- [Validating configuration](#validating-configuration)
- [Environment variables and variable expansion](#environment-variables-and-variable-expansion)
- [Viewing pod status](#viewing-pod-status)
- [Viewing pod output](#viewing-pod-output)
- [Deleting pods](#deleting-pods)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Configuration in Kubernetes
In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](quick-start.md), Kubernetes supports declarative configuration. Often, configuration files are preferable to imperative commands, since they can be checked into version control and changes to them can be code reviewed, which is especially important for more complex configurations, producing a more robust, reliable and archival system.
In the declarative style, all configuration is stored in YAML or JSON configuration files using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently “v1”), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
## Launching a container using a configuration file
Kubernetes executes containers in [*Pods*](pods.md). A pod containing a simple Hello World container can be specified in YAML as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
command: ["/bin/echo","hello”,”world"]
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
The [`command`](containers.md#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
```
This pod can be created using the `create` command:
```console
$ kubectl create -f ./hello-world.yaml
pods/hello-world
```
`kubectl` prints the resource type and name of the resource created when successful.
## Validating configuration
We enable validation by default in `kubectl` since v1.1.
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
```console
error validating "./hello-world.yaml": error validating data: found invalid field Entrypoint for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
```
If you use `kubectl create --validate=false` to turn validation off, the resource is created anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. In this case the pod would still be created, but with no `command`, which is allowed since `command` is an optional field and the image may specify an `Entrypoint`.
View the [Pod API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_pod)
to see the list of valid fields.
## Environment variables and variable expansion
Kubernetes [does not automatically run commands in a shell](https://github.com/kubernetes/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
env:
- name: MESSAGE
value: "hello world"
command: ["/bin/sh","-c"]
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](../../docs/design/expansion.md):
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
## Viewing pod status
You can see the pod you created (actually all of your cluster's pods) using the `get` command.
If you're quick, it will look as follows:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 0s
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 5s
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ExitCode:0 0 15s
```
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/), `kubectl logs` will show you the output:
```console
$ kubectl logs hello-world
hello world
```
## Deleting pods
When you're done looking at the output, you should delete the pod:
```console
$ kubectl delete pod hello-world
pods/hello-world
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
```console
$ kubectl delete pods/hello-world
pods/hello-world
```
Terminated pods aren't currently deleted automatically, so that you can observe their final status; be sure to clean up your dead pods.
On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
## What's next?
[Learn about deploying continuously running applications.](deploying-applications.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/configuring-containers/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,414 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Connecting applications
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Connecting applications](#kubernetes-user-guide-managing-applications-connecting-applications)
- [The Kubernetes model for connecting containers](#the-kubernetes-model-for-connecting-containers)
- [Exposing pods to the cluster](#exposing-pods-to-the-cluster)
- [Creating a Service](#creating-a-service)
- [Accessing the Service](#accessing-the-service)
- [Environment Variables](#environment-variables)
- [DNS](#dns)
- [Securing the Service](#securing-the-service)
- [Exposing the Service](#exposing-the-service)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
# The Kubernetes model for connecting containers
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, they must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or else be allocated ports dynamically.
Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html).
## Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
```yaml
$ cat nginxrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
```console
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-node-93ly
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-node-93ly
```
Check your pods' IPs:
```console
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
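For example, from any node (using one of the pod IPs shown above; your IPs will differ, and the output is trimmed):

```console
u@node$ curl http://10.245.0.15
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```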
You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you're curious.
## Creating a Service
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the replication controller will create new ones, with different IPs. This is the problem a Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
You can create a Service for your 2 nginx replicas with the following yaml:
```yaml
$ cat nginxsvc.yaml
apiVersion: v1
kind: Service
metadata:
name: nginxsvc
labels:
app: nginx
spec:
ports:
- port: 80
protocol: TCP
selector:
app: nginx
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort` is the port the container accepts traffic on; `port` is the abstracted Service port, which can be any port other pods use to access the Service). View the [service API object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_service) to see the list of supported fields in the service definition.
Check your Service:
```console
$ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.179.240.1 <none> 443/TCP <none> 8d
nginxsvc     10.179.252.126   <none>           80/TCP    app=nginx    11m
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
```console
$ kubectl describe svc nginxsvc
Name: nginxsvc
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.0.116.146
Port: <unnamed> 80/TCP
Endpoints: 10.245.0.14:80,10.245.0.15:80
Session Affinity: None
No events.
$ kubectl get ep
NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
## Accessing the Service
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
### Environment Variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
```console
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```console
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
NAME READY STATUS RESTARTS AGE NODE
my-nginx-5j8ok 1/1 Running 0 2m node1
my-nginx-90vaf 1/1 Running 0 2m node2
$ kubectl exec my-nginx-5j8ok -- printenv | grep SERVICE
KUBERNETES_SERVICE_PORT=443
NGINXSVC_SERVICE_HOST=10.0.116.146
KUBERNETES_SERVICE_HOST=10.0.0.1
NGINXSVC_SERVICE_PORT=80
```
### DNS
Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign DNS names to other Services. You can check if it's running on your cluster:
```console
$ kubectl get services kube-dns --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
```
If it isn't running, you can [enable it](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
apiVersion: v1
kind: Pod
metadata:
name: curlpod
spec:
containers:
- image: radial/busyboxplus:curl
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: curlcontainer
restartPolicy: Always
```
And perform a lookup of the nginx Service:
```console
$ kubectl create -f ./curlpod.yaml
default/curlpod
$ kubectl get pods curlpod
NAME READY STATUS RESTARTS AGE
curlpod 1/1 Running 0 18s
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: nginxsvc
Address 1: 10.0.116.146
```
## Securing the Service
Until now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
* Self-signed certificates for HTTPS (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](secrets.md) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
```console
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
secrets/nginxsecret
$ kubectl get secrets
NAME TYPE DATA
default-token-il9rc kubernetes.io/service-account-token 1
nginxsecret Opaque 2
```
Now modify your nginx replicas to start an HTTPS server using the certificate in the secret, and modify the Service to expose both ports (80 and 443):
```yaml
$ cat nginx-app.yaml
apiVersion: v1
kind: Service
metadata:
name: nginxsvc
labels:
app: nginx
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
app: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: secret-volume
secret:
secretName: nginxsecret
containers:
- name: nginxhttps
image: bprashanth/nginxhttps:1.0
ports:
- containerPort: 443
- containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/ssl
name: secret-volume
```
Noteworthy points about the nginx-app manifest:
- It contains both the rc and service specifications in the same file
- The [nginx server](../../examples/https-nginx/default.conf) serves HTTP traffic on port 80 and HTTPS traffic on 443, and the nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
```console
$ kubectl delete rc,svc -l app=nginx; kubectl create -f ./nginx-app.yaml
replicationcontrollers/my-nginx
services/nginxsvc
services/nginxsvc
replicationcontrollers/my-nginx
```
At this point you can reach the nginx server from any node.
```console
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
node $ curl -k https://10.1.0.80
...
<h1>Welcome to nginx!</h1>
```
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
```console
$ cat curlpod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: curlrc
spec:
replicas: 1
template:
metadata:
labels:
app: curlpod
spec:
volumes:
- name: secret-volume
secret:
secretName: nginxsecret
containers:
- name: curlpod
command:
- sh
- -c
- while true; do sleep 1; done
image: radial/busyboxplus:curl
volumeMounts:
- mountPath: /etc/nginx/ssl
name: secret-volume
$ kubectl create -f ./curlpod.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
curlpod 1/1 Running 0 2m
my-nginx-7006w 1/1 Running 0 24m
$ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.crt
...
<title>Welcome to nginx!</title>
...
```
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
```console
$ kubectl get svc nginxsvc -o json | grep -i nodeport -C 5
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 80,
"nodePort": 32188
},
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 443,
"nodePort": 30645
}
$ kubectl get nodes -o json | grep ExternalIP -C 2
{
"type": "ExternalIP",
"address": "104.197.63.17"
}
--
},
{
"type": "ExternalIP",
"address": "104.154.89.170"
}
$ curl https://104.197.63.17:30645 -k
...
<h1>Welcome to nginx!</h1>
```
Let's now recreate the Service to use a cloud load balancer: just change the `type` of the Service in nginx-app.yaml from `NodePort` to `LoadBalancer`.
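The Service portion of nginx-app.yaml would then read (only the `type` field changes from the earlier example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  # Changed from NodePort: ask the cloud provider for an external load balancer.
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
```

Now delete and recreate the resources: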
```console
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml
$ kubectl get svc nginxsvc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
nginxsvc    10.179.252.126   162.222.184.144   8080/TCP,443/TCP   app=nginx   13m
$ curl https://162.222.184.144 -k
...
<title>Welcome to nginx!</title>
```
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
cluster/private cloud network.
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
output, in fact, so you'll need to do `kubectl describe service nginxsvc` to
see it. You'll see something like this:
```
> kubectl describe service nginxsvc
...
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...
```
## What's next?
[Learn about more Kubernetes features that will help you run containers reliably in production.](production-pods.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-applications/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,53 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl port-forward
`kubectl port-forward` forwards connections from a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
## Creating a Redis master
```console
$ kubectl create -f examples/redis/redis-master.yaml
pods/redis-master
```
Wait until the Redis master pod is Running and Ready:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s
```
## Connecting to the Redis master
The Redis master is listening on port 6379. To verify this:
```console
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
```
Then we forward port 6379 on the local workstation to port 6379 of the redis-master pod:
```console
$ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
```
To verify the connection is successful, run `redis-cli` on the local workstation:
```console
$ redis-cli
127.0.0.1:6379> ping
PONG
```
Now one can debug the database from the local workstation.
This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-to-applications-port-forward/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,41 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl proxy and apiserver proxy
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Connecting to applications: kubectl proxy and apiserver proxy](#connecting-to-applications-kubectl-proxy-and-apiserver-proxy)
- [Getting the apiserver proxy URL of kube-ui](#getting-the-apiserver-proxy-url-of-kube-ui)
- [Connecting to the kube-ui service from your local workstation](#connecting-to-the-kube-ui-service-from-your-local-workstation)
<!-- END MUNGE: GENERATED_TOC -->
You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation.
## Getting the apiserver proxy URL of kube-ui
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```console
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
If this command does not find the URL, try the steps [here](ui.md#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation
The proxy URL above provides access to the kube-ui service through the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication.
```console
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)
This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-to-applications-proxy/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,102 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Container Environment
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes Container Environment](#kubernetes-container-environment)
- [Overview](#overview)
- [Cluster Information](#cluster-information)
- [Container Information](#container-information)
- [Cluster Information](#cluster-information)
- [Container Hooks](#container-hooks)
- [Hook Details](#hook-details)
- [Hook Handler Execution](#hook-handler-execution)
- [Hook delivery guarantees](#hook-delivery-guarantees)
- [Hook Handler Implementations](#hook-handler-implementations)
<!-- END MUNGE: GENERATED_TOC -->
## Overview
This document describes the environment for Kubelet-managed containers on a Kubernetes node. In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However, these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images.md) and one or more [volumes](volumes.md).
The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
## Cluster Information
There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system.
### Container Information
Currently, the name of the Pod in which a container runs is set as the hostname of the container, and is accessible through any of the usual ways of reading the hostname within the container (e.g. the `hostname` command, or the [gethostname][1] function call in libc), but this is planned to change in the future and should not be relied upon.
The Pod name and namespace are also available as environment variables via the [downward API](downward-api.md). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
In the future, we anticipate expanding this information with richer information about the container.  Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.
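As a rough illustration of the downward API mentioned above (pod, container, and variable names here are purely illustrative), a pod could surface its own name and namespace to its container like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: show-identity
    image: gcr.io/google_containers/busybox
    # Echo the pod's own name and namespace, injected below via the downward API.
    command: [ "/bin/sh", "-c", "echo $(MY_POD_NAME) $(MY_POD_NAMESPACE)" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
```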
### Cluster Information
Currently, the list of all services that were running at the time a container was created (services created via the Kubernetes Cluster API) is available to the container as environment variables. The set of environment variables matches the syntax of Docker links.
For a service named **foo** that maps to a container port named **bar**, the following variables are defined:
```sh
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/HEAD/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
## Container Hooks
Container hooks provide information to the container about events in its management lifecycle.  For example, immediately after a container is started, it receives a *PostStart* hook.  These hooks are broadcast *into* the container with information about the life-cycle of the container.  They are different from the events provided by Docker and other systems which are *output* from the container.  Output events provide a log of what has already happened.  Input hooks provide real-time notification about things that are happening, but no historical log.
### Hook Details
There are currently two container hooks that are surfaced to containers:
*PostStart*
This hook is sent immediately after a container is created.  It notifies the container that it has been created.  No parameters are passed to the handler.
*PreStop*
This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](pods.md#termination-of-pods).
### Hook Handler Execution
When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).
### Hook delivery guarantees
Hook delivery is intended to be "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this
correctly.
We expect double delivery to be rare, but in some cases if the Kubelet restarts in the middle of sending a hook, the hook may be resent after the Kubelet comes back up.
Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend.
Currently, there are (hopefully rare) scenarios where PostStart hooks may not be delivered.
### Hook Handler Implementations
Hook handlers are the way that hooks are surfaced to containers. Containers can select the type of hook handler they would like to implement. Kubernetes currently supports two different hook handler types (a combined sketch of both follows the list below):
* Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroups and namespaces of the container.  Resources consumed by the command are counted against the container.
* HTTP - Executes an HTTP request against a specific endpoint on the container.
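As a minimal sketch (the container name, the command, and the `/shutdown` endpoint are hypothetical, not part of any real image), a container could register an exec `PostStart` handler and an HTTP `PreStop` handler like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo   # illustrative
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created.
          command: ["/bin/sh", "-c", "echo started > /tmp/post-start"]
      preStop:
        httpGet:
          # Called against the container immediately before it is terminated.
          path: /shutdown
          port: 80
```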
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
This file has moved to: http://kubernetes.github.io/docs/user-guide/container-environment/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,95 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Containers with Kubernetes
## Containers and commands
So far the Pods we've seen have all used the `image` field to indicate what process Kubernetes
should run in a container. In this case, Kubernetes runs the image's default command. If we want
to run a particular command or override the image's defaults, there are two additional fields that
we can use:
1. `Command`: Controls the actual command run by the image
2. `Args`: Controls the arguments passed to the command
### How docker handles command and arguments
Docker images have metadata associated with them that is used to store information about the image.
The image author may use this to define defaults for the command and arguments to run a container
when the user does not supply values. Docker calls the fields for commands and arguments
`Entrypoint` and `Cmd`, respectively. The full details of this feature are too complicated to
describe here, mostly because the Docker API allows users to specify both of these
fields as either a string array or a string, and there are subtle differences in how those cases are
handled. We encourage the curious to check out Docker's documentation for this feature.
Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args
(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are:
1. If you do not supply a `Command` or `Args` for a container, the defaults defined by the image
will be used
2. If you supply a `Command` but no `Args` for a container, only the supplied `Command` will be
used; the image's default arguments are ignored
3. If you supply only `Args`, the image's default command will be used with the arguments you
supply
4. If you supply a `Command` **and** `Args`, the image's defaults will be ignored and the values
you supply will be used
Here are examples of these rules in table format; a pod-spec sketch follows the table.
| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run |
|--------------------|------------------|---------------------|--------------------|------------------|
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | &lt;not set&gt; | `[ep-1 foo bar]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | &lt;not set&gt; | `[ep-2]` |
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | `[zoo boo]` | `[ep-1 zoo boo]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
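As a small illustration of rule 4 (the pod and container names are illustrative), the following pod supplies both `command` and `args`, so the image's own `Entrypoint` and `Cmd` are ignored:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: command-demo-container
    image: gcr.io/google_containers/busybox
    # Both fields supplied: the image defaults are ignored and "/bin/echo hello world" runs.
    command: ["/bin/echo"]
    args: ["hello", "world"]
```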
## Capabilities
By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine-grained control over capabilities using `cap-add` and `cap-drop`. More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration).
The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html) is shown in the following table; a pod-level sketch using `securityContext` follows the table.
| Docker's capabilities | Linux capabilities |
| ---- | ---- |
| SETPCAP | CAP_SETPCAP |
| SYS_MODULE | CAP_SYS_MODULE |
| SYS_RAWIO | CAP_SYS_RAWIO |
| SYS_PACCT | CAP_SYS_PACCT |
| SYS_ADMIN | CAP_SYS_ADMIN |
| SYS_NICE | CAP_SYS_NICE |
| SYS_RESOURCE | CAP_SYS_RESOURCE |
| SYS_TIME | CAP_SYS_TIME |
| SYS_TTY_CONFIG | CAP_SYS_TTY_CONFIG |
| MKNOD | CAP_MKNOD |
| AUDIT_WRITE | CAP_AUDIT_WRITE |
| AUDIT_CONTROL | CAP_AUDIT_CONTROL |
| MAC_OVERRIDE | CAP_MAC_OVERRIDE |
| MAC_ADMIN | CAP_MAC_ADMIN |
| NET_ADMIN | CAP_NET_ADMIN |
| SYSLOG | CAP_SYSLOG |
| CHOWN | CAP_CHOWN |
| NET_RAW | CAP_NET_RAW |
| DAC_OVERRIDE | CAP_DAC_OVERRIDE |
| FOWNER | CAP_FOWNER |
| DAC_READ_SEARCH | CAP_DAC_READ_SEARCH |
| FSETID | CAP_FSETID |
| KILL | CAP_KILL |
| SETGID | CAP_SETGID |
| SETUID | CAP_SETUID |
| LINUX_IMMUTABLE | CAP_LINUX_IMMUTABLE |
| NET_BIND_SERVICE | CAP_NET_BIND_SERVICE |
| NET_BROADCAST | CAP_NET_BROADCAST |
| IPC_LOCK | CAP_IPC_LOCK |
| IPC_OWNER | CAP_IPC_OWNER |
| SYS_CHROOT | CAP_SYS_CHROOT |
| SYS_PTRACE | CAP_SYS_PTRACE |
| SYS_BOOT | CAP_SYS_BOOT |
| LEASE | CAP_LEASE |
| SETFCAP | CAP_SETFCAP |
| WAKE_ALARM | CAP_WAKE_ALARM |
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |
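In Kubernetes, the corresponding knobs live on the container's `securityContext`. A minimal sketch (names and the particular capabilities chosen are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: capabilities-demo   # illustrative
spec:
  containers:
  - name: capabilities-demo-container
    image: gcr.io/google_containers/busybox
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        # Grant NET_ADMIN in addition to the default set, and drop MKNOD from it.
        add: ["NET_ADMIN"]
        drop: ["MKNOD"]
```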
This file has moved to: http://kubernetes.github.io/docs/user-guide/containers/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,550 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# My Service is not working - how to debug
An issue that comes up rather frequently for new installations of Kubernetes is
that `Services` are not working properly. You've run all your `Pod`s and
`ReplicationController`s, but you get no response when you try to access them.
This document will hopefully help you to figure out what's going wrong.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [My Service is not working - how to debug](#my-service-is-not-working---how-to-debug)
- [Conventions](#conventions)
- [Running commands in a Pod](#running-commands-in-a-pod)
- [Setup](#setup)
- [Does the Service exist?](#does-the-service-exist)
- [Does the Service work by DNS?](#does-the-service-work-by-dns)
- [Does any Service exist in DNS?](#does-any-service-exist-in-dns)
- [Does the Service work by IP?](#does-the-service-work-by-ip)
- [Is the Service correct?](#is-the-service-correct)
- [Does the Service have any Endpoints?](#does-the-service-have-any-endpoints)
- [Are the Pods working?](#are-the-pods-working)
- [Is the kube-proxy working?](#is-the-kube-proxy-working)
- [Is kube-proxy running?](#is-kube-proxy-running)
- [Is kube-proxy writing iptables rules?](#is-kube-proxy-writing-iptables-rules)
- [Userspace](#userspace)
- [Iptables](#iptables)
- [Is kube-proxy proxying?](#is-kube-proxy-proxying)
- [Seek help](#seek-help)
- [More information](#more-information)
<!-- END MUNGE: GENERATED_TOC -->
## Conventions
Throughout this doc you will see various commands that you can run. Some
commands need to be run within a `Pod`, others on a Kubernetes `Node`, and others
can run anywhere you have `kubectl` and credentials for the cluster. To make it
clear what is expected, this document will use the following conventions.
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
```console
u@pod$ COMMAND
OUTPUT
```
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
```console
u@node$ COMMAND
OUTPUT
```
If the command is "kubectl ARGS":
```console
$ kubectl ARGS
OUTPUT
```
## Running commands in a Pod
For many steps here you will want to see what a `Pod` running in the cluster
sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can
approximate it:
```console
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox-sleep
spec:
containers:
- name: busybox
image: busybox
args:
- sleep
- "1000000"
EOF
pods/busybox-sleep
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
```console
$ kubectl exec busybox-sleep -- <COMMAND>
```
or
```console
$ kubectl exec -ti busybox-sleep sh
/ #
```
## Setup
For the purposes of this walk-through, let's run some `Pod`s. Since you're
probably debugging your own `Service` you can substitute your own details, or you
can follow along and get a second data point.
```console
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
```
Note that this is the same as if you had started the `ReplicationController` with
the following YAML:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: hostnames
spec:
selector:
app: hostnames
replicas: 3
template:
metadata:
labels:
app: hostnames
spec:
containers:
- name: hostnames
image: gcr.io/google_containers/serve_hostname
ports:
- containerPort: 9376
protocol: TCP
```
Confirm your `Pod`s are running:
```console
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 12s
hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
```
## Does the Service exist?
The astute reader will have noticed that we did not actually create a `Service`
yet - that is intentional. This is a step that sometimes gets forgotten, and
is the first thing to check.
So what would happen if I tried to access a non-existent `Service`? Assuming you
have another `Pod` that consumes this `Service` by name you would get something
like:
```console
u@pod$ wget -qO- hostnames
wget: bad address 'hostnames'
```
or:
```console
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```
So the first thing to check is whether that `Service` actually exists:
```console
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
```
So we have a culprit, let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
```console
$ kubectl expose rc hostnames --port=80 --target-port=9376
service "hostnames" exposed
```
And read it back, just to be sure:
```console
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames   10.0.1.175   <none>        80/TCP    app=hostnames   1h
```
As before, this is the same as if you had started the `Service` with YAML:
```yaml
apiVersion: v1
kind: Service
metadata:
name: hostnames
spec:
selector:
app: hostnames
ports:
- name: default
protocol: TCP
port: 80
targetPort: 9376
```
Now you can confirm that the `Service` exists.
## Does the Service work by DNS?
From a `Pod` in the same `Namespace`:
```console
u@pod$ nslookup hostnames
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames
Address: 10.0.1.175
```
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:
```console
u@pod$ nslookup hostnames.default
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default
Address: 10.0.1.175
```
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:
```console
u@pod$ nslookup hostnames.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
The "cluster.local" is your cluster domain.
You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):
```console
u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `kubelet` is running with the right flags.
The `--cluster-dns` flag needs to point to your DNS `Service`'s IP and the
`--cluster-domain` flag needs to be your cluster's domain - we assumed
"cluster.local" in this document, but yours might be different, in which case
you should change that in all of the commands above.
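One quick way to check those flags is to look at the running kubelet's command line on a node; the exact output depends on how your cluster was installed, so treat this as illustrative:

```console
u@node$ ps auxw | grep kubelet
... /usr/local/bin/kubelet --cluster-dns=10.0.0.10 --cluster-domain=cluster.local ...
```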
### Does any Service exist in DNS?
If the above still fails - DNS lookups are not working for your `Service` - we
can take a step back and see what else is not working. The Kubernetes master
`Service` should always work:
```console
u@pod$ nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
```
If this fails, you might need to go to the kube-proxy section of this doc, or
even go back to the top of this document and start over, but instead of
debugging your own `Service`, debug DNS.
## Does the Service work by IP?
The next thing to test is whether your `Service` works at all. From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).
```console
u@node$ curl 10.0.1.175:80
hostnames-0uton
u@node$ curl 10.0.1.175:80
hostnames-yp2kp
u@node$ curl 10.0.1.175:80
hostnames-bvc05
```
If your `Service` is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
## Is the Service correct?
It might sound silly, but you should really double and triple check that your
`Service` is correct and matches your `Pods`. Read back your `Service` and
verify it:
```console
$ kubectl get service hostnames -o json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "hostnames",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/hostnames",
"uid": "428c8b6c-24bc-11e5-936d-42010af0a9bc",
"resourceVersion": "347189",
"creationTimestamp": "2015-07-07T15:24:29Z",
"labels": {
"app": "hostnames"
}
},
"spec": {
"ports": [
{
"name": "default",
"protocol": "TCP",
"port": 80,
"targetPort": 9376,
"nodePort": 0
}
],
"selector": {
"app": "hostnames"
},
"clusterIP": "10.0.1.175",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
```
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
expose a port with the same name? Is the port's `protocol` the same as the
`Pod`'s?
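For reference, this is roughly what a named `targetPort` looks like - the name
on the `Pod`'s container port must match the string in the `Service`'s
`targetPort` (the pod name and image below are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnames-example       # placeholder
  labels:
    app: hostnames
spec:
  containers:
  - name: hostnames
    image: your-image           # placeholder - use your own image
    ports:
    - name: http-server         # this name ...
      containerPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: http-server     # ... must match this string
```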
## Does the Service have any Endpoints?
If you got this far, we assume that you have confirmed that your `Service`
exists and resolves by DNS. Now let's check that the `Pod`s you ran are
actually being selected by the `Service`.
Earlier we saw that the `Pod`s were running. We can re-check that:
```console
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```
The "AGE" column says that these `Pod`s are about an hour old, which implies that
they are running fine and not crashing.
The `-l app=hostnames` argument is a label selector - just like our `Service`
has. Inside the Kubernetes system is a control loop which evaluates the
selector of every `Service` and saves the results into an `Endpoints` object.
```console
$ kubectl get endpoints hostnames
NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```
This confirms that the control loop has found the correct `Pod`s for your
`Service`. If the `hostnames` row is blank, you should check that the
`spec.selector` field of your `Service` actually selects for `metadata.labels`
values on your `Pod`s.
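One way to compare the two side by side is the Go template output format used
elsewhere in this guide (the pod name is from the listing above, and the output
is illustrative - the `Pod` may carry extra labels, but every key/value in the
`Service`'s selector must appear among them):
```console
$ kubectl get svc hostnames -o template --template="{{.spec.selector}}"
map[app:hostnames]
$ kubectl get pod hostnames-0uton -o template --template="{{.metadata.labels}}"
map[app:hostnames]
```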
## Are the Pods working?
At this point, we know that your `Service` exists and has selected your `Pod`s.
Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.
```console
u@pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton
u@pod$ wget -qO- 10.244.0.6:9376
hostnames-bvc05
u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
```
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
`Pod`s), you should investigate what's happening there. You might find
`kubectl logs` useful, or you can `kubectl exec` directly into your `Pod`s and
check the service from there.
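For example (the pod name is from the walk-through above; adapt to your own
`Pod`s):
```console
# Check recent output from one of the endpoint Pods
$ kubectl logs hostnames-0uton
# Or get a shell inside it and probe the server locally
# (assumes the image ships a shell and wget, which a minimal image may not)
$ kubectl exec -ti hostnames-0uton -- sh
/ # wget -qO- localhost:9376
```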
## Is the kube-proxy working?
If you get here, your `Service` is running, has `Endpoints`, and your `Pod`s
are actually serving. At this point, the whole `Service` proxy mechanism is
suspect. Let's confirm it, piece by piece.
### Is kube-proxy running?
Confirm that `kube-proxy` is running on your `Node`s. You should get something
like the below:
```console
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
Next, confirm that it is not failing in some obvious way, like being unable to
contact the master. To do this, you'll have to look at the logs. Accessing the logs
depends on your `Node` OS. On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
should see something like:
```console
I1027 22:14:53.995134 5063 server.go:200] Running in resource-only container "/kube-proxy"
I1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.
I1027 22:14:53.999055 5063 server.go:255] Tearing down userspace rules. Errors here are acceptable.
I1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.1.3:53]
I1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.1.3:53]
I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for "default/kubernetes:https" to [10.240.0.2:443]
I1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master
I1027 22:14:54.040048 5063 proxier.go:294] Adding new service "default/kubernetes:https" at 10.0.0.1:443/TCP
I1027 22:14:54.040154 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns" at 10.0.0.10:53/UDP
I1027 22:14:54.040223 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
```
If you see error messages about not being able to contact the master, you
should double-check your `Node` configuration and installation steps.
### Is kube-proxy writing iptables rules?
One of the main responsibilities of `kube-proxy` is to write the `iptables`
rules which implement `Service`s. Let's check that those rules are getting
written.
The kube-proxy can run in either "userspace" mode or "iptables" mode.
Hopefully you are using the newer, faster, more stable "iptables" mode. You
should see one of the following cases.
#### Userspace
```console
u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
not see these, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
#### Iptables
```console
u@node$ iptables-save | grep hostnames
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
```
There should be 1 rule in `KUBE-SERVICES`, 1 or 2 rules per endpoint in
`KUBE-SVC-(hash)` (depending on `SessionAffinity`), one `KUBE-SEP-(hash)` chain
per endpoint, and a few rules in each `KUBE-SEP-(hash)` chain. The exact rules
will vary based on your exact config (including node-ports and load-balancers).
### Is kube-proxy proxying?
Assuming you do see the above rules, try again to access your `Service` by IP:
```console
u@node$ curl 10.0.1.175:80
hostnames-0uton
```
If this fails and you are using the userspace proxy, you can try accessing the
proxy directly. If you are using the iptables proxy, skip this section.
Look back at the `iptables-save` output above, and extract the
port number that `kube-proxy` is using for your `Service`. In the above
examples it is "48577". Now connect to that:
```console
u@node$ curl localhost:48577
hostnames-yp2kp
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
```console
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
## Seek help
If you get this far, something very strange is happening. Your `Service` is
running, has `Endpoints`, and your `Pod`s are actually serving. You have DNS
working, `iptables` rules installed, and `kube-proxy` does not seem to be
misbehaving. And yet your `Service` is not working. You should probably let
us know, so we can help investigate!
Contact us on
[Slack](../troubleshooting.md#slack) or
[email](https://groups.google.com/forum/#!forum/google-containers) or
[GitHub](https://github.com/kubernetes/kubernetes).
## More information
Visit [troubleshooting document](../troubleshooting.md) for more information.
This file has moved to: http://kubernetes.github.io/docs/user-guide/debugging-services/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,128 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Deploying continuously running applications
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Deploying continuously running applications](#kubernetes-user-guide-managing-applications-deploying-continuously-running-applications)
- [Launching a set of replicas using a configuration file](#launching-a-set-of-replicas-using-a-configuration-file)
- [Viewing replication controller status](#viewing-replication-controller-status)
- [Deleting replication controllers](#deleting-replication-controllers)
- [Labels](#labels)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start.md) and how to configure and launch single-run containers using pods ([Configuring containers](configuring-containers.md)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods.md)) using [*Replication Controllers*](replication-controller.md).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
See the [replication controller API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_replicationcontroller)
for the list of supported fields.
This replication controller can be created using `create`, just as with pods:
```console
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
## Viewing replication controller status
You can view the replication controller you created using `get`:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
```
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start.md):
```console
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
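For example, to delete just the controller and leave its pods running (a sketch
using the controller from above):
```console
$ kubectl delete rc my-nginx --cascade=false
replicationcontrollers/my-nginx
```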
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```console
$ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
```console
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
```
More importantly, the pod template's labels are used to create a [`selector`](labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get.md):
```console
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector.
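For illustration, here is a sketch of such a replication controller with an explicit `selector`; the `deployment: v2` value is a hypothetical unique label for this controller, and `track: canary` is an extra template label that is deliberately not selected on:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v2
spec:
  replicas: 2
  selector:
    app: nginx
    deployment: v2            # unique to this controller
  template:
    metadata:
      labels:
        app: nginx
        deployment: v2        # must include every key/value in the selector
        track: canary         # extra label, not used for selection
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```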
## What's next?
[Learn about exposing applications to users and clients, and connecting tiers of your application together.](connecting-applications.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/deploying-applications/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,357 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Deployments
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Deployments](#deployments)
- [What is a _Deployment_?](#what-is-a-deployment)
- [Creating a Deployment](#creating-a-deployment)
- [Updating a Deployment](#updating-a-deployment)
- [Multiple Updates](#multiple-updates)
- [Writing a Deployment Spec](#writing-a-deployment-spec)
- [Pod Template](#pod-template)
- [Replicas](#replicas)
- [Selector](#selector)
- [Strategy](#strategy)
- [Recreate Deployment](#recreate-deployment)
- [Rolling Update Deployment](#rolling-update-deployment)
- [Max Unavailable](#max-unavailable)
- [Max Surge](#max-surge)
- [Min Ready Seconds](#min-ready-seconds)
- [Alternative to Deployments](#alternative-to-deployments)
- [kubectl rolling update](#kubectl-rolling-update)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _Deployment_?
A _Deployment_ provides declarative updates for Pods and ReplicationControllers.
Users describe the desired state in a Deployment object, and the deployment
controller changes the actual state to the desired state at a controlled rate.
Users can define Deployments to create new resources, or replace existing ones
with new ones.
A typical use case is:
* Create a Deployment to bring up a replication controller and pods.
* Later, update that Deployment to recreate the pods (for example, to use a new image).
## Creating a Deployment
Here is an example Deployment. It creates a replication controller to
bring up 3 nginx pods.
<!-- BEGIN MUNGE: EXAMPLE nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
```
[Download example](nginx-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
Run the example by downloading the example file and then running this command:
```console
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
deployment "nginx-deployment" created
```
Running `kubectl get deployments` immediately will give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
```
This indicates that the Deployment is trying to update 3 replicas, and has not updated any of them yet.
Running the `get` again after a minute, should give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
```
This indicates that the Deployment has created all three replicas.
Running ```kubectl get rc``` and ```kubectl get pods``` will show the replication controller (RC) and pods created.
```console
$ kubectl get rc
CONTROLLER                CONTAINER(S)   IMAGE(S)      SELECTOR                                  REPLICAS   AGE
deploymentrc-1975012602   nginx          nginx:1.7.9   pod-template-hash=1975012602,app=nginx   3          2m
```
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
```
The created RC will ensure that there are three nginx pods at all times.
## Updating a Deployment
Suppose that we now want to update the nginx pods to start using the `nginx:1.9.1` image
instead of the `nginx:1.7.9` image.
For this, we update our deployment file as follows:
<!-- BEGIN MUNGE: EXAMPLE new-nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.9.1
ports:
- containerPort: 80
```
[Download example](new-nginx-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
We can then `apply` the Deployment:
```console
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
Running a `get` immediately will still give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
```
This indicates that deployment status has not been updated yet (it is still
showing old status).
Running a `get` again after a minute, should show:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
```
This indicates that the Deployment has updated one of the three pods that it needs
to update.
Eventually, it will update all the pods.
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
```
We can run ```kubectl get rc``` to see that the Deployment updated the pods by creating a new RC,
which it scaled up to 3 replicas, and has scaled down the old RC to 0 replicas.
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 pod-template-hash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 0 7m
```
Running `get pods` should now show only the new pods:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
```
Next time we want to update these pods, we can just update and re-apply the Deployment again.
Deployment ensures that not all pods are down while they are being updated. By
default, it ensures that at least one less than the desired number of pods is
up (that is, at most 1 pod is unavailable). For example, if you look at the above deployment closely, you will see that
it first created a new pod, then deleted some old pods and created new ones. It
does not kill old pods until a sufficient number of new pods have come up.
```console
$ kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 22 Oct 2015 17:58:49 -0700
Labels: app=nginx-deployment
Selector: app=nginx
Replicas: 3 updated / 3 total
StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge, 0 min ready seconds
OldReplicationControllers: deploymentrc-1562004724 (3/3 replicas created)
NewReplicationController: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
10m 10m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1975012602 to 3
2m 2m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 1
2m 2m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 1
1m 1m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 3
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
```
Here we see that when we first created the Deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the Deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.
### Multiple Updates
Each time a new deployment object is observed, a replication controller is
created to bring up the desired pods if there is no existing RC doing so.
Existing RCs controlling pods whose labels match `.spec.selector` but whose
template does not match `.spec.template` are scaled down.
Eventually, the new RC will be scaled to `.spec.replicas` and all old RCs will
be scaled to 0.
If the user updates a Deployment while an existing rollout is in progress,
the Deployment will create a new RC as per the update and start scaling that up, and
will roll over the RC that it was scaling up previously -- it will add it to its list of old RCs and will
start scaling it down.
For example, suppose the user creates a deployment to create 5 replicas of `nginx:1.7.9`,
but then updates the deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, deployment will immediately start
killing the 3 `nginx:1.7.9` pods that it had created, and will start creating
`nginx:1.9.1` pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created
before changing course.
## Writing a Deployment Spec
As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and
`metadata` fields. For general information about working with config files,
see [here](deploying-applications.md), [here](configuring-containers.md), and [here](working-with-resources.md).
A Deployment also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except that it is nested and does not have an
`apiVersion` or `kind`.
### Replicas
`.spec.replicas` is an optional field that specifies the number of desired pods. It defaults
to 1.
### Selector
`.spec.selector` is an optional field that specifies label selectors for pods
targeted by this deployment. The Deployment kills some of these pods if their
template is different from `.spec.template` or if the total number of such pods
exceeds `.spec.replicas`. It will bring up new pods with `.spec.template` if the
number of pods is less than the desired number.
### Strategy
`.spec.strategy` specifies the strategy used to replace old pods by new ones.
`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.
#### Recreate Deployment
All existing pods are killed before new ones are created when
`.spec.strategy.type==Recreate`.
__Note: This is not implemented yet__.
#### Rolling Update Deployment
The Deployment updates pods in a [rolling update](update-demo/) fashion
when `.spec.strategy.type==RollingUpdate`.
Users can specify `maxUnavailable`, `maxSurge` and `minReadySeconds` to control
the rolling update process.
##### Max Unavailable
`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the
maximum number of pods that can be unavailable during the update process.
The value can be an absolute number (e.g. 5) or a percentage of desired pods
(e.g. 10%).
The absolute number is calculated from percentage by rounding up.
This can not be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0.
By default, a fixed value of 1 is used.
For example, when this value is set to 30%, the old RC can be scaled down to
70% of desired pods immediately when the rolling update starts. Once new pods are
ready, old RC can be scaled down further, followed by scaling up the new RC,
ensuring that the total number of pods available at all times during the
update is at least 70% of the desired pods.
##### Max Surge
`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the
maximum number of pods that can be created above the desired number of pods.
Value can be an absolute number (e.g. 5) or a percentage of desired pods
(e.g. 10%).
This can not be 0 if `MaxUnavailable` is 0.
The absolute number is calculated from percentage by rounding up.
By default, a value of 1 is used.
For example, when this value is set to 30%, the new RC can be scaled up immediately when
the rolling update starts, such that the total number of old and new pods does not exceed
130% of desired pods. Once old pods have been killed,
the new RC can be scaled up further, ensuring that the total number of pods running
at any time during the update is at most 130% of desired pods.
##### Min Ready Seconds
`.spec.minReadySeconds` is an optional field that specifies the
minimum number of seconds for which a newly created pod should be ready
without any of its containers crashing, for it to be considered available.
This defaults to 0 (the pod will be considered available as soon as it is ready).
To learn more about when a pod is considered ready, see [Container Probes](pod-states.md#container-probes).
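Putting the three fields above together, here is a sketch of where they sit in
the Deployment used earlier (the values are illustrative, not recommendations):
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  minReadySeconds: 5          # illustrative
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # absolute number or a percentage such as "25%"
      maxSurge: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
```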
## Alternative to Deployments
### kubectl rolling update
[Kubectl rolling update](kubectl/kubectl_rolling-update.md) also updates pods and replication controllers in a similar fashion.
But Deployments are declarative and server-side.
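For comparison, a client-side rolling update of the `my-nginx` replication controller from earlier in this guide might look like the following (shown without output):
```console
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
```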
This file has moved to: http://kubernetes.github.io/docs/user-guide/deployments/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,313 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# kubectl for docker users
In this doc, we introduce kubectl, the Kubernetes command line tool for interacting with the API, to docker-cli users. The tool is designed to be familiar to docker-cli users, but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains the kubectl equivalent.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [kubectl for docker users](#kubectl-for-docker-users)
- [docker run](#docker-run)
- [docker ps](#docker-ps)
- [docker attach](#docker-attach)
- [docker exec](#docker-exec)
- [docker logs](#docker-logs)
- [docker stop and docker rm](#docker-stop-and-docker-rm)
- [docker login](#docker-login)
- [docker version](#docker-version)
- [docker info](#docker-info)
<!-- END MUNGE: GENERATED_TOC -->
#### docker run
How do I run an nginx container and expose it to the world? Check out [kubectl run](kubectl/kubectl_run.md).
With docker:
```console
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```console
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
replicationcontroller "nginx-app" created
# expose a port through a service
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```
With kubectl, we create a [replication controller](replication-controller.md) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services.md) with a selector that matches the replication controller's selector. See the [Quick start](quick-start.md) for more information.
By default images are run in the background, similar to `docker run -d ...`. If you want to run things in the foreground, use:
```console
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
Unlike `docker run ...`, if `--attach` is specified, we attach `stdin`, `stdout` and `stderr`; there is no way to control which streams are attached (as with `docker -a ...`).
Because we start a replication controller for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`); this is different from `docker run -it`.
To destroy the replication controller (and its pods) you need to run `kubectl delete rc <name>`.
#### docker ps
How do I list what is currently running? Check out [kubectl get](kubectl/kubectl_get.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```console
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
```
#### docker attach
How do I attach to a process that is already running in a container? Check out [kubectl attach](kubectl/kubectl_attach.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker attach a9ec34d98787
...
```
With kubectl:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl attach -it nginx-app-5jyvm
...
```
#### docker exec
How do I execute a command in a container? Check out [kubectl exec](kubectl/kubectl_exec.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
```
With kubectl:
```console
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
What about interactive commands?
With docker:
```console
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
With kubectl:
```console
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
For more information see [Getting into containers](getting-into-containers.md).
#### docker logs
How do I follow stdout/stderr of a running process? Check out [kubectl logs](kubectl/kubectl_logs.md).
With docker:
```console
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
With kubectl:
```console
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead they will restart the process. This is similar to the docker run option `--restart=always`, with one major difference: in docker, the output for each invocation of the process is concatenated, but in Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```console
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
See [Logging](logging.md) for more information.
#### docker stop and docker rm
How do I stop and delete a running process? Check out [kubectl delete](kubectl/kubectl_delete.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker stop a9ec34d98787
a9ec34d98787
$ docker rm a9ec34d98787
a9ec34d98787
```
With kubectl:
```console
$ kubectl get rc nginx-app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx-app nginx-app nginx run=nginx-app 1
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-aualv 1/1 Running 0 16s
$ kubectl delete rc nginx-app
replicationcontrollers/nginx-app
$ kubectl get po
NAME READY STATUS RESTARTS AGE
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
#### docker login
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images.md#using-a-private-registry).
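Instead, registry credentials are typically stored in a secret and referenced from the pod spec via `imagePullSecrets`. A minimal sketch, with hypothetical names throughout:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test                    # hypothetical
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0    # an image in your private registry
  imagePullSecrets:
  - name: my-registry-key                     # a pre-created image pull secret
```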
#### docker version
How do I get the version of my client and server? Check out [kubectl version](kubectl/kubectl_version.md).
With docker:
```console
$ docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
With kubectl:
```console
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32699e873ae1ca-dirty", GitCommit:"32699e873ae1caa01812e41de7eab28df4358ee4", GitTreeState:"dirty"}
```
#### docker info
How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](kubectl/kubectl_cluster-info.md).
With docker:
```console
$ docker info
Containers: 40
Images: 168
Storage Driver: aufs
Root Dir: /usr/local/google/docker/aufs
Backing Filesystem: extfs
Dirs: 248
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 12
Total Memory: 31.32 GiB
Name: k8s-is-fun.mtv.corp.google.com
ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
With kubectl:
```console
$ kubectl cluster-info
Kubernetes master is running at https://108.59.85.141
KubeDNS is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
This file has moved to: http://kubernetes.github.io/docs/user-guide/docker-cli-to-kubectl/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,159 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API
It is sometimes useful for a container to have information about itself, but we
want to be careful not to over-couple containers to Kubernetes. The downward
API allows containers to consume information about themselves or the system and
expose that information how they want it, without necessarily coupling to the
Kubernetes client or REST API.
An example of this is a "legacy" app that is already written assuming
that a particular environment variable will hold a unique identifier. While it
is often possible to "wrap" such applications, this is tedious and error prone,
and violates the goal of low coupling. Instead, the user should be able to use
the Pod's name, for example, and inject it into this well-known variable.
## Capabilities
The following information is available to a `Pod` through the downward API:
* The pod's name
* The pod's namespace
* The pod's IP
More information will be exposed through this same API over time.
## Exposing pod information into a container
Containers consume information from the downward API using environment
variables or using a volume plugin.
### Environment variables
Most environment variables in the Kubernetes API use the `value` field to carry
simple values. However, the alternate `valueFrom` field allows you to specify
a `fieldRef` to select fields from the pod's definition. The `fieldRef` field
is a structure that has an `apiVersion` field and a `fieldPath` field. The
`fieldPath` field is an expression designating a field of the pod. The
`apiVersion` field is the version of the API schema that the `fieldPath` is
written in terms of. If the `apiVersion` field is not specified it is
defaulted to the API version of the enclosing object.
The `fieldRef` is evaluated and the resulting value is used as the value for
the environment variable. This allows users to publish their pod's name in any
environment variable they want.
## Example
This is an example of a pod that consumes its name and namespace via the
downward API:
<!-- BEGIN MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
restartPolicy: Never
```
[Download example](downward-api/dapi-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
### Downward API volume
Using a similar syntax, it's possible to expose pod information to containers as plain text files.
The downward API data is dumped to a mounted volume. This is achieved using a `downwardAPI`
volume type, and the different items represent the files to be created. `fieldPath` references the field to be exposed.
The downward API volume permits storing more complex data like [`metadata.labels`](labels.md) and [`metadata.annotations`](annotations.md). Currently, key/value pair set fields are saved using the `key="value"` format:
```
key1="value1"
key2="value2"
```
In the future, it will be possible to specify an output format option.
Downward API volumes can expose:
* The pod's name
* The pod's namespace
* The pod's labels
* The pod's annotations
The downward API volume refreshes its data in step with the kubelet refresh loop. Once labels become modifiable on the fly without respawning the pod, containers will be able to detect changes through mechanisms such as [inotify](https://en.wikipedia.org/wiki/Inotify).
In the future, it will be possible to specify a specific annotation or label.
## Example
This is an example of a pod that consumes its labels and annotations via the downward API volume; labels and annotations are dumped into `/etc/labels` and `/etc/annotations`, respectively:
<!-- BEGIN MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
name: kubernetes-downwardapi-volume-example
labels:
zone: us-est-coast
cluster: test-cluster1
rack: rack-22
annotations:
build: two
builder: john-doe
spec:
containers:
- name: client-container
image: gcr.io/google_containers/busybox
command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
volumeMounts:
- name: podinfo
mountPath: /etc
readOnly: false
volumes:
- name: podinfo
downwardAPI:
items:
- path: "labels"
fieldRef:
fieldPath: metadata.labels
- path: "annotations"
fieldRef:
fieldPath: metadata.annotations
```
[Download example](downward-api/volume/dapi-volume.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
Some more thorough examples:
* [environment variables](environment-guide/)
* [downward API](downward-api/)
This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,40 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API example
Following this example, you will create a pod with a container that consumes the pod's name and
namespace using the [downward API](../downward-api.md).
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the pod
Containers consume the downward API using environment variables. The downward API allows
containers to be injected with the name and namespace of the pod the container is in.
Use the [`docs/user-guide/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
```console
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
```console
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
```
This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,73 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API volume plugin
Following this example, you will create a pod with a downward API volume.
A downward API volume is a k8s volume plugin with the ability to save some pod information in a plain text file. The pod information can be for example some [metadata](../../../../docs/devel/api-conventions.md#metadata).
Supported metadata fields:
1. `metadata.annotations`
2. `metadata.namespace`
3. `metadata.name`
4. `metadata.labels`
### Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](../../../../docs/getting-started-guides/) for installation instructions for your platform.
### Step One: Create the pod
Use the `docs/user-guide/downward-api/volume/dapi-volume.yaml` file to create a Pod with a downward API volume which stores pod labels and pod annotations in `/etc/labels` and `/etc/annotations` respectively.
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```
### Step Two: Examine pod/container output
The pod prints (every 5 seconds) the content of the dump files, which can be viewed via the usual `kubectl logs` command
```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
build="two"
builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```
### Internals
In the pod's `/etc` directory one may find the files created by the plugin (system files elided):
```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
/etc:
total 32
drwxrwxrwt 3 0 0 180 Aug 24 13:03 .
drwxr-xr-x 1 0 0 4096 Aug 24 13:05 ..
drwx------ 2 0 0 80 Aug 24 13:03 ..2015_08_24_13_03_44259413923
lrwxrwxrwx 1 0 0 30 Aug 24 13:03 ..downwardapi -> ..2015_08_24_13_03_44259413923
lrwxrwxrwx 1 0 0 25 Aug 24 13:03 annotations -> ..downwardapi/annotations
lrwxrwxrwx 1 0 0 20 Aug 24 13:03 labels -> ..downwardapi/labels
/etc/..2015_08_24_13_03_44259413923:
total 8
drwx------ 2 0 0 80 Aug 24 13:03 .
drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
-rw-r--r-- 1 0 0 115 Aug 24 13:03 annotations
-rw-r--r-- 1 0 0 53 Aug 24 13:03 labels
/ #
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection. This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.
This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/volume/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -31,95 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Environment Guide Example
=========================
This example demonstrates running pods, replication controllers, and
services. It shows two types of pods: frontend and backend, with
services on top of both. Accessing the frontend pod will return
environment information about itself, and a backend pod that it has
accessed through the service. The goal is to illuminate the
environment metadata available to running containers inside the
Kubernetes cluster. The documentation for the Kubernetes environment
is [here](../../../docs/user-guide/container-environment.md).
![Diagram](diagram.png)
Prerequisites
-------------
This example assumes that you have a Kubernetes cluster installed and
running, and that you have installed the `kubectl` command line tool
somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions
for your platform.
Optional: Build your own containers
-----------------------------------
The code for the containers is under
[containers/](containers/)
Get everything running
----------------------
kubectl create -f ./backend-rc.yaml
kubectl create -f ./backend-srv.yaml
kubectl create -f ./show-rc.yaml
kubectl create -f ./show-srv.yaml
Query the service
-----------------
Use `kubectl describe service show-srv` to determine the public IP of
your service.
> Note: If your platform does not support external load balancers,
you'll need to open the proper port and direct traffic to the
internal IP shown for the frontend service with the above command
Run `curl <public ip>:80` to query the service. You should get
something like this back:
```
Pod Name: show-rc-xxu6i
Pod Namespace: default
USER_VAR: important information
Kubernetes environment variables
BACKEND_SRV_SERVICE_HOST = 10.147.252.185
BACKEND_SRV_SERVICE_PORT = 5000
KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
KUBERNETES_RO_SERVICE_PORT = 80
KUBERNETES_SERVICE_HOST = 10.147.240.2
KUBERNETES_SERVICE_PORT = 443
KUBE_DNS_SERVICE_HOST = 10.147.240.10
KUBE_DNS_SERVICE_PORT = 53
Found backend ip: 10.147.252.185 port: 5000
Response from backend
Backend Container
Backend Pod Name: backend-rc-6qiya
Backend Namespace: default
```
First the frontend pod's information is printed. The pod name and
[namespace](../../../docs/design/namespaces.md) are retrieved from the
[Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is the name of
an environment variable set in the [pod
definition](show-rc.yaml). Then, the dynamic Kubernetes environment
variables are scanned and printed. These are used to find the backend
service, named `backend-srv`. Finally, the frontend pod queries the
backend service and prints the information returned. Again the backend
pod returns its own pod name and namespace.
Try running the `curl` command a few times and notice what
changes, for example with `watch -n 1 curl -s <ip>`. First, the frontend service
directs your request to a different frontend pod each time. The
frontend pods always contact the backend through the backend
service, so a different backend pod services each
request as well.
Cleanup
-------
kubectl delete rc,service -l type=show-type
kubectl delete rc,service -l type=backend-type
This file has moved to: http://kubernetes.github.io/docs/user-guide/environment-guide/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -31,26 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Building
--------
For each container, the build steps are the same. The examples below
are for the `show` container. Replace `show` with `backend` for the
backend container.
Google Container Registry ([GCR](https://cloud.google.com/tools/container-registry/))
---
docker build -t gcr.io/<project-name>/show .
gcloud docker push gcr.io/<project-name>/show
Docker Hub
----------
docker build -t <username>/show .
docker push <username>/show
Change Pod Definitions
----------------------
Edit both `show-rc.yaml` and `backend-rc.yaml` and replace the
specified `image:` with the one that you built.
This file has moved to: http://kubernetes.github.io/docs/user-guide/environment-guide/containers/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,77 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Getting into containers: kubectl exec
Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases.
## Using kubectl exec to check the environment variables of a container
Kubernetes exposes [services](services.md#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service,
```console
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```
wait until the pod is Running and Ready,
```console
$ kubectl get pod
NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
```
then we can check the environment variables of the pod,
```console
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```
We can use these environment variables in applications to find the service.
## Using kubectl exec to check the mounted volumes
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis,
```console
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```
wait until the pod is Running and Ready,
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m
```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
```console
$ kubectl exec storage ls /data
redis
```
## Using kubectl exec to open a bash terminal in a pod
After all, opening a terminal in a pod is the most direct way to introspect it. Assuming the pod `storage` is still running, run
```console
$ kubectl exec -ti storage -- bash
root@storage:/data#
```
This gets you a terminal.
This file has moved to: http://kubernetes.github.io/docs/user-guide/getting-into-containers/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,98 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Horizontal Pod Autoscaler
This document describes the current state of Horizontal Pod Autoscaler in Kubernetes.
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Horizontal Pod Autoscaler](#horizontal-pod-autoscaler)
- [What is Horizontal Pod Autoscaler?](#what-is-horizontal-pod-autoscaler)
- [How does Horizontal Pod Autoscaler work?](#how-does-horizontal-pod-autoscaler-work)
- [API Object](#api-object)
- [Support for horizontal pod autoscaler in kubectl](#support-for-horizontal-pod-autoscaler-in-kubectl)
- [Autoscaling during rolling update](#autoscaling-during-rolling-update)
- [Further reading](#further-reading)
<!-- END MUNGE: GENERATED_TOC -->
## What is Horizontal Pod Autoscaler?
Horizontal pod autoscaling allows the number of pods in a replication controller or deployment
to scale automatically based on observed CPU utilization.
It is a [beta](../api.md#api-versioning) feature in Kubernetes 1.1.
The autoscaler is implemented as a Kubernetes API resource and a controller.
The resource describes behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by the user.
## How does Horizontal Pod Autoscaler work?
![Horizontal Pod Autoscaler diagram](horizontal-pod-autoscaler.png)
The autoscaler is implemented as a control loop.
It periodically queries CPU utilization for the pods it targets.
(The period of the autoscaler is controlled by `--horizontal-pod-autoscaler-sync-period` flag of controller manager.
The default value is 30 seconds).
Then, it compares the arithmetic mean of the pods' CPU utilization with the target and adjusts the number of replicas if needed.
CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers.
Please note that if some of the pod's containers do not have a CPU request set,
the CPU utilization for the pod will not be defined, and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](../design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
The autoscaler uses Heapster to collect CPU utilization.
Therefore, Heapster monitoring must be deployed in the cluster for autoscaling to work.
The autoscaler accesses the corresponding replication controller or deployment through the scale sub-resource.
Scale is an interface which allows you to dynamically set the number of replicas and to learn their current state.
More details on the scale sub-resource can be found [here](../design/horizontal-pod-autoscaler.md#scale-subresource).
## API Object
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API (currently in [beta](../api.md#api-versioning)).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](../design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for horizontal pod autoscaler in kubectl
Horizontal pod autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers with `kubectl get hpa` and get a detailed description with `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command that allows for easy creation of horizontal pod autoscaler.
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](kubectl/kubectl_autoscale.md).
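For example, a minimal `kubectl` session for the *foo* replication controller mentioned above might look like this (a sketch; output is omitted and will vary by cluster):

```console
$ kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80
$ kubectl get hpa foo
$ kubectl describe hpa foo
$ kubectl delete hpa foo
```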
## Autoscaling during rolling update
Currently in Kubernetes, it is possible to perform a rolling update by managing replication controllers directly,
or by using the deployment object, which manages the underlying replication controllers for you.
Horizontal pod autoscaler only supports the latter approach: the horizontal pod autoscaler is bound to the deployment object,
it sets the size of the deployment object, and the deployment is responsible for setting the sizes of its underlying replication controllers.
Horizontal pod autoscaler does not work with rolling updates that directly manipulate replication controllers,
i.e. you cannot bind a horizontal pod autoscaler to a replication controller and do a rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when a rolling update creates a new replication controller,
the horizontal pod autoscaler is not bound to the new replication controller.
## Further reading
* Design documentation: [Horizontal Pod Autoscaling](../design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](kubectl/kubectl_autoscale.md).
* Usage example of [Horizontal Pod Autoscaler](horizontal-pod-autoscaling/README.md).
This file has moved to: http://kubernetes.github.io/docs/user-guide/horizontal-pod-autoscaler/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,190 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Horizontal Pod Autoscaler
Horizontal pod autoscaling is a [beta](../../../docs/api.md#api-versioning) feature in Kubernetes 1.1.
It allows the number of pods in a replication controller or deployment to scale automatically based on observed CPU usage.
In the future, other metrics will also be supported.
In this document we explain how this feature works by walking you through an example of enabling horizontal pod autoscaling with the php-apache server.
## Prerequisites
This example requires a running Kubernetes cluster and kubectl version 1.1 or later.
[Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster,
as the horizontal pod autoscaler uses it to collect metrics
(if you followed the [getting started on GCE guide](../../../docs/getting-started-guides/gce.md),
Heapster monitoring is turned on by default).
## Step One: Run & expose php-apache server
To demonstrate the horizontal pod autoscaler, we will use a custom Docker image based on the php-apache server.
The image can be found [here](image/).
It defines an [index.php](image/index.php) page which performs some CPU-intensive computations.
First, we will start a replication controller running the image and expose it as an external service:
<a name="kubectl-run"></a>
```console
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
php-apache-wa3t1 1/1 Running 0 12m
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
```
We may now check that the php-apache server works correctly by calling ``curl`` with the service's IP:
```console
$ curl http://146.148.24.244
OK!
```
Please note that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g. on GCE).
If load balancers are not supported (e.g. on Vagrant), we can expose the php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
```console
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
$ kubectl cluster-info | grep master
Kubernetes master is running at https://146.148.6.215
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
```
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
scaleRef:
kind: ReplicationController
name: php-apache
namespace: default
minReplicas: 1
maxReplicas: 10
cpuUtilization:
targetPercentage: 50
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
(via the replication controller) so as to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU utilization of 100 milli-cores).
See [here](../../../docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
We will create the autoscaler by executing the following command:
```console
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale.md).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
```console
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
replicationcontroller "php-apache" autoscaled
```
We may check the current status of the autoscaler by running:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
## Step Three: Increase load
Now we will see how the autoscaler reacts to increased load on the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
```console
$ while true; do curl http://146.148.24.244; done
```
We can examine how the CPU load increased (the results should be visible after about 3-4 minutes) by executing:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
```
In the case presented here, the load bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
```console
$ while true; do curl http://146.148.24.244; done
```
In the case presented here, it increased the number of serving pods to 10:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 65% 1 10 14m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 21m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
```
As we can see, CPU utilization dropped to 0% and the number of replicas dropped to 1.
This file has moved to: http://kubernetes.github.io/docs/user-guide/horizontal-pod-autoscaling/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,19 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Identifiers
All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
## Names
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be at most 253 characters long and consist of lower-case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](../design/identifiers.md) for the precise syntax rules for names.
## UIDs
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
This file has moved to: http://kubernetes.github.io/docs/user-guide/identifiers/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,284 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Images
Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Images](#images)
- [Updating Images](#updating-images)
- [Using a Private Registry](#using-a-private-registry)
- [Using Google Container Registry](#using-google-container-registry)
- [Using AWS EC2 Container Registry](#using-aws-ec2-container-registry)
- [Configuring Nodes to Authenticate to a Private Repository](#configuring-nodes-to-authenticate-to-a-private-repository)
- [Pre-pulling Images](#pre-pulling-images)
- [Specifying ImagePullSecrets on a Pod](#specifying-imagepullsecrets-on-a-pod)
- [Creating a Secret with a Docker Config](#creating-a-secret-with-a-docker-config)
- [Bypassing kubectl create secrets](#bypassing-kubectl-create-secrets)
- [Referring to an imagePullSecrets on a Pod](#referring-to-an-imagepullsecrets-on-a-pod)
- [Use Cases](#use-cases)
<!-- END MUNGE: GENERATED_TOC -->
## Updating Images
The default pull policy is `IfNotPresent`, which causes the kubelet to skip
pulling an image if it already exists on the node. If you would like to always force a pull,
you must set an image pull policy of `Always` or specify a `:latest` tag on
your image.
If you do not specify a tag for your image, it is assumed to be `:latest`, and the
pull policy correspondingly defaults to `Always`.
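For example, a minimal pod manifest that forces a pull on every container start might look like the following sketch (the pod name and image reference are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example                      # hypothetical pod name
spec:
  containers:
  - name: app
    image: myregistry.example.com/myapp:1.2.3    # hypothetical image reference
    imagePullPolicy: Always                      # pull even if the image is already cached on the node
```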
## Using a Private Registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
- Using Google Container Registry
- Per-cluster
- automatically configured on Google Compute Engine or Google Container Engine
- all pods can read the project's private registry
- Configuring Nodes to Authenticate to a Private Registry
- all pods can read any configured private registries
- requires node configuration by cluster administrator
- Pre-pulling Images
- all pods can use any images cached on a node
- requires root access to all nodes to setup
- Specifying ImagePullSecrets on a Pod
- only pods which provide their own keys can access the private registry
Each option is described in more detail below.
### Using Google Container Registry
Kubernetes has native support for the [Google Container
Registry (GCR)](https://cloud.google.com/tools/container-registry/), when running on Google Compute
Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
use the full image name (e.g. gcr.io/my_project/image:tag).
All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance's
Google service account. The service account on the instance
has the `https://www.googleapis.com/auth/devstorage.read_only` scope,
so it can pull from the project's GCR, but not push.
### Using AWS EC2 Container Registry
Kubernetes has native support for the [AWS EC2 Container
Registry](https://aws.amazon.com/ecr/), when nodes are AWS instances.
Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
in the Pod definition.
All users of the cluster who can create pods will be able to run pods that use any of the
images in the ECR registry.
The kubelet will fetch and periodically refresh ECR credentials. It needs the
`ecr:GetAuthorizationToken` permission to do this.
### Configuring Nodes to Authenticate to a Private Repository
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.
**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, and any other cloud provider that does automatic
node replacement.
Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put this
in the `$HOME` of user `root` on a kubelet, then docker will use it.
Here are the recommended steps to configure your nodes to use a private registry. In this
example, run these on your desktop/laptop:
1. run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
1. view `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
1. get a list of your nodes, for example:
- if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
- if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
1. copy your local `.docker/config.json` to the home directory of root on each node.
- for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`
Verify by creating a pod that uses a private image, e.g.:
```console
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
name: private-image-test-1
spec:
containers:
- name: uses-private-image
image: $PRIVATE_IMAGE_NAME
imagePullPolicy: Always
command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```
If everything is working, then, after a few moments, you should see:
```console
$ kubectl logs private-image-test-1
SUCCESS
```
If it failed, then you will see:
```console
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.docker/config.json` or mount a drive that contains it.
All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.
**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
It should also work for a private registry such as quay.io, but that has not been tested.**
### Pre-pulling Images
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.
**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, and any other cloud provider that does automatic
node replacement.
By default, the kubelet will try to pull each image from the specified registry.
However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication,
you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
### Specifying ImagePullSecrets on a Pod
**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud-providers
where node creation is automated.
Kubernetes supports specifying registry keys on a pod.
#### Creating a Secret with a Docker Config
Run the following command, substituting the appropriate uppercase values:
```console
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created
```
If you need access to multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
when pulling images for your Pods.
Pods can only reference image pull secrets in their own namespace,
so this process needs to be done one time per namespace.
##### Bypassing kubectl create secrets
If for some reason you need multiple items in a single `.docker/config.json` or need
control not given by the above command, then you can [create a secret using
json or yaml](secrets.md#creating-a-secret-manually).
Be sure to:
- set the name of the data item to `.dockerconfigjson`
- base64-encode the Docker config file and paste that string, unbroken,
  as the value for the field `data[".dockerconfigjson"]`
- set `type` to `kubernetes.io/dockerconfigjson`
Example:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
namespace: awesomeapps
data:
.dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
```
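One way to produce that base64 value from an existing Docker config is shown below (a sketch assuming GNU coreutils `base64`; flags may differ on other platforms):

```console
$ base64 -w 0 ~/.docker/config.json > /tmp/dockerconfigjson.b64
# paste the single-line contents of /tmp/dockerconfigjson.b64, unbroken, as the value of data[".dockerconfigjson"]
```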
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...` it means
the data was successfully base64-decoded, but could not be parsed as a `.docker/config.json` file.
#### Referring to an imagePullSecrets on a Pod
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: awesomeapps
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
```
This needs to be done for each pod that is using a private registry.
However, setting this field can be automated by setting `imagePullSecrets`
on a [serviceAccount](service-accounts.md) resource.
You can use this in conjunction with a per-node `.docker/config.json`. The credentials
will be merged. This approach will work on Google Container Engine (GKE).
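As a rough sketch, a serviceAccount carrying the image pull secret could look like the following (see [serviceAccount](service-accounts.md) for the full workflow; updating an existing default service account typically requires `kubectl replace` or `kubectl patch` rather than `kubectl create`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default                # the service account pods use by default
  namespace: awesomeapps
imagePullSecrets:
- name: myregistrykey          # the secret created earlier in this document
```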
### Use Cases
There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
- Use public images on the Docker hub.
- no configuration required
- on GCE/GKE, a local mirror is automatically used for improved speed and availability
1. Cluster running some proprietary images which should be hidden to those outside the company, but
visible to all cluster users.
- Use a hosted private [Docker registry](https://docs.docker.com/registry/)
- may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
- manually configure .docker/config.json on each node as described above
- Or, run an internal private registry behind your firewall with open read access.
- no Kubernetes configuration required
- Or, when on GCE/GKE, use the project's Google Container Registry.
- will work better with cluster autoscaling than manual node configuration
- Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
1. Cluster with proprietary images, a few of which require stricter access control
- ensure [AlwaysPullImages admission controller](../../docs/admin/admission-controllers.md#alwayspullimages) is active, otherwise, all Pods potentially have access to all images
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
1. A multi-tenant cluster where each tenant needs its own private registry
- ensure [AlwaysPullImages admission controller](../../docs/admin/admission-controllers.md#alwayspullimages) is active, otherwise, all Pods of all tenants potentially have access to all images
- run a private registry with authorization required.
- generate a registry credential for each tenant, put it into a secret, and populate the secret in each tenant's namespace.
- the tenant adds that secret to the `imagePullSecrets` of each namespace.
This file has moved to: http://kubernetes.github.io/docs/user-guide/images/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,287 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Ingress
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Ingress](#ingress)
- [What is Ingress?](#what-is-ingress)
- [Prerequisites](#prerequisites)
- [The Ingress Resource](#the-ingress-resource)
- [Ingress controllers](#ingress-controllers)
- [Types of Ingress](#types-of-ingress)
- [Single Service Ingress](#single-service-ingress)
- [Simple fanout](#simple-fanout)
- [Name based virtual hosting](#name-based-virtual-hosting)
- [Loadbalancing](#loadbalancing)
- [Updating an Ingress](#updating-an-ingress)
- [Future Work](#future-work)
- [Alternatives](#alternatives)
<!-- END MUNGE: GENERATED_TOC -->
__Terminology__
Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere and that might cause confusion. This section attempts to clarify them.
* Node: A single virtual or physical machine in a Kubernetes cluster.
* Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloudprovider or a physical piece of hardware.
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/admin/networking.md). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/admin/ovs-networking.md).
* Service: A Kubernetes [Service](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
## What is Ingress?
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
```
internet
|
------------
[ Services ]
```
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
internet
|
[ Ingress ]
--|-----|--
[ Services ]
```
It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, etc. Users request ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
## Prerequisites
Before you start using the Ingress resource, there are a few things you should understand:
* The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1.
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
* On GCE/GKE there should be a [L7 cluster addon](../../cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites), on other platforms you either need to write your own or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/Ingress) as a pod.
* The resource currently does not support HTTPS, but will do so before it leaves beta.
## The Ingress Resource
A minimal Ingress might look like:
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
03. metadata:
04. name: test-ingress
05. spec:
06. rules:
07. - http:
08. paths:
09. - path: /testpath
10. backend:
11. serviceName: test
12. servicePort: 80
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](deploying-applications.md), [configuring containers](configuring-containers.md), and [working with resources](working-with-resources.md) documents.
__Lines 5-7__: Ingress [spec](../devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
__Lines 8-9__: Each http rule contains the following information: A host (eg: foo.bar.com, defaults to * in this example), a list of paths (eg: /testpath) each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
__Lines 10-12__: A backend is a service:port combination as described in the [services doc](services.md). Ingress traffic is typically sent directly to the endpoints matching a backend.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters, see the [api-reference](../../pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller. Though the Ingress resource doesn't support HTTPS yet, security configs would also be global.
## Ingress controllers
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/contrib/tree/master/Ingress).
## Types of Ingress
### Single Service Ingress
There are existing Kubernetes concepts that allow you to expose a single service (see [alternatives](#alternatives)), however you can do so through an Ingress as well, by specifying a *default backend* with no rules.
<!-- BEGIN MUNGE: EXAMPLE ingress.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
```
[Download example](ingress.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
If you create it using `kubectl create -f` you should see:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
```
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
### Simple fanout
As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually one or more highly available loadbalancers. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:
```
foo.bar.com -> 178.91.123.132 -> / foo s1:80
/ bar s2:80
```
would require an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: s1
servicePort: 80
- path: /bar
backend:
serviceName: s2
servicePort: 80
```
When you create the Ingress with `kubectl create -f`:
```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
```
The Ingress controller will provision an implementation specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.
### Name based virtual hosting
Name-based virtual hosts use multiple host names for the same IP address.
```
foo.bar.com --| |-> foo.bar.com s1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the url of the request.
### Loadbalancing
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
## Updating an Ingress
Say you'd like to add a new Host to an existing Ingress; you can update it by editing the resource:
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
```
This should pop up an editor with the existing yaml. Modify it to include the new Host:
```yaml
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
path: /foo
- host: bar.baz.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80
path: /foo
..
```
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
```
You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
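For example, the replace-based flow might look like this sketch:

```console
$ kubectl get ing test -o yaml > test-ingress.yaml
# edit test-ingress.yaml to add the new host, then:
$ kubectl replace -f test-ingress.yaml
```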
## Future Work
* Various modes of HTTPS/TLS support (edge termination, SNI, etc.)
* Requesting an IP or Hostname via claims
* Combining L4 and L7 Ingress
* More Ingress controllers
Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/Ingress) for more details on the evolution of various Ingress controllers.
## Alternatives
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
* Use [Service.Type=LoadBalancer](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-loadbalancer)
* Use [Service.Type=NodePort](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-nodeport)
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
This file has moved to: http://kubernetes.github.io/docs/user-guide/ingress/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,327 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Application Introspection and Debugging
Once your application is running, you'll inevitably need to debug problems with it.
Earlier we described how you can use `kubectl get pods` to retrieve simple status information about
your pods. But there are a number of ways to get even more information about your application.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Application Introspection and Debugging](#kubernetes-user-guide-managing-applications-application-introspection-and-debugging)
- [Using `kubectl describe pod` to fetch details about pods](#using-kubectl-describe-pod-to-fetch-details-about-pods)
- [Example: debugging Pending Pods](#example-debugging-pending-pods)
- [Example: debugging a down/unreachable node](#example-debugging-a-downunreachable-node)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Using `kubectl describe pod` to fetch details about pods
For this example we'll use a ReplicationController to create two pods, similar to the earlier example.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
```
```console
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
```
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
```console
$ kubectl describe pod my-nginx-gy1ij
Name: my-nginx-gy1ij
Image(s): nginx
Node: kubernetes-node-y3vk/10.240.154.168
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: my-nginx (2/2 replicas created)
Containers:
nginx:
Image: nginx
Limits:
cpu: 500m
memory: 128Mi
State: Running
Started: Thu, 09 Jul 2015 15:33:07 -0700
Ready: True
Restart Count: 0
Conditions:
Type Status
Ready True
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {scheduler } scheduled Successfully assigned my-nginx-gy1ij to kubernetes-node-y3vk
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD created Created with docker id cd1644065066
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD started Started with docker id cd1644065066
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
The container state is one of Waiting, Running, or Terminated. Depending on the state, additional information will be provided -- here you can see that for a container in Running state, the system tells you when the container started.
Ready tells you whether the container passed its last readiness probe. (In this case, the container does not have a readiness probe configured; the container is assumed to be ready if no readiness probe is configured.)
Restart Count tells you how many times the container has restarted; this information can be useful for detecting crash loops in containers that are configured with a restart policy of “always.”
Currently the only Condition associated with a Pod is the binary Ready condition, which indicates that the pod is able to service requests and should be added to the load balancing pools of all matching services.
Lastly, you see a log of recent events related to your Pod. The system compresses multiple identical events by indicating the first and last time it was seen and the number of times it was seen. "From" indicates the component that is logging the event, "SubobjectPath" tells you which object (e.g. container within the pod) is being referred to, and "Reason" and "Message" tell you what happened.
## Example: debugging Pending Pods
A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Replication Controller with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-9unp9 0/1 Pending 0 8s
my-nginx-b7zs9 0/1 Running 0 8s
my-nginx-i595c 0/1 Running 0 8s
my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
```console
$ kubectl describe pod my-nginx-9unp9
Name: my-nginx-9unp9
Image(s): nginx
Node: /
Labels: app=nginx
Status: Pending
Reason:
Message:
IP:
Replication Controllers: my-nginx (5/5 replicas created)
Containers:
nginx:
Image: nginx
Limits:
cpu: 600m
memory: 128Mi
State: Waiting
Ready: False
Restart Count: 0
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
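For example (a sketch; adjust the controller name and replica count to your setup):

```console
$ kubectl scale --replicas=4 rc my-nginx
```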
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
```
kubectl get events
```
but you have to remember that events are namespaced. This means that if you're interested in events for some namespaced object (e.g. what happened with Pods in namespace `my-namespace`) you need to explicitly provide a namespace to the command:
```
kubectl get events --namespace=my-namespace
```
To see events from all namespaces, you can use the `--all-namespaces` argument.
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata that is free of the label restrictions and is used internally by Kubernetes system components), restart policy, ports, and volumes.
```yaml
$ kubectl get pod my-nginx-i595c -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: '{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c555c14f-26d0-11e5-99cb-42010af00e4b","apiVersion":"v1","resourceVersion":"26174"}}'
creationTimestamp: 2015-07-10T06:56:21Z
generateName: my-nginx-
labels:
app: nginx
name: my-nginx-i595c
namespace: default
resourceVersion: "26243"
selfLink: /api/v1/namespaces/default/pods/my-nginx-i595c
uid: c558e44b-26d0-11e5-99cb-42010af00e4b
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 600m
memory: 128Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-zkhkk
readOnly: true
dnsPolicy: ClusterFirst
nodeName: kubernetes-node-u619
restartPolicy: Always
serviceAccountName: default
volumes:
- name: default-token-zkhkk
secret:
secretName: default-token-zkhkk
status:
conditions:
- status: "True"
type: Ready
containerStatuses:
- containerID: docker://9506ace0eb91fbc31aef1d249e0d1d6d6ef5ebafc60424319aad5b12e3a4e6a9
image: nginx
imageID: docker://319d2015d149943ff4d2a20ddea7d7e5ce06a64bbab1792334c0d3273bbbff1e
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2015-07-10T06:56:28Z
hostIP: 10.240.112.234
phase: Running
podIP: 10.244.3.4
startTime: 2015-07-10T06:56:21Z
```
## Example: debugging a down/unreachable node
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
```console
$ kubectl get nodes
NAME LABELS STATUS
kubernetes-node-861h kubernetes.io/hostname=kubernetes-node-861h NotReady
kubernetes-node-bols kubernetes.io/hostname=kubernetes-node-bols Ready
kubernetes-node-st6x kubernetes.io/hostname=kubernetes-node-st6x Ready
kubernetes-node-unaj kubernetes.io/hostname=kubernetes-node-unaj Ready
$ kubectl describe node kubernetes-node-861h
Name: kubernetes-node-861h
Labels: kubernetes.io/hostname=kubernetes-node-861h
CreationTimestamp: Fri, 10 Jul 2015 14:32:29 -0700
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
Ready Unknown Fri, 10 Jul 2015 14:34:32 -0700 Fri, 10 Jul 2015 14:35:15 -0700 Kubelet stopped posting node status.
Addresses: 10.240.115.55,104.197.0.26
Capacity:
cpu: 1
memory: 3800808Ki
pods: 100
Version:
Kernel Version: 3.16.0-0.bpo.4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://Unknown
Kubelet Version: v0.21.1-185-gffc5a86098dc01
Kube-Proxy Version: v0.21.1-185-gffc5a86098dc01
PodCIDR: 10.244.0.0/24
ExternalID: 15233045891481496305
Pods: (0 in total)
Namespace Name
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Fri, 10 Jul 2015 14:32:28 -0700 Fri, 10 Jul 2015 14:32:28 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:32:30 -0700 Fri, 10 Jul 2015 14:32:30 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:33:00 -0700 Fri, 10 Jul 2015 14:33:00 -0700 1 {kubelet kubernetes-node-861h} starting Starting kubelet.
Fri, 10 Jul 2015 14:33:02 -0700 Fri, 10 Jul 2015 14:33:02 -0700 1 {kubelet kubernetes-node-861h} NodeReady Node kubernetes-node-861h status is now: NodeReady
Fri, 10 Jul 2015 14:35:15 -0700 Fri, 10 Jul 2015 14:35:15 -0700 1 {controllermanager } NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
$ kubectl get node kubernetes-node-861h -o yaml
apiVersion: v1
kind: Node
metadata:
creationTimestamp: 2015-07-10T21:32:29Z
labels:
kubernetes.io/hostname: kubernetes-node-861h
name: kubernetes-node-861h
resourceVersion: "757"
selfLink: /api/v1/nodes/kubernetes-node-861h
uid: 2a69374e-274b-11e5-a234-42010af0d969
spec:
externalID: "15233045891481496305"
podCIDR: 10.244.0.0/24
providerID: gce://striped-torus-760/us-central1-b/kubernetes-node-861h
status:
addresses:
- address: 10.240.115.55
type: InternalIP
- address: 104.197.0.26
type: ExternalIP
capacity:
cpu: "1"
memory: 3800808Ki
pods: "100"
conditions:
- lastHeartbeatTime: 2015-07-10T21:34:32Z
lastTransitionTime: 2015-07-10T21:35:15Z
reason: Kubelet stopped posting node status.
status: Unknown
type: Ready
nodeInfo:
bootID: 4e316776-b40d-4f78-a4ea-ab0d73390897
containerRuntimeVersion: docker://Unknown
kernelVersion: 3.16.0-0.bpo.4-amd64
kubeProxyVersion: v0.21.1-185-gffc5a86098dc01
kubeletVersion: v0.21.1-185-gffc5a86098dc01
machineID: ""
osImage: Debian GNU/Linux 7 (wheezy)
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
```
## What's next?
Learn about additional debugging tools, including:
* [Logging](logging.md)
* [Monitoring](monitoring.md)
* [Getting into containers via `exec`](getting-into-containers.md)
* [Connecting to containers via proxies](connecting-to-applications-proxy.md)
* [Connecting to containers via port forwarding](connecting-to-applications-port-forward.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/introspection-and-debugging/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,391 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Jobs
This file has moved to: http://kubernetes.github.io/docs/user-guide/jobs/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Jobs](#jobs)
- [What is a _job_?](#what-is-a-job)
- [Running an example Job](#running-an-example-job)
- [Writing a Job Spec](#writing-a-job-spec)
- [Pod Template](#pod-template)
- [Pod Selector](#pod-selector)
- [Parallel Jobs](#parallel-jobs)
- [Controlling Parallelism](#controlling-parallelism)
- [Handling Pod and Container Failures](#handling-pod-and-container-failures)
- [Job Patterns](#job-patterns)
- [Advanced Usage](#advanced-usage)
- [Specifying your own pod selector](#specifying-your-own-pod-selector)
- [Alternatives](#alternatives)
- [Bare Pods](#bare-pods)
- [Replication Controller](#replication-controller)
- [Single Job starts Controller Pod](#single-job-starts-controller-pod)
- [Caveats](#caveats)
- [Future work](#future-work)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _job_?
A _job_ creates one or more pods and ensures that a specified number of them successfully terminate.
As pods successfully complete, the _job_ tracks the successful completions. When a specified number
of successful completions is reached, the job itself is complete. Deleting a Job will clean up the
pods it created.
A simple case is to create 1 Job object in order to reliably run one Pod to completion.
The Job object will start a new Pod if the first pod fails or is deleted (for example
due to a node hardware failure or a node reboot).
A Job can also be used to run multiple pods in parallel.
## Running an example Job
Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
<!-- BEGIN MUNGE: EXAMPLE job.yaml -->
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
[Download example](job.yaml?raw=true)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
```console
$ kubectl create -f ./job.yaml
job "pi" created
```
Check on the status of the job using this command:
```console
$ kubectl describe jobs/pi
Name: pi
Namespace: default
Image(s): perl
Selector: app in (pi)
Parallelism: 1
Completions: 1
Start Time: Mon, 11 Jan 2016 15:35:52 -0800
Labels: app=pi
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q
```
To view completed pods of a job, use `kubectl get pods --show-all`; the `--show-all` flag includes completed pods in the output.
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
```console
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
View the standard output of one of the pods:
```console
$ kubectl logs $pods
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
```
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [deploying applications](deploying-applications.md),
[configuring containers](configuring-containers.md), and [working with resources](working-with-resources.md) documents.
A Job also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to the required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` is allowed.
### Pod Selector
The `.spec.selector` field is optional. In almost all cases you should not specify it.
See section [specifying your own pod selector](#specifying-your-own-pod-selector).
### Parallel Jobs
There are three main types of jobs:
1. Non-parallel Jobs
- normally only one pod is started, unless the pod fails.
- job is complete as soon as Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
- specify a non-zero positive value for `.spec.completions`
- the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`.
- **not implemented yet:** each pod passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue*:
- do not specify `.spec.completions`
- the pods must coordinate among themselves or with an external service to determine what each should work on
- each pod is independently capable of determining whether or not all its peers are done, and thus whether the entire Job is done.
- when _any_ pod terminates with success, no new pods are created.
- once at least one pod has terminated with success and all pods are terminated, then the job is completed with success.
- once any pod has exited with success, no other pod should still be doing any work or writing any output; they should all be
in the process of exiting.
For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
unset, both are defaulted to 1.
For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed.
You can set `.spec.parallelism`, or leave it unset and it will default to 1.
For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
a non-negative integer.
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
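As a concrete sketch, a fixed completion count Job that processes eight work items at most two at a time could set these fields as follows (the name, image, and command are illustrative):
```yaml
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: process-items
spec:
  completions: 8      # one successful pod per work item
  parallelism: 2      # run at most two pods at any one time
  template:
    metadata:
      labels:
        app: process-items
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: OnFailure
```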
#### Controlling Parallelism
The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
If it is unspecified, it defaults to 1.
If it is specified as 0, then the Job is effectively paused until it is increased.
A job can be scaled up using the `kubectl scale` command. For example, the following
command sets `.spec.parallelism` of a job called `myjob` to 10:
```console
$ kubectl scale --replicas=10 jobs/myjob
job "myjob" scaled
```
You can also use the `scale` subresource of the Job resource.
Actual parallelism (number of pods running at any instant) may be more or less than the requested
parallelism, for a variety of reasons:
- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of
remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however.
- If the controller has not had time to react.
- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc),
then there may be fewer pods than requested.
- The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
- When a pod is gracefully shut down, it may take time to stop.
## Handling Pod and Container Failures
A Container in a Pod may fail for a number of reasons, such as because the process in it exited with
a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the Container is re-run. Therefore, your program needs to handle the case where it is
restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
See [pods-states](pod-states.md) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.
Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
`.spec.template.spec.restartPolicy = "Never"`, the same program may
sometimes be started twice.
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
## Job Patterns
The Job object can be used to support reliable parallel execution of Pods. The Job object is not
designed to support closely-communicating parallel processes, as commonly found in scientific
computing. It does support parallel processing of a set of independent but related *work items*.
These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
NoSQL database to scan, and so on.
In a complex system, there may be multiple different sets of work items. Here we are just
considering one set of work items that the user wants to manage together &mdash; a *batch job*.
There are several different patterns for parallel computation, each with strengths and weaknesses.
The tradeoffs are:
- One Job object for each work item, vs a single Job object for all work items. The latter is
better for large numbers of work items. The former creates some overhead for the user and for the
system to manage large numbers of Job objects. Also, with the latter, the resource usage of the job
(number of concurrently running pods) can be easily adjusted using the `kubectl scale` command.
- Number of pods created equals number of work items, vs each pod can process multiple work items.
The former typically requires less modification to existing code and containers. The latter
is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service,
and modifications to the existing program or container to make it use the work queue.
Other approaches are easier to adapt to an existing containerised application.
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](../../examples/job/expansions/README.md) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](../../examples/job/work-queue-1/README.md) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](../../examples/job/work-queue-2/README.md) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](../devel/api-conventions.md#spec-and-status). This means that
all pods will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.
| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](../../examples/job/expansions/README.md) | 1 | should be 1 |
| [Queue with Pod Per Work Item](../../examples/job/work-queue-1/README.md) | W | any |
| [Queue with Variable Pod Count](../../examples/job/work-queue-2/README.md) | 1 | any |
| Single Job with Static Work Assignment | W | any |
## Advanced Usage
### Specifying your own pod selector
Normally, when you create a job object, you do not specify `spec.selector`.
The system defaulting logic adds this field when the job is created.
It picks a selector value that will not overlap with any other jobs.
However, in some cases, you might need to override this automatically set selector.
To do this, you can specify the `spec.selector` of the job.
Be very careful when doing this. If you specify a label selector which is not
unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated
job may be deleted, or this job may count other pods as completing it, or one or both
of the jobs may refuse to create pods or run to completion. If a non-unique selector is
chosen, then other controllers (e.g. ReplicationController) and their pods may behave
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
specifying `spec.selector`.
Here is an example of a case when you might want to use this feature.
Say job `old` is already running. You want existing pods
to keep running, but you want the rest of the pods it creates
to use a different pod template and for the job to have a new name.
You cannot update the job because these fields are not updatable.
Therefore, you delete job `old` but leave its pods
running, using `kubectl delete jobs/old --cascade=false`.
Before deleting it, you make a note of what selector it uses:
```
kind: Job
metadata:
name: old
...
spec:
selector:
matchLabels:
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
...
```
Then you create a new job with name `new` and you explicitly specify the same selector.
Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
they are controlled by job `new` as well.
You need to specify `manualSelector: true` in the new job since you are not using
the selector that the system normally generates for you automatically.
```
kind: Job
metadata:
name: new
...
spec:
manualSelector: true
selector:
matchLabels:
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
...
```
The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
`manualSelector: true` tells the system that you know what you are doing and to allow this
mismatch.
## Alternatives
### Bare Pods
When the node that a pod is running on reboots or fails, the pod is terminated
and will not be restarted. However, a Job will create new pods to replace terminated ones.
For this reason, we recommend that you use a job rather than a bare pod, even if your application
requires only a single pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](replication-controller.md).
A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job
manages pods that are expected to terminate (e.g. batch jobs).
As discussed in [life of a pod](pod-states.md), `Job` is *only* appropriate for pods with
`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
value is `Always`.)
### Single Job starts Controller Pod
Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort
of custom controller for those pods. This allows the most flexibility, but may be somewhat
complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
starts a Spark master controller (see [spark example](../../examples/spark/README.md)), runs a spark
driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job
object, but complete control over what pods are created and how work is assigned to them.
## Caveats
Job objects are in the [`extensions` API Group](../api.md#api-groups).
Job objects have [API version `v1beta1`](../api.md#api-versioning). Beta objects may
undergo changes to their schema and/or semantics in future software releases, but
similar functionality will be supported.
## Future work
Support for creating Jobs at specified times/dates (i.e. cron) is expected in the next minor
release.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/jobs.md?pixel)]()

View File

@@ -32,70 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# JSONPath template syntax
JSONPath template is composed of JSONPath expressions enclosed by {}.
And we add three functions in addition to the original JSONPath syntax:
1. The `$` operator is optional since the expression always starts from the root object by default.
2. We can use `""` to quote text inside JSONPath expressions.
3. We can use the `range` operator to iterate over lists.
The result object is printed via its String() function.
Given the input:
```json
{
"kind": "List",
"items":[
{
"kind":"None",
"metadata":{"name":"127.0.0.1"},
"status":{
"capacity":{"cpu":"4"},
"addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
}
},
{
"kind":"None",
"metadata":{"name":"127.0.0.2"},
"status":{
"capacity":{"cpu":"8"},
"addresses":[
{"type": "LegacyHostIP", "address":"127.0.0.2"},
{"type": "another", "address":"127.0.0.3"}
]
}
}
],
"users":[
{
"name": "myself",
"user": {}
},
{
"name": "e2e",
"user": {"username": "admin", "password": "secret"}
}
]
}
```
Function | Description | Example | Result
---------|--------------------|--------------------|------------------
text | the plain text | kind is {.kind} | kind is List
@ | the current object | {@} | the same as input
. or [] | child operator | {.kind} or {['kind']}| List
.. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e
* | wildcard. Get all objects| {.items[*].metadata.name} | [127.0.0.1 127.0.0.2]
[start:end:step] | subscript operator | {.users[0].name} | myself
[,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]
?() | filter | {.users[?(@.name=="e2e")].user.password} | secret
range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]
"" | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2
This file has moved to: http://kubernetes.github.io/docs/user-guide/jsonpath/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,27 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Known Issues
This document summarizes known issues with existing Kubernetes releases.
Please consult this document before filing new bugs.
### Release 1.0.1
* `exec` liveness/readiness probes leak resources due to Docker exec leaking resources (#10659)
* `docker load` sometimes hangs which causes the `kube-apiserver` not to start. Restarting the Docker daemon should fix the issue (#10868)
* The kubelet on the master node doesn't register with the `kube-apiserver` so statistics aren't collected for master daemons (#10891)
* Heapster and InfluxDB both leak memory (#10653)
* Wrong node cpu/memory limit metrics from Heapster (https://github.com/GoogleCloudPlatform/heapster/issues/399)
* Services that set `type=LoadBalancer` can not use port `10250` because of Google Compute Engine firewall limitations
* Add-on services can not be created or deleted via `kubectl` or the Kubernetes API (#11435)
* If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly leaving PD data inaccessible (or corrupted in the worst case). (http://issue.k8s.io/11231#issuecomment-122049113)
* Suggested temporary work around: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node.
* Explicit errors while detaching GCE PD could prevent PD from ever being detached (#11321)
* GCE PDs may sometimes fail to attach (#11302)
* If multiple Pods use the same RBD volume in read-write mode, it is possible data on the RBD volume could get corrupted. This problem has been found in environments where both apiserver and etcd rebooted and Pods were redistributed.
* A workaround is to ensure there is no other Ceph client using the RBD volume before mapping RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` can list RBD clients that are mapping the image.
This file has moved to: http://kubernetes.github.io/docs/user-guide/known-issues/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,314 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# kubeconfig files
This file has moved to: http://kubernetes.github.io/docs/user-guide/kubeconfig-file/
Authentication in kubernetes can differ for different individuals.
- A running kubelet might have one way of authenticating (i.e. certificates).
- Users might have a different way of authenticating (i.e. tokens).
- Administrators might have a list of certificates which they provide individual users.
- There may be multiple clusters, and we may want to define them all in one place - giving users the ability to use their own certificates and reusing the same global configuration.
So in order to easily switch between multiple clusters, for multiple users, a kubeconfig file was defined.
This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged together along with override options specified from the command line (see [rules](#loading-and-merging) below).
## Related discussion
http://issue.k8s.io/1755
## Components of a kubeconfig file
### Example kubeconfig file
```yaml
current-context: federal-context
apiVersion: v1
clusters:
- cluster:
api-version: v1
server: http://cow.org:8080
name: cow-cluster
- cluster:
certificate-authority: path/to/my/cafile
server: https://horse.org:4443
name: horse-cluster
- cluster:
insecure-skip-tls-verify: true
server: https://pig.org:443
name: pig-cluster
contexts:
- context:
cluster: horse-cluster
namespace: chisel-ns
user: green-user
name: federal-context
- context:
cluster: pig-cluster
namespace: saw-ns
user: black-user
name: queen-anne-context
kind: Config
preferences:
colors: true
users:
- name: blue-user
user:
token: blue-token
- name: green-user
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
```
### Breakdown/explanation of components
#### cluster
```
clusters:
- cluster:
certificate-authority: path/to/my/cafile
server: https://horse.org:4443
name: horse-cluster
- cluster:
insecure-skip-tls-verify: true
server: https://pig.org:443
name: pig-cluster
```
A `cluster` contains endpoint data for a kubernetes cluster. This includes the fully
qualified url for the kubernetes apiserver, as well as the cluster's certificate
authority or `insecure-skip-tls-verify: true`, if the cluster's serving
certificate is not signed by a system trusted certificate authority.
A `cluster` has a name (nickname) which acts as a dictionary key for the cluster
within this kubeconfig file. You can add or modify `cluster` entries using
[`kubectl config set-cluster`](kubectl/kubectl_config_set-cluster.md).
#### user
```
users:
- name: blue-user
user:
token: blue-token
- name: green-user
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
```
A `user` defines client credentials for authenticating to a kubernetes cluster. A
`user` has a name (nickname) which acts as its key within the list of user entries
after kubeconfig is loaded/merged. Available credentials are `client-certificate`,
`client-key`, `token`, and `username/password`. `username/password` and `token`
are mutually exclusive, but client certs and keys can be combined with them.
You can add or modify `user` entries using
[`kubectl config set-credentials`](kubectl/kubectl_config_set-credentials.md).
#### context
```
contexts:
- context:
cluster: horse-cluster
namespace: chisel-ns
user: green-user
name: federal-context
```
A `context` defines a named [`cluster`](#cluster),[`user`](#user),[`namespace`](namespaces.md) tuple
which is used to send requests to the specified cluster using the provided authentication info and
namespace. Each of the three is optional; it is valid to specify a context with only one of `cluster`,
`user`,`namespace`, or to specify none. Unspecified values, or named values that don't have corresponding
entries in the loaded kubeconfig (e.g. if the context specified a `pink-user` for the above kubeconfig file)
will be replaced with the default. See [Loading and merging rules](#loading-and-merging) below for override/merge behavior.
You can add or modify `context` entries with [`kubectl config set-context`](kubectl/kubectl_config_set-context.md).
#### current-context
```yaml
current-context: federal-context
```
`current-context` is the nickname or 'key' for the cluster,user,namespace tuple that kubectl
will use by default when loading config from this file. You can override any of the values in kubectl
from the commandline, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
You can change the `current-context` with [`kubectl config use-context`](kubectl/kubectl_config_use-context.md).
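For example, to run a single command against the other context defined in the sample file above, without changing `current-context` (a quick sketch):
```console
$ kubectl get pods --context=queen-anne-context
```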
#### miscellaneous
```
apiVersion: v1
kind: Config
preferences:
colors: true
```
`apiVersion` and `kind` identify the version and schema for the client parser and should not
be edited manually.
`preferences` specify optional (and currently unused) kubectl preferences.
## Viewing kubeconfig files
`kubectl config view` will display the current kubeconfig settings. By default
it will show you all loaded kubeconfig settings; you can filter the view to just
the settings relevant to the `current-context` by passing `--minify`. See
[`kubectl config view`](kubectl/kubectl_config_view.md) for other options.
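For instance, to view only the settings that the current context would actually use (a minimal sketch):
```console
$ kubectl config view --minify
```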
## Building your own kubeconfig file
NOTE: if you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
In any case, you can easily use this file as a template to create your own kubeconfig files.
So, let's do a quick walkthrough of the basics of the above file so you can easily modify it as needed.
The above file would likely correspond to an api-server which was launched using the `--token-auth-file=tokens.csv` option, where the tokens.csv file looked something like this:
```
blue-user,blue-user,1
mister-red,mister-red,2
```
Also, since we have other users who authenticate using **other** mechanisms, the api-server would probably have been launched with additional authentication options (there are many such options; make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement every permutation of possible authentication schemes).
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All this information would be handled for us by the kubeconfig file.
## Loading and merging rules
The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order:
1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules:
If the CommandLineLocation (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed.
Else, if EnvVarLocation (the value of $KUBECONFIG) is available, use it as a list of files that should be merged.
Merge files together based on the following rules.
Empty filenames are ignored. Files with non-deserializable content produce errors.
The first file to set a particular value or map key wins and the value or map key is never changed.
This means that the first file to set CurrentContext will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded.
Otherwise, use HomeDirectoryLocation (~/.kube/config) with no merging.
1. Determine the context to use based on the first hit in this chain
1. command line argument - the value of the `context` command line option
1. current-context from the merged kubeconfig file
1. Empty is allowed at this stage
1. Determine the cluster info and user to use. At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster)
1. command line argument - `user` for user name and `cluster` for cluster name
1. If context is present, then use the context's value
1. Empty is allowed
1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins):
1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify`
1. If cluster info is present and a value for the attribute is present, use it.
1. If you don't have a server location, error.
1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user.
1. Load precedence is 1) command line flag, 2) user fields from kubeconfig
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
1. If there are two conflicting techniques, fail.
1. For any information still missing, use default values and potentially prompt for authentication information
1. All file references inside of a kubeconfig file are resolved relative to the location of the kubeconfig file itself. When file references are presented on the command line
they are resolved relative to the current working directory. When paths are saved in the ~/.kube/config, relative paths are stored relatively while absolute paths are stored absolutely.
Any path in a kubeconfig file is resolved relative to the location of the kubeconfig file itself.
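As a minimal sketch of the merge behavior, assuming a second, hypothetical kubeconfig file at `/path/to/other/config`: because `~/.kube/config` is listed first, its values win wherever the two files conflict.
```console
$ KUBECONFIG=$HOME/.kube/config:/path/to/other/config kubectl config view
```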
## Manipulation of kubeconfig via `kubectl config <subcommand>`
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
See [kubectl/kubectl_config.md](kubectl/kubectl_config.md) for help.
### Example
```console
$ kubectl config set-credentials myself --username=admin --password=secret
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myself
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
```
produces this output
```yaml
apiVersion: v1
clusters:
- cluster:
server: http://localhost:8080
name: local-server
contexts:
- context:
cluster: local-server
namespace: the-right-prefix
user: myself
name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: myself
user:
password: secret
username: admin
```
and a kubeconfig file that looks like this
```yaml
apiVersion: v1
clusters:
- cluster:
server: http://localhost:8080
name: local-server
contexts:
- context:
cluster: local-server
namespace: the-right-prefix
user: myself
name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: myself
user:
password: secret
username: admin
```
#### Commands for the example file
```console
$ kubectl config set preferences.colors true
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
$ kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials blue-user --token=blue-token
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
```
### Final notes for tying it all together
So, tying this all together, a quick start to creating your own kubeconfig file:
- Take a good look at and understand how your api-server is being launched: you need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
- Replace the snippet above with information for your cluster's api-server endpoint.
- Make sure your api-server is launched in such a way that at least one user's (e.g. green-user's) credentials are provided to it. You will of course have to look at the api-server documentation to determine the current state of the art for providing authentication details.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubeconfig-file.md?pixel)]()

View File

@@ -32,190 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Labels
This file has moved to: http://kubernetes.github.io/docs/user-guide/labels/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Labels](#labels)
- [Motivation](#motivation)
- [Syntax and character set](#syntax-and-character-set)
- [Label selectors](#label-selectors)
- [_Equality-based_ requirement](#equality-based-requirement)
- [_Set-based_ requirement](#set-based-requirement)
- [API](#api)
- [LIST and WATCH filtering](#list-and-watch-filtering)
- [Set references in API objects](#set-references-in-api-objects)
- [Service and ReplicationController](#service-and-replicationcontroller)
- [Job and other new resources](#job-and-other-new-resources)
- [Selecting sets of nodes](#selecting-sets-of-nodes)
<!-- END MUNGE: GENERATED_TOC -->
_Labels_ are key/value pairs that are attached to objects, such as pods.
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but which do not directly imply semantics to the core system.
Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time.
Each object can have a set of key/value labels defined. Each key must be unique for a given object.
```json
"labels": {
"key1" : "value1",
"key2" : "value2"
}
```
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations.md).
## Motivation
Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings.
Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users.
Example labels:
* `"release" : "stable"`, `"release" : "canary"`
* `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
* `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"`
* `"partition" : "customerA"`, `"partition" : "customerB"`
* `"track" : "daily"`, `"track" : "weekly"`
These are just examples; you are free to develop your own conventions.
## Syntax and character set
_Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for Kubernetes core components.
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
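As a sketch, a pod carrying both an unprefixed key and a prefixed key might declare labels like these (`example.com/team` is a hypothetical prefixed key):
```yaml
metadata:
  name: label-demo              # illustrative name
  labels:
    environment: production     # unprefixed key, private to the user
    example.com/team: search    # prefixed key; "example.com" is a DNS subdomain prefix
```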
## Label selectors
Unlike [names and UIDs](identifiers.md), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
The API currently supports two types of selectors: _equality-based_ and _set-based_.
A label selector can be made of multiple _requirements_ which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as an _AND_ logical operator.
An empty label selector (that is, one with zero requirements) selects every object in the collection.
A null label selector (which is only possible for optional selector fields) selects no objects.
### _Equality-based_ requirement
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
Three kinds of operators are admitted: `=`, `==`, `!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
```
environment = production
tier != frontend
```
The former selects all resources with key equal to `environment` and value equal to `production`.
The latter selects all resources with key equal to `tier` and value distinct from `frontend`, and all resources with no labels with the `tier` key.
One could filter for resources in `production` excluding `frontend` using the comma operator: `environment=production,tier!=frontend`
### _Set-based_ requirement
_Set-based_ label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: `in`, `notin`, and exists (only the key identifier). For example:
```
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
```
The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
The third example selects all resources including a label with key `partition`; no values are checked.
The fourth example selects all resources without a label with key `partition`; no values are checked.
Similarly, the comma separator acts as an _AND_ operator. So filtering resources with a `partition` key (no matter the value) and with `environment` different from `qa` can be achieved using `partition,environment notin (qa)`.
The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`.
_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`.
## API
### LIST and WATCH filtering
LIST and WATCH operations may specify label selectors to filter the sets of objects returned, using a query parameter. Both kinds of requirements are permitted:
* _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
* _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
Both label selector styles can be used to list or watch resources via a REST client. For example targeting `apiserver` with `kubectl` and using _equality-based_ one may write:
```console
$ kubectl get pods -l environment=production,tier=frontend
```
or using _set-based_ requirements:
```console
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
```
As already mentioned, _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
```console
$ kubectl get pods -l 'environment in (production, qa)'
```
or restricting negative matching via _exists_ operator:
```console
$ kubectl get pods -l 'environment,environment notin (frontend)'
```
### Set references in API objects
Some Kubernetes objects, such as [services](services.md) and [replicationcontrollers](replication-controller.md), also use label selectors to specify sets of other resources, such as [pods](pods.md).
#### Service and ReplicationController
The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` should manage is also defined with a label selector.
Label selectors for both objects are defined in `json` or `yaml` files using maps, and only _equality-based_ requirement selectors are supported:
```json
"selector": {
"component" : "redis",
}
```
or
```yaml
selector:
component: redis
```
This selector (in `json` or `yaml` format, respectively) is equivalent to `component=redis` or `component in (redis)`.
#### Job and other new resources
Newer resources, such as [job](jobs.md), support _set-based_ requirements as well.
```yaml
selector:
matchLabels:
component: redis
matchExpressions:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
```
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
#### Selecting sets of nodes
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
See the documentation on [node selection](node-selection/README.md) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/labels.md?pixel)]()

View File

@@ -32,90 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Overview
This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.
The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
```yaml
livenessProbe:
exec:
command:
- cat
- /tmp/health
initialDelaySeconds: 15
timeoutSeconds: 1
```
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
Note that the container removes the `/tmp/health` file after 10 seconds,
```sh
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```
so when the kubelet executes the health check 15 seconds (defined by `initialDelaySeconds`) after the container starts, the check fails.
The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
```yaml
livenessProbe:
httpGet:
path: /healthz
port: 8080
httpHeaders:
- name: X-Custom-Header
value: Awesome
initialDelaySeconds: 15
timeoutSeconds: 1
```
The kubelet sends an HTTP GET request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see that the server starts to respond with an error code 500 after 10 seconds, so the check fails. The kubelet sends probes to the container's IP address, unless overridden by the optional `host` field in `httpGet`. If the container listens on `127.0.0.1` and `hostNetwork` is `true` (i.e., it does not use the pod-specific network), then `host` should be specified as `127.0.0.1`. Be warned that, outside of less common cases like that, `host` probably does not do what you expect. If you set it to a non-existing hostname (or your competitor's!), probes will never reach the pod, defeating the whole point of health checks. If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`; instead, set the `Host` header in `httpHeaders`.
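For instance, if the pod serves a virtual host, a probe might set the `Host` header rather than the `host` field (the hostname below is hypothetical):
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: myapp.example.com   # hypothetical virtual host served by the container
  initialDelaySeconds: 15
  timeoutSeconds: 1
```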
This [guide](../walkthrough/k8s201.md#health-checking) has more information on health checks.
## Get your hands dirty
To show the health check is actually working, first create the pods:
```console
$ kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
$ kubectl create -f docs/user-guide/liveness/http-liveness.yaml
```
Check the status of the pods once they are created:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
liveness-exec 1/1 Running 0 13s
liveness-http 1/1 Running 0 13s
```
Check the status again half a minute later, and you will see that the container restart count has been incremented:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
liveness-exec 1/1 Running 1 36s
liveness-http 1/1 Running 1 36s
```
At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
```console
$ kubectl describe pods liveness-exec
[...]
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-node-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
```
This file has moved to: http://kubernetes.github.io/docs/user-guide/liveness/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,20 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Elasticsearch/Kibana Logging Demonstration
This directory contains two [pod](../../../docs/user-guide/pods.md) specifications which can be used as synthetic
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
describes a pod that just emits a log message once every 4 seconds. The pod specification in
[synthetic_10lps.yaml](synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.
See [logging document](../logging.md) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](../../../docs/getting-started-guides/logging.md).
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
started instructions
at [Cluster Level Logging with Elasticsearch and Kibana](../../../docs/getting-started-guides/logging-elasticsearch.md).
This file has moved to: http://kubernetes.github.io/docs/user-guide/logging-demo/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,110 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Logging
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Logging](#logging)
- [Logging by Kubernetes Components](#logging-by-kubernetes-components)
- [Examining the logs of running containers](#examining-the-logs-of-running-containers)
- [Cluster level logging to Google Cloud Logging](#cluster-level-logging-to-google-cloud-logging)
- [Cluster level logging with Elasticsearch and Kibana](#cluster-level-logging-with-elasticsearch-and-kibana)
- [Ingesting Application Log Files](#ingesting-application-log-files)
- [Known issues](#known-issues)
<!-- END MUNGE: GENERATED_TOC -->
## Logging by Kubernetes Components
Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](../devel/logging.md).
## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, consider
this pod specification, [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container that writes some text to standard
output once per second. (You can find different pod specifications [here](logging-demo/).)
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: ubuntu:14.04
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
we can run the pod:
```console
$ kubectl create -f ./counter-pod.yaml
pods/counter
```
and then fetch the logs:
```console
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```
If a pod has more than one container, then you need to specify which container's logs should
be fetched, e.g.:
```console
$ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2015/06/23 00:43:10 etcdserver: compacted log at index 30003
2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003
2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
2015/06/23 02:05:42 etcdserver: compacted log at index 40004
2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004
2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004)
2015/06/23 03:28:31 etcdserver: compacted log at index 50005
2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005
2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal
2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
...
```
## Cluster level logging to Google Cloud Logging
The getting started guide [Cluster Level Logging to Google Cloud Logging](../getting-started-guides/logging.md)
explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/)
and shows how to query the ingested logs.
## Cluster level logging with Elasticsearch and Kibana
The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](../getting-started-guides/logging-elasticsearch.md)
describes how to ingest cluster level logs into Elasticsearch and view them using Kibana.
## Ingesting Application Log Files
Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
## Known issues
Kubernetes performs log rotation for Kubernetes components and Docker containers. The command `kubectl logs` currently only reads the latest logs, not all historical ones.
This file has moved to: http://kubernetes.github.io/docs/user-guide/logging/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -32,502 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Managing deployments
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features we'll discuss in more depth are [configuration files](configuring-containers.md#configuration-in-kubernetes) and [labels](deploying-applications.md#labels).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Managing deployments](#kubernetes-user-guide-managing-applications-managing-deployments)
- [Organizing resource configurations](#organizing-resource-configurations)
- [Bulk operations in kubectl](#bulk-operations-in-kubectl)
- [Using labels effectively](#using-labels-effectively)
- [Canary deployments](#canary-deployments)
- [Updating labels](#updating-labels)
- [Updating annotations](#updating-annotations)
- [Scaling your application](#scaling-your-application)
- [Updating your application without a service outage](#updating-your-application-without-a-service-outage)
- [In-place updates of resources](#in-place-updates-of-resources)
- [kubectl patch](#kubectl-patch)
- [kubectl edit](#kubectl-edit)
- [Using configuration files](#using-configuration-files)
- [Disruptive updates](#disruptive-updates)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Organizing resource configurations
Many applications require multiple resources to be created, such as a Replication Controller and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
Multiple resources can be created the same way as a single resource:
```console
$ kubectl create -f ./nginx-app.yaml
services/my-nginx-svc
replicationcontrollers/my-nginx
```
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the replication controller(s).
`kubectl create` also accepts multiple `-f` arguments:
```console
$ kubectl create -f ./nginx-svc.yaml -f ./nginx-rc.yaml
```
And a directory can be specified rather than or in addition to individual files:
```console
$ kubectl create -f ./nginx/
```
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can then simply deploy all of the components of your stack en masse.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```console
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/pod.yaml
pods/nginx
```
## Bulk operations in kubectl
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
```console
$ kubectl delete -f ./nginx/
replicationcontrollers "my-nginx" deleted
services "my-nginx-svc" deleted
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
```console
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
```
For larger numbers of resources, one can use labels to filter resources. The selector is specified using `-l`:
```console
$ kubectl delete all -lapp=nginx
replicationcontrollers "my-nginx" deleted
services "my-nginx-svc" deleted
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
```console
$ kubectl get $(kubectl create -f ./nginx/ -o name | grep my-nginx)
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx-svc app=nginx app=nginx 10.0.152.174 80/TCP
```
## Using labels effectively
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](../../examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
labels:
app: guestbook
tier: frontend
```
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
```yaml
labels:
app: guestbook
tier: backend
role: master
```
and
```yaml
labels:
app: guestbook
tier: backend
role: slave
```
The labels allow us to slice and dice our resources along any dimension specified by a label:
```console
$ kubectl create -f ./guestbook-fe.yaml -f ./redis-master.yaml -f ./redis-slave.yaml
replicationcontrollers "guestbook-fe" created
replicationcontrollers "guestbook-redis-master" created
replicationcontrollers "guestbook-redis-slave" created
$ kubectl get pods -Lapp -Ltier -Lrole
NAME READY STATUS RESTARTS AGE APP TIER ROLE
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>
guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master
guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestbook backend slave
guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave
my-nginx-divi2 1/1 Running 0 29m nginx <none> <none>
my-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>
$ kubectl get pods -lapp=guestbook,role=slave
NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
```
## Canary deployments
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:
```yaml
labels:
app: guestbook
tier: frontend
track: canary
```
and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:
```yaml
labels:
app: guestbook
tier: frontend
track: stable
```
The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:
```yaml
selector:
app: guestbook
tier: frontend
```
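With these labels in place, one quick way to inspect only the canary pods is to select on the `track` label (a sketch; actual pod names will differ):
```console
$ kubectl get pods -l app=guestbook,tier=frontend,track=canary
```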
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:
```console
$ kubectl label pods -lapp=nginx tier=fe
pod "my-nginx-v4-9gw19" labeled
pod "my-nginx-v4-hayza" labeled
pod "my-nginx-v4-mde6m" labeled
pod "my-nginx-v4-sh6m8" labeled
pod "my-nginx-v4-wfof4" labeled
$ kubectl get pods -lapp=nginx -Ltier
NAME READY STATUS RESTARTS AGE TIER
my-nginx-v4-9gw19 1/1 Running 0 15m fe
my-nginx-v4-hayza 1/1 Running 0 14m fe
my-nginx-v4-mde6m 1/1 Running 0 18m fe
my-nginx-v4-sh6m8 1/1 Running 0 19m fe
my-nginx-v4-wfof4 1/1 Running 0 16m fe
```
For more information, please see [labels](labels.md) and [kubectl label](kubectl/kubectl_label.md) document.
## Updating annotations
Sometimes you want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
```console
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl get pods my-nginx-v4-9gw19 -o yaml
apiversion: v1
kind: pod
metadata:
annotations:
description: my frontend running nginx
...
```
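Annotations set this way can also be changed or removed later with the same command; a short sketch using the pod annotated above:
```console
# Replace the existing value (requires --overwrite)
$ kubectl annotate pods my-nginx-v4-9gw19 description='my nginx frontend' --overwrite
# Remove the annotation by suffixing the key with a dash
$ kubectl annotate pods my-nginx-v4-9gw19 description-
```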
For more information, please see the [annotations](annotations.md) and [kubectl annotate](kubectl/kubectl_annotate.md) documents.
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:
```console
$ kubectl scale rc my-nginx --replicas=3
replicationcontroller "my-nginx" scaled
$ kubectl get pods -lapp=nginx
NAME READY STATUS RESTARTS AGE
my-nginx-1jgkf 1/1 Running 0 3m
my-nginx-divi2 1/1 Running 0 1h
my-nginx-o0ef1 1/1 Running 0 1h
```
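If someone else may have resized the replication controller in the meantime, you can make the operation conditional on the current size, so the scale only proceeds when the precondition holds:
```console
# Scale down to 2 replicas only if there are currently exactly 3
$ kubectl scale rc my-nginx --current-replicas=3 --replicas=2
```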
To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:
```console
$ kubectl autoscale rc my-nginx --min=1 --max=3
replicationcontroller "my-nginx" autoscaled
$ kubectl get pods -lapp=nginx
NAME READY STATUS RESTARTS AGE
my-nginx-1jgkf 1/1 Running 0 3m
my-nginx-divi2 1/1 Running 0 3m
$ kubectl get horizontalpodautoscaler
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
nginx ReplicationController/nginx/scale 80% <waiting> 1 3 1m
```
For more information, please see the [kubectl scale](kubectl/kubectl_scale.md), [kubectl autoscale](kubectl/kubectl_autoscale.md) and [horizontal pod autoscaler](horizontal-pod-autoscaling/README.md) documents.
## Updating your application without a service outage
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.
Let's say you were running version 1.7.9 of nginx:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](../../docs/design/simple-rolling-update.md):
```console
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
```console
$ kubectl get pods -lapp=nginx -Ldeployment
NAME READY STATUS RESTARTS AGE DEPLOYMENT
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-divi2 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
my-nginx-o0ef1 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
my-nginx-q6all 1/1 Running 0 8m 2d1d7a8f682934a254002b56404b813e
```
`kubectl rolling-update` reports progress as it runs:
```console
Scaling up my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 0 to 3, scaling down my-nginx from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 1
Scaling my-nginx down to 2
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 2
Scaling my-nginx down to 1
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 3
Scaling my-nginx down to 0
Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
replicationcontroller "my-nginx" rolling updated
```
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
```console
$ kubectl rolling-update my-nginx --rollback
Setting "my-nginx" replicas to 1
Continuing update with existing controller my-nginx.
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 down to 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
replicationcontroller "my-nginx" rolling updated
```
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v4
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v4
  template:
    metadata:
      labels:
        app: nginx
        deployment: v4
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.2
        args: ["nginx", "-T"]
        ports:
        - containerPort: 80
```
and roll it out:
```console
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
Created my-nginx-v4
Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling my-nginx-v4 up to 1
Scaling my-nginx down to 3
Scaling my-nginx-v4 up to 2
Scaling my-nginx down to 2
Scaling my-nginx-v4 up to 3
Scaling my-nginx down to 1
Scaling my-nginx-v4 up to 4
Scaling my-nginx down to 0
Scaling my-nginx-v4 up to 5
Update succeeded. Deleting old controller: my-nginx
replicationcontroller "my-nginx-v4" rolling updated
```
You can also run the [update demo](update-demo/) to see a visual representation of the rolling update process.
## In-place updates of resources
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to update the image of one of your pod's containers.
### kubectl patch
Suppose you want to fix a typo in a pod's container image. One way to do that is with `kubectl patch`:
```console
# Suppose you have a pod with a container named "nginx" and its image "nignx" (typo),
# use container name "nginx" as a key to update the image from "nignx" (typo) to "nginx"
$ kubectl get pod my-nginx-1jgkf -o yaml
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - image: nignx
    name: nginx
...
$ kubectl patch pod my-nginx-1jgkf -p '{"spec":{"containers":[{"name":"nginx","image":"nginx"}]}}'
"my-nginx-1jgkf" patched
$ kubectl get pod my-nginx-1jgkf -o yaml
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - image: nginx
    name: nginx
...
```
The patch is specified using JSON.
The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.
For more information, please see the [kubectl patch](kubectl/kubectl_patch.md) document.
### kubectl edit
Alternatively, you may also update resources with `kubectl edit`:
```console
$ kubectl edit pod my-nginx-1jgkf
```
This is equivalent to first fetching the resource with `get`, editing it in a text editor, and then updating the resource with `replace`:
```console
$ kubectl get pod my-nginx-1jgkf -o yaml > /tmp/nginx.yaml
$ vi /tmp/nginx.yaml
# do some edit, and then save the file
$ kubectl replace -f /tmp/nginx.yaml
pod "my-nginx-1jgkf" replaced
$ rm /tmp/nginx.yaml
```
This allows you to make more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
For more information, please see the [kubectl edit](kubectl/kubectl_edit.md) document.
## Using configuration files
A more disciplined alternative to patch and edit is `kubectl apply`.
With apply, you can keep a set of configuration files in source control, where they can be maintained and versioned along with the code for the resources they configure. Then, when you're ready to push configuration changes to the cluster, you can run `kubectl apply`.
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
```console
$ kubectl apply -f ./nginx-rc.yaml
replicationcontroller "my-nginx-v4" configured
```
As shown in the example above, the configuration used with `kubectl apply` is the same as the one used with `kubectl replace`. However, instead of deleting the existing resource and replacing it with a new one, `kubectl apply` modifies the configuration of the existing resource.
Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
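Putting it together, a typical apply-based workflow is a small edit-and-push loop over the checked-in configuration file (shown here as a sketch with the `nginx-rc.yaml` file used above):
```console
# Edit the configuration under source control, e.g. bump the image tag or replica count
$ vi ./nginx-rc.yaml
# Push the change; kubectl applies only what differs from the last applied configuration
$ kubectl apply -f ./nginx-rc.yaml
```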
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a replication controller. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
```console
$ kubectl replace -f ./nginx-rc.yaml --force
replicationcontrollers "my-nginx-v4" replaced
```
## What's next?
- [Learn about how to use `kubectl` for application introspection and debugging.](introspection-and-debugging.md)
- [Configuration Best Practices and Tips](config-best-practices.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/managing-deployments/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,65 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Resource Usage Monitoring in Kubernetes
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](pods.md), [services](services.md), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/GoogleCloudPlatform/heapster), a project meant to provide a base monitoring platform on Kubernetes.
### Overview
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from each node's [Kubelet](../../DESIGN.md#kubelet), the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:
![overall monitoring architecture](monitoring-architecture.png)
Let's look at some of the other components in more detail.
### cAdvisor
cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers on the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the “root” container on the machine.
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:
![cAdvisor](cadvisor.png)
### Kubelet
The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.
## Storage Backends
### InfluxDB and Grafana
A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in pods, which expose themselves as Kubernetes services; that is how Heapster discovers them.
The Grafana container serves Grafana's UI, which provides an easy-to-configure dashboard interface. The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).
Here is a video showing how to monitor a Kubernetes cluster using heapster, InfluxDB and Grafana:
[![How to monitor a Kubernetes cluster using heapster, InfluxDB and Grafana](http://img.youtube.com/vi/SZgqjMrxo3g/0.jpg)](http://www.youtube.com/watch?v=SZgqjMrxo3g)
Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
![snapshot of the default Kubernetes Grafana dashboard](influx.png)
### Google Cloud Monitoring
Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert on important metrics in your application. Heapster can be set up to automatically push all collected metrics to Google Cloud Monitoring. These metrics are then available in the [Cloud Monitoring Console](https://app.google.stackdriver.com/). This storage backend is the easiest to set up and maintain. The monitoring console allows you to easily create and customize dashboards using the exported data.
Here is a video showing how to set up and run a Google Cloud Monitoring backed Heapster:
[![how to set up and run a Google Cloud Monitoring backed Heapster](http://img.youtube.com/vi/xSMNR2fcoLs/0.jpg)](http://www.youtube.com/watch?v=xSMNR2fcoLs)
Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage.
![Google Cloud Monitoring dashboard](gcm.png)
## Try it out!
Now that you've learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/kubernetes/heapster) is available on GitHub. It contains detailed instructions to set up Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues via the troubleshooting [channels](../troubleshooting.md).
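As a quick, non-authoritative check of whether your cluster already runs these add-ons, you can look at the add-on namespace; on many default setups the monitoring components are listed there (exact pod and service names vary by setup):
```console
$ kubectl cluster-info
$ kubectl get pods --namespace=kube-system
```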
***
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
This file has moved to: http://kubernetes.github.io/docs/user-guide/monitoring/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,95 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Namespaces
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
These virtual clusters are called namespaces.
## When to Use Multiple Namespaces
Namespaces are intended for use in environments with many users spread across multiple
teams, or projects. For clusters with a few to tens of users, you should not
need to create or think about namespaces at all. Start using namespaces when you
need the features they provide.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](../../docs/admin/resource-quota.md)).
In future versions of Kubernetes, objects in the same namespace will have the same
access control policies by default.
It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use [labels](labels.md) to distinguish
resources within the same namespace.
## Working with Namespaces
Creation and deletion of namespaces are described in the [Admin Guide documentation
for namespaces](../../docs/admin/namespaces.md).
### Viewing namespaces
You can list the current namespaces in a cluster using:
```console
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
### Setting the namespace for a request
To temporarily set the namespace for a request, use the `--namespace` flag.
For example:
```console
$ kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
$ kubectl --namespace=<insert-namespace-name-here> get pods
```
### Setting the namespace preference
You can permanently save the namespace for all subsequent kubectl commands in that
context.
First get your current context:
```console
$ export CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
```
Then update the default namespace:
```console
$ kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
# Validate it
$ kubectl config view | grep namespace:
```
## Namespaces and DNS
When you create a [Service](services.md), it creates a corresponding [DNS entry](../admin/dns.md).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
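For example, from inside a pod you could resolve a hypothetical `my-service` service living in a hypothetical `staging` namespace as follows; the short name only resolves from pods in the same namespace, while the FQDN works cluster-wide:
```console
# From a pod in the staging namespace, the short name is enough
$ nslookup my-service
# From a pod in any other namespace, use the fully qualified name
$ nslookup my-service.staging.svc.cluster.local
```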
## Not All Objects are in a Namespace
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
in some namespace. However, namespace resources are not themselves in a namespace,
and low-level resources, such as [nodes](../../docs/admin/node.md) and
persistentVolumes, are not in any namespace. Events are an exception: they may or may not
have a namespace, depending on the object the event is about.
This file has moved to: http://kubernetes.github.io/docs/user-guide/namespaces/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,148 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Node selection example
This example shows how to assign a [pod](../pods.md) to a specific [node](../../admin/node.md) or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances, such as ensuring that your pod ends up on a machine with an SSD attached to it.
### Step Zero: Prerequisites
This example assumes that you have a basic understanding of Kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/kubernetes/kubernetes#documentation).
### Step One: Attach label to the node
Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to.
Then, to add a label to the node you've chosen, run `kubectl label nodes <node-name> <label-key>=<label-value>`. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/kubernetes/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](../../../docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
### Step Two: Add a nodeSelector field to your pod configuration
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
```
Then add a nodeSelector like so:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd
```
When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
#### Alpha feature in Kubernetes v1.2: Node Affinity
During the first half of 2016 we are rolling out a new mechanism, called *affinity*, for controlling which nodes your pods will be scheduled onto.
Like `nodeSelector`, affinity is based on labels. But it allows you to write much more expressive rules.
`nodeSelector` will continue to work during the transition, but will eventually be deprecated.
Kubernetes v1.2 offers an alpha version of the first piece of the affinity mechanism, called [node affinity](../../design/nodeaffinity.md).
There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
in the sense that the former specifies rules that *must* be met for a pod to schedule onto a node (just like
`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
to how `nodeSelector` works, if labels on a node change at runtime such that the rules on a pod are no longer
met, the pod will still continue to run on the node. In the future we plan to offer
`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution`
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
Node affinity is currently expressed using an annotation on the Pod. In v1.3 it will use a field, and we will
also introduce the second piece of the affinity mechanism, called [pod affinity](../../design/podaffinity.md),
which allows you to control whether a pod schedules onto a particular node based on which other pods are
running on the node, rather than the labels on the node.
Here's an example of a pod that uses node affinity:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "kubernetes.io/e2e-az-name",
                    "operator": "In",
                    "values": ["e2e-az1", "e2e-az2"]
                  }
                ]
              }
            ]
          },
          "preferredDuringSchedulingIgnoredDuringExecution": [
            {
              "weight": 10,
              "preference": {"matchExpressions": [
                {
                  "key": "foo",
                  "operator": "In", "values": ["bar"]
                }
              ]}
            }
          ]
        }
      }
    another-annotation-key: another-annotation-value
spec:
  containers:
  - name: with-labels
    image: gcr.io/google_containers/pause:2.0
```
This node affinity rule says the pod can only be placed on a node with a label whose key is
`kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
among nodes that meet that criteria, nodes with a label whose key is `foo` and whose
value is `bar` should be preferred.
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
to be scheduled onto a candidate node.
### Built-in node labels
In addition to labels you [attach yourself](#step-one-attach-label-to-the-node), nodes come pre-populated
with a standard set of labels. As of Kubernetes v1.2 these labels are
* `kubernetes.io/hostname`
* `failure-domain.beta.kubernetes.io/zone`
* `failure-domain.beta.kubernetes.io/region`
* `beta.kubernetes.io/instance-type`
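You can inspect these labels (along with any you have attached yourself) and use them in a `nodeSelector` just like your own. For example:
```console
# Show all labels on every node
$ kubectl get nodes --show-labels
# Or surface one built-in label as its own column
$ kubectl get nodes -Lfailure-domain.beta.kubernetes.io/zone
```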
### Conclusion
While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Make sure the selector will match at least one node, however, because if it doesn't, the pod won't be scheduled at all.
This file has moved to: http://kubernetes.github.io/docs/user-guide/node-selection/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,27 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Overview
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. Kubernetes is intended to make deploying containerized/microservice-based applications easy but powerful.
Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions. An operations user should be able to launch a micro-service, letting the scheduler find the right placement. We also want to improve the tools and experience for how users can roll-out applications through patterns like canary deployments.
Kubernetes supports [Docker](http://www.docker.io) and [rkt](https://coreos.com/blog/rocket/) containers, and other container image formats and container runtimes will be supported in the future.
While Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases.
In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes.md), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller.md), which we discuss next.
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller.md). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).
Kubernetes supports a unique [networking model](../admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](../admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object's name, and the object's [namespace](namespaces.md). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. The UID is unique across time and space.
This file has moved to: http://kubernetes.github.io/docs/user-guide/overview/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,198 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Persistent Volumes and Claims
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](volumes.md) is suggested.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Persistent Volumes and Claims](#persistent-volumes-and-claims)
- [Introduction](#introduction)
- [Lifecycle of a volume and claim](#lifecycle-of-a-volume-and-claim)
- [Provisioning](#provisioning)
- [Binding](#binding)
- [Using](#using)
- [Releasing](#releasing)
- [Reclaiming](#reclaiming)
- [Types of Persistent Volumes](#types-of-persistent-volumes)
- [Persistent Volumes](#persistent-volumes)
- [Capacity](#capacity)
- [Access Modes](#access-modes)
- [Recycling Policy](#recycling-policy)
- [Phase](#phase)
- [PersistentVolumeClaims](#persistentvolumeclaims)
- [Access Modes](#access-modes)
- [Resources](#resources)
- [Claims As Volumes](#claims-as-volumes)
<!-- END MUNGE: GENERATED_TOC -->
## Introduction
Managing storage is a distinct problem from managing compute. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`.
A `PersistentVolume` (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Please see the [detailed walkthrough with working examples](persistent-volumes/).
## Lifecycle of a volume and claim
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:
### Provisioning
A cluster administrator will create a number of PVs. They carry the details of the real storage which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
### Binding
A user creates a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. The user will always get at least what they asked for, but the volume may be in excess of what was requested.
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
### Using
Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For volumes which support multiple access modes, the user specifies which mode desired when using their claim as a volume in a pod.
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
### Releasing
When a user is done with their volume, they can delete the PVC objects from the API which allows reclamation of the resource. The volume is considered "released" when the claim is deleted, but it is not yet available for another claim. The previous claimant's data remains on the volume which must be handled according to policy.
### Reclaiming
The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released. Currently, volumes can either be Retained or Recycled. Retention allows for manual reclamation of the resource. For those volume plugins that support it, recycling performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
## Types of Persistent Volumes
`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins:
* GCEPersistentDisk
* AWSElasticBlockStore
* NFS
* iSCSI
* RBD (Ceph Block Device)
* Glusterfs
* HostPath (single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
## Persistent Volumes
Each PV contains a spec and status, which is the specification and status of the volume.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
```
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](../design/resources.md) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
### Access Modes
A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
The access modes are:
* ReadWriteOnce -- the volume can be mounted as read-write by a single node
* ReadOnlyMany -- the volume can be mounted read-only by many nodes
* ReadWriteMany -- the volume can be mounted as read-write by many nodes
In the CLI, the access modes are abbreviated to:
* RWO - ReadWriteOnce
* ROX - ReadOnlyMany
* RWX - ReadWriteMany
> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
### Recycling Policy
Current recycling policies are:
* Retain -- manual reclamation
* Recycle -- basic scrub ("rm -rf /thevolume/*")
Currently, NFS and HostPath support recycling.
### Phase
A volume will be in one of the following phases:
* Available -- a free resource that is not yet bound to a claim
* Bound -- the volume is bound to a claim
* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
* Failed -- the volume has failed its automatic reclamation
The CLI will show the name of the PVC bound to the PV.
## PersistentVolumeClaims
Each PVC contains a spec and status, which is the specification and status of the claim.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```
### Access Modes
Claims use the same conventions as volumes when requesting storage with specific access modes.
### Resources
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](../design/resources.md) applies to both volumes and claims.
## Claims As Volumes
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the pod.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```
This file has moved to: http://kubernetes.github.io/docs/user-guide/persistent-volumes/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,100 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# How To Use Persistent Volumes
The purpose of this guide is to help you become familiar with [Kubernetes Persistent Volumes](../persistent-volumes.md). By the end of the guide, we'll have
nginx serving content from your persistent volume.
This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
See [Persistent Storage design document](../../design/persistent-storage.md) for more information.
## Provisioning
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. `HostPath` was included
for ease of development and testing. You'll create a local `HostPath` for this example.
> IMPORTANT! For `HostPath` to work, you will need to run a single node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.
```console
# This will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```
PVs are created by posting them to the API server.
```console
$ kubectl create -f docs/user-guide/persistent-volumes/volumes/local-01.yaml
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Available
```
## Requesting storage
Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned.
They just know they can rely on their claim to storage and can manage its lifecycle independently from the many pods that may use it.
Claims must be created in the same namespace as the pods that use them.
```console
$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml
$ kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[]
# A background process will attempt to match this claim to a volume.
# The eventual state of your claim will look something like this:
$ kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[] Bound pv0001
$ kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
```
## Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
```console
$ kubectl create -f docs/user-guide/persistent-volumes/simpletest/pod.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1h
$ kubectl create -f docs/user-guide/persistent-volumes/simpletest/service.json
$ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
frontendservice 10.0.0.241 <none> 3000/TCP name=frontendhttp 1d
kubernetes 10.0.0.2 <none> 443/TCP <none> 2d
```
## Next steps
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you
need to disable SELinux (setenforce 0).
```console
$ curl 10.0.0.241:3000
I love Kubernetes storage!
```
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join the team on [Slack](../../troubleshooting.md#slack) and ask!
Enjoy!
This file has moved to: http://kubernetes.github.io/docs/user-guide/persistent-volumes/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,126 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# The life of a pod
Updated: 4/14/2015
This document covers the lifecycle of a pod. It is not an exhaustive document, but an introduction to the topic.
## Pod Phase
Consistent with the overall [API convention](../devel/api-conventions.md#typical-status-properties), phase is a simple, high-level summary of where a pod is in its lifecycle. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine.
The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`.
* Pending: The pod has been accepted by the system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
* Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
* Succeeded: All containers in the pod have terminated in success, and will not be restarted.
* Failed: All containers in the pod have terminated, at least one container has terminated in failure (exited with non-zero exit status or was terminated by the system).
* Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.
## Pod Conditions
A pod containing containers that specify readiness probes will also report the Ready condition. Condition status values may be `True`, `False`, or `Unknown`.
## Container Probes
A [Probe](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Probe) is a diagnostic performed periodically by the kubelet on a container. Specifically the diagnostic is one of three [Handlers](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler):
* `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0.
* `TCPSocketAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open.
* `HTTPGetAction`: performs an HTTP Get against the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.
Each probe will have one of three results:
* `Success`: indicates that the container passed the diagnostic.
* `Failure`: indicates that the container failed the diagnostic.
* `Unknown`: indicates that the diagnostic failed so no action should be taken.
Currently, the kubelet optionally performs two independent diagnostics on running containers which trigger action:
* `LivenessProbe`: indicates whether the container is *live*, i.e. still running. The LivenessProbe hints to the kubelet when a container is unhealthy. If the LivenessProbe fails, the kubelet will kill the container and the container will be subjected to its [RestartPolicy](#restartpolicy). The default state of Liveness before the initial delay is `Success`. The state of Liveness for a container when no probe is provided is assumed to be `Success`. (See the example after this list.)
* `ReadinessProbe`: indicates whether the container is *ready* to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod's IP address from the endpoints of all services that match the pod. Thus, the ReadinessProbe is sometimes useful to signal to the endpoints controller that even though a pod may be running, it should not receive traffic from the proxy (e.g. the container has a long startup time before it starts listening or the container is down for maintenance). The default state of Readiness before the initial delay is `Failure`. The state of Readiness for a container when no probe is provided is assumed to be `Success`.
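As a concrete illustration, here is a minimal pod sketch with an HTTP liveness probe; the pod name and image are illustrative, and the image is assumed to serve a `/healthz` endpoint on port 8080:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                         # illustrative name
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/liveness  # assumed to serve /healthz on port 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15                 # give the container time to start before probing
      timeoutSeconds: 1
```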
## Container Statuses
More detailed information about the current (and previous) container statuses can be found in [ContainerStatuses](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#PodStatus). The information reported depends on the current [ContainerState](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#ContainerState), which may be Waiting, Running, or Terminated.
## RestartPolicy
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always` (a minimal pod spec setting this field appears at the end of this section). RestartPolicy applies to all containers in the pod, and only refers to restarts of the containers by the Kubelet on the same node. Failed containers that are restarted by the Kubelet are restarted with an exponential back-off delay (in multiples of the sync frequency: 0, 1x, 2x, 4x, 8x ...), capped at 5 minutes and reset after 10 minutes of successful execution. As discussed in the [pods document](pods.md#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
Three types of controllers are currently available:
- Use a [`Job`](jobs.md) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](replication-controller.md) for pods which are not expected to
terminate (e.g. web servers).
- Use a [`DaemonSet`](../admin/daemons.md) for pods which need to run one per machine because they provide a
machine-specific system service.
If you are unsure whether to use a ReplicationController or a DaemonSet, see [Daemon Set versus
Replication Controller](../admin/daemons.md#daemon-set-versus-replication-controller).
`ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`.
`Job` is *only* appropriate for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
All three types of controllers contain a PodTemplate, which has all the same fields as a Pod.
It is recommended to create the appropriate controller and let it create pods, rather than to
directly create pods yourself. That is because pods alone are not resilient to machine failures,
but Controllers are.
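As a minimal illustration of the policy values discussed above, here is a hypothetical pod spec that sets `restartPolicy` explicitly; the container is only restarted if it exits with a failure:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pi-once                 # illustrative name
spec:
  restartPolicy: OnFailure      # restart the container only if it fails
  containers:
  - name: pi
    image: perl                 # illustrative image; computes pi and then exits
    command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
```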
## Pod lifetime
In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`, or another controller. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.
If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.
## Examples
* Pod is `Running`, 1 container, container exits success
* Log completion event
* If RestartPolicy is:
* Always: restart container, pod stays `Running`
* OnFailure: pod becomes `Succeeded`
* Never: pod becomes `Succeeded`
* Pod is `Running`, 1 container, container exits failure
* Log failure event
* If RestartPolicy is:
* Always: restart container, pod stays `Running`
* OnFailure: restart container, pod stays `Running`
* Never: pod becomes `Failed`
* Pod is `Running`, 2 containers, container 1 exits failure
* Log failure event
* If RestartPolicy is:
* Always: restart container, pod stays `Running`
* OnFailure: restart container, pod stays `Running`
* Never: pod stays `Running`
* When container 2 exits...
* Log failure event
* If RestartPolicy is:
* Always: restart container, pod stays `Running`
* OnFailure: restart container, pod stays `Running`
* Never: pod becomes `Failed`
* Pod is `Running`, a container runs out of memory (OOM)
* Container terminates in failure
* Log OOM event
* If RestartPolicy is:
* Always: restart container, pod stays `Running`
* OnFailure: restart container, pod stays `Running`
* Never: log failure event, pod becomes `Failed`
* Pod is `Running`, a disk dies
* All containers are killed
* Log appropriate event
* Pod becomes `Failed`
* If running under a controller, pod will be recreated elsewhere
* Pod is `Running`, its node is segmented out
* NodeController waits for timeout
* NodeController marks pod `Failed`
* If running under a controller, pod will be recreated elsewhere
This file has moved to: http://kubernetes.github.io/docs/user-guide/pod-states/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -27,18 +27,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Pod Templates
Pod templates are [pod](pods.md) specifications which are included in other objects, such as
[Replication Controllers](replication-controller.md), [Jobs](jobs.md), and
[DaemonSets](../admin/daemons.md). Controllers use Pod Templates to make actual pods.
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive.
## Future Work
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](http://issue.k8s.io/170).
This file has moved to: http://kubernetes.github.io/docs/user-guide/pod-templates/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -32,201 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Pods
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Pods](#pods)
- [What is a _pod_?](#what-is-a-pod)
- [Motivation for pods](#motivation-for-pods)
- [Management](#management)
- [Resource sharing and communication](#resource-sharing-and-communication)
- [Uses of pods](#uses-of-pods)
- [Alternatives considered](#alternatives-considered)
- [Durability of pods (or lack thereof)](#durability-of-pods-or-lack-thereof)
- [Termination of Pods](#termination-of-pods)
- [Privileged mode for pod containers](#privileged-mode-for-pod-containers)
- [API Object](#api-object)
<!-- END MUNGE: GENERATED_TOC -->
_pods_ are the smallest deployable units of computing that can be created and
managed in Kubernetes.
## What is a _pod_?
A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers
(such as Docker containers), the shared storage for those containers, and
options about how to run the containers. Pods are always co-located and
co-scheduled, and run in a shared context. A pod models an
application-specific "logical host" - it contains one or more application
containers which are relatively tightly coupled &mdash; in a pre-container
world, they would have executed on the same physical or virtual machine.
While Kubernetes supports more container runtimes than just Docker, Docker is
the most commonly known runtime, and it helps to describe pods in Docker terms.
The shared context of a pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation - the same things that isolate a Docker
container. Within a pod's context, the individual applications may have
further sub-isolations applied.
Containers within a pod share an IP address and port space, and
can find each other via `localhost`. They can also communicate with each
other using standard inter-process communications like SystemV semaphores or
POSIX shared memory. Containers in different pods have distinct IP addresses
and cannot communicate by IPC.
Applications within a pod also have access to shared volumes, which are defined
as part of a pod and are made available to be mounted into each application's
filesystem.
In terms of [Docker](https://www.docker.com/) constructs, a pod is modelled as
a group of Docker containers with shared namespaces and shared
[volumes](volumes.md). PID namespace sharing is not yet implemented in Docker.
Like individual application containers, pods are considered to be relatively
ephemeral (rather than durable) entities. As discussed in [life of a
pod](pod-states.md), pods are created, assigned a unique ID (UID), and
scheduled to nodes where they remain until termination (according to restart
policy) or deletion. If a node dies, the pods scheduled to that node are
deleted, after a timeout period. A given pod (as defined by a UID) is not
"rescheduled" to a new node; instead, it can be replaced by an identical pod,
with even the same name if desired, but with a new UID (see [replication
controller](replication-controller.md) for more details). (In the future, a
higher-level API may support pod migration.)
When something is said to have the same lifetime as a pod, such as a volume,
that means that it exists as long as that pod (with that UID) exists. If that
pod is deleted for any reason, even if an identical replacement is created, the
related thing (e.g. volume) is also destroyed and created anew.
## Motivation for pods
### Management
Pods are a model of the pattern of multiple cooperating processes which form a
cohesive unit of service. They simplify application deployment and management
by providing a higher-level abstraction than the set of their constituent
applications. Pods serve as the unit of deployment, horizontal scaling, and
replication. Colocation (co-scheduling), shared fate (e.g. termination),
coordinated replication, resource sharing, and dependency management are
handled automatically for containers in a pod.
### Resource sharing and communication
Pods enable data sharing and communication among their constituents.
The applications in a pod all use the same network namespace (same IP and port
space), and can thus "find" each other and communicate using `localhost`.
Because of this, applications in a pod must coordinate their usage of ports.
Each pod has an IP address in a flat shared networking space that has full
communication with other physical computers and pods across the network.
The hostname is set to the pod's Name for the application containers within the
pod. [More details on networking](../admin/networking.md).
In addition to defining the application containers that run in the pod, the pod
specifies a set of shared storage volumes. Volumes enable data to survive
container restarts and to be shared among the applications within the pod.
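As a sketch of how this sharing looks in practice, two containers in one pod can exchange data through a shared volume and share the pod's network namespace. The pod, container names, and commands below are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo      # placeholder name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                 # shared storage that lives as long as the pod
  containers:
  - name: producer
    image: busybox               # placeholder image
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: consumer
    image: busybox
    # Both containers share one network namespace, so any port the producer
    # listened on would also be reachable here via localhost.
    command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```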
## Uses of pods
Pods can be used to host vertically integrated application stacks (e.g. LAMP),
but their primary motivation is to support co-located, co-managed helper
programs, such as:
* content management systems, file and data loaders, local cache managers, etc.
* log and checkpoint backup, compression, rotation, snapshotting, etc.
* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
* proxies, bridges, and adapters
* controllers, managers, configurators, and updaters
In general, individual pods are not intended to run multiple instances of the
same application.
For a longer explanation, see [The Distributed System ToolKit: Patterns for
Composite
Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html).
## Alternatives considered
_Why not just run multiple programs in a single (Docker) container?_
1. Transparency. Making the containers within the pod visible to the
infrastructure enables the infrastructure to provide services to those
containers, such as process management and resource monitoring. This
facilitates a number of conveniences for users.
2. Decoupling software dependencies. The individual containers may be
versioned, rebuilt and redeployed independently. Kubernetes may even support
live updates of individual containers someday.
3. Ease of use. Users don't need to run their own process managers, worry about
signal and exit-code propagation, etc.
4. Efficiency. Because the infrastructure takes on more responsibility,
containers can be lighter weight.
_Why not support affinity-based co-scheduling of containers?_
That approach would provide co-location, but would not provide most of the
benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and
simplified management.
## Durability of pods (or lack thereof)
Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](replication-controller.md)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
Pod is exposed as a primitive in order to facilitate:
* scheduler and controller pluggability
* support for pod-level operations without the need to "proxy" them via controller APIs
* decoupling of pod lifetime from controller lifetime, such as for bootstrapping
* decoupling of controllers and services &mdash; the endpoint controller just watches pods
* clean composition of Kubelet-level functionality with cluster-level functionality &mdash; Kubelet is effectively the "pod controller"
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949)
The current best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. If you find this cumbersome, please comment on [issue #260](http://issue.k8s.io/260).
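A minimal sketch of that pattern is a single-replica replication controller plus a service selecting the same label. The `my-pet` name, image, and ports below are placeholders.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-pet                    # placeholder name
spec:
  replicas: 1                     # a singleton, still managed by a controller
  selector:
    app: my-pet
  template:
    metadata:
      labels:
        app: my-pet
    spec:
      containers:
      - name: my-pet
        image: myrepo/my-pet:v1   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-pet
spec:
  selector:
    app: my-pet
  ports:
  - port: 80
    targetPort: 8080
```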
## Termination of Pods
Because pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a pod the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired the KILL signal is sent to those processes and the pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.
An example flow:
1. User sends command to delete Pod, with default grace period (30s)
2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period.
3. Pod shows up as "Terminating" when listed in client commands
4. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
1. If the pod has defined a [preStop hook](container-environment.md#hook-details), it is invoked inside of the pod. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
2. The processes in the Pod are sent the TERM signal.
5. (simultaneous with 3) The Pod is removed from the endpoints list for services, and is no longer considered part of the set of running pods for replication controllers. Pods that shut down slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
6. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
7. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client.
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option, which allows a user to override the default and specify their own value. The value `0` indicates that the delete should be immediate, and removes the pod in the API immediately so a new pod can be created with the same name. On the node, pods that are set to terminate immediately will still be given a small grace period before being force-killed.
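The grace period can also be set per pod in its spec. A minimal sketch (the pod name is a placeholder) overriding the 30-second default:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo    # placeholder name
spec:
  # Time allowed between the TERM signal and the KILL signal on deletion
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx
```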
## Privileged mode for pod containers
From Kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate pods that don't need to be compiled into the kubelet.
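For example, a container could request privileged mode like this (a sketch; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo               # placeholder name
spec:
  containers:
  - name: network-plugin
    image: myrepo/network-plugin:v1   # placeholder image
    securityContext:
      privileged: true                # grants near-host privileges to this container
```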
If the master is running Kubernetes v1.1 or higher and the nodes are running a version lower than v1.1, then new privileged pods will be accepted by the API server but will not be launched; they will remain in a pending state.
If the user runs `kubectl describe pod FooPodName`, they can see the reason why the pod is in a pending state. The events table in the `describe` output will say:
`Error validating pod "FooPodName"."FooPodNamespace" from api, ignoring: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc2089d3248)true'`
If the master is running a version lower than v1.1, then privileged pods cannot be created. If a user attempts to create a pod that has a privileged container, they will get the following error:
`The Pod "FooPodName" is invalid.
spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true'`
## API Object
Pod is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at: [Pod API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_pod).
This file has moved to: http://kubernetes.github.io/docs/user-guide/pods/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,61 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Prerequisites
To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](kubectl/kubectl.md). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
## Installing kubectl
If you downloaded a pre-compiled [release](https://github.com/kubernetes/kubernetes/releases), kubectl should be under `platforms/<os>/<arch>` from the tar bundle.
If you built from source, kubectl should be either under `_output/local/bin/<os>/<arch>` or `_output/dockerized/bin/<os>/<arch>`.
The kubectl binary doesn't have to be installed to be executable, but the rest of the walkthrough will assume that it's in your PATH.
The simplest way to install is to copy or move kubectl into a dir already in PATH (e.g. `/usr/local/bin`). For example:
```console
# OS X
$ sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
# Linux
$ sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
```
You also need to ensure it's executable:
```console
$ sudo chmod +x /usr/local/bin/kubectl
```
If you prefer not to copy kubectl, you need to ensure the tool is in your path:
```bash
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
## Configuring kubectl
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](kubeconfig-file.md), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](../../docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](sharing-clusters.md).
By default, kubectl configuration lives at `~/.kube/config`.
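For reference, a kubeconfig file is plain YAML. A heavily trimmed sketch follows; the server address, file paths, and names are placeholders, and a real file generated by kube-up.sh will contain more entries.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                     # placeholder
  cluster:
    server: https://1.2.3.4            # API server address (placeholder)
    certificate-authority: /path/to/ca.crt
users:
- name: my-user                        # placeholder
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
```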
#### Making sure you're ready
Check that kubectl is properly configured by getting the cluster state:
```console
$ kubectl cluster-info
```
If you see a URL response, you are ready to go.
## What's next?
[Learn how to launch and expose your application.](quick-start.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/prereqs/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,361 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Working with pods and containers in production
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Working with pods and containers in production](#kubernetes-user-guide-managing-applications-working-with-pods-and-containers-in-production)
- [Persistent storage](#persistent-storage)
- [Distributing credentials](#distributing-credentials)
- [Authenticating with a private image registry](#authenticating-with-a-private-image-registry)
- [Helper containers](#helper-containers)
- [Resource management](#resource-management)
- [Liveness and readiness probes (aka health checks)](#liveness-and-readiness-probes-aka-health-checks)
- [Lifecycle hooks and termination notice](#lifecycle-hooks-and-termination-notice)
- [Termination message](#termination-message)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
You've seen [how to configure and deploy pods and containers](configuring-containers.md), using some of the most common configuration parameters. This section dives into additional features that are especially useful for running applications in production.
## Persistent storage
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](volumes.md). This is especially important to stateful applications, such as key-value stores and databases.
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](../../examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: redis
spec:
template:
metadata:
labels:
app: redis
tier: backend
spec:
# Provision a fresh volume for the pod
volumes:
- name: data
emptyDir: {}
containers:
- name: redis
image: kubernetes/redis:v1
ports:
- containerPort: 6379
# Mount the volume into the pod
volumeMounts:
- mountPath: /redis-master-data
name: data # must match the name of the volume, above
```
`emptyDir` volumes live for the lifespan of the [pod](pods.md), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](volumes.md) for more details.
## Distributing credentials
Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.
Kubernetes provides a mechanism, called [*secrets*](secrets.md), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
password: dmFsdWUtMg0K
username: dmFsdWUtMQ0K
```
As with other resources, this secret can be instantiated using `create` and can be viewed with `get`:
```console
$ kubectl create -f ./secret.yaml
secrets/mysecret
$ kubectl get secrets
NAME TYPE DATA
default-token-v9pyz kubernetes.io/service-account-token 2
mysecret Opaque 2
```
To use the secret, you need to reference it in a pod or pod template. The `secret` volume source enables you to mount it as an in-memory directory into your containers.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: redis
spec:
template:
metadata:
labels:
app: redis
tier: backend
spec:
volumes:
- name: data
emptyDir: {}
- name: supersecret
secret:
secretName: mysecret
containers:
- name: redis
image: kubernetes/redis:v1
ports:
- containerPort: 6379
# Mount the volume into the pod
volumeMounts:
- mountPath: /redis-master-data
name: data # must match the name of the volume, above
- mountPath: /var/run/secrets/super
name: supersecret
```
For more details, see the [secrets document](secrets.md), [example](secrets/) and [design doc](../../docs/design/secrets.md).
## Authenticating with a private image registry
Secrets can also be used to pass [image registry credentials](images.md#using-a-private-registry).
First, create a `.docker/config.json`, such as by running `docker login <registry.domain>`.
Then put the resulting `.docker/config.json` file into a [secret resource](secrets.md). For example:
```console
$ docker login
Username: janedoe
Password: ●●●●●●●●●●●
Email: jdoe@example.com
WARNING: login credentials saved in /Users/jdoe/.docker/config.json.
Login Succeeded
$ echo $(cat ~/.docker/config.json)
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }
$ cat ~/.docker/config.json | base64
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
$ cat > /tmp/image-pull-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
data:
.dockerconfigjson: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
type: kubernetes.io/dockerconfigjson
EOF
$ kubectl create -f /tmp/image-pull-secret.yaml
secrets/myregistrykey
```
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: foo
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
```
## Helper containers
[Pods](pods.md) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](https://github.com/kubernetes/contrib/tree/master/git-sync) for new updates:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: www-data
emptyDir: {}
containers:
- name: nginx
image: nginx
# This container reads from the www-data volume
volumeMounts:
- mountPath: /srv/www
name: www-data
readOnly: true
- name: git-monitor
image: myrepo/git-monitor
env:
- name: GIT_REPO
value: http://github.com/some/repo.git
# This container writes to the www-data volume
volumeMounts:
- mountPath: /data
name: www-data
```
More examples can be found in our [blog article](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) and [presentation slides](http://www.slideshare.net/Docker/slideshare-burns).
## Resource management
Kubernetes's scheduler will place applications only where they have adequate CPU and memory, but it can only do so if it knows what [resources they require](compute-resources.md). The consequence of specifying too little CPU is that the containers could be starved of CPU if too many other containers were scheduled onto the same node. Similarly, containers could die unpredictably due to running out of memory if no memory were requested, which can be especially likely for large-memory applications.
If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](../admin/limitrange/) for the default [Namespace](namespaces.md). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
resources:
limits:
# cpu units are cores
cpu: 500m
# memory units are bytes
memory: 64Mi
requests:
# cpu units are cores
cpu: 500m
# memory units are bytes
memory: 64Mi
```
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying a request, the pod is guaranteed to be able to use that much of the resource when needed. See [Resource QoS](../proposals/resource-qos.md) for the difference between resource limits and requests.
If you're not sure how many resources to request, you can first launch the application without specifying resources, and use [resource usage monitoring](monitoring.md) to determine appropriate values.
## Liveness and readiness probes (aka health checks)
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by restarting them. Kubernetes provides [*liveness probes*](pod-states.md#container-probes) to detect and remedy such situations.
A common way to probe an application is using HTTP, which can be specified as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
livenessProbe:
httpGet:
# Path to probe; should be cheap, but representative of typical behavior
path: /index.html
port: 80
initialDelaySeconds: 30
timeoutSeconds: 1
```
Other times, applications are only temporarily unable to serve, and will recover on their own. Typically in such cases you'd prefer not to kill the application, but don't want to send it requests, either, since the application won't respond correctly or at all. A common such scenario is loading large data or configuration files during application startup. Kubernetes provides *readiness probes* to detect and mitigate such situations. Readiness probes are configured similarly to liveness probes, just using the `readinessProbe` field. A pod with containers reporting that they are not ready will not receive traffic through Kubernetes [services](connecting-applications.md).
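For instance, the nginx liveness example above could instead (or additionally) declare a readiness probe. This is only a sketch reusing the same setup; the probe path and timings are illustrative.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        # The pod receives no service traffic until this probe succeeds
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
```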
For more details (e.g., how to specify command-based probes), see the [example in the walkthrough](walkthrough/k8s201.md#health-checking), the [standalone example](liveness/), and the [documentation](pod-states.md#container-probes).
## Lifecycle hooks and termination notice
Of course, nodes and applications may fail at any time, but many applications benefit from clean shutdown, such as to complete in-flight requests, when the termination of the application is deliberate. To support such cases, Kubernetes supports two kinds of notifications:
* Kubernetes will send SIGTERM to applications, which can be handled in order to effect graceful termination. SIGKILL is sent a configurable number of seconds later if the application does not terminate sooner (defaults to 30 seconds, controlled by `spec.terminationGracePeriodSeconds`).
* Kubernetes supports the (optional) specification of a [*pre-stop lifecycle hook*](container-environment.md#container-hooks), which will execute prior to sending SIGTERM.
The specification of a pre-stop hook is similar to that of probes, but without the timing-related parameters. For example:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
lifecycle:
preStop:
exec:
# SIGTERM triggers a quick exit; gracefully terminate instead
command: ["/usr/sbin/nginx","-s","quit"]
```
## Termination message
In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](kubectl/kubectl.md) or the [UI](ui.md), in addition to general [log collection](logging.md). It is possible to specify a `terminationMessagePath` where a container will write its "death rattle", such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
Here is a toy example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-w-message
spec:
containers:
- name: messager
image: "ubuntu:14.04"
command: ["/bin/sh","-c"]
args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
```
The message is recorded along with the other state of the last (i.e., most recent) termination:
```console
$ kubectl create -f ./pod.yaml
pods/pod-w-message
$ sleep 70
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
Sleep expired
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.exitCode}}{{end}}"
0
```
## What's next?
[Learn more about managing deployments.](managing-deployments.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/production-pods/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,85 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Quick start
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Quick start](#kubernetes-user-guide-managing-applications-quick-start)
- [Launching a simple application](#launching-a-simple-application)
- [Exposing your application to the Internet](#exposing-your-application-to-the-internet)
- [Killing the application](#killing-the-application)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
This guide will help you get oriented to Kubernetes and running your first containers on the cluster. If you are already familiar with the docker-cli, you can also check out the docker-cli to kubectl migration guide [here](docker-cli-to-kubectl.md).
## Launching a simple application
Once your application is packaged into a container and pushed to an image registry, you're ready to deploy it to Kubernetes.
For example, [nginx](http://wiki.nginx.org/Main) is a popular HTTP server, with a [pre-built container on Docker hub](https://registry.hub.docker.com/_/nginx/). The [`kubectl run`](kubectl/kubectl_run.md) command below will create two nginx replicas, listening on port 80.
```console
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx my-nginx nginx run=my-nginx 2
```
You can see that they are running by:
```console
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-l8n3i 1/1 Running 0 29m
my-nginx-q7jo3 1/1 Running 0 29m
```
Kubernetes will ensure that your application keeps running, by automatically restarting containers that fail, spreading containers across nodes, and recreating containers on new nodes when nodes fail.
## Exposing your application to the Internet
Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application. To do this run:
```console
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
```
To find the public IP address assigned to your application, execute:
```console
$ kubectl get svc my-nginx
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.179.240.1 25.1.2.3 80/TCP run=nginx 8d
```
You may need to wait for a minute or two for the external IP address to be provisioned.
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a [firewall to allow traffic on port 80](services-firewalls.md).
If you're running on AWS, Kubernetes creates an ELB for you. ELBs use host
names, not IPs, so you will have to do `kubectl describe svc my-nginx` and look
for the `LoadBalancer Ingress` host name. Traffic from external IPs is allowed
automatically.
## Killing the application
To kill the application and delete its containers and public IP address, do:
```console
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
$ kubectl delete svc my-nginx
services/my-nginx
```
## What's next?
[Learn about how to configure common container parameters, such as commands and environment variables.](configuring-containers.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/quick-start/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,307 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Replication Controller
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Replication Controller](#replication-controller)
- [What is a _replication controller_?](#what-is-a-replication-controller)
- [Running an example Replication Controller](#running-an-example-replication-controller)
- [Writing a Replication Controller Spec](#writing-a-replication-controller-spec)
- [Pod Template](#pod-template)
- [Labels on the Replication Controller](#labels-on-the-replication-controller)
- [Pod Selector](#pod-selector)
- [Multiple Replicas](#multiple-replicas)
- [Working with Replication Controllers](#working-with-replication-controllers)
- [Deleting a Replication Controller and its Pods](#deleting-a-replication-controller-and-its-pods)
- [Deleting just a Replication Controller](#deleting-just-a-replication-controller)
- [Isolating pods from a Replication Controller](#isolating-pods-from-a-replication-controller)
- [Common usage patterns](#common-usage-patterns)
- [Rescheduling](#rescheduling)
- [Scaling](#scaling)
- [Rolling updates](#rolling-updates)
- [Multiple release tracks](#multiple-release-tracks)
- [Using Replication Controllers with Services](#using-replication-controllers-with-services)
- [Writing programs for Replication](#writing-programs-for-replication)
- [Responsibilities of the replication controller](#responsibilities-of-the-replication-controller)
- [API Object](#api-object)
- [Alternatives to Replication Controller](#alternatives-to-replication-controller)
- [Bare Pods](#bare-pods)
- [Job](#job)
- [DaemonSet](#daemonset)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _replication controller_?
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one
time. In other words, a replication controller makes sure that a pod or homogeneous set of pods are
always up and available.
If there are too many pods, it will kill some. If there are too few, the
replication controller will start more. Unlike manually created pods, the pods maintained by a
replication controller are automatically replaced if they fail, get deleted, or are terminated.
For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, we recommend that you use a replication controller even if your application requires
only a single pod. You can think of a replication controller as something similar to a process supervisor,
but rather than individual processes on a single node, the replication controller supervises multiple pods
across multiple nodes.
Replication Controller is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in
kubectl commands.
A simple case is to create 1 Replication Controller object in order to reliably run one instance of
a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
service, such as web servers.
## Running an example Replication Controller
Here is an example Replication Controller config. It runs 3 copies of the nginx web server.
<!-- BEGIN MUNGE: EXAMPLE replication.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 3
selector:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
[Download example](replication.yaml?raw=true)
<!-- END MUNGE: EXAMPLE replication.yaml -->
Run the example replication controller by downloading the example file and then running this command:
```console
$ kubectl create -f ./replication.yaml
replicationcontrollers/nginx
```
Check on the status of the replication controller using this command:
```console
$ kubectl describe replicationcontrollers/nginx
Name: nginx
Namespace: default
Image(s): nginx
Selector: app=nginx
Labels: app=nginx
Replicas: 3 current / 3 desired
Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1 {replication-controller } SuccessfulCreate Created pod: nginx-qrm3m
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1 {replication-controller } SuccessfulCreate Created pod: nginx-3ntk0
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1 {replication-controller } SuccessfulCreate Created pod: nginx-4ok8v
```
Here, 3 pods have been made, but none are running yet, perhaps because the image is being pulled.
A little later, the same command may show:
```
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
```
To list all the pods that belong to the rc in a machine readable form, you can use a command like this:
```console
$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```
Here, the selector is the same as the selector for the replication controller (seen in the
`kubectl describe` output, and in a different form in `replication.yaml`). The `--output=jsonpath` option
specifies an expression that just gets the name from each pod in the returned list.
## Writing a Replication Controller Spec
As with all other Kubernetes config, a Replication Controller needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](simple-yaml.md),
[here](configuring-containers.md), and [here](working-with-resources.md).
A Replication Controller also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to the required fields for a Pod, a pod template in a replication controller must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](pod-states.md) equal to `Always` is allowed, which is the default
if not specified.
For local container restarts, replication controllers delegate to an agent on the node,
for example the [Kubelet](../admin/kubelet.md) or Docker.
### Labels on the Replication Controller
The replication controller can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affect the behavior of the replication controller.
### Pod Selector
The `.spec.selector` field is a [label selector](labels.md#label-selectors). A replication
controller manages all the pods with labels which match the selector. It does not distinguish
between pods which it created or deleted versus pods which some other person or process created or
deleted. This allows the replication controller to be replaced without affecting the running pods.
If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
`.spec.template.metadata.labels`.
Also you should not normally create any pods whose labels match this selector, either directly, via
another ReplicationController or via another controller such as Job. Otherwise, the
ReplicationController will think that those pods were created by it. Kubernetes will not stop you
from doing this.
If you do end up with multiple controllers that have overlapping selectors, you
will have to manage the deletion yourself (see [below](#updating-a-replication-controller)).
### Multiple Replicas
You can specify how many pods should run concurrently by setting `.spec.replicas` to the number
of pods you would like to have running concurrently. The number running at any time may be higher
or lower, such as if the replica count was just increased or decreased, or if a pod is gracefully
shut down and a replacement starts early.
If you do not specify `.spec.replicas`, then it defaults to 1.
## Working with Replication Controllers
### Deleting a Replication Controller and its Pods
To delete a replication controller and all its pods, use [`kubectl
delete`](kubectl/kubectl_delete.md). Kubectl will scale the replication controller to zero and wait
for it to delete each pod before deleting the replication controller itself. If this kubectl
command is interrupted, it can be restarted.
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the replication controller).
### Deleting just a Replication Controller
You can delete a replication controller without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](kubectl/kubectl_delete.md).
When using the REST API or go client library, simply delete the replication controller object.
Once the original is deleted, you can create a new replication controller to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
However, it will not make any effort to make existing pods match a new, different pod template.
To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
### Isolating pods from a Replication Controller
Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
## Common usage patterns
### Rescheduling
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).
### Scaling
The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
### Rolling updates
The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.
As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
Rolling update is implemented in the client tool
[kubectl](kubectl/kubectl_rolling-update.md)
### Multiple release tracks
In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a replication controller with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another replication controller with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the replication controllers separately to test things out, monitor the results, etc.
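A sketch of the canary controller described above; the controller name and image tag are illustrative, and the stable controller would look the same apart from `replicas: 9` and `track: stable`.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary               # placeholder name
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: myrepo/frontend:v2-canary   # placeholder image
        ports:
        - containerPort: 80
```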
### Using Replication Controllers with Services
Multiple replication controllers can sit behind a single service, so that, for example, some traffic
goes to the old version, and some goes to the new version.
A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
## Writing programs for Replication
Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself.
## Responsibilities of the replication controller
The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
## API Object
Replication controller is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at: [ReplicationController API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_replicationcontroller).
## Alternatives to Replication Controller
### Bare Pods
Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
### Job
Use a [Job](jobs.md) instead of a replication controller for pods that are expected to terminate on their own
(i.e. batch jobs).
### DaemonSet
Use a [DaemonSet](../admin/daemons.md) instead of a replication controller for pods that provide a
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to the machine lifetime: the pod needs to be running on the machine before other pods start, and it is
safe to terminate when the machine is otherwise ready to be rebooted/shut down.
This file has moved to: http://kubernetes.github.io/docs/user-guide/replication-controller/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,9 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
Resource Quota
========================================
This page has been moved to [here](../../admin/resourcequota/README.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/resourcequota/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/resourcequota/README.md?pixel)]()
View File
@@ -32,706 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Secrets
Objects of type `secret` are intended to hold sensitive information, such as
passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
is safer and more flexible than putting it verbatim in a `pod` definition or in
a Docker image. See the [Secrets design document](../design/secrets.md) for more information.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Secrets](#secrets)
- [Overview of Secrets](#overview-of-secrets)
- [Built-in Secrets](#built-in-secrets)
- [Service Accounts Automatically Create and Attach Secrets with API Credentials](#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
- [Creating your own Secrets](#creating-your-own-secrets)
- [Creating a Secret Using kubectl create secret](#creating-a-secret-using-kubectl-create-secret)
- [Creating a Secret Manually](#creating-a-secret-manually)
- [Decoding a Secret](#decoding-a-secret)
- [Using Secrets](#using-secrets)
- [Using Secrets as Files from a Pod](#using-secrets-as-files-from-a-pod)
- [Consuming Secret Values from Volumes](#consuming-secret-values-from-volumes)
- [Using Secrets as Environment Variables](#using-secrets-as-environment-variables)
- [Consuming Secret Values from Environment Variables](#consuming-secret-values-from-environment-variables)
- [Using imagePullSecrets](#using-imagepullsecrets)
- [Manually specifying an imagePullSecret](#manually-specifying-an-imagepullsecret)
- [Arranging for imagePullSecrets to be Automatically Attached](#arranging-for-imagepullsecrets-to-be-automatically-attached)
- [Automatic Mounting of Manually Created Secrets](#automatic-mounting-of-manually-created-secrets)
- [Details](#details)
- [Restrictions](#restrictions)
- [Secret and Pod Lifetime interaction](#secret-and-pod-lifetime-interaction)
- [Use cases](#use-cases)
- [Use-Case: Pod with ssh keys](#use-case-pod-with-ssh-keys)
- [Use-Case: Pods with prod / test credentials](#use-case-pods-with-prod--test-credentials)
- [Use-case: Dotfiles in secret volume](#use-case-dotfiles-in-secret-volume)
- [Use-case: Secret visible to one container in a pod](#use-case-secret-visible-to-one-container-in-a-pod)
- [Security Properties](#security-properties)
- [Protections](#protections)
- [Risks](#risks)
<!-- END MUNGE: GENERATED_TOC -->
## Overview of Secrets
A Secret is an object that contains a small amount of sensitive data such as
a password, a token, or a key. Such information might otherwise be put in a
Pod specification or in an image; putting it in a Secret object allows for
more control over how it is used, and reduces the risk of accidental exposure.
Users can create secrets, and the system also creates some secrets.
To use a secret, a pod needs to reference the secret.
A secret can be used with a pod in three ways: as files in a [volume](volumes.md) mounted on one or more of
its containers, as environment variables, or by the kubelet when pulling images for the pod.
### Built-in Secrets
#### Service Accounts Automatically Create and Attach Secrets with API Credentials
Kubernetes automatically creates secrets which contain credentials for
accessing the API and it automatically modifies your pods to use this type of
secret.
The automatic creation and use of API credentials can be disabled or overridden
if desired. However, if all you need to do is securely access the API server,
this is the recommended workflow.
See the [Service Account](service-accounts.md) documentation for more
information on how Service Accounts work.
### Creating your own Secrets
#### Creating a Secret Using kubectl create secret
Say that some pods need to access a database. The
username and password that the pods should use are in the files
`./username.txt` and `./password.txt` on your local machine.
```console
# Create files needed for rest of example.
$ echo "admin" > ./username.txt
$ echo "1f2d1e2e67df" > ./password.txt
```
The `kubectl create secret` command
packages these files into a Secret and creates
the object on the API server.
```console
$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
secret "db-user-pass" created
```
You can check that the secret was created like this:
```console
$ kubectl get secrets
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
$ kubectl describe secrets/db-user-pass
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password.txt: 13 bytes
username.txt: 6 bytes
```
Note that neither `get` nor `describe` shows the contents of the file by default.
This is to protect the secret from being exposed accidentally to someone looking over your shoulder,
or from being stored in a terminal log.
See [decoding a secret](#decoding-a-secret) for how to see the contents.
#### Creating a Secret Manually
You can also create a secret object in a file first,
in json or yaml format, and then create that object.
Each item must be base64 encoded:
```console
$ echo "admin" | base64
YWRtaW4K
$ echo "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2RmCg==
```
Now write a secret object that looks like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
password: MWYyZDFlMmU2N2RmCg==
username: YWRtaW4K
```
The data field is a map. Its keys must match
[`DNS_SUBDOMAIN`](../design/identifiers.md), except that leading dots are also
allowed. The values are arbitrary data, encoded using base64.
Create the secret using [`kubectl create`](kubectl/kubectl_create.md):
```console
$ kubectl create -f ./secret.yaml
secret "mysecret" created
```
**Encoding Note:** The serialized JSON and YAML values of secret data are encoded as
base64 strings. Newlines are not valid within these strings and must be
omitted (i.e., do not use the `-b` option of `base64`, which breaks long lines).
#### Decoding a Secret
Get back the secret created in the previous section:
```console
$ kubectl get secret mysecret -o yaml
apiVersion: v1
data:
password: MWYyZDFlMmU2N2RmCg==
username: YWRtaW4K
kind: Secret
metadata:
creationTimestamp: 2016-01-22T18:41:56Z
name: mysecret
namespace: default
resourceVersion: "164619"
selfLink: /api/v1/namespaces/default/secrets/mysecret
uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque
```
Decode the password field:
```console
$ echo "MWYyZDFlMmU2N2RmCg==" | base64 -D
1f2d1e2e67df
```
### Using Secrets
Secrets can be mounted as data volumes or be exposed as environment variables to
be used by a container in a pod. They can also be used by other parts of the
system, without being directly exposed to the pod. For example, they can hold
credentials that other parts of the system should use to interact with external
systems on your behalf.
#### Using Secrets as Files from a Pod
To consume a Secret in a volume in a Pod:
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
1. Modify your Pod definition to add a volume under `spec.volumes[]`. Name the volume anything, and have a `spec.volumes[].secret.secretName` field equal to the name of the secret object.
1. Add a `spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `spec.containers[].volumeMounts[].readOnly = true` and `spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.
This is an example of a pod that mounts a secret in a volume:
```json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "mypod",
"namespace": "myns"
},
"spec": {
"containers": [{
"name": "mypod",
"image": "redis",
"volumeMounts": [{
"name": "foo",
"mountPath": "/etc/foo",
"readOnly": true
}]
}],
"volumes": [{
"name": "foo",
"secret": {
"secretName": "mysecret"
}
}]
}
}
```
Each secret you want to use needs to be referred to in `spec.volumes`.
If there are multiple containers in the pod, then each container needs its
own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
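For instance, here is a hedged sketch (the container names and the second image are hypothetical) of a pod in which two containers mount the same secret volume; note the single entry under `volumes` and the per-container `volumeMounts`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-secret-pod
spec:
  volumes:
  # one volume entry for the secret, shared by both containers
  - name: foo
    secret:
      secretName: mysecret
  containers:
  - name: app
    image: redis
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
  - name: sidecar
    image: busybox                # hypothetical second container
    command: ["sleep", "3600"]
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
```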
You can package many files into one secret, or use many secrets, whichever is convenient.
See another example of creating a secret and a pod that consumes that secret in a volume [here](secrets/).
##### Consuming Secret Values from Volumes
Inside the container that mounts a secret volume, the secret keys appear as
files and the secret values are base-64 decoded and stored inside these files.
This is the result of commands
executed inside the container from the example above:
```console
$ ls /etc/foo/
username
password
$ cat /etc/foo/username
admin
$ cat /etc/foo/password
1f2d1e2e67df
```
The program in a container is responsible for reading the secret(s) from the
files.
#### Using Secrets as Environment Variables
To use a secret in an environment variable in a pod:
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[x].valueFrom.secretKeyRef`.
1. Modify your image and/or command line so that the program looks for values in the specified environment variables.
This is an example of a pod that consumes secrets through environment variables:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
```
##### Consuming Secret Values from Environment Variables
Inside a container that consumes a secret via environment variables, the secret keys appear as
normal environment variables containing the base-64 decoded values of the secret data.
This is the result of commands executed inside the container from the example above:
```console
$ echo $SECRET_USERNAME
admin
$ echo $SECRET_PASSWORD
1f2d1e2e67df
```
#### Using imagePullSecrets
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
password to the Kubelet so it can pull a private image on behalf of your Pod.
##### Manually specifying an imagePullSecret
Use of imagePullSecrets is described in the [images documentation](images.md#specifying-imagepullsecrets-on-a-pod)
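For reference, here is a minimal sketch (image and secret names are hypothetical) of a pod that supplies an imagePullSecret directly in its spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: main
    image: registry.example.com/myorg/myapp:1.0   # hypothetical private image
  imagePullSecrets:
  - name: myregistrykey                           # hypothetical secret name
```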
##### Arranging for imagePullSecrets to be Automatically Attached
You can manually create an imagePullSecret, and reference it from
a serviceAccount. Any pods created with that serviceAccount,
or that default to use that serviceAccount, will get their `imagePullSecrets`
field set to that of the service account.
See [here](service-accounts.md#adding-imagepullsecrets-to-a-service-account)
for a detailed explanation of that process.
#### Automatic Mounting of Manually Created Secrets
We plan to extend the service account behavior so that manually created
secrets (e.g. one containing a token for accessing a github account)
can be automatically attached to pods based on their service account.
*This is not implemented yet. See [issue 9902](http://issue.k8s.io/9902).*
## Details
### Restrictions
Secret volume sources are validated to ensure that the specified object
reference actually points to an object of type `Secret`. Therefore, a secret
needs to be created before any pods that depend on it.
Secret API objects reside in a namespace. They can only be referenced by pods
in that same namespace.
Individual secrets are limited to 1MB in size. This is to discourage creation
of very large secrets which would exhaust apiserver and kubelet memory.
However, creation of many smaller secrets could also exhaust memory. More
comprehensive limits on memory usage due to secrets are a planned feature.
Kubelet only supports use of secrets for Pods it gets from the API server.
This includes any pods created using kubectl, or indirectly via a replication
controller. It does not include pods created via the kubelet's
`--manifest-url` flag, its `--config` flag, or its REST API (these are
not common ways to create pods).
### Secret and Pod Lifetime interaction
When a pod is created via the API, there is no check whether a referenced
secret exists. Once a pod is scheduled, the kubelet will try to fetch the
secret value. If the secret cannot be fetched because it does not exist or
because of a temporary lack of connection to the API server, kubelet will
periodically retry. It will report an event about the pod explaining the
reason it is not started yet. Once the secret is fetched, the kubelet will
create and mount a volume containing it. None of the pod's containers will
start until all the pod's volumes are mounted.
Once the kubelet has started a pod's containers, its secret volumes will not
change, even if the secret resource is modified. To change the secret used,
the original pod must be deleted, and a new pod (perhaps with an identical
`PodSpec`) must be created. Therefore, updating a secret follows the same
workflow as deploying a new container image. The `kubectl rolling-update`
command can be used ([man page](kubectl/kubectl_rolling-update.md)).
The [`resourceVersion`](../devel/api-conventions.md#concurrency-control-and-consistency)
of the secret is not specified when it is referenced.
Therefore, if a secret is updated at about the same time as pods are starting,
then it is not defined which version of the secret will be used for the pod. It
is not possible currently to check what resource version of a secret object was
used when a pod was created. It is planned that pods will report this
information, so that a replication controller can restart pods that are using an old
`resourceVersion`. In the interim, if this is a concern, it is recommended not to
update the data of existing secrets, but to create new ones with distinct names.
## Use cases
### Use-Case: Pod with ssh keys
Create a secret containing some ssh keys:
```console
$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```
**Security Note:** Think carefully before storing your own ssh keys in a secret: other users of the cluster may have access to it. Instead, use keys belonging to a service account that you are comfortable sharing with all the users of the kubernetes cluster, and that you can revoke if the keys are compromised.
Now we can create a pod which references the secret with the ssh key and
consumes it in a volume:
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "secret-test-pod",
"labels": {
"name": "secret-test"
}
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "ssh-key-secret"
}
}
],
"containers": [
{
"name": "ssh-test-container",
"image": "mySshImage",
"volumeMounts": [
{
"name": "secret-volume",
"readOnly": true,
"mountPath": "/etc/secret-volume"
}
]
}
]
}
}
```
When the container's command runs, the pieces of the key will be available in:

- `/etc/secret-volume/ssh-publickey`
- `/etc/secret-volume/ssh-privatekey`
The container is then free to use the secret data to establish an ssh connection.
### Use-Case: Pods with prod / test credentials
This example illustrates a pod which consumes a secret containing prod
credentials and another pod which consumes a secret with test environment
credentials.
Make the secrets:
```console
$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
secret "prod-db-secret" created
$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
secret "test-db-secret" created
```
Now make the pods:
```json
{
"apiVersion": "v1",
"kind": "List",
"items":
[{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "prod-db-client-pod",
"labels": {
"name": "prod-db-client"
}
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "prod-db-secret"
}
}
],
"containers": [
{
"name": "db-client-container",
"image": "myClientImage",
"volumeMounts": [
{
"name": "secret-volume",
"readOnly": true,
"mountPath": "/etc/secret-volume"
}
]
}
]
}
},
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "test-db-client-pod",
"labels": {
"name": "test-db-client"
}
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "test-db-secret"
}
}
],
"containers": [
{
"name": "db-client-container",
"image": "myClientImage",
"volumeMounts": [
{
"name": "secret-volume",
"readOnly": true,
"mountPath": "/etc/secret-volume"
}
]
}
]
}
}]
}
```
Both containers will have the following files present on their filesystems:
```console
/etc/secret-volume/username
/etc/secret-volume/password
```
Note how the specs for the two pods differ only in one field; this facilitates
creating pods with different capabilities from a common pod config template.
You could further simplify the base pod specification by using two Service Accounts:
one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "prod-db-client-pod",
"labels": {
"name": "prod-db-client"
}
},
"spec": {
"serviceAccount": "prod-db-client",
"containers": [
{
"name": "db-client-container",
"image": "myClientImage"
}
]
}
}
```
### Use-case: Dotfiles in secret volume
In order to make a piece of data 'hidden' (i.e., stored in a file whose name begins with a dot character), simply
make that key begin with a dot. For example, when the following secret is mounted into a volume:
```json
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "dotfile-secret"
},
"data": {
".secret-file": "dmFsdWUtMg0KDQo=",
}
}
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "secret-dotfiles-pod",
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "dotfile-secret"
}
}
],
"containers": [
{
"name": "dotfile-test-container",
"image": "gcr.io/google_containers/busybox",
"command": "ls -l /etc/secret-volume"
"volumeMounts": [
{
"name": "secret-volume",
"readOnly": true,
"mountPath": "/etc/secret-volume"
}
]
}
]
}
}
```
The `secret-volume` will contain a single file, called `.secret-file`, and
the `dotfile-test-container` will have this file present at the path
`/etc/secret-volume/.secret-file`.
**NOTE**
Files beginning with dot characters are hidden from the output of `ls -l`;
you must use `ls -la` to see them when listing directory contents.
### Use-case: Secret visible to one container in a pod
<a name="use-case-two-containers"></a>
Consider a program that needs to handle HTTP requests, do some complex business
logic, and then sign some messages with an HMAC. Because it has complex
application logic, there might be an unnoticed remote file reading exploit in
the server, which could expose the private key to an attacker.
This could be divided into two processes in two containers: a frontend container
which handles user interaction and business logic, but which cannot see the
private key; and a signer container that can see the private key, and responds
to simple signing requests from the frontend (e.g. over localhost networking).
With this partitioned approach, an attacker now has to trick the application
server into doing something rather arbitrary, which may be harder than getting
it to read a file.
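Here is a hedged sketch of such a pod (the image names, secret name, and mount path are all hypothetical), in which only the signer container mounts the volume holding the private key:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hmac-app
spec:
  volumes:
  - name: signing-key
    secret:
      secretName: hmac-key-secret     # hypothetical secret holding the private key
  containers:
  - name: frontend
    image: example.com/frontend:1.0   # hypothetical image; cannot see the key
    ports:
    - containerPort: 8080
  - name: signer
    image: example.com/signer:1.0     # hypothetical image; only this container mounts the key
    volumeMounts:
    - name: signing-key
      mountPath: /etc/signing-key
      readOnly: true
```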
<!-- TODO: explain how to do this while still using automation. -->
## Security Properties
### Protections
Because `secret` objects can be created independently of the `pods` that use
them, there is less risk of the secret being exposed during the workflow of
creating, viewing, and editing pods. The system can also take additional
precautions with `secret` objects, such as avoiding writing them to disk where
possible.
A secret is only sent to a node if a pod on that node requires it. It is not
written to disk. It is stored in a tmpfs. It is deleted once the pod that
depends on it is deleted.
On most Kubernetes-project-maintained distributions, communication from the user
to the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS.
Secrets are protected when transmitted over these channels.
Secret data on nodes is stored in tmpfs volumes and thus does not come to rest
on the node.
There may be secrets for several pods on the same node. However, only the
secrets that a pod requests are potentially visible within its containers.
Therefore, one Pod does not have access to the secrets of another pod.
There may be several containers in a pod. However, each container in a pod has
to request the secret volume in its `volumeMounts` for it to be visible within
the container. This can be used to construct useful [security partitions at the
Pod level](#use-case-two-containers).
### Risks
- In the API server secret data is stored as plaintext in etcd; therefore:
- Administrators should limit access to etcd to admin users
- Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks
used by etcd when no longer in use
 - Applications still need to protect the value of the secret after reading it from the volume,
such as not accidentally logging it or transmitting it to an untrusted party.
- A user who can create a pod that uses a secret can also see the value of that secret. Even
if apiserver policy does not allow that user to read the secret object, the user could
run a pod which exposes the secret.
- If multiple replicas of etcd are run, then the secrets will be shared between them.
By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
- It is not possible currently to control which users of a Kubernetes cluster can
access a secret. Support for this is planned.
- Currently, anyone with root on any node can read any secret from the apiserver,
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.
This file has moved to: http://kubernetes.github.io/docs/user-guide/secrets/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,64 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Secrets example
Following this example, you will create a [secret](../secrets.md) and a [pod](../pods.md) that consumes that secret in a [volume](../volumes.md). See [Secrets design document](../../design/secrets.md) for more information.
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the secret
A secret contains a set of named byte arrays.
Use the [`docs/user-guide/secrets/secret.yaml`](secret.yaml) file to create a secret:
```console
$ kubectl create -f docs/user-guide/secrets/secret.yaml
```
You can use `kubectl` to see information about the secret:
```console
$ kubectl get secrets
NAME TYPE DATA
test-secret Opaque 2
$ kubectl describe secret test-secret
Name: test-secret
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
data-1: 9 bytes
data-2: 11 bytes
```
## Step Two: Create a pod that consumes a secret
Pods consume secrets in volumes. Now that you have created a secret, you can create a pod that
consumes it.
Use the [`docs/user-guide/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a Pod that consumes the secret.
```console
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
```
This pod runs a binary that displays the content of one of the pieces of secret data in the secret
volume:
```console
$ kubectl logs secret-test-pod
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
```
This file has moved to: http://kubernetes.github.io/docs/user-guide/secrets/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,9 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Security Contexts
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc..) applied to a container. See [security context design](../design/security_context.md) for more details.
This file has moved to: http://kubernetes.github.io/docs/user-guide/security-context/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,198 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Service Accounts
This file has moved to: http://kubernetes.github.io/docs/user-guide/service-accounts/
A service account provides an identity for processes that run in a Pod.
*This is a user introduction to Service Accounts. See also the
[Cluster Admin Guide to Service Accounts](../admin/service-accounts-admin.md).*
*Note: This document describes how service accounts behave in a cluster set up
as recommended by the Kubernetes project. Your cluster administrator may have
customized the behavior in your cluster, in which case this documentation may
not apply.*
When you (a human) access the cluster (e.g. using `kubectl`), you are
authenticated by the apiserver as a particular User Account (currently this is
usually `admin`, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (e.g.
`default`).
## Using the Default Service Account to access the API server.
When you create a pod, you do not need to specify a service account. It is
automatically assigned the `default` service account of the same namespace. If
you get the raw json or yaml for a pod you have created (e.g. `kubectl get
pods/podname -o yaml`), you can see the `spec.serviceAccount` field has been
[automatically set](working-with-resources.md#resources-are-automatically-modified).
You can access the API using a proxy or with a client library, as described in
[Accessing the Cluster](accessing-the-cluster.md#accessing-the-api-from-a-pod).
## Using Multiple Service Accounts.
Every namespace has a default service account resource called `default`.
You can list this and any other serviceAccount resources in the namespace with this command:
```console
$ kubectl get serviceAccounts
NAME SECRETS
default 1
```
You can create additional serviceAccounts like this:
```console
$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccounts/build-robot
```
If you get a complete dump of the service account object, like this:
```console
$ kubectl get serviceaccounts/build-robot -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2015-06-16T00:12:59Z
name: build-robot
namespace: default
resourceVersion: "272500"
selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
uid: 721ab723-13bc-11e5-aec2-42010af0021e
secrets:
- name: build-robot-token-bvbk5
```
then you will see that a token has automatically been created and is referenced by the service account.
In the future, you will be able to configure different access policies for each service account.
To use a non-default service account, simply set the `spec.serviceAccount`
field of a pod to the name of the service account you wish to use.
The service account has to exist at the time the pod is created, or it will be rejected.
You cannot update the service account of an already created pod.
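For example, a minimal sketch of a pod spec that uses the `build-robot` service account created above (the pod and image names are only illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod    # illustrative name
spec:
  serviceAccount: build-robot
  containers:
  - name: main
    image: nginx           # illustrative image
```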
You can clean up the service account from this example like this:
```console
$ kubectl delete serviceaccount/build-robot
```
<!-- TODO: describe how to create a pod with no Service Account. -->
Note that if a pod does not have a `ServiceAccount` set, the `ServiceAccount` will be set to `default`.
## Manually create a service account API token.
Suppose we have an existing service account named "build-robot" as mentioned above, and we create
a new secret manually.
```console
$ cat > /tmp/build-robot-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
name: build-robot-secret
annotations:
kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
$ kubectl create -f /tmp/build-robot-secret.yaml
secrets/build-robot-secret
```
Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.
```console
$ kubectl describe secrets/build-robot-secret
Name: build-robot-secret
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name=build-robot,kubernetes.io/service-account.uid=870ef2a5-35cf-11e5-8d06-005056b45392
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1220 bytes
token: ...
namespace: 7 bytes
```
> Note that the content of `token` is elided here.
## Adding ImagePullSecrets to a service account
First, create an imagePullSecret, as described [here](images.md#specifying-imagepullsecrets-on-a-pod)
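For instance, a hedged sketch of creating such a secret with `kubectl create secret docker-registry` (all of the credential values below are placeholders):
```console
$ kubectl create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL
secret "myregistrykey" created
```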
Next, verify it has been created. For example:
```console
$ kubectl get secrets myregistrykey
NAME TYPE DATA
myregistrykey kubernetes.io/dockerconfigjson 1
```
Next, read/modify/write the service account for the namespace to use this secret as an imagePullSecret
```console
$ kubectl get serviceaccounts default -o yaml > ./sa.yaml
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2015-08-07T22:02:39Z
name: default
namespace: default
resourceVersion: "243024"
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
$ vi sa.yaml
[editor session not shown]
[delete line with key "resourceVersion"]
[add lines with "imagePullSecrets:"]
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2015-08-07T22:02:39Z
name: default
namespace: default
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
$ kubectl replace serviceaccount default -f ./sa.yaml
serviceaccounts/default
```
Now, any new pods created in the current namespace will have this added to their spec:
```yaml
spec:
imagePullSecrets:
- name: myregistrykey
```
## Adding Secrets to a service account.
TODO: Test and explain how to use additional non-K8s secrets with an existing service account.
TODO explain:
- The token goes to: "/var/run/secrets/kubernetes.io/serviceaccount/$WHATFILENAME"
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,56 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Services and Firewalls
Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent
exposure to the internet. When exposing a service to the external world, you may need to open up
one or more ports in these firewalls to serve traffic. This document describes this process, as
well as any provider specific details that may be necessary.
### Google Compute Engine
When using a Service with `spec.type: LoadBalancer`, the firewall will be
opened automatically. When using `spec.type: NodePort`, however, the firewall
is *not* opened by default.
Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1).
You can add a firewall with the `gcloud` command line tool:
```console
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>
```
**Note**
There is one important security note when using firewalls on Google Compute Engine:
as of Kubernetes v1.0.0, GCE firewalls are defined per-vm, rather than per-ip
address. This means that when you open a firewall for a service's ports,
anything that serves on that port on that VM's host IP address may potentially
serve traffic. Note that this is not a problem for other Kubernetes services,
as they listen on IP addresses that are different than the host node's external
IP address.
Consider:
* You create a Service with an external load balancer (IP Address 1.2.3.4)
and port 80
* You open the firewall for port 80 for all nodes in your cluster, so that
the external Service actually can deliver packets to your Service
* You start an nginx server, running on port 80 on the host virtual machine
(IP Address 2.3.4.5). This nginx is **also** exposed to the internet on
the VM's external IP address.
Consequently, please be careful when opening firewalls in Google Compute Engine
or Google Container Engine. You may accidentally be exposing other services to
the wilds of the internet.
This will be fixed in an upcoming release of Kubernetes.
### Other cloud providers
Coming soon.
This file has moved to: http://kubernetes.github.io/docs/user-guide/services-firewalls/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,607 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Services in Kubernetes
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Services in Kubernetes](#services-in-kubernetes)
- [Overview](#overview)
- [Defining a service](#defining-a-service)
- [Services without selectors](#services-without-selectors)
- [Virtual IPs and service proxies](#virtual-ips-and-service-proxies)
- [Proxy-mode: userspace](#proxy-mode-userspace)
- [Proxy-mode: iptables](#proxy-mode-iptables)
- [Multi-Port Services](#multi-port-services)
- [Choosing your own IP address](#choosing-your-own-ip-address)
- [Why not use round-robin DNS?](#why-not-use-round-robin-dns)
- [Discovering services](#discovering-services)
- [Environment variables](#environment-variables)
- [DNS](#dns)
- [Headless services](#headless-services)
- [Publishing services - service types](#publishing-services---service-types)
- [Type NodePort](#type-nodeport)
- [Type LoadBalancer](#type-loadbalancer)
- [External IPs](#external-ips)
- [Shortcomings](#shortcomings)
- [Future work](#future-work)
- [The gory details of virtual IPs](#the-gory-details-of-virtual-ips)
- [Avoiding collisions](#avoiding-collisions)
- [IPs and VIPs](#ips-and-vips)
- [Userspace](#userspace)
- [Iptables](#iptables)
- [API Object](#api-object)
<!-- END MUNGE: GENERATED_TOC -->
## Overview
Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they
are not resurrected. [`ReplicationControllers`](replication-controller.md) in
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
or when doing [rolling updates](kubectl/kubectl_rolling-update.md)). While each `Pod` gets its own IP address, even
those IP addresses cannot be relied upon to be stable over time. This leads to
a problem: if some set of `Pods` (let's call them backends) provides
functionality to other `Pods` (let's call them frontends) inside the Kubernetes
cluster, how do those frontends find out and keep track of which backends are
in that set?
Enter `Services`.
A Kubernetes `Service` is an abstraction which defines a logical set of `Pods`
and a policy by which to access them - sometimes called a micro-service. The
set of `Pods` targeted by a `Service` is (usually) determined by a [`Label
Selector`](labels.md#label-selectors) (see below for why you might want a
`Service` without a selector).
As an example, consider an image-processing backend which is running with 3
replicas. Those replicas are fungible - frontends do not care which backend
they use. While the actual `Pods` that compose the backend set may change, the
frontend clients should not need to be aware of that or keep track of the list
of backends themselves. The `Service` abstraction enables this decoupling.
For Kubernetes-native applications, Kubernetes offers a simple `Endpoints` API
that is updated whenever the set of `Pods` in a `Service` changes. For
non-native applications, Kubernetes offers a virtual-IP-based bridge to Services
which redirects to the backend `Pods`.
## Defining a service
A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the
REST objects, a `Service` definition can be POSTed to the apiserver to create a
new instance. For example, suppose you have a set of `Pods` that each expose
port 9376 and carry a label `"app=MyApp"`.
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"selector": {
"app": "MyApp"
},
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 9376
}
]
}
}
```
This specification will create a new `Service` object named "my-service" which
targets TCP port 9376 on any `Pod` with the `"app=MyApp"` label. This `Service`
will also be assigned an IP address (sometimes called the "cluster IP"), which
is used by the service proxies (see below). The `Service`'s selector will be
evaluated continuously and the results will be POSTed to an `Endpoints` object
also named "my-service".
Note that a `Service` can map an incoming port to any `targetPort`. By default
the `targetPort` will be set to the same value as the `port` field. Perhaps
more interesting is that `targetPort` can be a string, referring to the name of
a port in the backend `Pods`. The actual port number assigned to that name can
be different in each backend `Pod`. This offers a lot of flexibility for
deploying and evolving your `Services`. For example, you can change the port
number that pods expose in the next version of your backend software, without
breaking clients.
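For example, a hedged sketch (the pod name, port name, and image are hypothetical) of a backend pod that names its port and a service whose `targetPort` refers to that name rather than to a number:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod          # hypothetical pod
  labels:
    app: MyApp
spec:
  containers:
  - name: my-app
    image: nginx            # hypothetical image
    ports:
    - name: http-web        # the service below refers to this name
      containerPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: http-web
```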
Kubernetes `Services` support `TCP` and `UDP` for protocols. The default
is `TCP`.
### Services without selectors
Services generally abstract access to Kubernetes `Pods`, but they can also
abstract other kinds of backends. For example:
* You want to have an external database cluster in production, but in test
you use your own databases.
* You want to point your service to a service in another
[`Namespace`](namespaces.md) or on another cluster.
* You are migrating your workload to Kubernetes and some of your backends run
outside of Kubernetes.
In any of these scenarios you can define a service without a selector:
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 9376
}
]
}
}
```
Because this service has no selector, the corresponding `Endpoints` object will not be
created. You can manually map the service to your own specific endpoints:
```json
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"subsets": [
{
"addresses": [
{ "ip": "1.2.3.4" }
],
"ports": [
{ "port": 9376 }
]
}
]
}
```
NOTE: Endpoint IPs may not be loopback (127.0.0.0/8), link-local
(169.254.0.0/16), or link-local multicast (224.0.0.0/24).
Accessing a `Service` without a selector works the same as if it had a selector.
The traffic will be routed to endpoints defined by the user (`1.2.3.4:9376` in
this example).
## Virtual IPs and service proxies
Every node in a Kubernetes cluster runs a `kube-proxy`. This application
is responsible for implementing a form of virtual IP for `Service`s. In
Kubernetes v1.0 the proxy was purely in userspace. In Kubernetes v1.1 an
iptables proxy was added, but was not the default operating mode. In
Kubernetes v1.2 we expect the iptables proxy to be the default.
As of Kubernetes v1.0, `Services` are a "layer 3" (TCP/UDP over IP) construct.
In Kubernetes v1.1 the `Ingress` API was added (beta) to represent "layer 7"
(HTTP) services.
### Proxy-mode: userspace
In this mode, kube-proxy watches the Kubernetes master for the addition and
removal of `Service` and `Endpoints` objects. For each `Service` it opens a
port (randomly chosen) on the local node. Any connections to this "proxy port"
will be proxied to one of the `Service`'s backend `Pods` (as reported in
`Endpoints`). Which backend `Pod` to use is decided based on the
`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which
capture traffic to the `Service`'s `clusterIP` (which is virtual) and `Port`
and redirects that traffic to the proxy port, which in turn proxies it to a backend `Pod`.
The net result is that any traffic bound for the `Service`'s IP:Port is proxied
to an appropriate backend without the clients knowing anything about Kubernetes
or `Services` or `Pods`.
By default, the choice of backend is round robin. Client-IP based session affinity
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
default is `"None"`).
![Services overview diagram for userspace proxy](services-userspace-overview.png)
### Proxy-mode: iptables
In this mode, kube-proxy watches the Kubernetes master for the addition and
removal of `Service` and `Endpoints` objects. For each `Service` it installs
iptables rules which capture traffic to the `Service`'s `clusterIP` (which is
virtual) and `Port` and redirects that traffic to one of the `Service`'s
backend sets. For each `Endpoints` object it installs iptables rules which
select a backend `Pod`.
By default, the choice of backend is random. Client-IP based session affinity
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
default is `"None"`).
As with the userspace proxy, the net result is that any traffic bound for the
`Service`'s IP:Port is proxied to an appropriate backend without the clients
knowing anything about Kubernetes or `Services` or `Pods`. This should be
faster and more reliable than the userspace proxy.
![Services overview diagram for iptables proxy](services-iptables-overview.png)
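As a concrete illustration of the affinity setting described above, here is a minimal sketch of a service spec that opts into client-IP session affinity (values reuse the earlier example):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  sessionAffinity: ClientIP   # default is None
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```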
## Multi-Port Services
Many `Services` need to expose more than one port. For this case, Kubernetes
supports multiple port definitions on a `Service` object. When using multiple
ports you must give all of your ports names, so that endpoints can be
disambiguated. For example:
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"selector": {
"app": "MyApp"
},
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 9376
},
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 9377
}
]
}
}
```
## Choosing your own IP address
You can specify your own cluster IP address as part of a `Service` creation
request. To do this, set the `spec.clusterIP` field. For example, if you
already have an existing DNS entry that you wish to replace, or legacy systems
that are configured for a specific IP address and difficult to re-configure.
The IP address that a user chooses must be a valid IP address and within the
`service-cluster-ip-range` CIDR range that is specified by flag to the API
server. If the IP address value is invalid, the apiserver returns a 422 HTTP
status code to indicate that the value is invalid.
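For example, a hedged sketch of a service that requests a specific cluster IP (the address shown is assumed to fall inside the cluster's `service-cluster-ip-range`):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.0.171.239   # must be inside the configured service CIDR
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```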
### Why not use round-robin DNS?
A question that pops up every now and then is why we do all this stuff with
virtual IPs rather than just use standard round-robin DNS. There are a few
reasons:
* There is a long history of DNS libraries not respecting DNS TTLs and
caching the results of name lookups.
* Many apps do DNS lookups once and cache the results.
* Even if apps and libraries did proper re-resolution, the load of every
client re-resolving DNS over and over would be difficult to manage.
We try to discourage users from doing things that hurt themselves. That said,
if enough people ask for this, we may implement it as an alternative.
## Discovering services
Kubernetes supports 2 primary modes of finding a `Service` - environment
variables and DNS.
### Environment variables
When a `Pod` is run on a `Node`, the kubelet adds a set of environment variables
for each active `Service`. It supports both [Docker links
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
[makeLinkVariables](http://releases.k8s.io/HEAD/pkg/kubelet/envvars/envvars.go#L49))
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11 produces the following environment
variables:
```bash
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```
*This does imply an ordering requirement* - any `Service` that a `Pod` wants to
access must be created before the `Pod` itself, or else the environment
variables will not be populated. DNS does not have this restriction.
### DNS
An optional (though strongly recommended) [cluster
add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md) is a DNS server. The
DNS server watches the Kubernetes API for new `Services` and creates a set of
DNS records for each. If DNS has been enabled throughout the cluster then all
`Pods` should be able to do name resolution of `Services` automatically.
For example, if you have a `Service` called `"my-service"` in Kubernetes
`Namespace` `"my-ns"` a DNS record for `"my-service.my-ns"` is created. `Pods`
which exist in the `"my-ns"` `Namespace` should be able to find it by simply doing
a name lookup for `"my-service"`. `Pods` which exist in other `Namespaces` must
qualify the name as `"my-service.my-ns"`. The result of these name lookups is the
cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the
`"my-service.my-ns"` `Service` has a port named `"http"` with protocol `TCP`, you
can do a DNS SRV query for `"_http._tcp.my-service.my-ns"` to discover the port
number for `"http"`.
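For illustration only, a lookup from a pod in a different namespace might look roughly like this (the cluster IP shown is a made-up value, and the exact output format depends on the resolver in the image):
```console
$ nslookup my-service.my-ns
Name:      my-service.my-ns
Address 1: 10.0.171.239
```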
## Headless services
Sometimes you don't need or want load-balancing and a single service IP. In
this case, you can create "headless" services by specifying `"None"` for the
cluster IP (`spec.clusterIP`).
For such `Services`, a cluster IP is not allocated. DNS is configured to return
multiple A records (addresses) for the `Service` name, which point directly to
the `Pods` backing the `Service`. Additionally, the kube proxy does not handle
these services and there is no load balancing or proxying done by the platform
for them. The endpoints controller will still create `Endpoints` records in
the API.
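A hedged sketch of such a headless service, reusing the earlier example values:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None   # headless: no cluster IP is allocated
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```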
This option allows developers to reduce coupling to the Kubernetes system, if
they desire, but leaves them freedom to do discovery in their own way.
Applications can still use a self-registration pattern and adapters for other
discovery systems could easily be built upon this API.
## Publishing services - service types
For some parts of your application (e.g. frontends) you may want to expose a
Service onto an external (outside of your cluster, maybe public internet) IP
address, while other services should be visible only from inside of the cluster.
Kubernetes `ServiceTypes` allow you to specify what kind of service you want.
The default and base type is `ClusterIP`, which exposes a service to connection
from inside the cluster. `NodePort` and `LoadBalancer` are two types that expose
services to external traffic.
Valid values for the `ServiceType` field are:
* `ClusterIP`: use a cluster-internal IP only - this is the default and is
discussed above. Choosing this value means that you want this service to be
reachable only from inside of the cluster.
* `NodePort`: on top of having a cluster-internal IP, expose the service on a
port on each node of the cluster (the same port on each node). You'll be able
to contact the service on any `<NodeIP>:NodePort` address.
* `LoadBalancer`: on top of having a cluster-internal IP and exposing service
on a NodePort also, ask the cloud provider for a load balancer
which forwards to the `Service` exposed as a `<NodeIP>:NodePort`
for each Node.
### Type NodePort
If you set the `type` field to `"NodePort"`, the Kubernetes master will
allocate a port from a flag-configured range (default: 30000-32767), and each
Node will proxy that port (the same port number on every Node) into your `Service`.
That port will be reported in your `Service`'s `spec.ports[*].nodePort` field.
If you want a specific port number, you can specify a value in the `nodePort`
field, and the system will allocate you that port or else the API transaction
will fail (i.e. you need to take care about possible port collisions yourself).
The value you specify must be in the configured range for node ports.
This gives developers the freedom to set up their own load balancers, to
configure cloud environments that are not fully supported by Kubernetes, or
even to just expose one or more nodes' IPs directly.
Note that this Service will be visible as both `<NodeIP>:spec.ports[*].nodePort`
and `spec.clusterIP:spec.ports[*].port`.
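For illustration, a minimal sketch of a NodePort service that requests a specific node port (the value must fall in the configured node-port range):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30061   # optional; omit to let the system pick a port
```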
### Type LoadBalancer
On cloud providers which support external load balancers, setting the `type`
field to `"LoadBalancer"` will provision a load balancer for your `Service`.
The actual creation of the load balancer happens asynchronously, and
information about the provisioned balancer will be published in the `Service`'s
`status.loadBalancer` field. For example:
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"selector": {
"app": "MyApp"
},
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 9376,
"nodePort": 30061
}
],
"clusterIP": "10.0.171.239",
"loadBalancerIP": "78.11.24.19",
"type": "LoadBalancer"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "146.148.47.155"
}
]
}
}
}
```
Traffic from the external load balancer will be directed at the backend `Pods`,
though exactly how that works depends on the cloud provider. Some cloud providers allow
the `loadBalancerIP` to be specified. In those cases, the load-balancer will be created
with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
an ephemeral IP will be assigned to the loadBalancer. If the `loadBalancerIP` is specified, but the
cloud provider does not support the feature, the field will be ignored.
### External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes services can be exposed on those
`externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the service port,
will be routed to one of the service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility
of the cluster administrator.
In the ServiceSpec, `externalIPs` can be specified along with any of the `ServiceTypes`.
In the example below, my-service can be accessed by clients on 80.11.12.10:80 (externalIP:port)
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"selector": {
"app": "MyApp"
},
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 9376
}
],
"externalIPs" : [
"80.11.12.10"
]
}
}
```
## Shortcomings
Using the userspace proxy for VIPs will work at small to medium scale, but will
not scale to very large clusters with thousands of Services. See [the original
design proposal for portals](http://issue.k8s.io/1107) for more details.
Using the userspace proxy obscures the source-IP of a packet accessing a `Service`.
This makes some kinds of firewalling impossible. The iptables proxier does not
obscure in-cluster source IPs, but it does still impact clients coming through
a load-balancer or node-port.
The `Type` field is designed as nested functionality - each level adds to the
previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
but the current API requires it.
## Future work
In the future we envision that the proxy policy can become more nuanced than
simple round robin balancing, for example master-elected or sharded. We also
envision that some `Services` will have "real" load balancers, in which case the
VIP will simply transport the packets there.
We intend to improve our support for L7 (HTTP) `Services`.
We intend to have more flexible ingress modes for `Services` which encompass
the current `ClusterIP`, `NodePort`, and `LoadBalancer` modes and more.
## The gory details of virtual IPs
The previous information should be sufficient for many people who just want to
use `Services`. However, there is a lot going on behind the scenes that may be
worth understanding.
### Avoiding collisions
One of the primary philosophies of Kubernetes is that users should not be
exposed to situations that could cause their actions to fail through no fault
of their own. In this situation, we are looking at network ports - users
should not have to choose a port number if that choice might collide with
another user. That is an isolation failure.
In order to allow users to choose a port number for their `Services`, we must
ensure that no two `Services` can collide. We do that by allocating each
`Service` its own IP address.
To ensure each service receives a unique IP, an internal allocator atomically
updates a global allocation map in etcd prior to creating each service. The map object
must exist in the registry for services to get IPs, otherwise creations will
fail with a message indicating an IP could not be allocated. A background
controller is responsible for creating that map (to migrate from older versions
of Kubernetes that used in-memory locking) as well as checking for invalid
assignments due to administrator intervention and cleaning up any IPs
that were allocated but which no service currently uses.
### IPs and VIPs
Unlike `Pod` IP addresses, which actually route to a fixed destination,
`Service` IPs are not actually answered by a single host. Instead, we use
`iptables` (packet processing logic in Linux) to define virtual IP addresses
which are transparently redirected as needed. When clients connect to the
VIP, their traffic is automatically transported to an appropriate endpoint.
The environment variables and DNS for `Services` are actually populated in
terms of the `Service`'s VIP and port.
We support two proxy modes - userspace and iptables, which operate slightly
differently.
#### Userspace
As an example, consider the image processing application described above.
When the backend `Service` is created, the Kubernetes master assigns a virtual
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
`Service` is observed by all of the `kube-proxy` instances in the cluster.
When a proxy sees a new `Service`, it opens a new random port, establishes an
iptables redirect from the VIP to this new port, and starts accepting
connections on it.
When a client connects to the VIP the iptables rule kicks in, and redirects
the packets to the `Service proxy`'s own port. The `Service proxy` chooses a
backend, and starts proxying traffic from the client to the backend.
This means that `Service` owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware
of which `Pods` they are actually accessing.
#### Iptables
Again, consider the image processing application described above.
When the backend `Service` is created, the Kubernetes master assigns a virtual
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
`Service` is observed by all of the `kube-proxy` instances in the cluster.
When a proxy sees a new `Service`, it installs a series of iptables rules which
redirect from the VIP to per-`Service` rules. The per-`Service` rules link to
per-`Endpoint` rules which redirect (Destination NAT) to the backends.
When a client connects to the VIP the iptables rule kicks in. A backend is
chosen (either based on session affinity or randomly) and packets are
redirected to the backend. Unlike the userspace proxy, packets are never
copied to userspace, the kube-proxy does not have to be running for the VIP to
work, and the client IP is not altered.
This same basic flow executes when traffic comes in through a node-port or
through a load-balancer, though in those cases the client IP does get altered.
## API Object
Service is a top-level resource in the kubernetes REST API. More details about the
API object can be found at: [Service API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_service).
This file has moved to: http://kubernetes.github.io/docs/user-guide/services/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,123 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Sharing Cluster Access
Client access to a running Kubernetes cluster can be shared by copying
the `kubectl` client config bundle ([kubeconfig](kubeconfig-file.md)).
This config bundle lives in `$HOME/.kube/config`, and is generated
by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
**1. Create a cluster**
```console
$ cluster/kube-up.sh
```
**2. Copy `kubeconfig` to new host**
```console
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
```
**3. On new host, make copied `config` available to `kubectl`**
* Option A: copy to default location
```console
$ mv /path/to/.kube/config $HOME/.kube/config
```
* Option B: copy to working directory (from which kubectl is run)
```console
$ mv /path/to/.kube/config $PWD
```
* Option C: manually pass `kubeconfig` location to `kubectl`
```console
# via environment variable
$ export KUBECONFIG=/path/to/.kube/config
# via commandline flag
$ kubectl ... --kubeconfig=/path/to/.kube/config
```
## Manually Generating `kubeconfig`
`kubeconfig` is generated by `kube-up` but you can generate your own
using (any desired subset of) the following commands.
```console
# create kubeconfig entry
$ kubectl config set-cluster $CLUSTER_NICK \
--server=https://1.1.1.1 \
--certificate-authority=/path/to/apiserver/ca_file \
--embed-certs=true \
# Or if tls not needed, replace --certificate-authority and --embed-certs with
--insecure-skip-tls-verify=true \
--kubeconfig=/path/to/standalone/.kube/config
# create user entry
$ kubectl config set-credentials $USER_NICK \
# bearer token credentials, generated on kube master
--token=$token \
# use either username|password or token, not both
--username=$username \
--password=$password \
--client-certificate=/path/to/crt_file \
--client-key=/path/to/key_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config
# create context entry
$ kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK
```
Notes:
* The `--embed-certs` flag is needed to generate a standalone
`kubeconfig`, that will work as-is on another host.
* `--kubeconfig` is both the preferred file to load config from and the file to
save config to. In the above commands the `--kubeconfig` file could be
omitted if you first run
```console
$ export KUBECONFIG=/path/to/standalone/.kube/config
```
* The ca_file, key_file, and cert_file referenced above are generated on the
kube master at cluster turnup. They can be found on the master under
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
For more details on `kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md),
and/or run `kubectl config -h`.
## Merging `kubeconfig` Example
`kubectl` loads and merges config from the following locations (in order)
1. `--kubeconfig=/path/to/.kube/config` command line flag
2. `KUBECONFIG=/path/to/.kube/config` env variable
3. `$PWD/.kube/config`
4. `$HOME/.kube/config`
If you create clusters A, B on host1, and clusters C, D on host2, you can
make all four clusters available on both hosts by running
```console
# on host2, copy host1's default kubeconfig, and merge it from env
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config
# on host1, copy host2's default kubeconfig and merge it from env
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config
```
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](kubeconfig-file.md).
This file has moved to: http://kubernetes.github.io/docs/user-guide/sharing-clusters/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,61 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Running your first containers in Kubernetes
Ok, you've run one of the [getting started guides](../../docs/getting-started-guides/) and you have
successfully turned up a Kubernetes cluster. Now what? This guide will help you get oriented
to Kubernetes and running your first containers on the cluster.
### Running a container (simple version)
From this point onwards, it is assumed that `kubectl` is on your path from one of the getting started guides.
The [`kubectl run`](kubectl/kubectl_run.md) line below will create two [nginx](https://registry.hub.docker.com/_/nginx/) [pods](pods.md) listening on port 80. It will also create a [replication controller](replication-controller.md) named `my-nginx` to ensure that there are always two pods running.
```bash
kubectl run my-nginx --image=nginx --replicas=2 --port=80
```
Once the pods are created, you can list them to see what is up and running:
```bash
kubectl get pods
```
You can also see the replication controller that was created:
```bash
kubectl get rc
```
To stop the two replicated containers, delete the replication controller:
```bash
kubectl delete rc my-nginx
```
### Exposing your pods to the internet.
On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](services.md#external-services) for the pods,
to do this run:
```bash
kubectl expose rc my-nginx --port=80 --type=LoadBalancer
```
This should print the service that has been created, and map an external IP address to the service. Where to find this external IP address will depend on the environment you run in. For instance, for Google Compute Engine the external IP address is listed as part of the newly created service and can be retrieved by running
```bash
kubectl get services
```
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a firewall to allow traffic on port 80.
### Next: Configuration files
Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](deploying-applications.md)
is given in a different document.
This file has moved to: http://kubernetes.github.io/docs/user-guide/simple-nginx/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,7 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
### This document has been subsumed by [deploying-applications.md](deploying-applications.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/simple-yaml/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,142 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# The Kubernetes Dashboard User Interface
This file has moved to: http://kubernetes.github.io/docs/user-guide/ui/
Kubernetes has a web-based user interface that allows you to deploy containerized
applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself.
## Accessing the Dashboard
By default, the Kubernetes Dashboard is deployed as a cluster addon. To access it, visit
`https://<kubernetes-master>/ui`, which redirects to
`https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard`.
If you find that you're not able to access the Dashboard, it may be because the
`kubernetes-dashboard` service has not been started on your cluster. In that case,
you can start it manually as follows:
```sh
kubectl create -f cluster/addons/dashboard/dashboard-controller.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-service.yaml --namespace=kube-system
```
Normally, this should be taken care of automatically by the
[`kube-addons.sh`](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-addons/kube-addons.sh)
script that runs on the master. Release notes and development versions of the Dashboard can be
found at https://github.com/kubernetes/dashboard/releases.
## Using the Dashboard
The Dashboard can be used to get an overview of applications running on the cluster, and to provide information on any errors that have occurred. You can also inspect your replication controllers and corresponding services, change the number of replicated Pods, and deploy new applications using a deploy wizard.
When accessing the Dashboard on an empty cluster for the first time, the Welcome page is displayed. This page contains a link to this document as well as a button to deploy your first application.
![Kubernetes Dashboard welcome page](ui-dashboard-zerostate.png)
### Deploying applications
The Dashboard lets you create and deploy a containerized application as a Replication Controller with a simple wizard:
![Kubernetes Dashboard deploy form](ui-dashboard-deploy-simple.png)
#### Specifying application details
The wizard expects that you provide the following information:
- **App name** (mandatory): Name for your application. A [label](http://kubernetes.io/v1.1/docs/user-guide/labels.html) with the name will be added to the Replication Controller and Service, if any, that will be deployed.
The application name must be unique within the selected Kubernetes namespace. It must start with a lowercase character, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters.
- **Container image** (mandatory): The URL of a public Docker [container image](http://kubernetes.io/v1.1/docs/user-guide/images.html) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub).
- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.
A [Replication Controller](http://kubernetes.io/v1.1/docs/user-guide/replication-controller.html) will be created to maintain the desired number of Pods across your cluster.
- **Ports** (optional): If your container listens on a port, you can provide a port and target port. The wizard will create a corresponding Kubernetes [Service](http://kubernetes.io/v1.1/docs/user-guide/services.html) which will route to your deployed Pods. Supported protocols are TCP and UDP. In case you specify ports, the internal DNS name for this Service will be the value you specified as application name above.
Be aware that if you specify ports, you need to provide both port and target port.
- For some parts of your application (e.g. frontends), you can expose the Service on an external, possibly public, IP address by selecting the **Expose service externally** option. You may need to open up one or more ports to do so. Find more details [here](http://kubernetes.io/v1.1/docs/user-guide/services-firewalls.html).
If needed, you can expand the **Advanced options** section where you can specify more settings:
![Kubernetes Dashboard deploy form advanced options](ui-dashboard-deploy-more.png)
- **Description**: The text you enter here will be added as an [annotation](http://kubernetes.io/v1.1/docs/user-guide/annotations.html) to the Replication Controller and displayed in the application's details.
- **Labels**: Default [labels](http://kubernetes.io/v1.1/docs/user-guide/labels.html) to be used for your application are application name and version. You can specify additional labels to be applied to the Replication Controller, Service (if any), and Pods, such as release, environment, tier, partition, and release track.
Example:
```
release=1.0
tier=frontend
environment=prod
track=stable
```
- **Kubernetes namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](http://kubernetes.io/v1.1/docs/admin/namespaces.html). They let you partition resources into logically named groups.
The Dashboard offers all available namespaces in a dropdown list and allows you to create a new namespace. The namespace name may contain alphanumeric characters and dashes (-).
- **Image pull secrets**: In case the Docker container image is private, it may require [pull secret](http://kubernetes.io/v1.1/docs/user-guide/secrets.html) credentials.
The Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](http://kubernetes.io/v1.1/docs/user-guide/images.html#specifying-imagepullsecrets-on-a-pod) file.
- **CPU requirement** and **Memory requirement**: You can specify the minimum [resource limits](http://kubernetes.io/v1.1/docs/admin/limitrange/README.html) for the container. By default, Pods run with unbounded CPU and memory limits.
- **Run command** and **Run command arguments**: By default, your containers run the selected Docker image's default [entrypoint command](http://kubernetes.io/v1.1/docs/user-guide/containers.html#containers-and-commands). You can use the command options and arguments to override the default.
- **Run as privileged**: This setting determines whether processes in [privileged containers](http://kubernetes.io/v1.1/docs/user-guide/pods.html#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
- **Environment variables**: Kubernetes exposes Services through [environment variables](http://kubernetes.io/v1.1/design/expansion.html). You can compose environment variables or pass arguments to your commands using the values of environment variables. They can be used in applications to find a Service. Environment variables are also useful for decreasing coupling and avoiding workarounds. Values can reference other variables using the `$(VAR_NAME)` syntax, as in the sketch below.
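For illustration, in a container spec this expansion syntax looks like the following (a minimal sketch; the variable names and values are assumptions):
```yaml
env:
- name: BACKEND_HOST
  value: backend.default.svc.cluster.local
- name: BACKEND_URL
  # composed from another variable using the $(VAR_NAME) syntax
  value: http://$(BACKEND_HOST):8080
```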
#### Uploading a YAML or JSON file
Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](http://kubernetes.io/v1.1/docs/api.html) resource schemas as the configuration schemas.
As an alternative to specifying application details in the deploy wizard, you can define your Replication Controllers and Services in YAML or JSON files, and upload the files through the Dashboard:
![Kubernetes Dashboard deploy from file upload](ui-dashboard-deploy-file.png)
### Applications view
As soon as applications are running on your cluster, the initial view of the Dashboard defaults to showing an overview of them, for example:
![Kubernetes Dashboard applications view](ui-dashboard-rcs.png)
Individual applications are shown as cards - where an application is defined as a Replication Controller and its corresponding Services. Each card shows the current number of running and desired replicas, along with errors reported by Kubernetes, if any.
You can view application details (**View details**), make quick changes to the number of replicas (**Edit pod count**) or delete the application directly (**Delete**) from the menu in each card's corner:
![Kubernetes Dashboard cards menu](ui-dashboard-cards-menu.png)
#### View details
Selecting this option from the card menu will take you to the following page where you can view more information about the Pods that make up your application:
![Kubernetes Dashboard application detail](ui-dashboard-rcs-detail.png)
The **EVENTS** tab can be useful for debugging flapping applications.
Clicking the plus sign in the right corner of the screen leads you back to the page for deploying a new application.
#### Edit pod count
If you choose to change the number of Pods, the respective Replication Controller will be updated to reflect the newly specified number.
#### Delete
Deleting a Replication Controller also deletes the Pods managed by it. It is currently not supported to leave the Pods running.
You have the option to also delete Services related to the Replication Controller if the label selector targets only the Replication Controller to be deleted.
## More Information
For more information, see the
[Kubernetes Dashboard repository](https://github.com/kubernetes/dashboard).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/ui.md?pixel)]()
View File
@@ -31,130 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
<!--
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Rolling update example
This example demonstrates the usage of Kubernetes to perform a [rolling update](../kubectl/kubectl_rolling-update.md) on a running group of [pods](../../../docs/user-guide/pods.md). See [here](../managing-deployments.md#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](../../design/simple-rolling-update.md) for more information.
### Step Zero: Prerequisites
This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../../docs/getting-started-guides/):
```console
$ cd kubernetes
$ ./cluster/kube-up.sh
```
### Step One: Turn up the UX for the demo
You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly). It occasionally writes output to the terminal, so you may prefer to run it in a separate terminal instead. You must run `kubectl proxy` from the root of the Kubernetes repository; otherwise you will get "404 page not found" errors because the paths will not match. You can find more information about `kubectl proxy` [here](../../../docs/user-guide/kubectl/kubectl_proxy.md).
```console
$ kubectl proxy --www=docs/user-guide/update-demo/local/ &
I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
```
Now visit the [demo website](http://localhost:8001/static). You won't see much there quite yet.
### Step Two: Run the replication controller
Now we will turn up two replicas of an [image](../images.md). Both serve on internal port 80.
```console
$ kubectl create -f docs/user-guide/update-demo/nautilus-rc.yaml
```
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
### Step Three: Try scaling the replication controller
Now we will increase the number of replicas from two to four:
```console
$ kubectl scale rc update-demo-nautilus --replicas=4
```
If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.
### Step Four: Update the docker image
We will now perform a rolling update to switch the pods over to a new Docker image.
```console
$ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-guide/update-demo/kitten-rc.yaml
```
The rolling-update command in kubectl will do 2 things:
1. Create a new [replication controller](../../../docs/user-guide/replication-controller.md) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.
Watch the [demo website](http://localhost:8001/static/index.html), it will update one pod every 10 seconds until all of the pods have the new image.
Note that the new replication controller definition does not include the replica count, so the current replica count of the old replication controller is preserved.
If the replica count had been specified, the final replica count of the new replication controller would be equal to that number.
### Step Five: Bring down the pods
```console
$ kubectl delete rc update-demo-kitten
```
This first stops the replication controller by setting the target number of replicas to 0, and then deletes the controller.
### Step Six: Cleanup
To turn down a Kubernetes cluster:
```console
$ ./cluster/kube-down.sh
```
Finally, kill the `kubectl proxy` job that is still running in the background:
```console
$ jobs
[1]+ Running ./kubectl proxy --www=local/ &
$ kill %1
[1]+ Terminated: 15 ./kubectl proxy --www=local/
```
### Updating the Docker images
If you want to build your own Docker images, you can set `$DOCKER_HUB_USER` to your Docker user id and run the included shell script. It can take a few minutes to download and upload the images.
```console
$ export DOCKER_HUB_USER=my-docker-id
$ ./docs/user-guide/update-demo/build-images.sh
```
To use your custom docker image in the above examples, you will need to change the image name in `docs/user-guide/update-demo/nautilus-rc.yaml` and `docs/user-guide/update-demo/kitten-rc.yaml`.
### Image Copyright
Note that the images included here are public domain.
* [kitten](http://commons.wikimedia.org/wiki/File:Kitten-stare.jpg)
* [nautilus](http://commons.wikimedia.org/wiki/File:Nautilus_pompilius.jpg)
This file has moved to: http://kubernetes.github.io/docs/user-guide/update-demo/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,422 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Volumes
This file has moved to: http://kubernetes.github.io/docs/user-guide/volumes/
On-disk files in a container are ephemeral, which presents some problems for
non-trivial applications when running in containers. First, when a container
crashes kubelet will restart it, but the files will be lost - the
container starts with a clean slate. Second, when running containers together
in a `Pod` it is often necessary to share files between those containers. The
Kubernetes `Volume` abstraction solves both of these problems.
Familiarity with [pods](pods.md) is suggested.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Volumes](#volumes)
- [Background](#background)
- [Types of Volumes](#types-of-volumes)
- [emptyDir](#emptydir)
- [hostPath](#hostpath)
- [gcePersistentDisk](#gcepersistentdisk)
- [Creating a PD](#creating-a-pd)
- [Example pod](#example-pod)
- [awsElasticBlockStore](#awselasticblockstore)
- [Creating an EBS volume](#creating-an-ebs-volume)
- [AWS EBS Example configuration](#aws-ebs-example-configuration)
- [nfs](#nfs)
- [iscsi](#iscsi)
- [flocker](#flocker)
- [glusterfs](#glusterfs)
- [rbd](#rbd)
- [gitRepo](#gitrepo)
- [secret](#secret)
- [persistentVolumeClaim](#persistentvolumeclaim)
- [downwardAPI](#downwardapi)
- [FlexVolume](#flexvolume)
- [AzureFileVolume](#azurefilevolume)
- [Resources](#resources)
<!-- END MUNGE: GENERATED_TOC -->
## Background
Docker also has a concept of
[volumes](https://docs.docker.com/userguide/dockervolumes/), though it is
somewhat looser and less managed. In Docker, a volume is simply a directory on
disk or in another container. Lifetimes are not managed and until very
recently there were only local-disk-backed volumes. Docker now provides volume
drivers, but the functionality is very limited for now (e.g. as of Docker 1.7
only one volume driver is allowed per container and there is no way to pass
parameters to volumes).
A Kubernetes volume, on the other hand, has an explicit lifetime - the same as
the pod that encloses it. Consequently, a volume outlives any containers that run
within the Pod, and data is preserved across Container restarts. Of course, when a
Pod ceases to exist, the volume will cease to exist, too. Perhaps more
importantly, Kubernetes supports many types of volumes, and a Pod can
use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used.
To use a volume, a pod specifies what volumes to provide for the pod (the
[`spec.volumes`](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
field) and where to mount those into containers (the
[`spec.containers.volumeMounts`](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
field).
A process in a container sees a filesystem view composed of its Docker
image and volumes. The [Docker
image](https://docs.docker.com/userguide/dockerimages/) is at the root of the
filesystem hierarchy, and any volumes are mounted at the specified paths within
the image. Volumes can not mount onto other volumes or have hard links to
other volumes. Each container in the Pod must independently specify where to
mount each volume.
## Types of Volumes
Kubernetes supports several types of Volumes:
* `emptyDir`
* `hostPath`
* `gcePersistentDisk`
* `awsElasticBlockStore`
* `nfs`
* `iscsi`
* `flocker`
* `glusterfs`
* `rbd`
* `gitRepo`
* `secret`
* `persistentVolumeClaim`
* `downwardAPI`
* `azureFileVolume`
We welcome additional contributions.
### emptyDir
An `emptyDir` volume is first created when a Pod is assigned to a Node, and
exists as long as that Pod is running on that node. As the name says, it is
initially empty. Containers in the pod can all read and write the same
files in the `emptyDir` volume, though that volume can be mounted at the same
or different paths in each container. When a Pod is removed from a node for
any reason, the data in the `emptyDir` is deleted forever. NOTE: a container
crashing does *NOT* remove a pod from a node, so the data in an `emptyDir`
volume is safe across container crashes.
Some uses for an `emptyDir` are:
* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver
container serves the data
By default, `emptyDir` volumes are stored on whatever medium is backing the
machine - that might be disk or SSD or network storage, depending on your
environment. However, you can set the `emptyDir.medium` field to `"Memory"`
to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
machine reboot and any files you write will count against your container's
memory limit.
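For illustration, a pod using an `emptyDir` volume as scratch space might look like this (a sketch; the pod, container, and volume names are arbitrary):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-emptydir
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```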
### hostPath
A `hostPath` volume mounts a file or directory from the host node's filesystem
into your pod. This is not something that most Pods will need, but it offers a
powerful escape hatch for some applications.
For example, some uses for a `hostPath` are:
* running a container that needs access to Docker internals; use a `hostPath`
of `/var/lib/docker`
* running cAdvisor in a container; use a `hostPath` of `/dev/cgroups`
Watch out when using this type of volume, because:
* pods with identical configuration (such as created from a podTemplate) may
behave differently on different nodes due to different files on the nodes
* when Kubernetes adds resource-aware scheduling, as is planned, it will not be
able to account for resources used by a `hostPath`
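A pod using a `hostPath` volume might look like the following sketch (the pod and volume names are arbitrary, and the host path shown is only an example):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-hostpath
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on the host node
      path: /var/lib/docker
```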
### gcePersistentDisk
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be "handed off" between pods.
__Important: You must create a PD using `gcloud` or the GCE API or UI
before you can use it__
There are some restrictions when using a `gcePersistentDisk`:
* the nodes on which pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the PD
A feature of PDs is that they can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.
Using a PD on a pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1.
#### Creating a PD
Before you can use a GCE PD with a pod, you need to create it.
```sh
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```
#### Example pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```
### awsElasticBlockStore
An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
Volume](http://aws.amazon.com/ebs/) into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of an EBS
volume are preserved and the volume is merely unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be "handed off"
between pods.
__Important: You must create an EBS volume using `aws ec2 create-volume` or
the AWS API before you can use it__
There are some restrictions when using an awsElasticBlockStore volume:
* the nodes on which pods are running must be AWS EC2 instances
* those instances need to be in the same region and availability-zone as the EBS volume
* EBS only supports a single EC2 instance mounting a volume
#### Creating an EBS volume
Before you can use an EBS volume with a pod, you need to create it.
```sh
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```
Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume
type are suitable for your use!)
#### AWS EBS Example configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```
### nfs
An `nfs` volume allows an existing NFS (Network File System) share to be
mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
removed, the contents of an `nfs` volume are preserved and the volume is merely
unmounted. This means that an NFS volume can be pre-populated with data, and
that data can be "handed off" between pods. NFS can be mounted by multiple
writers simultaneously.
__Important: You must have your own NFS server running with the share exported
before you can use it__
See the [NFS example](../../examples/nfs/) for more details.
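For illustration, an `nfs` volume stanza in a pod spec might look like this (a sketch only; the server name and export path are placeholders):
```yaml
volumes:
- name: nfs-volume
  nfs:
    # the server and exported path below are placeholders
    server: nfs-server.example.com
    path: /exported/share
    readOnly: false
```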
### iscsi
An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the
contents of an `iscsi` volume are preserved and the volume is merely
unmounted. This means that an iscsi volume can be pre-populated with data, and
that data can be "handed off" between pods.
__Important: You must have your own iSCSI server running with the volume
created before you can use it__
A feature of iSCSI is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
iSCSI volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.
See the [iSCSI example](../../examples/iscsi/) for more details.
### flocker
[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management
and orchestration of data volumes backed by a variety of storage backends.
A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the
dataset does not already exist in Flocker, it needs to be first created with the Flocker
CLI or by using the Flocker API. If the dataset already exists it will be
reattached by Flocker to the node on which the pod is scheduled. This means data
can be "handed off" between pods as required.
__Important: You must have your own Flocker installation running before you can use it__
See the [Flocker example](../../examples/flocker/) for more details.
### glusterfs
A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open
source networked filesystem) volume to be mounted into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a
`glusterfs` volume are preserved and the volume is merely unmounted. This
means that a glusterfs volume can be pre-populated with data, and that data can
be "handed off" between pods. GlusterFS can be mounted by multiple writers
simultaneously.
__Important: You must have your own GlusterFS installation running before you
can use it__
See the [GlusterFS example](../../examples/glusterfs/) for more details.
### rbd
An `rbd` volume allows a [Rados Block
Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of
an `rbd` volume are preserved and the volume is merely unmounted. This
means that an RBD volume can be pre-populated with data, and that data can
be "handed off" between pods.
__Important: You must have your own Ceph installation running before you
can use RBD__
A feature of RBD is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
RBD volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.
See the [RBD example](../../examples/rbd/) for more details.
### gitRepo
A `gitRepo` volume is an example of what can be done as a volume plugin. It
mounts an empty directory and clones a git repository into it for your pod to
use. In the future, such volumes may be moved to an even more decoupled model,
rather than extending the Kubernetes API for every such use case.
Here is an example of a `gitRepo` volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```
### secret
A `secret` volume is used to pass sensitive information, such as passwords, to
pods. You can store secrets in the Kubernetes API and mount them as files for
use by pods without coupling to Kubernetes directly. `secret` volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
__Important: You must create a secret in the Kubernetes API before you can use
it__
Secrets are described in more detail [here](secrets.md).
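For illustration, a pod mounting a secret as a volume might look like the following sketch (the secret `my-secret` is assumed to already exist; the names and mount path are arbitrary):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-secret
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
```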
### persistentVolumeClaim
A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](persistent-volumes.md) into a pod. PersistentVolumes are a
way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
See the [PersistentVolumes example](persistent-volumes/) for more
details.
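A pod using a `persistentVolumeClaim` volume might look like this sketch (the claim `my-claim` is assumed to exist already; the other names are arbitrary):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-claim
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - mountPath: /data
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-claim
```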
### downwardAPI
A `downwardAPI` volume is used to make downward API data available to applications.
It mounts a directory and writes the requested data in plain text files.
See the [`downwardAPI` volume example](downward-api/volume/README.md) for more details.
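For illustration, a `downwardAPI` volume stanza that exposes the pod's labels as a file might look like this (a sketch; the volume name and file path are arbitrary):
```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"
      fieldRef:
        fieldPath: metadata.labels
```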
### FlexVolume
A `FlexVolume` enables users to mount vendor volumes into a pod. It expects that vendor
drivers are installed in the volume plugin path on each kubelet node. This is
an alpha feature and may change in the future.
More details can be found [here](../../examples/flexvolume/README.md)
### AzureFileVolume
An `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
into a Pod.
More details can be found [here](../../examples/azure_file/README.md)
## Resources
The storage media (Disk, SSD, etc) of an `emptyDir` volume is determined by the
medium of the filesystem holding the kubelet root dir (typically
`/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or
`hostPath` volume can consume, and no isolation between containers or between
pods.
In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
request a certain amount of space using a [resource](compute-resources.md)
specification, and to select the type of media to use, for clusters that have
several media types.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/volumes.md?pixel)]()
View File
@@ -32,198 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes 101 - Kubectl CLI and Pods
For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers.
In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes 101 - Kubectl CLI and Pods](#kubernetes-101---kubectl-cli-and-pods)
- [Kubectl CLI](#kubectl-cli)
- [Pods](#pods)
- [Pod Definition](#pod-definition)
- [Pod Management](#pod-management)
- [Volumes](#volumes)
- [Volume Types](#volume-types)
- [Multiple Containers](#multiple-containers)
- [What's Next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Kubectl CLI
The easiest way to interact with Kubernetes is via the [kubectl](../kubectl/kubectl.md) command-line interface.
For more info about kubectl, including its usage, commands, and parameters, see the [kubectl CLI reference](../kubectl/kubectl.md).
If you haven't installed and configured kubectl, finish the [prerequisites](../prereqs.md) before continuing.
## Pods
In Kubernetes, a group of one or more containers is called a _pod_. Containers in a pod are deployed together, and are started, stopped, and replicated as a group.
See [pods](../../../docs/user-guide/pods.md) for more details.
#### Pod Definition
The simplest pod definition describes the deployment of a single container. For example, an nginx web server pod might be defined as such:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.
See the [design document](../../design/README.md) for more details.
#### Pod Management
Create a pod containing an nginx server ([pod-nginx.yaml](pod-nginx.yaml)):
```sh
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx.yaml
```
List all pods:
```sh
$ kubectl get pods
```
On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
```sh
$ curl http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
```
Delete the pod by name:
```sh
$ kubectl delete pod nginx
```
#### Volumes
That's great for a simple static web server, but what about persistent storage?
The container file system only lives as long as the container does. So if your app's state needs to survive relocation, reboots, and crashes, you'll need to configure some persistent storage.
For this example we'll be creating a Redis pod with a named volume and volume mount that defines the path to mount the volume.
1. Define a volume:
```yaml
volumes:
- name: redis-persistent-storage
  emptyDir: {}
```
2. Define a volume mount within a container definition:
```yaml
volumeMounts:
# name must match the volume name below
- name: redis-persistent-storage
  # mount path within the container
  mountPath: /data/redis
```
Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](pod-redis.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-redis.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-persistent-storage
      mountPath: /data/redis
  volumes:
  - name: redis-persistent-storage
    emptyDir: {}
```
[Download example](pod-redis.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-redis.yaml -->
Notes:
- The `volumeMounts` `name` is a reference to a specific `volumes` `name`.
- The `volumeMounts` `mountPath` is the path to mount the volume within the container.
##### Volume Types
- **EmptyDir**: Creates a new directory that will exist as long as the Pod is running on the node, but it can persist across container failures and restarts.
- **HostPath**: Mounts an existing directory on the node's file system (e.g. `/var/logs`).
See [volumes](../../../docs/user-guide/volumes.md) for more details.
#### Multiple Containers
_Note:
The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples._
However, often you want to have two different containers that work together. An example of this would be a web server, and a helper job that polls a git repository for new updates:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
```
Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked `readOnly` in the web server's case, since it doesn't need to write to the directory.
Finally, we have also introduced an environment variable to the `git-monitor` container, which allows us to parameterize that container with the particular git repository that we want to track.
## What's Next?
Continue on to [Kubernetes 201](k8s201.md) or
for a complete application see the [guestbook example](../../../examples/guestbook/README.md)
This file has moved to: http://kubernetes.github.io/docs/user-guide/walkthrough/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,295 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking
If you went through [Kubernetes 101](README.md), you learned about kubectl, pods, volumes, and multiple containers.
For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and
scaling.
In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking](#kubernetes-201---labels-replication-controllers-services-and-health-checking)
- [Labels](#labels)
- [Replication Controllers](#replication-controllers)
- [Replication Controller Management](#replication-controller-management)
- [Services](#services)
- [Service Management](#service-management)
- [Health Checking](#health-checking)
- [Process Health Checking](#process-health-checking)
- [Application Health Checking](#application-health-checking)
- [What's Next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Labels
Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
To add a label, add a labels section under metadata in the pod definition:
```yaml
labels:
  app: nginx
```
For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
[Download example](pod-nginx-with-label.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
```
List all pods with the label `app=nginx`:
```console
$ kubectl get pods -l app=nginx
```
For more information, see [Labels](../labels.md).
They are a core concept used by two additional Kubernetes building blocks: Replication Controllers and Services.
## Replication Controllers
OK, now you know how to make awesome, multi-container, labeled pods, and you want to use them to build an application. You might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pops up. For example: how will you scale the number of pods up or down, and how will you ensure that all pods are homogeneous?
Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
For example, here is a replication controller that instantiates two nginx pods ([replication-controller.yaml](replication-controller.yaml)):
<!-- BEGIN MUNGE: EXAMPLE replication-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: nginx
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
[Download example](replication-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE replication-controller.yaml -->
#### Replication Controller Management
Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
```
List all replication controllers:
```console
$ kubectl get rc
```
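You can also scale the number of replicas without editing the definition file; for example (the target count of 4 is just for illustration):
```console
$ kubectl scale rc nginx-controller --replicas=4
```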
Delete the replication controller by name:
```console
$ kubectl delete rc nginx-controller
```
For more information, see [Replication Controllers](../replication-controller.md).
## Services
Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
For example, here is a service that balances across the pods created in the previous nginx replication controller example ([service.yaml](service.yaml)):
<!-- BEGIN MUNGE: EXAMPLE service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000 # the port that this service should serve on
    # the container on each pod to connect to, can be a name
    # (e.g. 'www') or a number (e.g. 80)
    targetPort: 80
    protocol: TCP
  # just like the selector in the replication controller,
  # but this time it identifies the set of pods to load balance
  # traffic to.
  selector:
    app: nginx
```
[Download example](service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE service.yaml -->
#### Service Management
Create an nginx service ([service.yaml](service.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/service.yaml
```
List all services:
```console
$ kubectl get services
```
On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
Provided the service IP is accessible, you should be able to access its http endpoint with curl on the service port (8000 in this example):
```console
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template={{.spec.clusterIP}})
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
$ curl http://${SERVICE_IP}:${SERVICE_PORT}
```
To delete the service by name:
```console
$ kubectl delete service nginx-service
```
When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service.
For more information, see [Services](../services.md).
## Health Checking
When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/kubernetes/kubernetes/issues) indicates otherwise...
Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
and repair of your application. That way a system outside of your application itself is responsible for monitoring the
application and taking action to fix it. It's important that the system be outside of the application, since if
your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
In Kubernetes, the health check monitor is the Kubelet agent.
#### Process Health Checking
The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
Kubernetes.
#### Application Health Checking
However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
```go
lockOne := sync.Mutex{}
lockTwo := sync.Mutex{}

go func() {
    lockOne.Lock()
    lockTwo.Lock()
    // ...
}()

lockTwo.Lock()
lockOne.Lock()
```
This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
Currently, there are three types of application health checks that you can choose from:
* HTTP Health Checks - The Kubelet will call a web hook. If it returns an HTTP status code between 200 and 399, it is considered a success; otherwise it is considered a failure. See health check examples [here](../liveness/).
* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0, it is considered a success. See health check examples [here](../liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
In all cases, if the Kubelet discovers a failure the container is restarted.
The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /_status/healthz
        port: 80
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 1
    ports:
    - containerPort: 80
```
[Download example](pod-with-http-healthcheck.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->
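For comparison, a container exec health check might be configured along these lines (a sketch only; the command and timing values are arbitrary):
```yaml
livenessProbe:
  # a container exec probe; the command is illustrative
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 15
  timeoutSeconds: 1
```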
For more information about health checking, see [Container Probes](../pod-states.md#container-probes).
## What's Next?
For a complete application see the [guestbook example](../../../examples/guestbook/).
This file has moved to: http://kubernetes.github.io/docs/user-guide/walkthrough/k8s201/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@@ -32,57 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Working with Resources
*This document is aimed at users who have worked through some of the examples,
and who want to learn more about using kubectl to manage resources such
as pods and services. Users who want to access the REST API directly,
and developers who want to extend the Kubernetes API should
refer to the [api conventions](../devel/api-conventions.md) and
the [api document](../api.md).*
## Resources are Automatically Modified
When you create a resource such as a pod and then retrieve the created resource, a number of fields are added to it.
You can see this at work in the following example:
```console
$ cat > /tmp/original.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: busybox
  restartPolicy: Never
EOF
$ kubectl create -f /tmp/original.yaml
pods/original
$ kubectl get pods/original -o yaml > /tmp/current.yaml
pods/original
$ wc -l /tmp/original.yaml /tmp/current.yaml
51 /tmp/current.yaml
9 /tmp/original.yaml
60 total
```
The resource we posted had only 9 lines, but the one we got back had 51 lines.
If you `diff -u /tmp/original.yaml /tmp/current.yaml`, you can see the fields added to the pod.
The system adds fields in several ways:
- Some fields are added synchronously with creation of the resource and some are set asynchronously.
- For example: `metadata.uid` is set synchronously. (Read more about [metadata](../devel/api-conventions.md#metadata)).
- For example, `status.hostIP` is set only after the pod has been scheduled. This often happens fast, but you may notice pods which do not have this set yet. This is called late initialization. (Read more about [status](../devel/api-conventions.md#spec-and-status) and [late initialization](../devel/api-conventions.md#late-initialization).)
- Some fields are set to default values. Some defaults vary by cluster and some are fixed for the API at a certain version. (Read more about [defaulting](../devel/api-conventions.md#defaulting)).
- For example, `spec.containers[0].imagePullPolicy` always defaults to `IfNotPresent` in api v1.
- For example, `spec.containers[0].resources.limits.cpu` may be defaulted to `100m` on some clusters, to some other value on others, and not defaulted at all on others.
The API will generally not modify fields that you have set; it just sets ones which were unspecified.
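As an illustration of the kinds of fields that show up, the retrieved object might contain entries like the following (a sketch only; the exact fields and values depend on your cluster and API version):
```yaml
spec:
  containers:
  - name: foo
    image: busybox
    imagePullPolicy: IfNotPresent                   # defaulted by the API
    terminationMessagePath: /dev/termination-log    # defaulted by the API
  restartPolicy: Never
status:
  phase: Running
  hostIP: 10.240.0.4                                # late-initialized after scheduling
```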
## <a name="finding_schema_docs"></a>Finding Documentation on Resource Fields
You can browse auto-generated API documentation at the [project website](http://kubernetes.io/v1.0/api-ref.html) or on [github](../../docs/api-reference/).
This file has moved to: http://kubernetes.github.io/docs/user-guide/working-with-resources/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->