move user docs to their new home
283
docs/user-guide/accessing-the-cluster.md
Normal file
@@ -0,0 +1,283 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# User Guide to Accessing the Cluster
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [User Guide to Accessing the Cluster](#user-guide-to-accessing-the-cluster)
|
||||
- [Accessing the cluster API](#accessing-the-cluster-api)
|
||||
- [Accessing for the first time with kubectl](#accessing-for-the-first-time-with-kubectl)
|
||||
- [Directly accessing the REST API](#directly-accessing-the-rest-api)
|
||||
- [Using kubectl proxy](#using-kubectl-proxy)
|
||||
- [Without kubectl proxy](#without-kubectl-proxy)
|
||||
- [Programmatic access to the API](#programmatic-access-to-the-api)
|
||||
- [Accessing the API from a Pod](#accessing-the-api-from-a-pod)
|
||||
- [Accessing services running on the cluster](#accessing-services-running-on-the-cluster)
|
||||
- [Ways to connect](#ways-to-connect)
|
||||
- [Discovering builtin services](#discovering-builtin-services)
|
||||
- [Manually constructing apiserver proxy URLs](#manually-constructing-apiserver-proxy-urls)
|
||||
- [Examples](#examples)
|
||||
- [Using web browsers to access services running on the cluster](#using-web-browsers-to-access-services-running-on-the-cluster)
|
||||
- [Requesting redirects](#requesting-redirects)
|
||||
- [So Many Proxies](#so-many-proxies)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Accessing the cluster API
|
||||
### Accessing for the first time with kubectl
|
||||
When accessing the Kubernetes API for the first time, we suggest using the
|
||||
kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
|
||||
to access it. Typically, this is automatically set up when you work through
|
||||
a [Getting started guide](getting-started-guides/README.md),
|
||||
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
```
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
Many of the [examples](../examples/) provide an introduction to using
|
||||
kubectl and complete documentation is found in the [kubectl manual](user-guide/kubectl/kubectl.md).
|
||||
|
||||
### Directly accessing the REST API
|
||||
Kubectl handles locating and authenticating to the apiserver.
|
||||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In the future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
#### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
```
|
||||
kubectl proxy --port=8080 &
|
||||
```
|
||||
See [kubectl proxy](user-guide/kubectl/kubectl_proxy.md) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
```
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
#### Without kubectl proxy
|
||||
It is also possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
```
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The above example uses the `--insecure` flag. This leaves it subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
may take special configuration to get your http client to use the root
|
||||
certificate.
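As a sketch of the alternative (file names and kubeconfig fields vary by cluster setup, so treat this as an assumption rather than a recipe), you can point curl at the CA certificate that your kubeconfig references instead of passing `--insecure`:

```sh
# Sketch: reuse the CA certificate that kubectl trusts so curl can verify the
# apiserver's self-signed certificate instead of skipping verification.
# (If your kubeconfig embeds the certificate as certificate-authority-data,
# write it out to a file first.)
CA_CERT=$(kubectl config view | grep certificate-authority | cut -f 2- -d ":" | tr -d " ")
curl --cacert $CA_CERT $APISERVER/api --header "Authorization: Bearer $TOKEN"
```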
|
||||
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is no standard
|
||||
for this. [Configuring Access to the API](admin/accessing-the-api.md)
|
||||
describes how a cluster admin can configure this. Such approaches may conflict
|
||||
with future high-availability support.
|
||||
|
||||
### Programmatic access to the API
|
||||
|
||||
There are [client libraries](client-libraries.md) for accessing the API
|
||||
from several languages. The Kubernetes project-supported
|
||||
[Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client)
|
||||
client library can use the same [kubeconfig file](kubeconfig-file.md)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver.
|
||||
|
||||
See documentation for other libraries for how they authenticate.
|
||||
|
||||
### Accessing the API from a Pod
|
||||
|
||||
When accessing the API from a pod, locating and authenticating
|
||||
to the api server are somewhat different.
|
||||
|
||||
The recommended way to locate the apiserver within the pod is with
|
||||
the `kubernetes` DNS name, which resolves to a Service IP which in turn
|
||||
will be routed to an apiserver.
|
||||
|
||||
The recommended way to authenticate to the apiserver is with a
|
||||
[service account](service-accounts.md) credential. By default, a pod
|
||||
is associated with a service account, and a credential (token) for that
|
||||
service account is placed into the filesystem tree of each container in that pod,
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
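For example, here is a minimal sketch of using that token from inside a container (it assumes the `kubernetes` DNS name resolves as described above; depending on how your cluster is configured, a CA certificate may also be mounted next to the token, which would let you drop `--insecure`):

```sh
# Run inside a container: read the mounted service account token and use it
# as a bearer token against the apiserver via the `kubernetes` DNS name.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --insecure https://kubernetes/api --header "Authorization: Bearer $TOKEN"
```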
|
||||
|
||||
From within a pod, the recommended ways to connect to the API are:
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
process within a container. This proxies the
|
||||
kubernetes API to the localhost interface of the pod, so that other processes
|
||||
in any container of the pod can access it. See this [example of using kubectl proxy
|
||||
in a pod](../examples/kubectl-container/).
|
||||
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
|
||||
This handles locating and authenticating to the apiserver.
|
||||
In each case, the credentials of the pod are used to communicate securely with the apiserver.
|
||||
|
||||
|
||||
## Accessing services running on the cluster
|
||||
The previous section was about connecting to the Kubernetes API server. This section is about
|
||||
connecting to other services running on a Kubernetes cluster. In Kubernetes, the
|
||||
[nodes](admin/node.md), [pods](pods.md) and [services](services.md) all have
|
||||
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
|
||||
routable, so they will not be reachable from a machine outside the cluster,
|
||||
such as your desktop machine.
|
||||
|
||||
### Ways to connect
|
||||
You have several options for connecting to nodes, pods and services from outside the cluster:
|
||||
- Access services through public IPs.
|
||||
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
|
||||
the cluster. See the [services](services.md) and
|
||||
[kubectl expose](user-guide/kubectl/kubectl_expose.md) documentation; a short sketch follows this list.
|
||||
- Depending on your cluster environment, this may just expose the service to your corporate network,
|
||||
or it may expose it to the internet. Think about whether the service being exposed is secure.
|
||||
Does it do its own authentication?
|
||||
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
|
||||
place a unique label on the pod and create a new service which selects this label.
|
||||
- In most cases, it should not be necessary for an application developer to directly access
|
||||
nodes via their nodeIPs.
|
||||
- Access services, nodes, or pods using the Proxy Verb.
|
||||
- Does apiserver authentication and authorization prior to accessing the remote service.
|
||||
Use this if the services are not secure enough to expose to the internet, or to gain
|
||||
access to ports on the node IP, or for debugging.
|
||||
- Proxies may cause problems for some web applications.
|
||||
- Only works for HTTP/HTTPS.
|
||||
- Described [here](#discovering-builtin-services).
|
||||
- Access from a node or pod in the cluster.
|
||||
- Run a pod, and then connect to a shell in it using [kubectl exec](user-guide/kubectl/kubectl_exec.md).
|
||||
Connect to other nodes, pods, and services from that shell.
|
||||
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
|
||||
access cluster services. This is a non-standard method, and will work on some clusters but
|
||||
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
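As a sketch of the first option above (the controller name is hypothetical, and the exact `kubectl expose` flags may vary by release):

```sh
# Sketch: expose an existing replication controller named my-nginx outside the
# cluster, then look up the IP(s) assigned to the new Service.
kubectl expose rc my-nginx --port=80 --type="LoadBalancer"
kubectl get svc my-nginx
```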
|
||||
|
||||
### Discovering builtin services
|
||||
|
||||
Typically, there are several services which are started on a cluster by default. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
```
|
||||
$ kubectl cluster-info
|
||||
|
||||
Kubernetes master is running at https://104.197.5.247
|
||||
elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging
|
||||
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kibana-logging
|
||||
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kube-dns
|
||||
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-grafana
|
||||
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-heapster
|
||||
```
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
|
||||
`http://localhost:8080/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`.
|
||||
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
|
||||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`*
|
||||
<!--- TODO: update this part of doc because it doesn't seem to be valid. What
|
||||
about namespaces? 'proxy' verb? -->
|
||||
|
||||
##### Examples
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true`
|
||||
```
|
||||
{
|
||||
"cluster_name" : "kubernetes_logging",
|
||||
"status" : "yellow",
|
||||
"timed_out" : false,
|
||||
"number_of_nodes" : 1,
|
||||
"number_of_data_nodes" : 1,
|
||||
"active_primary_shards" : 5,
|
||||
"active_shards" : 5,
|
||||
"relocating_shards" : 0,
|
||||
"initializing_shards" : 0,
|
||||
"unassigned_shards" : 5
|
||||
}
|
||||
```
|
||||
|
||||
#### Using web browsers to access services running on the cluster
|
||||
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
|
||||
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. The apiserver can be configured to accept basic auth,
|
||||
but your cluster may not be configured that way.
|
||||
- Some web apps may not work, particularly those with client-side JavaScript that constructs URLs in a
|
||||
way that is unaware of the proxy path prefix.
|
||||
|
||||
## Requesting redirects
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
1. The [kube proxy](services.md#ips-and-vips):
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
1. Cloud Load Balancers on external services:
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
will typically ensure that the latter types are set up correctly.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
44
docs/user-guide/annotations.md
Normal file
@@ -0,0 +1,44 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Annotations
|
||||
|
||||
We have [labels](labels.md) for identifying metadata.
|
||||
|
||||
It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels.
|
||||
|
||||
Like labels, annotations are key-value maps.
|
||||
```
|
||||
"annotations": {
|
||||
"key1" : "value1",
|
||||
"key2" : "value2"
|
||||
}
|
||||
```
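For instance, here is a minimal sketch of where that map lives on an object (the pod name, keys, and values here are hypothetical):

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    build-timestamp: "2015-07-07T12:00:00Z"
    git-branch: "release-candidate"
spec:
  containers:
  - name: demo
    image: nginx
EOF
```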
|
||||
|
||||
Possible information that could be recorded in annotations:
|
||||
|
||||
* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
|
||||
* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.)
|
||||
* pointers to logging/monitoring/analytics/audit repos
|
||||
* client library/tool information (e.g. for debugging purposes -- name, version, build info)
|
||||
* other user and/or tool/system provenance info, such as URLs of related objects from other ecosystem components
|
||||
* lightweight rollout tool metadata (config and/or checkpoints)
|
||||
* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website
|
||||
|
||||
Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
171
docs/user-guide/application-troubleshooting.md
Normal file
@@ -0,0 +1,171 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Application Troubleshooting
|
||||
|
||||
This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
|
||||
This is *not* a guide for people who want to debug their cluster. For that you should check out
|
||||
[this guide](cluster-troubleshooting.md).
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Application Troubleshooting](#application-troubleshooting)
|
||||
- [FAQ](#faq)
|
||||
- [Diagnosing the problem](#diagnosing-the-problem)
|
||||
- [Debugging Pods](#debugging-pods)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## FAQ
|
||||
Users are highly encouraged to check out our [FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ).
|
||||
|
||||
## Diagnosing the problem
|
||||
The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
|
||||
your Service?
|
||||
* [Debugging Pods](#debugging-pods)
|
||||
* [Debugging Replication Controllers](#debugging-replication-controllers)
|
||||
* [Debugging Services](#debugging-services)
|
||||
|
||||
### Debugging Pods
|
||||
The first step in debugging a Pod is taking a look at it. For the purposes of example, imagine we have a pod
|
||||
```my-pod``` which holds two containers, ```container-1``` and ```container-2```.
|
||||
|
||||
First, describe the pod. This will show the current state of the Pod and recent events.
|
||||
|
||||
```sh
|
||||
export POD_NAME=my-pod
|
||||
kubectl describe pods ${POD_NAME}
|
||||
```
|
||||
|
||||
Look at the state of the containers in the pod. Are they all ```Running```? Have there been recent restarts?
|
||||
|
||||
Depending on the state of the pod, you may want to:
|
||||
* [Debug a pending pod](#debugging-pending-pods)
|
||||
* [Debug a waiting pod](#debugging-waiting-pods)
|
||||
* [Debug a crashing pod](#debugging-crashing-or-otherwise-unhealthy-pods)
|
||||
|
||||
#### Debugging Pending Pods
|
||||
If a Pod is stuck in ```Pending```, it means that it cannot be scheduled onto a node. Generally this is because
|
||||
there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
|
||||
```kubectl describe ...``` command above. There should be messages from the scheduler about why it cannot schedule
|
||||
your pod. Reasons include:
|
||||
|
||||
You don't have enough resources. You may have exhausted the supply of CPU or memory in your cluster; in this case
|
||||
you need to delete Pods, adjust resource requests, or add new nodes to your cluster.
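One hypothetical way to check whether capacity is the issue is to compare each node's capacity against the limits your pod requests (the same command appears in the compute resources guide):

```sh
# List each node's name and its CPU/memory capacity.
kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'
```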
|
||||
|
||||
You are using ```hostPort```. When you bind a Pod to a ```hostPort``` there are a limited number of places that pod can be
|
||||
scheduled. In most cases, ```hostPort``` is unnecessary; try using a Service object to expose your Pod. If you do require
|
||||
```hostPort``` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
|
||||
|
||||
|
||||
#### Debugging Waiting Pods
|
||||
If a Pod is stuck in the ```Waiting``` state, then it has been scheduled to a worker node, but it can't run on that machine.
|
||||
Again, the information from ```kubectl describe ...``` should be informative. The most common cause of ```Waiting``` pods
|
||||
is a failure to pull the image. Make sure that you have the name of the image correct. Have you pushed it to the repository?
|
||||
Does it work if you run a manual ```docker pull <image>``` on your machine?
|
||||
|
||||
#### Debugging Crashing or otherwise unhealthy pods
|
||||
|
||||
Let's suppose that ```container-2``` has been crash looping and you don't know why. You can take a look at the logs of
|
||||
the current container:
|
||||
|
||||
```sh
|
||||
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
If your container has previously crashed, you can access the previous container's crash log with:
|
||||
```sh
|
||||
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
Alternately, you can run commands inside that container with ```exec```:
|
||||
|
||||
```sh
|
||||
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
|
||||
```
|
||||
|
||||
Note that ```-c ${CONTAINER_NAME}``` is optional and can be omitted for Pods that only contain a single container.
|
||||
|
||||
As an example, to look at the logs from a running Cassandra pod, you might run:
|
||||
```sh
|
||||
kubectl exec cassandra -- cat /var/log/cassandra/system.log
|
||||
```
|
||||
|
||||
|
||||
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
|
||||
but this should generally not be necessary given the tools in the Kubernetes API. Indeed, if you find yourself needing to SSH into a machine, please file a
|
||||
feature request on GitHub describing your use case and why these tools are insufficient.
|
||||
|
||||
### Debugging Replication Controllers
|
||||
Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
|
||||
create pods, then please refer to the [instructions above](#debugging-pods).
|
||||
|
||||
You can also use ```kubectl describe rc ${CONTROLLER_NAME}``` to introspect events related to the replication
|
||||
controller.
|
||||
|
||||
### Debugging Services
|
||||
Services provide load balancing across a set of pods. There are several common problems that can make Services
|
||||
not work properly. The following instructions should help debug Service problems.
|
||||
|
||||
#### Verify that there are endpoints for the service
|
||||
For every Service object, the apiserver makes an ```endpoints``` resource available.
|
||||
|
||||
You can view this resource with:
|
||||
|
||||
```
|
||||
kubectl get endpoints ${SERVICE_NAME}
|
||||
```
|
||||
|
||||
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
|
||||
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
|
||||
IP addresses in the Service's endpoints.
|
||||
|
||||
#### Missing endpoints
|
||||
If you are missing endpoints, try listing pods using the labels that the Service uses. Imagine that you have
|
||||
a Service where the labels are:
|
||||
```yaml
|
||||
...
|
||||
spec:
|
||||
  selector:
|
||||
    name: nginx
|
||||
    type: frontend
|
||||
```
|
||||
|
||||
You can use:
|
||||
```
|
||||
kubectl get pods --selector=name=nginx,type=frontend
|
||||
```
|
||||
|
||||
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
|
||||
|
||||
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
|
||||
have the right ports exposed. If your Service's ```targetPort``` refers to a named port, but the Pods that are
|
||||
selected don't have that named port listed, then they won't be added to the endpoints list.
|
||||
|
||||
Verify that the pod's ```containerPort``` matches up with the Service's ```targetPort```.
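A quick sketch of one way to compare the two (the selector shown is the hypothetical one from the example above):

```sh
# Show the port the Service forwards traffic to...
kubectl get service ${SERVICE_NAME} -o yaml | grep targetPort
# ...and the ports exposed by the pods it selects.
kubectl get pods --selector=name=nginx,type=frontend -o yaml | grep containerPort
```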
|
||||
|
||||
#### Network traffic isn't forwarded
|
||||
If you can connect to the service, but the connection is immediately dropped, and there are endpoints
|
||||
in the endpoints list, it's likely that the proxy can't contact your pods.
|
||||
|
||||
There are three things to
|
||||
check:
|
||||
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
|
||||
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP
|
||||
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the ```containerPort``` field needs to be 8080.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
227
docs/user-guide/compute-resources.md
Normal file
@@ -0,0 +1,227 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Compute Resources
|
||||
|
||||
**Table of Contents**
|
||||
- [Compute Resources](#compute-resources)
|
||||
- [Container and Pod Resource Limits](#container-and-pod-resource-limits)
|
||||
- [How Pods with Resource Limits are Scheduled](#how-pods-with-resource-limits-are-scheduled)
|
||||
- [How Pods with Resource Limits are Run](#how-pods-with-resource-limits-are-run)
|
||||
- [Monitoring Compute Resource Usage](#monitoring-compute-resource-usage)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [Detecting Resource Starved Containers](#detecting-resource-starved-containers)
|
||||
- [Planned Improvements](#planned-improvements)
|
||||
|
||||
When specifying a [pod](pods.md), you can optionally specify how much CPU and memory (RAM) each
|
||||
container needs. When containers have resource limits, the scheduler is able to make better
|
||||
decisions about which nodes to place pods on, and contention for resources can be handled in a
|
||||
consistent manner.
|
||||
|
||||
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
|
||||
in units of cores. Memory is specified in units of bytes.
|
||||
|
||||
CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
|
||||
resources are measurable quantities which can be requested, allocated, and consumed. They are
|
||||
distinct from [API resources](working-with-resources.md). API resources, such as pods and
|
||||
[services](services.md), are objects that can be written to and retrieved from the Kubernetes API
|
||||
server.
|
||||
|
||||
## Container and Pod Resource Limits
|
||||
|
||||
Each container of a Pod can optionally specify `spec.container[].resources.limits.cpu` and/or
|
||||
`spec.container[].resources.limits.memory`. The `spec.container[].resources.requests` field is not
|
||||
currently used and need not be set.
|
||||
|
||||
Specifying resource limits is optional. In some clusters, an unset value may be replaced with a
|
||||
default value when a pod is created or updated. The default value depends on how the cluster is
|
||||
configured.
|
||||
|
||||
Although limits can only be specified on individual containers, it is convenient to talk about pod
|
||||
resource limits. A *pod resource limit* for a particular resource type is the sum of the resource
|
||||
limits of that type for each container in the pod, with unset values treated as zero.
|
||||
|
||||
The following pod has two containers. Each has a limit of 0.5 core of cpu and 128MiB
|
||||
(2<sup>27</sup> bytes) of memory. The pod can be said to have a limit of 1 core and 256MiB of
|
||||
memory.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
containers:
|
||||
- name: db
|
||||
image: mysql
|
||||
resources:
|
||||
limits:
|
||||
memory: "128Mi"
|
||||
cpu: "500m"
|
||||
- name: wp
|
||||
image: wordpress
|
||||
resources:
|
||||
limits:
|
||||
memory: "128Mi"
|
||||
cpu: "500m"
|
||||
```
|
||||
|
||||
## How Pods with Resource Limits are Scheduled
|
||||
|
||||
When a pod is created, the Kubernetes scheduler selects a node for the pod to
|
||||
run on. Each node has a maximum capacity for each of the resource types: the
|
||||
amount of CPU and memory it can provide for pods. The scheduler ensures that,
|
||||
for each resource type (CPU and memory), the sum of the resource limits of the
|
||||
containers scheduled to the node is less than the capacity of the node. Note
|
||||
that even if actual memory or CPU resource usage on a node is very low, the
|
||||
scheduler will still refuse to place pods onto the node if the capacity check
|
||||
fails. This protects against a resource shortage on a node when resource usage
|
||||
later increases, such as due to a daily peak in request rate.
|
||||
|
||||
Note: Although the scheduler normally spreads pods out across nodes, there are currently some cases
|
||||
where pods with no limits (unset values) might all land on the same node.
|
||||
|
||||
## How Pods with Resource Limits are Run
|
||||
|
||||
When kubelet starts a container of a pod, it passes the CPU and memory limits to the container
|
||||
runner (Docker or rkt).
|
||||
|
||||
When using Docker:
|
||||
- The `spec.container[].resources.limits.cpu` is multiplied by 1024, converted to an integer, and
|
||||
used as the value of the [`--cpu-shares`](
|
||||
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
|
||||
command.
|
||||
- The `spec.container[].resources.limits.memory` is converted to an integer, and used as the value
|
||||
of the [`--memory`](https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag
|
||||
to the `docker run` command.
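For example, for the `cpu: "500m"` and `memory: "128Mi"` limits in the pod above, the kubelet would pass the rough equivalent of the following flags (illustrative only; you do not run this yourself):

```sh
# 0.5 core * 1024 = 512 cpu shares; 128Mi = 134217728 bytes.
docker run --cpu-shares=512 --memory=134217728 mysql
```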
|
||||
|
||||
**TODO: document behavior for rkt**
|
||||
|
||||
If a container exceeds its memory limit, it may be terminated. If it is restartable, it will be
|
||||
restarted by the kubelet, as it would be after any other type of runtime failure.
|
||||
|
||||
A container may or may not be allowed to exceed its CPU limit for extended periods of time.
|
||||
However, it will not be killed for excessive CPU usage.
|
||||
|
||||
To determine if a container cannot be scheduled or is being killed due to resource limits, see the
|
||||
"Troubleshooting" section below.
|
||||
|
||||
## Monitoring Compute Resource Usage
|
||||
|
||||
The resource usage of a pod is reported as part of the Pod status.
|
||||
|
||||
If [optional monitoring](../cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
|
||||
then pod resource usage can be retrieved from the monitoring system.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled
|
||||
until a place can be found. An event will be produced each time the scheduler fails to find a
|
||||
place for the pod, like this:
|
||||
```
|
||||
$ kubectl describe pods/frontend | grep -A 3 Events
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Tue, 30 Jun 2015 09:01:41 -0700 Tue, 30 Jun 2015 09:39:27 -0700 128 {scheduler } failedScheduling Error scheduling: For each of these fitness predicates, pod frontend failed on at least one node: PodFitsResources.
|
||||
```
|
||||
|
||||
If a pod or pods are pending with this message, then there are several things to try:
|
||||
- Add more nodes to the cluster.
|
||||
- Terminate unneeded pods to make room for pending pods.
|
||||
- Check that the pod is not larger than all the nodes. For example, if all the nodes
|
||||
have a capacity of `cpu: 1`, then a pod with a limit of `cpu: 1.1` will never be scheduled.
|
||||
|
||||
You can check node capacities with the `kubectl get nodes -o <format>` command.
|
||||
Here are some example command lines that extract just the necessary information:
|
||||
- `kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'`
|
||||
- `kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'`
|
||||
|
||||
The [resource quota](admin/resource-quota.md) feature can be configured
|
||||
to limit the total amount of resources that can be consumed. If used in conjunction
|
||||
with namespaces, it can prevent one team from hogging all the resources.
|
||||
|
||||
### Detecting Resource Starved Containers
|
||||
To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
|
||||
on the pod you are interested in:
|
||||
|
||||
```
|
||||
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
|
||||
Name: simmemleak-hra99
|
||||
Namespace: default
|
||||
Image(s): saadali/simmemleak
|
||||
Node: kubernetes-minion-tf0f/10.240.216.66
|
||||
Labels: name=simmemleak
|
||||
Status: Running
|
||||
Reason:
|
||||
Message:
|
||||
IP: 10.244.2.75
|
||||
Replication Controllers: simmemleak (1/1 replicas created)
|
||||
Containers:
|
||||
simmemleak:
|
||||
Image: saadali/simmemleak
|
||||
Limits:
|
||||
cpu: 100m
|
||||
memory: 50Mi
|
||||
State: Running
|
||||
Started: Tue, 07 Jul 2015 12:54:41 -0700
|
||||
Ready: False
|
||||
Restart Count: 5
|
||||
Conditions:
|
||||
Type Status
|
||||
Ready False
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-minion-tf0f
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
|
||||
```
|
||||
|
||||
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
|
||||
|
||||
Once [#10861](https://github.com/GoogleCloudPlatform/kubernetes/issues/10861) is resolved the reason for the termination of the last container will also be printed in this output.
|
||||
|
||||
Until then you can call `get pod` with the `-o template -t ...` option to fetch the status of previously terminated containers:
|
||||
```
|
||||
[13:59:01] $ ./cluster/kubectl.sh get pod -o template -t '{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
|
||||
Container Name: simmemleak
|
||||
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/GoogleCloudPlatform/kubernetes $
|
||||
```
|
||||
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
|
||||
|
||||
## Planned Improvements
|
||||
|
||||
The current system only allows resource quantities to be specified on a container.
|
||||
It is planned to improve accounting for resources which are shared by all containers in a pod,
|
||||
such as [EmptyDir volumes](volumes.md#emptydir).
|
||||
|
||||
The current system only supports container limits for CPU and Memory.
|
||||
It is planned to add new resource types, including a node disk space
|
||||
resource, and a framework for adding custom [resource types](design/resources.md#resource-types).
|
||||
|
||||
The current system does not facilitate overcommitment of resources because resources reserved
|
||||
with container limits are assured. It is planned to support multiple levels of [Quality of
|
||||
Service](https://github.com/GoogleCloudPlatform/kubernetes/issues/168).
|
||||
|
||||
Currently, one unit of CPU means different things on different cloud providers, and on different
|
||||
machine types within the same cloud providers. For example, on AWS, the capacity of a node
|
||||
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
|
||||
cores. We plan to revise the definition of the cpu resource to allow for more consistency
|
||||
across providers and platforms.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
123
docs/user-guide/container-environment.md
Normal file
@@ -0,0 +1,123 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# Kubernetes Container Environment
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Kubernetes Container Environment](#kubernetes-container-environment)
|
||||
- [Overview](#overview)
|
||||
- [Cluster Information](#cluster-information)
|
||||
- [Container Information](#container-information)
|
||||
- [Cluster Information](#cluster-information)
|
||||
- [Container Hooks](#container-hooks)
|
||||
- [Hook Details](#hook-details)
|
||||
- [Hook Handler Execution](#hook-handler-execution)
|
||||
- [Hook delivery guarantees](#hook-delivery-guarantees)
|
||||
- [Hook Handler Implementations](#hook-handler-implementations)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
|
||||
## Overview
|
||||
This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode). In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
|
||||
|
||||
This cluster information makes it possible to build applications that are *cluster aware*.
|
||||
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
|
||||
|
||||
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images.md) and one or more [volumes](volumes.md).
|
||||
|
||||
|
||||
The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
|
||||
|
||||
## Cluster Information
|
||||
There are two types of information that are available within the container environment. There is information about the container itself, and there is information about other objects in the system.
|
||||
|
||||
### Container Information
|
||||
Currently, the only information about the container that is available to the container is the Pod name for the pod in which the container is running. This name is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
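For example, from inside any container of the pod:

```sh
# Prints the pod's name, since the pod name is set as the container's hostname.
hostname
```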
|
||||
|
||||
In the future, we anticipate expanding this information with richer information about the container. Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.
|
||||
|
||||
### Cluster Information
|
||||
Currently, the list of all services that were running when the container was created (via the Kubernetes Cluster API) is available to the container as environment variables. The set of environment variables matches the syntax of Docker links.
|
||||
|
||||
For a service named **foo** that maps to a container port named **bar**, the following variables are defined:
|
||||
|
||||
```sh
|
||||
FOO_SERVICE_HOST=<the host the service is running on>
|
||||
FOO_SERVICE_PORT=<the port the service is running on>
|
||||
```
|
||||
|
||||
Services have a dedicated IP address and are also surfaced to the container via DNS (if the [DNS addon](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
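For example, a container could locate the hypothetical **foo** service like this:

```sh
# Read the Docker-links-style variables injected for the "foo" service.
echo "foo is reachable at ${FOO_SERVICE_HOST}:${FOO_SERVICE_PORT}"
```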
|
||||
|
||||
## Container Hooks
|
||||
*NB: Container hooks are under active development; we anticipate adding additional hooks as the Kubernetes container management system evolves.*
|
||||
|
||||
Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the life-cycle of the container. They are different from the events provided by Docker and other systems which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.
|
||||
|
||||
### Hook Details
|
||||
There are currently two container hooks that are surfaced to containers, and two proposed hooks:
|
||||
|
||||
*PreStart* - ***Proposed***
|
||||
|
||||
This hook is sent immediately before a container is created. It notifies that the container will be created immediately after the call completes. No parameters are passed. *Note:* some event handlers (namely 'exec') are incompatible with this event.
|
||||
|
||||
*PostStart*
|
||||
|
||||
This hook is sent immediately after a container is created. It notifies the container that it has been created. No parameters are passed to the handler.
|
||||
|
||||
*PostRestart* - ***Proposed***
|
||||
|
||||
This hook is called before the PostStart handler, when a container has been restarted, rather than started for the first time. No parameters are passed to the handler.
|
||||
|
||||
*PreStop*
|
||||
|
||||
This hook is called immediately before a container is terminated. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent.
|
||||
|
||||
A single parameter named reason is passed to the handler which contains the reason for termination. Currently the valid values for reason are:
|
||||
|
||||
* ```Delete``` - indicating an API call to delete the pod containing this container.
|
||||
* ```Health``` - indicating that a health check of the container failed.
|
||||
* ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD.
|
||||
|
||||
Eventually, user specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137).
|
||||
|
||||
|
||||
### Hook Handler Execution
|
||||
When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod. If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes. Blocking hook handlers do *not* affect management of other Pods. Typically, we expect that users will make their hook handlers as lightweight as possible, but there are cases where long-running commands make sense (e.g. saving state prior to container stop).
|
||||
|
||||
For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs. The details of this parameter passing is handler implementation dependent (see below).
|
||||
|
||||
### Hook delivery guarantees
|
||||
Hook delivery is "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this
|
||||
correctly.
|
||||
|
||||
We expect double delivery to be rare, but in some cases if the ```kubelet``` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up.
|
||||
|
||||
Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend.
|
||||
|
||||
### Hook Handler Implementations
|
||||
Hook handlers are the way that hooks are surfaced to containers. Containers can select the type of hook handler they would like to implement. Kubernetes currently supports two different hook handler types:
|
||||
|
||||
* Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroup and namespaces of the container. Resources consumed by the command are counted against the container. Commands which print "ok" to standard out (stdout) are treated as healthy; any other output is treated as a container failure (and will cause kubelet to forcibly restart the container). Parameters are passed to the command as traditional Linux command line flags (e.g. pre-stop.sh --reason=HEALTH).
|
||||
|
||||
* HTTP - Executes an HTTP request against a specific endpoint on the container. HTTP error codes (5xx) and non-response/failure to connect are treated as container failures. Parameters are passed to the http endpoint as query args (e.g. http://some.server.com/some/path?reason=HEALTH)
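As a minimal sketch (the image, command, path, and port are hypothetical), both handler types are registered on a container through its `lifecycle` field:

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: demo
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:
          path: /shutdown
          port: 80
EOF
```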
|
||||
|
||||
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
108
docs/user-guide/containers.md
Normal file
@@ -0,0 +1,108 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Containers with Kubernetes
|
||||
|
||||
## Containers and commands
|
||||
|
||||
So far the Pods we've seen have all used the `image` field to indicate what process Kubernetes
|
||||
should run in a container. In this case, Kubernetes runs the image's default command. If we want
|
||||
to run a particular command or override the image's defaults, there are two additional fields that
|
||||
we can use:
|
||||
|
||||
1. `Command`: Controls the actual command run by the image
|
||||
2. `Args`: Controls the arguments passed to the command
|
||||
|
||||
### How docker handles command and arguments
|
||||
|
||||
Docker images have metadata associated with them that is used to store information about the image.
|
||||
The image author may use this to define defaults for the command and arguments to run a container
|
||||
when the user does not supply values. Docker calls the fields for commands and arguments
|
||||
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
|
||||
describe here, mostly due to the fact that the docker API allows users to specify both of these
|
||||
fields as either a string array or a string and there are subtle differences in how those cases are
|
||||
handled. We encourage the curious to check out [docker's documentation]() for this feature.
|
||||
|
||||
Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args
|
||||
(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are:
|
||||
|
||||
1. If you do not supply a `Command` or `Args` for a container, the defaults defined by the image
|
||||
will be used
|
||||
2. If you supply a `Command` but no `Args` for a container, only the supplied `Command` will be
|
||||
used; the image's default arguments are ignored
|
||||
3. If you supply only `Args`, the image's default command will be used with the arguments you
|
||||
supply
|
||||
4. If you supply a `Command` **and** `Args`, the image's defaults will be ignored and the values
|
||||
you supply will be used
|
||||
|
||||
Here are examples for these rules in table format
|
||||
|
||||
| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run |
|
||||
|--------------------|------------------|---------------------|--------------------|------------------|
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` |
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
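For instance, the last row of the table corresponds to a container spec like the following sketch (the image and values are hypothetical):

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["/bin/echo"]   # overrides the image's Entrypoint
    args: ["hello", "world"] # overrides the image's Cmd
EOF
```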
|
||||
|
||||
|
||||
## Capabilities
|
||||
|
||||
By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine-grained control over capabilities using cap-add and cap-drop. More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration).
|
||||
|
||||
The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html) is shown in the table below:
|
||||
|
||||
| Docker's capabilities | Linux capabilities |
|
||||
| ---- | ---- |
|
||||
| SETPCAP | CAP_SETPCAP |
|
||||
| SYS_MODULE | CAP_SYS_MODULE |
|
||||
| SYS_RAWIO | CAP_SYS_RAWIO |
|
||||
| SYS_PACCT | CAP_SYS_PACCT |
|
||||
| SYS_ADMIN | CAP_SYS_ADMIN |
|
||||
| SYS_NICE | CAP_SYS_NICE |
|
||||
| SYS_RESOURCE | CAP_SYS_RESOURCE |
|
||||
| SYS_TIME | CAP_SYS_TIME |
|
||||
| SYS_TTY_CONFIG | CAP_SYS_TTY_CONFIG |
|
||||
| MKNOD | CAP_MKNOD |
|
||||
| AUDIT_WRITE | CAP_AUDIT_WRITE |
|
||||
| AUDIT_CONTROL | CAP_AUDIT_CONTROL |
|
||||
| MAC_OVERRIDE | CAP_MAC_OVERRIDE |
|
||||
| MAC_ADMIN | CAP_MAC_ADMIN |
|
||||
| NET_ADMIN | CAP_NET_ADMIN |
|
||||
| SYSLOG | CAP_SYSLOG |
|
||||
| CHOWN | CAP_CHOWN |
|
||||
| NET_RAW | CAP_NET_RAW |
|
||||
| DAC_OVERRIDE | CAP_DAC_OVERRIDE |
|
||||
| FOWNER | CAP_FOWNER |
|
||||
| DAC_READ_SEARCH | CAP_DAC_READ_SEARCH |
|
||||
| FSETID | CAP_FSETID |
|
||||
| KILL | CAP_KILL |
|
||||
| SETGID | CAP_SETGID |
|
||||
| SETUID | CAP_SETUID |
|
||||
| LINUX_IMMUTABLE | CAP_LINUX_IMMUTABLE |
|
||||
| NET_BIND_SERVICE | CAP_NET_BIND_SERVICE |
|
||||
| NET_BROADCAST | CAP_NET_BROADCAST |
|
||||
| IPC_LOCK | CAP_IPC_LOCK |
|
||||
| IPC_OWNER | CAP_IPC_OWNER |
|
||||
| SYS_CHROOT | CAP_SYS_CHROOT |
|
||||
| SYS_PTRACE | CAP_SYS_PTRACE |
|
||||
| SYS_BOOT | CAP_SYS_BOOT |
|
||||
| LEASE | CAP_LEASE |
|
||||
| SETFCAP | CAP_SETFCAP |
|
||||
| WAKE_ALARM | CAP_WAKE_ALARM |
|
||||
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
508
docs/user-guide/debugging-services.md
Normal file
@@ -0,0 +1,508 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# My Service isn't working - how to debug
|
||||
|
||||
An issue that comes up rather frequently for new installations of Kubernetes is
|
||||
that `Services` are not working properly. You've run all your `Pod`s and
|
||||
`ReplicationController`s, but you get no response when you try to access them.
|
||||
This document will hopefully help you to figure out what's going wrong.
|
||||
|
||||
## Conventions
|
||||
|
||||
Throughout this doc you will see various commands that you can run. Some
|
||||
commands need to be run within `Pod`, others on a Kubernetes `Node`, and others
|
||||
can run anywhere you have `kubectl` and credentials for the cluster. To make it
|
||||
clear what is expected, this document will use the following conventions.
|
||||
|
||||
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
|
||||
|
||||
```sh
|
||||
pod$ COMMAND
|
||||
OUTPUT
|
||||
```
|
||||
|
||||
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
|
||||
|
||||
```sh
|
||||
node$ COMMAND
|
||||
OUTPUT
|
||||
```
|
||||
|
||||
If the command is "kubectl ARGS":
|
||||
|
||||
```sh
|
||||
$ kubectl ARGS
|
||||
OUTPUT
|
||||
```
|
||||
|
||||
## Running commands in a Pod
|
||||
|
||||
For many steps here you will want to see what a `Pod` running in the cluster
|
||||
sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can
|
||||
approximate it:
|
||||
|
||||
```sh
|
||||
$ cat <<EOF | kubectl create -f -
|
||||
> apiVersion: v1
|
||||
> kind: Pod
|
||||
> metadata:
|
||||
> name: busybox-sleep
|
||||
> spec:
|
||||
> containers:
|
||||
> - name: busybox
|
||||
> image: busybox
|
||||
> args:
|
||||
> - sleep
|
||||
> - "1000000"
|
||||
> EOF
|
||||
pods/busybox-sleep
|
||||
```
|
||||
|
||||
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
|
||||
context:
|
||||
|
||||
```sh
|
||||
$ kubectl exec busybox-sleep hostname
|
||||
busybox-sleep
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```sh
|
||||
$ kubectl exec -ti busybox-sleep sh
|
||||
/ #
|
||||
```
|
||||
|
||||
## Setup
|
||||
|
||||
For the purposes of this walk-through, let's run some `Pod`s.
|
||||
|
||||
```sh
|
||||
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
|
||||
--labels=app=hostnames \
|
||||
--port=9376 \
|
||||
--replicas=3
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
|
||||
```
|
||||
|
||||
Note that this is the same as if you had started the `ReplicationController` with
|
||||
the following YAML:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: hostnames
|
||||
spec:
|
||||
selector:
|
||||
app: hostnames
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hostnames
|
||||
spec:
|
||||
containers:
|
||||
- name: hostnames
|
||||
image: gcr.io/google_containers/serve_hostname
|
||||
ports:
|
||||
- containerPort: 9376
|
||||
protocol: TCP
|
||||
```
|
||||
|
||||
Confirm your `Pod`s are running:
|
||||
|
||||
```sh
|
||||
$ kubectl get pods -l app=hostnames
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hostnames-0uton 1/1 Running 0 12s
|
||||
hostnames-bvc05 1/1 Running 0 12s
|
||||
hostnames-yp2kp 1/1 Running 0 12s
|
||||
```
|
||||
|
||||
## Does the Service exist?
|
||||
|
||||
The astute reader will have noticed that we did not actually create a `Service`
|
||||
yet - that is intentional. This is a step that sometimes gets forgotten, and
|
||||
is the first thing to check.
|
||||
|
||||
So what would happen if I tried to access a non-existent `Service`? Assuming you
|
||||
have another `Pod` that consumes this `Service` by name, you would get something
|
||||
like:
|
||||
|
||||
```sh
|
||||
pod$ wget -qO- hostnames
|
||||
wget: bad address 'hostnames'
|
||||
```
|
||||
|
||||
or:
|
||||
|
||||
```sh
|
||||
pod$ echo $HOSTNAMES_SERVICE_HOST
|
||||
|
||||
```
|
||||
|
||||
So the first thing to check is whether that `Service` actually exists:
|
||||
|
||||
```sh
|
||||
$ kubectl get svc hostnames
|
||||
Error from server: service "hostnames" not found
|
||||
```
|
||||
|
||||
So we have a culprit. Let's create the `Service`:
|
||||
|
||||
```sh
|
||||
$ kubectl expose rc hostnames --port=80 --target-port=9376
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
hostnames app=hostnames app=hostnames 80/TCP
|
||||
```
|
||||
|
||||
And read it back, just to be sure:
|
||||
|
||||
```sh
|
||||
$ kubectl get svc hostnames
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
hostnames app=hostnames app=hostnames 10.0.1.175 80/TCP
|
||||
```
|
||||
|
||||
As before, this is the same as if you had started the `Service` with YAML:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: hostnames
|
||||
spec:
|
||||
selector:
|
||||
app: hostnames
|
||||
ports:
|
||||
- name: default
|
||||
protocol: TCP
|
||||
port: 80
|
||||
targetPort: 9376
|
||||
```
|
||||
|
||||
Now you can confirm that the `Service` exists.
|
||||
|
||||
## Does the Service work by DNS?
|
||||
|
||||
From a `Pod` in the same `Namespace`:
|
||||
|
||||
```sh
|
||||
pod$ nslookup hostnames
|
||||
Server: 10.0.0.10
|
||||
Address: 10.0.0.10#53
|
||||
|
||||
Name: hostnames.default.svc.cluster.local
|
||||
Address: 10.0.1.175
|
||||
```
|
||||
|
||||
If this fails, perhaps because your `Pod` and `Service` are in different
|
||||
`Namespace`s, try a namespace-qualified name:
|
||||
|
||||
```sh
|
||||
pod$ nslookup hostnames.default
|
||||
Server: 10.0.0.10
|
||||
Address: 10.0.0.10#53
|
||||
|
||||
Name: hostnames.default.svc.cluster.local
|
||||
Address: 10.0.1.175
|
||||
```
|
||||
|
||||
If this still fails, try a fully-qualified name:
|
||||
|
||||
```sh
|
||||
pod$ nslookup hostnames.default.svc.cluster.local
|
||||
Server: 10.0.0.10
|
||||
Address: 10.0.0.10#53
|
||||
|
||||
Name: hostnames.default.svc.cluster.local
|
||||
Address: 10.0.1.175
|
||||
```
|
||||
|
||||
You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
|
||||
`Service`):
|
||||
|
||||
```sh
|
||||
node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
|
||||
Server: 10.0.0.10
|
||||
Address: 10.0.0.10#53
|
||||
|
||||
Name: hostnames.default.svc.cluster.local
|
||||
Address: 10.0.1.175
|
||||
```
|
||||
|
||||
If you are able to do a fully-qualified name lookup but not a relative one, you
|
||||
need to check that your `kubelet` is running with the right flags.
|
||||
The `--cluster_dns` flag needs to point to your DNS `Service`'s IP and the
|
||||
`--cluster_domain` flag needs to be your cluster's domain - we assumed
"cluster.local" in this document, but yours might be different, in which case
|
||||
you should change that in all of the commands.
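
One way to check is to look at the kubelet's command line on a `Node`, assuming
the kubelet was started as a regular process (the binary path and the other flags
shown here are just placeholders; yours will differ):

```sh
node$ ps auxw | grep kubelet
root  2812  ...  /usr/local/bin/kubelet --cluster_dns=10.0.0.10 --cluster_domain=cluster.local ...
```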
|
||||
|
||||
### Does any Service exist in DNS?
|
||||
|
||||
If the above still fails - DNS lookups are not working for your `Service` - we
|
||||
can take a step back and see what else is not working. The Kubernetes master
|
||||
`Service` should always work:
|
||||
|
||||
```sh
|
||||
pod$ nslookup kubernetes.default
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10
|
||||
|
||||
Name: kubernetes
|
||||
Address 1: 10.0.0.1
|
||||
```
|
||||
|
||||
If this fails, you might need to go to the kube-proxy section of this doc, or
|
||||
even go back to the top of this document and start over, but instead of
|
||||
debugging your own `Service`, debug DNS.
|
||||
|
||||
## Does the Service work by IP?
|
||||
|
||||
The next thing to test is whether your `Service` works at all. From a
|
||||
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).
|
||||
|
||||
```sh
|
||||
node$ curl 10.0.1.175:80
|
||||
hostnames-0uton
|
||||
|
||||
node$ curl 10.0.1.175:80
|
||||
hostnames-yp2kp
|
||||
|
||||
node$ curl 10.0.1.175:80
|
||||
hostnames-bvc05
|
||||
```
|
||||
|
||||
If your `Service` is working, you should get correct responses. If not, there
|
||||
are a number of things that could be going wrong. Read on.
|
||||
|
||||
## Is the Service correct?
|
||||
|
||||
It might sound silly, but you should really double and triple check that your
|
||||
`Service` is correct and matches your `Pod`s. Read back your `Service` and
|
||||
verify it:
|
||||
|
||||
```sh
|
||||
$ kubectl get service hostnames -o json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "hostnames",
|
||||
"namespace": "default",
|
||||
"selfLink": "/api/v1/namespaces/default/services/hostnames",
|
||||
"uid": "428c8b6c-24bc-11e5-936d-42010af0a9bc",
|
||||
"resourceVersion": "347189",
|
||||
"creationTimestamp": "2015-07-07T15:24:29Z",
|
||||
"labels": {
|
||||
"app": "hostnames"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"ports": [
|
||||
{
|
||||
"name": "default",
|
||||
"protocol": "TCP",
|
||||
"port": 80,
|
||||
"targetPort": 9376,
|
||||
"nodePort": 0
|
||||
}
|
||||
],
|
||||
"selector": {
|
||||
"app": "hostnames"
|
||||
},
|
||||
"clusterIP": "10.0.1.175",
|
||||
"type": "ClusterIP",
|
||||
"sessionAffinity": "None"
|
||||
},
|
||||
"status": {
|
||||
"loadBalancer": {}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
|
||||
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
|
||||
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
|
||||
expose a port with the same name? Is the port's `protocol` the same as the
|
||||
`Pod`'s?
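
If you do use a named port, here is a minimal sketch of how the names have to
line up (the port name `http` is just an example, not something this
walk-through uses):

```yaml
# In the Pod (or ReplicationController template):
ports:
- name: http
  containerPort: 9376
  protocol: TCP

# In the Service:
ports:
- name: default
  protocol: TCP
  port: 80
  targetPort: http   # must match the name of the Pod's port
```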
|
||||
|
||||
## Does the Service have any Endpoints?
|
||||
|
||||
If you got this far, we assume that you have confirmed that your `Service`
|
||||
exists and resolves by DNS. Now let's check that the `Pod`s you ran are
|
||||
actually being selected by the `Service`.
|
||||
|
||||
Earlier we saw that the `Pod`s were running. We can re-check that:
|
||||
|
||||
```sh
|
||||
$ kubectl get pods -l app=hostnames
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hostnames-0uton 1/1 Running 0 1h
|
||||
hostnames-bvc05 1/1 Running 0 1h
|
||||
hostnames-yp2kp 1/1 Running 0 1h
|
||||
```
|
||||
|
||||
The "AGE" column says that these `Pod`s are about an hour old, which implies that
|
||||
they are running fine and not crashing.
|
||||
|
||||
The `-l app=hostnames` argument is a label selector - just like our `Service`
|
||||
has. Inside the Kubernetes system is a control loop which evaluates the
|
||||
selector of every `Service` and saves the results into an `Endpoints` object.
|
||||
|
||||
```sh
|
||||
$ kubectl get endpoints hostnames
|
||||
NAME ENDPOINTS
|
||||
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
|
||||
```
|
||||
|
||||
This confirms that the control loop has found the correct `Pod`s for your
|
||||
`Service`. If the `hostnames` row is blank, you should check that the
|
||||
`spec.selector` field of your `Service` actually selects for `metadata.labels`
|
||||
values on your `Pod`s.
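
One way to compare the two, assuming your version's `describe` output prints a
`Selector` line (the exact output format may vary):

```sh
$ kubectl describe service hostnames | grep -i selector
Selector:	app=hostnames

$ kubectl get pods -l app=hostnames
```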
|
||||
|
||||
## Are the Pods working?
|
||||
|
||||
At this point, we know that your `Service` exists and has selected your `Pod`s.
|
||||
Let's check that the `Pod`s are actually working - we can bypass the `Service`
|
||||
mechanism and go straight to the `Pod`s.
|
||||
|
||||
```sh
|
||||
pod$ wget -qO- 10.244.0.5:9376
|
||||
hostnames-0uton
|
||||
|
||||
pod$ wget -qO- 10.244.0.6:9376
|
||||
hostnames-bvc05
|
||||
|
||||
pod$ wget -qO- 10.244.0.7:9376
|
||||
hostnames-yp2kp
|
||||
```
|
||||
|
||||
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
|
||||
this is not what happens (or whatever the correct behavior is for your own
|
||||
`Pod`s), you should investigate what's happening there. You might find
`kubectl logs` useful, or you can `kubectl exec` directly into your `Pod`s and check
the service from there.
|
||||
|
||||
## Is the kube-proxy working?
|
||||
|
||||
If you get here, your `Service` is running, has `Endpoints`, and your `Pod`s
|
||||
are actually serving. At this point, the whole `Service` proxy mechanism is
|
||||
suspect. Let's confirm it, piece by piece.
|
||||
|
||||
### Is kube-proxy running?
|
||||
|
||||
Confirm that `kube-proxy` is running on your `Node`s. You should get something
|
||||
like the below:
|
||||
|
||||
```sh
|
||||
node$ ps auxw | grep kube-proxy
|
||||
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
|
||||
```
|
||||
|
||||
Next, confirm that it is not failing at something obvious, like contacting the
|
||||
master. To do this, you'll have to look at the logs. Accessing the logs
|
||||
depends on your `Node` OS. On some OSes it is a file, such as
|
||||
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
|
||||
should see something like:
|
||||
|
||||
```
|
||||
I0707 17:34:53.945651 30031 server.go:88] Running in resource-only container "/kube-proxy"
|
||||
I0707 17:34:53.945921 30031 proxier.go:121] Setting proxy IP to 10.240.115.247 and initializing iptables
|
||||
I0707 17:34:54.053023 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kubernetes: to [10.240.169.188:443]
|
||||
I0707 17:34:54.053175 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
|
||||
I0707 17:34:54.053284 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kube-dns:dns to [10.244.3.3:53]
|
||||
I0707 17:34:54.053310 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kube-dns:dns-tcp to [10.244.3.3:53]
|
||||
I0707 17:34:54.054780 30031 proxier.go:306] Adding new service "default/kubernetes:" at 10.0.0.1:443/TCP
|
||||
I0707 17:34:54.054903 30031 proxier.go:247] Proxying for service "default/kubernetes:" on TCP port 40074
|
||||
I0707 17:34:54.079181 30031 proxier.go:306] Adding new service "default/hostnames:default" at 10.0.1.175:80/TCP
|
||||
I0707 17:34:54.079273 30031 proxier.go:247] Proxying for service "default/hostnames:default" on TCP port 48577
|
||||
I0707 17:34:54.113665 30031 proxier.go:306] Adding new service "default/kube-dns:dns" at 10.0.0.10:53/UDP
|
||||
I0707 17:34:54.113776 30031 proxier.go:247] Proxying for service "default/kube-dns:dns" on UDP port 34149
|
||||
I0707 17:34:54.120224 30031 proxier.go:306] Adding new service "default/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
|
||||
I0707 17:34:54.120297 30031 proxier.go:247] Proxying for service "default/kube-dns:dns-tcp" on TCP port 53476
|
||||
I0707 17:34:54.902313 30031 proxysocket.go:130] Accepted TCP connection from 10.244.3.3:42670 to 10.244.3.1:40074
|
||||
I0707 17:34:54.903107 30031 proxysocket.go:130] Accepted TCP connection from 10.244.3.3:42671 to 10.244.3.1:40074
|
||||
I0707 17:35:46.015868 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:57493
|
||||
I0707 17:35:46.017061 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:55471
|
||||
```
|
||||
|
||||
If you see error messages about not being able to contact the master, you
|
||||
should double-check your `Node` configuration and installation steps.
|
||||
|
||||
### Is kube-proxy writing iptables rules?
|
||||
|
||||
One of the main responsibilities of `kube-proxy` is to write the `iptables`
|
||||
rules which implement `Service`s. Let's check that those rules are getting
|
||||
written.
|
||||
|
||||
```
|
||||
node$ iptables-save | grep hostnames
|
||||
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
|
||||
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
|
||||
```
|
||||
|
||||
There should be 2 rules for each port on your `Service` (just one in this
|
||||
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
|
||||
not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
|
||||
then look at the logs again.
|
||||
|
||||
### Is kube-proxy proxying?
|
||||
|
||||
Assuming you do see the above rules, try again to access your `Service` by IP:
|
||||
|
||||
```sh
|
||||
node$ curl 10.0.1.175:80
|
||||
hostnames-0uton
|
||||
```
|
||||
|
||||
If this fails, we can try accessing the proxy directly. Look back at the
|
||||
`iptables-save` output above, and extract the port number that `kube-proxy` is
|
||||
using for your `Service`. In the above examples it is "48577". Now connect to
|
||||
that:
|
||||
|
||||
```sh
|
||||
node$ curl localhost:48577
|
||||
hostnames-yp2kp
|
||||
```
|
||||
|
||||
If this still fails, look at the `kube-proxy` logs for specific lines like:
|
||||
|
||||
```
|
||||
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
|
||||
```
|
||||
|
||||
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
|
||||
then look at the logs again.
|
||||
|
||||
## Seek help
|
||||
|
||||
If you get this far, something very strange is happening. Your `Service` is
|
||||
running, has `Endpoints`, and your `Pod`s are actually serving. You have DNS
|
||||
working, `iptables` rules installed, and `kube-proxy` does not seem to be
|
||||
misbehaving. And yet your `Service` is not working. You should probably let
|
||||
us know, so we can help investigate!
|
||||
|
||||
Contact us on
|
||||
[IRC](http://webchat.freenode.net/?channels=google-containers) or
|
||||
[email](https://groups.google.com/forum/#!forum/google-containers) or
|
||||
[GitHub](https://github.com/GoogleCloudPlatform/kubernetes).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
93
docs/user-guide/downward-api.md
Normal file
@@ -0,0 +1,93 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Downward API
|
||||
|
||||
It is sometimes useful for a container to have information about itself, but we
|
||||
want to be careful not to over-couple containers to Kubernetes. The downward
|
||||
API allows containers to consume information about themselves or the system and
|
||||
expose that information how they want it, without necessarily coupling to the
|
||||
kubernetes client or REST API.
|
||||
|
||||
An example of this is a "legacy" app that is already written assuming
|
||||
that a particular environment variable will hold a unique identifier. While it
|
||||
is often possible to "wrap" such applications, this is tedious and error prone,
|
||||
and violates the goal of low coupling. Instead, the user should be able to use
|
||||
the Pod's name, for example, and inject it into this well-known variable.
|
||||
|
||||
## Capabilities
|
||||
|
||||
The following information is available to a `Pod` through the downward API:
|
||||
|
||||
* The pod's name
|
||||
* The pod's namespace
|
||||
|
||||
More information will be exposed through this same API over time.
|
||||
|
||||
## Exposing pod information into a container
|
||||
|
||||
Containers consume information from the downward API using environment
|
||||
variables. In the future, containers will also be able to consume the downward
|
||||
API via a volume plugin.
|
||||
|
||||
### Environment variables
|
||||
|
||||
Most environment variables in the Kubernetes API use the `value` field to carry
|
||||
simple values. However, the alternate `valueFrom` field allows you to specify
|
||||
a `fieldRef` to select fields from the pod's definition. The `fieldRef` field
|
||||
is a structure that has an `apiVersion` field and a `fieldPath` field. The
|
||||
`fieldPath` field is an expression designating a field of the pod. The
|
||||
`apiVersion` field is the version of the API schema that the `fieldPath` is
|
||||
written in terms of. If the `apiVersion` field is not specified it is
|
||||
defaulted to the API version of the enclosing object.
|
||||
|
||||
The `fieldRef` is evaluated and the resulting value is used as the value for
|
||||
the environment variable. This allows users to publish their pod's name in any
|
||||
environment variable they want.
|
||||
|
||||
## Example
|
||||
|
||||
This is an example of a pod that consumes its name and namespace via the
|
||||
downward API:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "-c", "env" ]
|
||||
env:
|
||||
- name: MY_POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: MY_POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
Some more thorough examples:
|
||||
* [environment variables](../examples/environment-guide/)
|
||||
* [downward API](../examples/downward-api/)
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
52
docs/user-guide/downward-api/README.md
Normal file
@@ -0,0 +1,52 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Downward API example
|
||||
|
||||
Following this example, you will create a pod with a container that consumes the pod's name and
|
||||
namespace using the [downward API](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/downward_api.md).
|
||||
|
||||
## Step Zero: Prerequisites
|
||||
|
||||
This example assumes you have a Kubernetes cluster installed and running, and that you have
|
||||
installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting
|
||||
started](../../docs/getting-started-guides/) for installation instructions for your platform.
|
||||
|
||||
## Step One: Create the pod
|
||||
|
||||
Containers consume the downward API using environment variables. The downward API allows
|
||||
containers to be injected with the name and namespace of the pod the container is in.
|
||||
|
||||
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
|
||||
downward API.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f examples/downward-api/dapi-pod.yaml
|
||||
```
|
||||
|
||||
### Examine the logs
|
||||
|
||||
This pod runs the `env` command in a container that consumes the downward API. You can grep
|
||||
through the pod logs to see that the pod was injected with the correct values:
|
||||
|
||||
```shell
|
||||
$ kubectl logs dapi-test-pod | grep POD_
|
||||
2015-04-30T20:22:18.568024817Z POD_NAME=dapi-test-pod
|
||||
2015-04-30T20:22:18.568087688Z POD_NAMESPACE=default
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
19
docs/user-guide/downward-api/dapi-pod.yaml
Normal file
@@ -0,0 +1,19 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "-c", "env" ]
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
restartPolicy: Never
|
108
docs/user-guide/environment-guide/README.md
Normal file
@@ -0,0 +1,108 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Environment Guide Example
|
||||
=========================
|
||||
This example demonstrates running pods, replication controllers, and
|
||||
services. It shows two types of pods: frontend and backend, with
|
||||
services on top of both. Accessing the frontend pod will return
|
||||
environment information about itself, and a backend pod that it has
|
||||
accessed through the service. The goal is to illuminate the
|
||||
environment metadata available to running containers inside the
|
||||
Kubernetes cluster. The documentation for the kubernetes environment
|
||||
is [here](../../docs/container-environment.md).
|
||||
|
||||

|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
This example assumes that you have a Kubernetes cluster installed and
|
||||
running, and that you have installed the `kubectl` command line tool
|
||||
somewhere in your path. Please see the [getting
|
||||
started](../../docs/getting-started-guides/) for installation instructions
|
||||
for your platform.
|
||||
|
||||
Optional: Build your own containers
|
||||
-----------------------------------
|
||||
The code for the containers is under
|
||||
[containers/](containers/)
|
||||
|
||||
Get everything running
|
||||
----------------------
|
||||
|
||||
kubectl create -f ./backend-rc.yaml
|
||||
kubectl create -f ./backend-srv.yaml
|
||||
kubectl create -f ./show-rc.yaml
|
||||
kubectl create -f ./show-srv.yaml
|
||||
|
||||
Query the service
|
||||
-----------------
|
||||
Use `kubectl describe service show-srv` to determine the public IP of
|
||||
your service.
|
||||
|
||||
> Note: If your platform does not support external load balancers,
|
||||
you'll need to open the proper port and direct traffic to the
|
||||
internal IP shown for the frontend service with the above command.
|
||||
|
||||
Run `curl <public ip>:80` to query the service. You should get
|
||||
something like this back:
|
||||
|
||||
```
|
||||
Pod Name: show-rc-xxu6i
|
||||
Pod Namespace: default
|
||||
USER_VAR: important information
|
||||
|
||||
Kubernetes environment variables
|
||||
BACKEND_SRV_SERVICE_HOST = 10.147.252.185
|
||||
BACKEND_SRV_SERVICE_PORT = 5000
|
||||
KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
|
||||
KUBERNETES_RO_SERVICE_PORT = 80
|
||||
KUBERNETES_SERVICE_HOST = 10.147.240.2
|
||||
KUBERNETES_SERVICE_PORT = 443
|
||||
KUBE_DNS_SERVICE_HOST = 10.147.240.10
|
||||
KUBE_DNS_SERVICE_PORT = 53
|
||||
|
||||
Found backend ip: 10.147.252.185 port: 5000
|
||||
Response from backend
|
||||
Backend Container
|
||||
Backend Pod Name: backend-rc-6qiya
|
||||
Backend Namespace: default
|
||||
```
|
||||
|
||||
First the frontend pod's information is printed. The pod name and
|
||||
[namespace](../../docs/design/namespaces.md) are retrieved from the
|
||||
[Downward API](../../docs/downward-api.md). Next, `USER_VAR` is the name of
|
||||
an environment variable set in the [pod
|
||||
definition](show-rc.yaml). Then, the dynamic kubernetes environment
|
||||
variables are scanned and printed. These are used to find the backend
|
||||
service, named `backend-srv`. Finally, the frontend pod queries the
|
||||
backend service and prints the information returned. Again the backend
|
||||
pod returns its own pod name and namespace.
|
||||
|
||||
Try running the `curl` command a few times and notice what
changes (for example: `watch -n 1 curl -s <ip>`). First, the frontend service
directs your request to a different frontend pod each time. The
|
||||
frontend pods are always contacting the backend through the backend
|
||||
service. This results in a different backend pod servicing each
|
||||
request as well.
|
||||
|
||||
Cleanup
|
||||
-------
|
||||
kubectl delete rc,service -l type=show-type
|
||||
kubectl delete rc,service -l type=backend-type
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
30
docs/user-guide/environment-guide/backend-rc.yaml
Normal file
@@ -0,0 +1,30 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: backend-rc
|
||||
labels:
|
||||
type: backend-type
|
||||
spec:
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
type: backend-type
|
||||
spec:
|
||||
containers:
|
||||
- name: backend-container
|
||||
image: gcr.io/google-samples/env-backend:1.1
|
||||
imagePullPolicy: Always
|
||||
ports:
|
||||
- containerPort: 5000
|
||||
protocol: TCP
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
13
docs/user-guide/environment-guide/backend-srv.yaml
Normal file
@@ -0,0 +1,13 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: backend-srv
|
||||
labels:
|
||||
type: backend-type
|
||||
spec:
|
||||
ports:
|
||||
- port: 5000
|
||||
protocol: TCP
|
||||
selector:
|
||||
type: backend-type
|
39
docs/user-guide/environment-guide/containers/README.md
Normal file
@@ -0,0 +1,39 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Building
|
||||
--------
|
||||
For each container, the build steps are the same. The examples below
|
||||
are for the `show` container. Replace `show` with `backend` for the
|
||||
backend container.
|
||||
|
||||
GCR
|
||||
---
|
||||
docker build -t gcr.io/<project-name>/show .
|
||||
gcloud docker push gcr.io/<project-name>/show
|
||||
|
||||
Docker Hub
|
||||
----------
|
||||
docker build -t <username>/show .
|
||||
docker push <username>/show
|
||||
|
||||
Change Pod Definitions
|
||||
----------------------
|
||||
Edit both `show-rc.yaml` and `backend-rc.yaml` and replace the
|
||||
specified `image:` with the one that you built.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
@@ -0,0 +1,2 @@
|
||||
FROM golang:onbuild
|
||||
EXPOSE 8080
|
@@ -0,0 +1,37 @@
|
||||
/*
|
||||
Copyright 2015 The Kubernetes Authors All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
)
|
||||
|
||||
func printInfo(resp http.ResponseWriter, req *http.Request) {
|
||||
name := os.Getenv("POD_NAME")
|
||||
namespace := os.Getenv("POD_NAMESPACE")
|
||||
fmt.Fprintf(resp, "Backend Container\n")
|
||||
fmt.Fprintf(resp, "Backend Pod Name: %v\n", name)
|
||||
fmt.Fprintf(resp, "Backend Namespace: %v\n", namespace)
|
||||
}
|
||||
|
||||
func main() {
|
||||
http.HandleFunc("/", printInfo)
|
||||
log.Fatal(http.ListenAndServe(":5000", nil))
|
||||
}
|
@@ -0,0 +1,2 @@
|
||||
FROM golang:onbuild
|
||||
EXPOSE 8080
|
95
docs/user-guide/environment-guide/containers/show/show.go
Normal file
@@ -0,0 +1,95 @@
|
||||
/*
|
||||
Copyright 2015 The Kubernetes Authors All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func getKubeEnv() (map[string]string, error) {
|
||||
environS := os.Environ()
|
||||
environ := make(map[string]string)
|
||||
for _, val := range environS {
|
||||
split := strings.SplitN(val, "=", 2)
if len(split) != 2 {
return environ, fmt.Errorf("unexpected environment variable: %q", val)
|
||||
}
|
||||
environ[split[0]] = split[1]
|
||||
}
|
||||
for key := range environ {
|
||||
if !(strings.HasSuffix(key, "_SERVICE_HOST") ||
|
||||
strings.HasSuffix(key, "_SERVICE_PORT")) {
|
||||
delete(environ, key)
|
||||
}
|
||||
}
|
||||
return environ, nil
|
||||
}
|
||||
|
||||
func printInfo(resp http.ResponseWriter, req *http.Request) {
|
||||
kubeVars, err := getKubeEnv()
|
||||
if err != nil {
|
||||
http.Error(resp, err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
backendHost := os.Getenv("BACKEND_SRV_SERVICE_HOST")
|
||||
backendPort := os.Getenv("BACKEND_SRV_SERVICE_PORT")
|
||||
backendRsp, backendErr := http.Get(fmt.Sprintf(
|
||||
"http://%v:%v/",
|
||||
backendHost,
|
||||
backendPort))
|
||||
if backendErr == nil {
|
||||
defer backendRsp.Body.Close()
|
||||
}
|
||||
|
||||
name := os.Getenv("POD_NAME")
|
||||
namespace := os.Getenv("POD_NAMESPACE")
|
||||
fmt.Fprintf(resp, "Pod Name: %v \n", name)
|
||||
fmt.Fprintf(resp, "Pod Namespace: %v \n", namespace)
|
||||
|
||||
envvar := os.Getenv("USER_VAR")
|
||||
fmt.Fprintf(resp, "USER_VAR: %v \n", envvar)
|
||||
|
||||
fmt.Fprintf(resp, "\nKubernetes environment variables\n")
|
||||
var keys []string
|
||||
for key := range kubeVars {
|
||||
keys = append(keys, key)
|
||||
}
|
||||
sort.Strings(keys)
|
||||
for _, key := range keys {
|
||||
fmt.Fprintf(resp, "%v = %v \n", key, kubeVars[key])
|
||||
}
|
||||
|
||||
fmt.Fprintf(resp, "\nFound backend ip: %v port: %v\n", backendHost, backendPort)
|
||||
if backendErr == nil {
|
||||
fmt.Fprintf(resp, "Response from backend\n")
|
||||
io.Copy(resp, backendRsp.Body)
|
||||
} else {
|
||||
fmt.Fprintf(resp, "Error from backend: %v", backendErr.Error())
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
http.HandleFunc("/", printInfo)
|
||||
log.Fatal(http.ListenAndServe(":8080", nil))
|
||||
}
|
BIN
docs/user-guide/environment-guide/diagram.png
Normal file
After Width: | Height: | Size: 18 KiB |
32
docs/user-guide/environment-guide/show-rc.yaml
Normal file
@@ -0,0 +1,32 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: show-rc
|
||||
labels:
|
||||
type: show-type
|
||||
spec:
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
type: show-type
|
||||
spec:
|
||||
containers:
|
||||
- name: show-container
|
||||
image: gcr.io/google-samples/env-show:1.1
|
||||
imagePullPolicy: Always
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
protocol: TCP
|
||||
env:
|
||||
- name: USER_VAR
|
||||
value: important information
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
15
docs/user-guide/environment-guide/show-srv.yaml
Normal file
@@ -0,0 +1,15 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: show-srv
|
||||
labels:
|
||||
type: show-type
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
targetPort: 8080
|
||||
selector:
|
||||
type: show-type
|
29
docs/user-guide/identifiers.md
Normal file
@@ -0,0 +1,29 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Identifiers
|
||||
All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
|
||||
|
||||
For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
|
||||
|
||||
## Names
|
||||
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lowercase alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
|
||||
|
||||
## UIDs
|
||||
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
255
docs/user-guide/images.md
Normal file
@@ -0,0 +1,255 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Images
|
||||
|
||||
Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).
|
||||
|
||||
You create your Docker image and push it to a registry before referring to it in a kubernetes pod.
|
||||
|
||||
The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Images](#images)
|
||||
- [Updating Images](#updating-images)
|
||||
- [Using a Private Registry](#using-a-private-registry)
|
||||
- [Using Google Container Registry](#using-google-container-registry)
|
||||
- [Configuring Nodes to Authenticate to a Private Repository](#configuring-nodes-to-authenticate-to-a-private-repository)
|
||||
- [Pre-pulling Images](#pre-pulling-images)
|
||||
- [Specifying ImagePullSecrets on a Pod](#specifying-imagepullsecrets-on-a-pod)
|
||||
- [Use Cases](#use-cases)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Updating Images
|
||||
|
||||
The default pull policy is `IfNotPresent`, which causes the kubelet to skip
pulling an image if it is already present on the node. If you would like to
always force a pull, set the container's `imagePullPolicy` to `Always` or
specify a `:latest` tag on your image.
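
As a minimal sketch (the pod name, container name, and image below are
placeholders), forcing a pull on every container start looks like this:

```
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example
spec:
  containers:
  - name: my-app
    image: janedoe/awesomeapp:v1
    imagePullPolicy: Always
```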
|
||||
|
||||
## Using a Private Registry
|
||||
|
||||
Private registries may require keys to read images from them.
|
||||
Credentials can be provided in several ways:
|
||||
- Using Google Container Registry
|
||||
- Per-cluster
|
||||
- automatically configured on Google Compute Engine or Google Container Engine
|
||||
- all pods can read the project's private registry
|
||||
- Configuring Nodes to Authenticate to a Private Registry
|
||||
- all pods can read any configured private registries
|
||||
- requires node configuration by cluster administrator
|
||||
- Pre-pulling Images
|
||||
- all pods can use any images cached on a node
|
||||
- requires root access to all nodes to setup
|
||||
- Specifying ImagePullSecrets on a Pod
|
||||
- only pods which provide their own keys can access the private registry
|
||||
Each option is described in more detail below.
|
||||
|
||||
|
||||
### Using Google Container Registry
|
||||
|
||||
Kubernetes has native support for the [Google Container
|
||||
Registry (GCR)](https://cloud.google.com/tools/container-registry/), when running on Google Compute
|
||||
Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
|
||||
use the full image name (e.g. gcr.io/my_project/image:tag).
|
||||
|
||||
All pods in a cluster will have read access to images in this registry.
|
||||
|
||||
The kubelet will authenticate to GCR using the instance's
|
||||
Google service account. The service account on the instance
|
||||
will have the `https://www.googleapis.com/auth/devstorage.read_only` scope,
|
||||
so it can pull from the project's GCR, but not push.
|
||||
|
||||
### Configuring Nodes to Authenticate to a Private Repository
|
||||
|
||||
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
|
||||
with credentials for Google Container Registry. You cannot use this approach.
|
||||
|
||||
**Note:** this approach is suitable if you can control node configuration. It
|
||||
will not work reliably on GCE, or on any other cloud provider that does automatic
|
||||
node replacement.
|
||||
|
||||
Docker stores keys for private registries in the `$HOME/.dockercfg` file. If you put this
|
||||
in the `$HOME` of `root` on a kubelet, then docker will use it.
|
||||
|
||||
Here are the recommended steps to configuring your nodes to use a private registry. In this
|
||||
example, run these on your desktop/laptop:
|
||||
1. run `docker login [server]` for each set of credentials you want to use.
|
||||
1. view `$HOME/.dockercfg` in an editor to ensure it contains just the credentials you want to use.
|
||||
1. get a list of your nodes
|
||||
- for example: `nodes=$(kubectl get nodes -o template --template='{{range.items}}{{.metadata.name}} {{end}}')`
|
||||
1. copy your local `.dockercfg` to the home directory of root on each node.
|
||||
- for example: `for n in $nodes; do scp ~/.dockercfg root@$n:/root/.dockercfg; done`
|
||||
|
||||
Verify by creating a pod that uses a private image, e.g.:
|
||||
```
|
||||
$ cat <<EOF > private-image-test-1.yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: private-image-test-1
|
||||
spec:
|
||||
containers:
|
||||
- name: uses-private-image
|
||||
image: $PRIVATE_IMAGE_NAME
|
||||
command: [ "echo", "SUCCESS" ]
|
||||
imagePullPolicy: Always
|
||||
EOF
|
||||
$ kubectl create -f private-image-test-1.yaml
|
||||
pods/private-image-test-1
|
||||
$
|
||||
```
|
||||
If everything is working, then, after a few moments, you should see:
|
||||
```
|
||||
$ kubectl logs private-image-test-1
|
||||
SUCCESS
|
||||
```
|
||||
|
||||
If it failed, then you will see:
|
||||
```
|
||||
$ kubectl describe pods/private-image-test-1 | grep "Failed"
|
||||
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
|
||||
```
|
||||
|
||||
|
||||
You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on
|
||||
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
|
||||
template needs to include the `.dockercfg` or mount a drive that contains it.
|
||||
|
||||
All pods will have read access to images in any private registry once private
|
||||
registry keys are added to the `.dockercfg`.
|
||||
|
||||
**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
|
||||
It should also work for a private registry such as quay.io, but that has not been tested.**
|
||||
|
||||
### Pre-pulling Images
|
||||
|
||||
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
|
||||
with credentials for Google Container Registry. You cannot use this approach.
|
||||
|
||||
**Note:** this approach is suitable if you can control node configuration. It
|
||||
will not work reliably on GCE, or on any other cloud provider that does automatic
|
||||
node replacement.
|
||||
|
||||
By default, the kubelet will try to pull each image from the specified registry.
|
||||
However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
|
||||
then a local image is used (preferentially or exclusively, respectively).
|
||||
|
||||
If you want to rely on pre-pulled images as a substitute for registry authentication,
|
||||
you must ensure all nodes in the cluster have the same pre-pulled images.
|
||||
|
||||
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
|
||||
|
||||
All pods will have read access to any pre-pulled images.
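
A minimal sketch of pre-pulling, reusing the `$nodes` list from the
private-registry section above and a placeholder image name:

```
$ for n in $nodes; do ssh root@$n docker pull gcr.io/my_project/image:tag; done
```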
|
||||
|
||||
### Specifying ImagePullSecrets on a Pod
|
||||
|
||||
**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud-providers
|
||||
where node creation is automated.
|
||||
|
||||
Kubernetes supports specifying registry keys on a pod.
|
||||
|
||||
First, create a `.dockercfg`, for example by running `docker login <registry.domain>`.
|
||||
Then put the resulting `.dockercfg` file into a [secret resource](secrets.md). For example:
|
||||
```
|
||||
$ docker login
|
||||
Username: janedoe
|
||||
Password: ●●●●●●●●●●●
|
||||
Email: jdoe@example.com
|
||||
WARNING: login credentials saved in /Users/jdoe/.dockercfg.
|
||||
Login Succeeded
|
||||
|
||||
$ echo $(cat ~/.dockercfg)
|
||||
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }
|
||||
|
||||
$ cat ~/.dockercfg | base64
|
||||
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
|
||||
$ cat > image-pull-secret.yaml <<EOF
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: myregistrykey
|
||||
data:
|
||||
.dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
type: kubernetes.io/dockercfg
|
||||
EOF
|
||||
|
||||
$ kubectl create -f image-pull-secret.yaml
|
||||
secrets/myregistrykey
|
||||
$
|
||||
```
|
||||
|
||||
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
|
||||
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...` it means
|
||||
the data was successfully base64-decoded, but could not be parsed as a `.dockercfg` file.
|
||||
|
||||
This process only needs to be done one time (per namespace).
|
||||
|
||||
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
|
||||
section to a pod definition.
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: foo
|
||||
spec:
|
||||
containers:
|
||||
- name: foo
|
||||
image: janedoe/awesomeapp:v1
|
||||
imagePullSecrets:
|
||||
- name: myregistrykey
|
||||
```
|
||||
This needs to be done for each pod that is using a private registry.
|
||||
However, setting this field can be automated by setting `imagePullSecrets`
|
||||
in a [serviceAccount](service-accounts.md) resource.
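
A sketch of what that might look like, assuming the `myregistrykey` secret
created above and the namespace's `default` service account:

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
- name: myregistrykey
```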
|
||||
|
||||
Currently, all pods will potentially have read access to any images which were
|
||||
pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your
|
||||
images from being seen by other users in the cluster. Our intent
|
||||
is to fix that.
|
||||
|
||||
You can use this in conjunction with a per-node `.dockercfg`. The credentials
|
||||
will be merged. This approach will work on Google Container Engine (GKE).
|
||||
|
||||
### Use Cases
|
||||
There are a number of solutions for configuring private registries. Here are some
|
||||
common use cases and suggested solutions.
|
||||
|
||||
1. Cluster running only non-proprietary (e.g., open-source) images. No need to hide images.
|
||||
- Use public images on the Docker hub.
|
||||
- no configuration required
|
||||
- on GCE/GKE, a local mirror is automatically used for improved speed and availability
|
||||
1. Cluster running some proprietary images which should be hidden to those outside the company, but
|
||||
visible to all cluster users.
|
||||
- Use a hosted private [Docker registry](https://docs.docker.com/registry/)
|
||||
- may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
|
||||
- manually configure .dockercfg on each node as described above
|
||||
- Or, run an internal private registry behind your firewall with open read access.
|
||||
- no kubernetes configuration required
|
||||
- Or, when on GCE/GKE, use the project's Google Container Registry.
|
||||
- will work better with cluster autoscaling than manual node configuration
|
||||
- Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
|
||||
1. Cluster with proprietary images, a few of which require stricter access control
|
||||
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
|
||||
- DO NOT use imagePullSecrets for this use case yet.
|
||||
1. A multi-tenant cluster where each tenant needs its own private registry
|
||||
- NOT supported yet.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
BIN
docs/user-guide/influx.png
Normal file
After Width: | Height: | Size: 522 KiB |
BIN
docs/user-guide/k8s-ui-explore-filter.png
Normal file
After Width: | Height: | Size: 70 KiB |
BIN
docs/user-guide/k8s-ui-explore-groupby.png
Normal file
After Width: | Height: | Size: 71 KiB |
BIN
docs/user-guide/k8s-ui-explore-poddetail.png
Normal file
After Width: | Height: | Size: 52 KiB |
BIN
docs/user-guide/k8s-ui-explore.png
Normal file
After Width: | Height: | Size: 67 KiB |
BIN
docs/user-guide/k8s-ui-nodes.png
Normal file
After Width: | Height: | Size: 35 KiB |
BIN
docs/user-guide/k8s-ui-overview.png
Normal file
After Width: | Height: | Size: 76 KiB |
BIN
docs/user-guide/kibana.png
Normal file
After Width: | Height: | Size: 81 KiB |
168
docs/user-guide/kubeconfig-file.md
Normal file
@@ -0,0 +1,168 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# kubeconfig files
|
||||
In order to easily switch between multiple clusters, a kubeconfig file was defined. This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
|
||||
|
||||
Multiple kubeconfig files are allowed. At runtime they are loaded and merged together along with override options specified from the command line (see rules below).
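
For example, a hypothetical sketch of pointing kubectl at two files to be merged
(the file paths are placeholders):

```
$ KUBECONFIG=$HOME/.kube/config:$HOME/.kube/other-config kubectl config view
```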
|
||||
|
||||
## Related discussion
|
||||
https://github.com/GoogleCloudPlatform/kubernetes/issues/1755
|
||||
|
||||
## Example kubeconfig file
|
||||
```
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
api-version: v1
|
||||
server: http://cow.org:8080
|
||||
name: cow-cluster
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
- context:
|
||||
cluster: pig-cluster
|
||||
namespace: saw-ns
|
||||
user: black-user
|
||||
name: queen-anne-context
|
||||
current-context: federal-context
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
## Loading and merging rules
|
||||
The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order:
|
||||
1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules:
|
||||
|
||||
|
||||
If the CommandLineLocation (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed.
|
||||
|
||||
|
||||
Else, if EnvVarLocation (the value of $KUBECONFIG) is available, use it as a list of files that should be merged.
|
||||
Merge files together based on the following rules.
|
||||
Empty filenames are ignored. Files with non-deserializable content produce errors.
|
||||
The first file to set a particular value or map key wins and the value or map key is never changed.
|
||||
This means that the first file to set CurrentContext will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded.
|
||||
|
||||
|
||||
Otherwise, use HomeDirectoryLocation (~/.kube/config) with no merging.
|
||||
1. Determine the context to use based on the first hit in this chain
|
||||
1. command line argument - the value of the `context` command line option
|
||||
1. current-context from the merged kubeconfig file
|
||||
1. Empty is allowed at this stage
|
||||
1. Determine the cluster info and user to use. At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster)
|
||||
1. command line argument - `user` for user name and `cluster` for cluster name
|
||||
1. If context is present, then use the context's value
|
||||
1. Empty is allowed
|
||||
1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins):
|
||||
1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify`
|
||||
1. If cluster info is present and a value for the attribute is present, use it.
|
||||
1. If you don't have a server location, error.
|
||||
1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user.
|
||||
1. Load precedence is 1) command line flag, 2) user fields from kubeconfig
|
||||
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
|
||||
1. If there are two conflicting techniques, fail.
|
||||
1. For any information still missing, use default values and potentially prompt for authentication information
|
||||
|
||||
## Manipulation of kubeconfig via `kubectl config <subcommand>`
|
||||
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
|
||||
See [user-guide/kubectl/kubectl_config.md](user-guide/kubectl/kubectl_config.md) for help.
|
||||
|
||||
### Example
|
||||
```
|
||||
$kubectl config set-credentials myself --username=admin --password=secret
|
||||
$kubectl config set-cluster local-server --server=http://localhost:8080
|
||||
$kubectl config set-context default-context --cluster=local-server --user=myself
|
||||
$kubectl config use-context default-context
|
||||
$kubectl config set contexts.default-context.namespace the-right-prefix
|
||||
$kubectl config view
|
||||
```
|
||||
produces this output
|
||||
```
|
||||
clusters:
|
||||
local-server:
|
||||
server: http://localhost:8080
|
||||
contexts:
|
||||
default-context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
current-context: default-context
|
||||
preferences: {}
|
||||
users:
|
||||
myself:
|
||||
username: admin
|
||||
password: secret
|
||||
|
||||
```
|
||||
and a kubeconfig file that looks like this
|
||||
```
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
server: http://localhost:8080
|
||||
name: local-server
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
name: default-context
|
||||
current-context: default-context
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: myself
|
||||
user:
|
||||
username: admin
|
||||
password: secret
|
||||
```
|
||||
|
||||
#### Commands for the example file
|
||||
```
|
||||
$ kubectl config set preferences.colors true
|
||||
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
|
||||
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
|
||||
$ kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
|
||||
$ kubectl config set-credentials blue-user --token=blue-token
|
||||
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
|
||||
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
|
||||
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
|
||||
$ kubectl config use-context federal-context
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
125
docs/user-guide/labels.md
Normal file
@@ -0,0 +1,125 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Labels
|
||||
|
||||
_Labels_ are key/value pairs that are attached to objects, such as pods.
|
||||
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but which do not directly imply semantics to the core system.
|
||||
Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time.
|
||||
Each object can have a set of key/value labels defined. Each key must be unique for a given object.
|
||||
```
|
||||
"labels": {
|
||||
"key1" : "value1",
|
||||
"key2" : "value2"
|
||||
}
|
||||
```
|
||||
|
||||
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations.md).
|
||||
|
||||
|
||||
## Motivation
|
||||
|
||||
Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings.
|
||||
|
||||
Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users.
|
||||
|
||||
Example labels:
|
||||
|
||||
* `"release" : "stable"`, `"release" : "canary"`, ...
|
||||
* `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
|
||||
* `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "middleware"`
|
||||
* `"partition" : "customerA"`, `"partition" : "customerB"`, ...
|
||||
* `"track" : "daily"`, `"track" : "weekly"`
|
||||
|
||||
These are just examples; you are free to develop your own conventions.
|
||||
|
||||
|
||||
## Syntax and character set
|
||||
|
||||
_Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
|
||||
If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. ```kube-scheduler```, ```kube-controller-manager```, ```kube-apiserver```, ```kubectl```, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for kubernetes core components.
|
||||
|
||||
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
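
For illustration, here is a sketch of a pod that carries one unprefixed (user-private) label key and one prefixed key; the names and image are made up for this example:

```
apiVersion: v1
kind: Pod
metadata:
  name: label-demo                        # illustrative name
  labels:
    environment: production               # unprefixed key, private to the user
    example.com/release-track: stable     # prefixed key: DNS subdomain + "/" + name
spec:
  containers:
  - name: app
    image: nginx                          # placeholder image
```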
|
||||
|
||||
## Label selectors
|
||||
|
||||
Unlike [names and UIDs](identifiers.md), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
|
||||
|
||||
Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
|
||||
|
||||
The API currently supports two types of selectors: _equality-based_ and _set-based_.
|
||||
A label selector can be made of multiple _requirements_, which are comma-separated. In the case of multiple requirements, all must be satisfied, so the comma separator acts as a logical _AND_ operator.
|
||||
|
||||
An empty label selector (that is, one with zero requirements) selects every object in the collection.
|
||||
|
||||
### _Equality-based_ requirement
|
||||
|
||||
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must have all of the specified labels (both keys and values), though they may have additional labels as well.
|
||||
Three kinds of operators are admitted: `=`, `==` and `!=`. The first two represent _equality_ and are simply synonyms, while the last represents _inequality_. For example:
|
||||
```
|
||||
environment = production
|
||||
tier != frontend
|
||||
```
|
||||
|
||||
The former selects all resources with key equal to `environment` and value equal to `production`.
|
||||
The latter selects all resources with key equal to `tier` and value distinct from `frontend`.
|
||||
One could filter for resources in `production` but not `frontend` using the comma operator: `environment=production,tier!=frontend`
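
Equality-based requirements can also be passed to kubectl with the `-l` (`--selector`) flag; a short sketch, assuming your kubectl version accepts this selector syntax:

```
$ kubectl get pods -l environment=production
$ kubectl get pods -l 'environment=production,tier!=frontend'
```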
|
||||
|
||||
|
||||
### _Set-based_ requirement
|
||||
|
||||
_Set-based_ label requirements allow filtering keys according to a set of values. Matching objects must have all of the specified labels (i.e. all keys and, for each key, at least one of the values specified for it). Three kinds of operators are supported: `in`, `notin` and _exists_ (only the key identifier). For example:
|
||||
```
|
||||
environment in (production, qa)
|
||||
tier notin (frontend, backend)
|
||||
partition
|
||||
```
|
||||
The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
|
||||
The second example selects all resources with key equal to `tier` and value other than `frontend` and `backend`.
|
||||
The third example selects all resources including a label with key `partition`; no values are checked.
|
||||
Similarly, the comma separator acts as an _AND_ operator, for example filtering resources with a `partition` key (no matter the value) and with `environment` different from `qa`: `partition,environment notin (qa)`.
|
||||
The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`.
|
||||
|
||||
_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`.
|
||||
|
||||
|
||||
## API
|
||||
|
||||
LIST and WATCH operations may specify label selectors to filter the sets of objects returned, using a query parameter. Both kinds of requirements are permitted:
|
||||
|
||||
* _equality-based_ requirements: `?labelSelector=key1%3Dvalue1,key2%3Dvalue2`
|
||||
* _set-based_ requirements: `?labelSelector=key+in+%28value1%2Cvalue2%29%2Ckey2+notin+%28value3%29`
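
As a sketch, with `kubectl proxy` running (it listens on `localhost:8001` by default) such a selector can be passed as a URL-encoded query parameter; the namespace and label values here are only illustrative:

```
$ kubectl proxy &
$ curl "http://localhost:8001/api/v1/namespaces/default/pods?labelSelector=environment%3Dproduction"
```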
|
||||
|
||||
Kubernetes also currently supports two objects that use label selectors to keep track of their members, `service`s and `replicationcontroller`s:
|
||||
|
||||
* `service`: A [service](services.md) is a configuration unit for the proxies that run on every worker node. It is named and points to one or more pods.
|
||||
* `replicationcontroller`: A [replication controller](replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time.
|
||||
|
||||
The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` is monitoring is also defined with a label selector. For management convenience and consistency, `services` and `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common.
|
||||
|
||||
Sets identified by labels could be overlapping (think Venn diagrams). For instance, a service might target all pods with `"tier": "frontend"` and `"environment" : "prod"`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationcontroller` (with `replicas` set to 9) for the bulk of the replicas with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "stable"` and another `replicationcontroller` (with `replicas` set to 1) for the canary with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "canary"`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationcontrollers` separately to test things out, monitor the results, etc.
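
A condensed sketch of that canary setup (the service and controller names, the image, and the port are illustrative):

```
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:                 # covers both the stable and the canary pods
    tier: frontend
    environment: prod
  ports:
  - port: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: nginx        # placeholder image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: nginx        # placeholder image
```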
|
||||
|
||||
Note that the superset described in the previous example is also heterogeneous. In long-lived, highly available, horizontally scaled, distributed, continuously evolving service applications, heterogeneity is inevitable, due to canaries, incremental rollouts, live reconfiguration, simultaneous updates and auto-scaling, hardware upgrades, and so on.
|
||||
|
||||
Pods (and other objects) may belong to multiple sets simultaneously, which enables representation of service substructure and/or superstructure. In particular, labels are intended to facilitate the creation of non-hierarchical, multi-dimensional deployment structures. They are useful for a variety of management purposes (e.g., configuration, deployment) and for application introspection and analysis (e.g., logging, monitoring, alerting, analytics). Without the ability to form sets by intersecting labels, many implicitly related, overlapping flat sets would need to be created, for each subset and/or superset desired, which would lose semantic information and be difficult to keep consistent. Purely hierarchically nested sets wouldn't readily support slicing sets across different dimensions.
|
||||
|
||||
|
||||
## Future developments
|
||||
|
||||
Concerning API: we may extend such filtering to DELETE operations in the future.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
182
docs/user-guide/limitrange/README.md
Normal file
@@ -0,0 +1,182 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Limit Range
|
||||
========================================
|
||||
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes it.
|
||||
|
||||
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
|
||||
for a variety of reasons.
|
||||
|
||||
For example:
|
||||
|
||||
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
|
||||
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
|
||||
pod from remaining permanently unschedulable, the operator instead chooses to reject pods that exceed 2GB
|
||||
of memory as part of admission control.
|
||||
2. A cluster is shared by two communities in an organization that runs production and development workloads
|
||||
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
|
||||
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
|
||||
each namespace.
|
||||
3. Users may create a pod which consumes resources just below the capacity of a machine. The leftover space
|
||||
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
|
||||
the cluster operator may want to require that a pod consume at least 20% of the memory and cpu of the
|
||||
average node size in order to provide for more uniform scheduling and to limit waste.
|
||||
|
||||
This example demonstrates how limits can be applied to a Kubernetes namespace to control
|
||||
min/max resource limits per pod. In addition, this example demonstrates how you can
|
||||
apply default resource limits to pods in the absence of an end-user specified value.
|
||||
|
||||
For a detailed description of the Kubernetes resource model, see [Resources](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md)
|
||||
|
||||
Step 0: Prerequisites
|
||||
-----------------------------------------
|
||||
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides/) for how to get started.
|
||||
|
||||
Change to the `<kubernetes>/examples/limitrange` directory if you're not already there.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f namespace.yaml
|
||||
namespaces/limit-example
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
limit-example <none> Active
|
||||
```
|
||||
|
||||
Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f limits.yaml --namespace=limit-example
|
||||
limitranges/mylimits
|
||||
```
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
```shell
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Type Resource Min Max Default
|
||||
---- -------- --- --- ---
|
||||
Pod memory 6Mi 1Gi -
|
||||
Pod cpu 250m 2 -
|
||||
Container memory 6Mi 1Gi 100Mi
|
||||
Container cpu 250m 2 250m
|
||||
```
|
||||
|
||||
In this scenario, we have said the following:
|
||||
|
||||
1. The total memory usage of a pod across all of its containers must fall between 6Mi and 1Gi.
|
||||
2. The total cpu usage of a pod across all of its containers must fall between 250m and 2 cores.
|
||||
3. A container in a pod may consume between 6Mi and 1Gi of memory. If the container does not
|
||||
specify an explicit resource limit, it will be given the namespace default of 100Mi of memory.
|
||||
4. A container in a pod may consume between 250m and 2 cores of cpu. If the container does
|
||||
not specify an explicit resource limit, it will be given the namespace default of 250m of cpu.
|
||||
|
||||
Step 3: Enforcing limits at point of creation
|
||||
-----------------------------------------
|
||||
The limits enumerated in a namespace are only enforced when a pod is created or updated in
|
||||
the cluster. If you change the limits to a different value range, it does not affect pods that
|
||||
were previously created in a namespace.
|
||||
|
||||
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
|
||||
of creation explaining why.
|
||||
|
||||
Let's first spin up a replication controller that creates a single container pod to demonstrate
|
||||
how default values are applied to each pod.
|
||||
|
||||
```shell
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
nginx nginx nginx run=nginx 1
|
||||
$ kubectl get pods --namespace=limit-example
|
||||
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
|
||||
nginx-ykj4j 10.246.1.3 10.245.1.3/ run=nginx Running About a minute
|
||||
nginx nginx Running 54 seconds
|
||||
$ kubectl get pods nginx-ykj4j --namespace=limit-example -o yaml | grep resources -C 5
|
||||
containers:
|
||||
- capabilities: {}
|
||||
image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 250m
|
||||
memory: 100Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
volumeMounts:
|
||||
```
|
||||
|
||||
Note that our nginx container has picked up the namespace default cpu and memory resource limits.
|
||||
|
||||
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 cpu cores.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f invalid-pod.yaml --namespace=limit-example
|
||||
Error from server: Pod "invalid-pod" is forbidden: Maximum CPU usage per pod is 2, but requested 3
|
||||
```
|
||||
|
||||
Let's create a pod that falls within the allowed limit boundaries.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f valid-pod.yaml --namespace=limit-example
|
||||
pods/valid-pod
|
||||
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 5 resources
|
||||
containers:
|
||||
- capabilities: {}
|
||||
image: gcr.io/google_containers/serve_hostname
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
securityContext:
|
||||
capabilities: {}
|
||||
```
|
||||
|
||||
Note that this pod specifies explicit resource limits so it did not pick up the namespace default values.
|
||||
|
||||
Step 4: Cleanup
|
||||
----------------------------
|
||||
To remove the resources used by this example, you can just delete the limit-example namespace.
|
||||
|
||||
```shell
|
||||
$ kubectl delete namespace limit-example
|
||||
namespaces/limit-example
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
```
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Cluster operators that want to restrict the amount of resources a single container or pod may consume
|
||||
are able to define allowable ranges per Kubernetes namespace. In the absence of any hard limits,
|
||||
the Kubernetes system is able to apply default resource limits if desired in order to constrain the
|
||||
amount of resources a pod consumes on a node.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
12
docs/user-guide/limitrange/invalid-pod.yaml
Normal file
@@ -0,0 +1,12 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: invalid-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: kubernetes-serve-hostname
|
||||
image: gcr.io/google_containers/serve_hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "3"
|
||||
memory: 100Mi
|
23
docs/user-guide/limitrange/limits.yaml
Normal file
@@ -0,0 +1,23 @@
|
||||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mylimits
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
cpu: "2"
|
||||
memory: 1Gi
|
||||
min:
|
||||
cpu: 250m
|
||||
memory: 6Mi
|
||||
type: Pod
|
||||
- default:
|
||||
cpu: 250m
|
||||
memory: 100Mi
|
||||
max:
|
||||
cpu: "2"
|
||||
memory: 1Gi
|
||||
min:
|
||||
cpu: 250m
|
||||
memory: 6Mi
|
||||
type: Container
|
4
docs/user-guide/limitrange/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: limit-example
|
14
docs/user-guide/limitrange/valid-pod.yaml
Normal file
@@ -0,0 +1,14 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: valid-pod
|
||||
labels:
|
||||
name: valid-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: kubernetes-serve-hostname
|
||||
image: gcr.io/google_containers/serve_hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
88
docs/user-guide/liveness/README.md
Normal file
@@ -0,0 +1,88 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
## Overview
|
||||
This example shows two types of pod health checks: HTTP checks and container execution checks.
|
||||
|
||||
The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
|
||||
```
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- cat
|
||||
- /tmp/health
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
```
|
||||
The Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
|
||||
|
||||
Note that the container removes the `/tmp/health` file after 10 seconds,
|
||||
```
|
||||
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
|
||||
```
|
||||
so when the Kubelet executes the health check 15 seconds (defined by `initialDelaySeconds`) after the container has started, the check fails.
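
To watch this transition yourself, you can run the probe command by hand inside the running container; a sketch, assuming the `liveness-exec` pod from [exec-liveness.yaml](exec-liveness.yaml) is running (exact `kubectl exec` flags may vary by kubectl version):

```
# Within roughly the first 10 seconds the file still exists, so the command succeeds:
$ kubectl exec liveness-exec -- cat /tmp/health
ok
# After the container removes the file, the same command fails, and so does the probe.
```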
|
||||
|
||||
|
||||
The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
|
||||
```
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
```
|
||||
The Kubelet sends an HTTP GET request to the specified path and port to perform the health check. If you take a look at [image/server.go](image/server.go), you will see that the server starts to respond with error code 500 after 10 seconds, so the check fails.
|
||||
|
||||
This [guide](../walkthrough/k8s201.md#health-checking) has more information on health checks.
|
||||
|
||||
## Get your hands dirty
|
||||
To show the health check is actually working, first create the pods:
|
||||
```
|
||||
# kubectl create -f exec-liveness.yaml
|
||||
# kubectl create -f http-liveness.yaml
|
||||
```
|
||||
|
||||
Check the status of the pods once they are created:
|
||||
```
|
||||
# kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
[...]
|
||||
liveness-exec 1/1 Running 0 13s
|
||||
liveness-http 1/1 Running 0 13s
|
||||
```
|
||||
Check the status again half a minute later, and you will see that the container restart count has been incremented:
|
||||
```
|
||||
# kubectl get pods
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
[...]
|
||||
liveness-exec 1/1 Running 1 36s
|
||||
liveness-http 1/1 Running 1 36s
|
||||
```
|
||||
At the bottom of the `kubectl describe` output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
|
||||
|
||||
```
|
||||
# kubectl describe pods liveness-exec
|
||||
[...]
|
||||
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
21
docs/user-guide/liveness/exec-liveness.yaml
Normal file
@@ -0,0 +1,21 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
test: liveness
|
||||
name: liveness-exec
|
||||
spec:
|
||||
containers:
|
||||
- args:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
|
||||
image: gcr.io/google_containers/busybox
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- cat
|
||||
- /tmp/health
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
name: liveness
|
18
docs/user-guide/liveness/http-liveness.yaml
Normal file
@@ -0,0 +1,18 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
test: liveness
|
||||
name: liveness-http
|
||||
spec:
|
||||
containers:
|
||||
- args:
|
||||
- /server
|
||||
image: gcr.io/google_containers/liveness
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
name: liveness
|
4
docs/user-guide/liveness/image/Dockerfile
Normal file
@@ -0,0 +1,4 @@
|
||||
FROM scratch
|
||||
|
||||
ADD server /server
|
||||
|
13
docs/user-guide/liveness/image/Makefile
Normal file
@@ -0,0 +1,13 @@
|
||||
all: push
|
||||
|
||||
server: server.go
|
||||
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' ./server.go
|
||||
|
||||
container: server
|
||||
docker build -t gcr.io/google_containers/liveness .
|
||||
|
||||
push: container
|
||||
gcloud docker push gcr.io/google_containers/liveness
|
||||
|
||||
clean:
|
||||
rm -f server
|
46
docs/user-guide/liveness/image/server.go
Normal file
@@ -0,0 +1,46 @@
|
||||
/*
|
||||
Copyright 2014 The Kubernetes Authors All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// A simple server that is alive for 10 seconds, then reports unhealthy for
|
||||
// the rest of its (hopefully) short existence.
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
func main() {
|
||||
started := time.Now()
|
||||
http.HandleFunc("/started", func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(200)
|
||||
data := (time.Now().Sub(started)).String()
|
||||
w.Write([]byte(data))
|
||||
})
|
||||
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
|
||||
duration := time.Now().Sub(started)
|
||||
if duration.Seconds() > 10 {
|
||||
w.WriteHeader(500)
|
||||
w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
|
||||
} else {
|
||||
w.WriteHeader(200)
|
||||
w.Write([]byte("ok"))
|
||||
}
|
||||
})
|
||||
log.Fatal(http.ListenAndServe(":8080", nil))
|
||||
}
|
26
docs/user-guide/logging-demo/Makefile
Normal file
@@ -0,0 +1,26 @@
|
||||
# Makefile for launching synthetic logging sources (any platform)
|
||||
# and for reporting the forwarding rules for the
|
||||
# Elasticsearch and Kibana pods for the GCE platform.
|
||||
# For examples of how to observe the ingested logs please
|
||||
# see the appropriate getting started guide e.g.
|
||||
# Google Cloud Logging: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/logging.md
|
||||
# With Elasticsearch and Kibana logging: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/logging-elasticsearch.md
|
||||
|
||||
.PHONY: up down logger-up logger-down logger10-up logger10-down
|
||||
|
||||
up: logger-up logger10-up
|
||||
|
||||
down: logger-down logger10-down
|
||||
|
||||
logger-up:
|
||||
kubectl create -f synthetic_0_25lps.yaml
|
||||
|
||||
logger-down:
|
||||
kubectl delete pod synthetic-logger-0.25lps-pod
|
||||
|
||||
logger10-up:
|
||||
kubectl create -f synthetic_10lps.yaml
|
||||
|
||||
logger10-down:
|
||||
kubectl delete pod synthetic-logger-10lps-pod
|
||||
|
32
docs/user-guide/logging-demo/README.md
Normal file
@@ -0,0 +1,32 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Elasticsearch/Kibana Logging Demonstration
|
||||
This directory contains two [pod](../../docs/pods.md) specifications which can be used as synthetic
|
||||
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
|
||||
describes a pod that just emits a log message once every 4 seconds. The pod specification in
|
||||
[synthetic_10lps.yaml](synthetic_10lps.yaml)
|
||||
describes a pod that just emits 10 log lines per second.
|
||||
|
||||
To observe the ingested log lines when using Google Cloud Logging please see the getting
|
||||
started instructions
|
||||
at [Cluster Level Logging to Google Cloud Logging](../../docs/getting-started-guides/logging.md).
|
||||
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
|
||||
started instructions
|
||||
at [Cluster Level Logging with Elasticsearch and Kibana](../../docs/getting-started-guides/logging-elasticsearch.md).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
BIN
docs/user-guide/logging-demo/synth-logger.png
Normal file
After Width: | Height: | Size: 87 KiB |
30
docs/user-guide/logging-demo/synthetic_0_25lps.yaml
Normal file
@@ -0,0 +1,30 @@
|
||||
# This pod specification creates an instance of a synthetic logger. The logger
|
||||
# is simply a program that writes out the hostname of the pod, a count which increments
|
||||
# by one on each iteration (to help notice missing log entries) and the date using
|
||||
# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
|
||||
# of 0.25 lines per second. The shell script is given directly to bash as the -c argument
|
||||
# and could have been written out as:
|
||||
# i="0"
|
||||
# while true
|
||||
# do
|
||||
# echo -n "`hostname`: $i: "
|
||||
# date --rfc-3339 ns
|
||||
# sleep 4
|
||||
# i=$[$i+1]
|
||||
# done
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: synth-logging-source
|
||||
name: synthetic-logger-0.25lps-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: synth-lgr
|
||||
image: ubuntu:14.04
|
||||
args:
|
||||
- bash
|
||||
- -c
|
||||
- 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
|
||||
4; i=$[$i+1]; done'
|
||||
|
30
docs/user-guide/logging-demo/synthetic_10lps.yaml
Normal file
@@ -0,0 +1,30 @@
|
||||
# This pod specification creates an instance of a synthetic logger. The logger
|
||||
# is simply a program that writes out the hostname of the pod, a count which increments
|
||||
# by one on each iteration (to help notice missing log entries) and the date using
|
||||
# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
|
||||
# of 10 lines per second. The shell script is given directly to bash as the -c argument
|
||||
# and could have been written out as:
|
||||
# i="0"
|
||||
# while true
|
||||
# do
|
||||
# echo -n "`hostname`: $i: "
|
||||
# date --rfc-3339 ns
|
||||
#   sleep 0.1
|
||||
# i=$[$i+1]
|
||||
# done
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: synth-logging-source
|
||||
name: synthetic-logger-10lps-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: synth-lgr
|
||||
image: ubuntu:14.04
|
||||
args:
|
||||
- bash
|
||||
- -c
|
||||
- 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
|
||||
0.1; i=$[$i+1]; done'
|
||||
|
88
docs/user-guide/logging.md
Normal file
@@ -0,0 +1,88 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Logging
|
||||
|
||||
## Logging by Kubernetes Components
|
||||
Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [devel/logging.md](devel/logging.md).
|
||||
|
||||
## Examining the logs of running containers
|
||||
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
|
||||
this pod specification, which has a container that writes out some text to standard
|
||||
output every second [counter-pod.yaml](../examples/blog-logging/counter-pod.yaml):
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counter
|
||||
spec:
|
||||
containers:
|
||||
- name: count
|
||||
image: ubuntu:14.04
|
||||
args: [bash, -c,
|
||||
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
|
||||
```
|
||||
we can run the pod:
|
||||
```
|
||||
$ kubectl create -f counter-pod.yaml
|
||||
pods/counter
|
||||
```
|
||||
and then fetch the logs:
|
||||
```
|
||||
$ kubectl logs counter
|
||||
0: Tue Jun 2 21:37:31 UTC 2015
|
||||
1: Tue Jun 2 21:37:32 UTC 2015
|
||||
2: Tue Jun 2 21:37:33 UTC 2015
|
||||
3: Tue Jun 2 21:37:34 UTC 2015
|
||||
4: Tue Jun 2 21:37:35 UTC 2015
|
||||
5: Tue Jun 2 21:37:36 UTC 2015
|
||||
...
|
||||
```
|
||||
If a pod has more than one container then you need to specify which container's log files should
|
||||
be fetched, e.g.
|
||||
```
|
||||
$ kubectl logs kube-dns-v3-7r1l9 etcd
|
||||
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
|
||||
2015/06/23 00:43:10 etcdserver: compacted log at index 30003
|
||||
2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003
|
||||
2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
|
||||
2015/06/23 02:05:42 etcdserver: compacted log at index 40004
|
||||
2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004
|
||||
2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004)
|
||||
2015/06/23 03:28:31 etcdserver: compacted log at index 50005
|
||||
2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005
|
||||
2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal
|
||||
2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)
|
||||
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
|
||||
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
|
||||
...
|
||||
```
|
||||
|
||||
## Cluster level logging to Google Cloud Logging
|
||||
The getting started guide [Cluster Level Logging to Google Cloud Logging](getting-started-guides/logging.md)
|
||||
explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/)
|
||||
and shows how to query the ingested logs.
|
||||
|
||||
## Cluster level logging with Elasticsearch and Kibana
|
||||
The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](getting-started-guides/logging-elasticsearch.md)
|
||||
describes how to ingest cluster level logs into Elasticsearch and view them using Kibana.
|
||||
|
||||
## Ingesting Application Log Files
|
||||
Cluster level logging only collects the standard output and standard error output of the applications
|
||||
running in containers. The guide [Collecting log files within containers with Fluentd](../contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
BIN
docs/user-guide/monitoring-architecture.png
Normal file
After Width: | Height: | Size: 22 KiB |
76
docs/user-guide/monitoring.md
Normal file
@@ -0,0 +1,76 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Resource Usage Monitoring in Kubernetes
|
||||
|
||||
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](pods.md), [services](services.md), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/GoogleCloudPlatform/heapster), a project meant to provide a base monitoring platform on Kubernetes.
|
||||
|
||||
### Overview
|
||||
|
||||
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes’ [Kubelet](../DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:
|
||||
|
||||

|
||||
|
||||
Let’s look at some of the other components in more detail.
|
||||
|
||||
### cAdvisor
|
||||
|
||||
cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers in the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the “root” container on the machine.
|
||||
|
||||
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor’s UI that shows the overall machine usage:
|
||||
|
||||

|
||||
|
||||
### Kubelet
|
||||
|
||||
The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.
|
||||
|
||||
## Storage Backends
|
||||
### InfluxDB and Grafana
|
||||
|
||||
A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in pods. The pod exposes itself as a Kubernetes service, which is how Heapster discovers it.
|
||||
|
||||
The Grafana container serves Grafana’s UI which provides an easy to configure dashboard interface. The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).
|
||||
|
||||
Here is a video showing how to monitor a Kubernetes cluster using Heapster, InfluxDB and Grafana:
|
||||
|
||||
[](http://www.youtube.com/watch?v=SZgqjMrxo3g)
|
||||
|
||||
Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
|
||||
|
||||

|
||||
|
||||
### Google Cloud Monitoring
|
||||
|
||||
Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert on important metrics in your application. Heapster can be set up to automatically push all collected metrics to Google Cloud Monitoring. These metrics are then available in the [Cloud Monitoring Console](https://app.google.stackdriver.com/). This storage backend is the easiest to set up and maintain. The monitoring console allows you to easily create and customize dashboards using the exported data.
|
||||
|
||||
Here is a video showing how to set up and run a Google Cloud Monitoring-backed Heapster:
|
||||
|
||||
[](http://www.youtube.com/watch?v=xSMNR2fcoLs)
|
||||
|
||||
Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage.
|
||||
|
||||

|
||||
|
||||
## Try it out!
|
||||
Now that you’ve learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/GoogleCloudPlatform/heapster) is available on GitHub. It contains detailed instructions to setup Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues. Heapster and Kubernetes developers hang out in the [#google-containers](http://webchat.freenode.net/?channels=google-containers) IRC channel on freenode.net. You can also reach us on the [google-containers Google Groups mailing list](https://groups.google.com/forum/#!forum/google-containers).
|
||||
|
||||
***
|
||||
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
|
||||
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
49
docs/user-guide/multi-pod.yaml
Normal file
@@ -0,0 +1,49 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: redis
|
||||
redis-sentinel: "true"
|
||||
role: master
|
||||
name: redis-master
|
||||
spec:
|
||||
containers:
|
||||
- name: master
|
||||
image: kubernetes/redis:v1
|
||||
env:
|
||||
- name: MASTER
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
resources:
|
||||
limits:
|
||||
cpu: "0.5"
|
||||
volumeMounts:
|
||||
- mountPath: /redis-master-data
|
||||
name: data
|
||||
- name: sentinel
|
||||
image: kubernetes/redis:v1
|
||||
env:
|
||||
- name: SENTINEL
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 26379
|
||||
volumes:
|
||||
- name: data
|
||||
emptyDir: {}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: redis-proxy
|
||||
role: proxy
|
||||
name: redis-proxy
|
||||
spec:
|
||||
containers:
|
||||
- name: proxy
|
||||
image: kubernetes/redis-proxy:v1
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
name: api
|
26
docs/user-guide/namespaces.md
Normal file
@@ -0,0 +1,26 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Namespaces
|
||||
|
||||
Namespaces help different projects, teams, or customers to share a Kubernetes cluster. First, they provide a scope for [Names](identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.
|
||||
|
||||
Use of multiple namespaces is optional. For small teams, they may not be needed.
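
As a minimal sketch of working with a namespace (the namespace name and manifest file are illustrative):

```
$ kubectl get namespaces
$ kubectl create -f my-namespace.yaml          # a manifest with kind: Namespace
$ kubectl get pods --namespace=my-namespace    # scope a request to that namespace
```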
|
||||
|
||||
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](design/namespaces.md).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
79
docs/user-guide/node-selection/README.md
Normal file
@@ -0,0 +1,79 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
## Node selection example
|
||||
|
||||
This example shows how to assign a pod to a specific node or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances, such as ensuring that your pod ends up on a machine with an SSD attached to it.
|
||||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes that you have a basic understanding of kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#documentation).
|
||||
|
||||
### Step One: Attach label to the node
|
||||
|
||||
Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to.
|
||||
|
||||
Then, to add a label to the node you've chosen, run `kubectl label nodes <node-name> <label-key>=<label-value>`. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
|
||||
|
||||
If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/GoogleCloudPlatform/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.
|
||||
|
||||
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](../../docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
|
||||
|
||||
You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
|
||||
|
||||
### Step Two: Add a nodeSelector field to your pod configuration
|
||||
|
||||
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
|
||||
|
||||
<pre>
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
env: test
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
</pre>
|
||||
|
||||
Then add a nodeSelector like so:
|
||||
|
||||
<pre>
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
env: test
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
<b>nodeSelector:
|
||||
disktype: ssd</b>
|
||||
</pre>
|
||||
|
||||
When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
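
Putting the steps together, the whole flow looks roughly like this (node name as in the labeling step above; output omitted):

<pre>
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd
kubectl create -f pod.yaml
kubectl get pods -o wide
</pre>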
|
||||
|
||||
### Conclusion
|
||||
|
||||
While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Be careful that the nodeSelector matches at least one node, however, because if it doesn't, the pod won't be scheduled at all.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
13
docs/user-guide/node-selection/pod.yaml
Normal file
@@ -0,0 +1,13 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
env: test
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
nodeSelector:
|
||||
disktype: ssd
|
47
docs/user-guide/overview.md
Normal file
@@ -0,0 +1,47 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Kubernetes User Documentation
|
||||
|
||||
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. It provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions.
|
||||
|
||||
Today, Kubernetes supports just [Docker](http://www.docker.io) containers, but other container image formats and container runtimes will be supported in the future (e.g., [Rocket](https://coreos.com/blog/rocket/) support is in progress). Similarly, while Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases.
|
||||
|
||||
In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes.md), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller.md), which we discuss next.
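
As a minimal illustration, a single-container pod can be described with a manifest like the following sketch (name and image are placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx            # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx            # placeholder image
    ports:
    - containerPort: 80
```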
|
||||
|
||||
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller.md). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.
|
||||
|
||||
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).
|
||||
|
||||
Kubernetes supports a unique [networking model](admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
|
||||
|
||||
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
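
A rough sketch of such a service definition follows; the name, label, and port numbers are illustrative assumptions, not part of the original text:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:                 # defines the dynamic set of pods backing this service
    tier: frontend
  ports:
  - protocol: TCP
    port: 80                # stable port on the service IP
    targetPort: 8080        # port the backing containers listen on
```
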
|
||||
|
||||
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object’s name, and the object’s [namespace](namespaces.md). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.
|
||||
|
||||
Other details:
|
||||
|
||||
* [API](api.md)
|
||||
* [Client libraries](client-libraries.md)
|
||||
* [Command-line interface](user-guide/kubectl/kubectl.md)
|
||||
* [UI](ui.md)
|
||||
* [Images and registries](images.md)
|
||||
* [Container environment](container-environment.md)
|
||||
* [Logging](logging.md)
|
||||
* Monitoring using [CAdvisor](https://github.com/google/cadvisor) and [Heapster](https://github.com/GoogleCloudPlatform/heapster)
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
216
docs/user-guide/persistent-volumes.md
Normal file
@@ -0,0 +1,216 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Persistent Volumes and Claims
|
||||
|
||||
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](volumes.md) is suggested.
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Persistent Volumes and Claims](#persistent-volumes-and-claims)
|
||||
- [Introduction](#introduction)
|
||||
- [Lifecycle of a volume and claim](#lifecycle-of-a-volume-and-claim)
|
||||
- [Provisioning](#provisioning)
|
||||
- [Binding](#binding)
|
||||
- [Using](#using)
|
||||
- [Releasing](#releasing)
|
||||
- [Reclaiming](#reclaiming)
|
||||
- [Types of Persistent Volumes](#types-of-persistent-volumes)
|
||||
- [Persistent Volumes](#persistent-volumes)
|
||||
- [Capacity](#capacity)
|
||||
- [Access Modes](#access-modes)
|
||||
- [Recycling Policy](#recycling-policy)
|
||||
- [Phase](#phase)
|
||||
- [PersistentVolumeClaims](#persistentvolumeclaims)
|
||||
- [Access Modes](#access-modes)
|
||||
- [Resources](#resources)
|
||||
- [<a name="claims-as-volumes"></a> Claims As Volumes](#<a-name="claims-as-volumes"></a>-claims-as-volumes)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Introduction
|
||||
|
||||
Managing storage is a distinct problem from managing compute. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`.
|
||||
|
||||
A `PersistentVolume` (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
|
||||
|
||||
A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
|
||||
|
||||
Please see the [detailed walkthrough with working examples](../examples/persistent-volumes/).
|
||||
|
||||
|
||||
## Lifecycle of a volume and claim
|
||||
|
||||
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to those resources. The interaction between PVs and PVCs follows this lifecycle:
|
||||
|
||||
### Provisioning
|
||||
|
||||
A cluster administrator creates some number of PVs. They carry the details of the real storage that is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
|
||||
|
||||
### Binding
|
||||
|
||||
A user creates a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. The user will always get at least what they asked for, but the volume may be in excess of what was requested.
|
||||
|
||||
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi volumes would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
|
||||
|
||||
### Using
|
||||
|
||||
Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a pod.
|
||||
|
||||
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as she needs it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
|
||||
|
||||
### Releasing
|
||||
|
||||
When a user is done with their volume, they can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered "released" when the claim is deleted, but it is not yet available for another claim. The previous claimant's data remains on the volume and must be handled according to policy.
|
||||
|
||||
### Reclaiming
|
||||
|
||||
A `PersistentVolume's` reclaim policy tells the cluster what to do with the volume after it's released. Currently, volumes can either be Retained or Recycled. Retention allows for manual reclamation of the resource. For those volume plugins that support it, recycling performs a basic scrub ("rm -rf /thevolume/*") on the volume and makes it available again for a new claim.
|
||||
|
||||
## Types of Persistent Volumes
|
||||
|
||||
`PersistentVolume`s are implemented as plugins. Kubernetes currently supports the following plugins:
|
||||
|
||||
* GCEPersistentDisk
|
||||
* AWSElasticBlockStore
|
||||
* NFS
|
||||
* iSCSI
|
||||
* RBD (Ceph Block Device)
|
||||
* Glusterfs
|
||||
* HostPath (single node testing only)
|
||||
|
||||
|
||||
## Persistent Volumes
|
||||
|
||||
Each PV contains a spec and status, which is the specification and status of the volume.
|
||||
|
||||
|
||||
```
|
||||
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0003
|
||||
spec:
|
||||
capacity:
|
||||
storage: 5Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
persistentVolumeReclaimPolicy: Recycle
|
||||
nfs:
|
||||
path: /tmp
|
||||
server: 172.17.0.2
|
||||
|
||||
```
|
||||
|
||||
### Capacity
|
||||
|
||||
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](design/resources.md) to understand the units expected by `capacity`.
|
||||
|
||||
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
|
||||
|
||||
### Access Modes
|
||||
|
||||
`PersistentVolume`s can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
|
||||
|
||||
The access modes are:
|
||||
|
||||
* ReadWriteOnce -- the volume can be mounted as read-write by a single node
|
||||
* ReadOnlyMany -- the volume can be mounted read-only by many nodes
|
||||
* ReadWriteMany -- the volume can be mounted as read-write by many nodes
|
||||
|
||||
In the CLI, the access modes are abbreviated to:
|
||||
|
||||
* RWO - ReadWriteOnce
|
||||
* ROX - ReadOnlyMany
|
||||
* RWX - ReadWriteMany
|
||||
|
||||
> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
|
||||
|
||||
|
||||
### Recycling Policy
|
||||
|
||||
Current recycling policies are:
|
||||
|
||||
* Retain -- manual reclamation
|
||||
* Recycle -- basic scrub ("rm -rf /thevolume/*")
|
||||
|
||||
Currently, NFS and HostPath support recycling.
|
||||
|
||||
### Phase
|
||||
|
||||
A volume will be in one of the following phases:
|
||||
|
||||
* Available -- a free resource that is not yet bound to a claim
|
||||
* Bound -- the volume is bound to a claim
|
||||
* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
|
||||
* Failed -- the volume has failed its automatic reclamation
|
||||
|
||||
The CLI will show the name of the PVC bound to the PV.
|
||||
|
||||
## PersistentVolumeClaims
|
||||
|
||||
Each PVC contains a spec and status, which is the specification and status of the claim.
|
||||
|
||||
```
|
||||
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: myclaim
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 8Gi
|
||||
|
||||
```
|
||||
|
||||
### Access Modes
|
||||
|
||||
Claims use the same conventions as volumes when requesting storage with specific access modes.
|
||||
|
||||
### Resources
|
||||
|
||||
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](design/resources.md) applies to both volumes and claims.
|
||||
|
||||
## <a name="claims-as-volumes"></a> Claims As Volumes
|
||||
|
||||
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the pod.
|
||||
|
||||
```
|
||||
|
||||
kind: Pod
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: mypod
|
||||
spec:
|
||||
containers:
|
||||
- name: myfrontend
|
||||
image: dockerfile/nginx
|
||||
volumeMounts:
|
||||
- mountPath: "/var/www/html"
|
||||
name: mypd
|
||||
volumes:
|
||||
- name: mypd
|
||||
persistentVolumeClaim:
|
||||
claimName: myclaim
|
||||
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
118
docs/user-guide/persistent-volumes/README.md
Normal file
@@ -0,0 +1,118 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# How To Use Persistent Volumes
|
||||
|
||||
The purpose of this guide is to help you become familiar with Kubernetes Persistent Volumes. By the end of the guide, we'll have
|
||||
nginx serving content from your persistent volume.
|
||||
|
||||
This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
|
||||
|
||||
## Provisioning
|
||||
|
||||
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
|
||||
must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
|
||||
|
||||
PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. ```HostPath``` was included
|
||||
for ease of development and testing. You'll create a local ```HostPath``` for this example.
|
||||
|
||||
> IMPORTANT! For ```HostPath``` to work, you will need to run a single node cluster. Kubernetes does not
|
||||
support scheduling onto a specific host's local storage at this time, so there is no guarantee that your pod will end up on the node where the ```HostPath``` data resides.
|
||||
|
||||
|
||||
```
|
||||
|
||||
// this will be nginx's webroot
|
||||
$ mkdir /tmp/data01
|
||||
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
|
||||
|
||||
```
|
||||
|
||||
PVs are created by posting them to the API server.
|
||||
|
||||
```
|
||||
$ kubectl create -f examples/persistent-volumes/volumes/local-01.yaml
|
||||
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
|
||||
pv0001 type=local 10737418240 RWO Available
|
||||
```
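
For reference, the volume definition posted above (```local-01.yaml```, included elsewhere in this change) is a minimal ```HostPath``` PV along these lines:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"     # the directory created above, served as nginx's webroot
```
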
|
||||
|
||||
## Requesting storage
|
||||
|
||||
Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned.
|
||||
They just know they can rely on their claim to storage and can manage its lifecycle independently from the many pods that may use it.
|
||||
|
||||
Claims must be created in the same namespace as the pods that use them.
|
||||
|
||||
```
|
||||
|
||||
$ kubectl create -f examples/persistent-volumes/claims/claim-01.yaml
|
||||
|
||||
$ kubectl get pvc
|
||||
NAME LABELS STATUS VOLUME
|
||||
myclaim-1 map[]
|
||||
|
||||
|
||||
# A background process will attempt to match this claim to a volume.
|
||||
# The eventual state of your claim will look something like this:
|
||||
|
||||
$ kubectl get pvc
|
||||
NAME LABELS STATUS VOLUME
|
||||
myclaim-1 map[] Bound pv0001
|
||||
|
||||
$ kubectl get pv
|
||||
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
|
||||
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
|
||||
```
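
For reference, the ```claim-01.yaml``` used above (also included in this change) requests 3Gi of ReadWriteOnce storage, which the 10Gi ```pv0001``` created earlier can satisfy:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```
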
|
||||
|
||||
## Using your claim as a volume
|
||||
|
||||
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
|
||||
|
||||
```
|
||||
$ kubectl create -f examples/persistent-volumes/simpletest/pod.yaml
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
mypod 1/1 Running 0 1h
|
||||
|
||||
$ kubectl create -f examples/persistent-volumes/simpletest/service.json
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
frontendservice <none> name=frontendhttp 10.0.0.241 3000/TCP
|
||||
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.2 443/TCP
|
||||
|
||||
|
||||
```
|
||||
|
||||
## Next steps
|
||||
|
||||
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you
|
||||
need to disable SELinux (setenforce 0).
|
||||
|
||||
```
|
||||
|
||||
curl 10.0.0.241:3000
|
||||
I love Kubernetes storage!
|
||||
|
||||
```
|
||||
|
||||
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join
|
||||
```#google-containers``` on IRC and ask!
|
||||
|
||||
Enjoy!
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
10
docs/user-guide/persistent-volumes/claims/claim-01.yaml
Normal file
@@ -0,0 +1,10 @@
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: myclaim-1
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 3Gi
|
10
docs/user-guide/persistent-volumes/claims/claim-02.yaml
Normal file
@@ -0,0 +1,10 @@
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: myclaim-2
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 8Gi
|
17
docs/user-guide/persistent-volumes/claims/claim-03.json
Normal file
@@ -0,0 +1,17 @@
|
||||
{
|
||||
"kind": "PersistentVolumeClaim",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "myclaim-3"
|
||||
}, "spec": {
|
||||
"accessModes": [
|
||||
"ReadWriteOnce",
|
||||
"ReadOnlyMany"
|
||||
],
|
||||
"resources": {
|
||||
"requests": {
|
||||
"storage": "10G"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
10
docs/user-guide/persistent-volumes/simpletest/namespace.json
Normal file
@@ -0,0 +1,10 @@
|
||||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion":"v1",
|
||||
"metadata": {
|
||||
"name": "myns",
|
||||
"labels": {
|
||||
"name": "development"
|
||||
}
|
||||
}
|
||||
}
|
20
docs/user-guide/persistent-volumes/simpletest/pod.yaml
Normal file
@@ -0,0 +1,20 @@
|
||||
kind: Pod
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: mypod
|
||||
labels:
|
||||
name: frontendhttp
|
||||
spec:
|
||||
containers:
|
||||
- name: myfrontend
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: "http-server"
|
||||
volumeMounts:
|
||||
- mountPath: "/var/www/html"
|
||||
name: mypd
|
||||
volumes:
|
||||
- name: mypd
|
||||
persistentVolumeClaim:
|
||||
claimName: myclaim-1
|
19
docs/user-guide/persistent-volumes/simpletest/service.json
Normal file
@@ -0,0 +1,19 @@
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "frontendservice"
|
||||
},
|
||||
"spec": {
|
||||
"ports": [
|
||||
{
|
||||
"protocol": "TCP",
|
||||
"port": 3000,
|
||||
"targetPort": "http-server"
|
||||
}
|
||||
],
|
||||
"selector": {
|
||||
"name": "frontendhttp"
|
||||
}
|
||||
}
|
||||
}
|
13
docs/user-guide/persistent-volumes/volumes/gce.yaml
Normal file
@@ -0,0 +1,13 @@
|
||||
kind: PersistentVolume
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pv0003
|
||||
spec:
|
||||
capacity:
|
||||
storage: 10Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
- ReadOnlyMany
|
||||
gcePersistentDisk:
|
||||
pdName: "abc123"
|
||||
fsType: "ext4"
|
13
docs/user-guide/persistent-volumes/volumes/local-01.yaml
Normal file
@@ -0,0 +1,13 @@
|
||||
kind: PersistentVolume
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pv0001
|
||||
labels:
|
||||
type: local
|
||||
spec:
|
||||
capacity:
|
||||
storage: 10Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
hostPath:
|
||||
path: "/tmp/data01"
|
14
docs/user-guide/persistent-volumes/volumes/local-02.yaml
Normal file
@@ -0,0 +1,14 @@
|
||||
kind: PersistentVolume
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pv0002
|
||||
labels:
|
||||
type: local
|
||||
spec:
|
||||
capacity:
|
||||
storage: 8Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
hostPath:
|
||||
path: "/tmp/data02"
|
||||
persistentVolumeReclaimPolicy: Recycle
|
12
docs/user-guide/persistent-volumes/volumes/nfs.yaml
Normal file
@@ -0,0 +1,12 @@
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0003
|
||||
spec:
|
||||
capacity:
|
||||
storage: 5Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
nfs:
|
||||
path: /tmp
|
||||
server: 172.17.0.2
|
124
docs/user-guide/pod-states.md
Normal file
@@ -0,0 +1,124 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# The life of a pod
|
||||
|
||||
Updated: 4/14/2015
|
||||
|
||||
This document covers the lifecycle of a pod. It is not an exhaustive document, but an introduction to the topic.
|
||||
|
||||
## Pod Phase
|
||||
|
||||
Consistent with the overall [API convention](api-conventions.md#typical-status-properties), phase is a simple, high-level summary of where a pod is in its lifecycle. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine.
|
||||
|
||||
The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`.
|
||||
|
||||
* Pending: The pod has been accepted by the system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
|
||||
* Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
|
||||
* Succeeded: All containers in the pod have terminated in success, and will not be restarted.
|
||||
* Failed: All containers in the pod have terminated, at least one container has terminated in failure (exited with non-zero exit status or was terminated by the system).
|
||||
|
||||
## Pod Conditions
|
||||
|
||||
A pod containing containers that specify readiness probes will also report the Ready condition. Condition status values may be `True`, `False`, or `Unknown`.
|
||||
|
||||
## Container Probes
|
||||
|
||||
A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#Probe) is a diagnostic performed periodically by the kubelet on a container. Specifically the diagnostic is one of three [Handlers](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#Handler):
|
||||
|
||||
* `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0.
|
||||
* `TCPSocketAction`: performs a TCP check against the container's IP address on a specified port, expecting on success that the port is open.
|
||||
* `HTTPGetAction`: performs an HTTP GET against the container's IP address on a specified port and path, expecting on success that the response has a status code greater than or equal to 200 and less than 400.
|
||||
|
||||
Each probe will have one of three results:
|
||||
|
||||
* `Success`: indicates that the container passed the diagnostic.
|
||||
* `Failure`: indicates that the container failed the diagnostic.
|
||||
* `Unknown`: indicates that the diagnostic failed so no action should be taken.
|
||||
|
||||
Currently, the kubelet optionally performs two independent diagnostics on running containers, each of which can trigger action (a configuration sketch follows this list):
|
||||
|
||||
* `LivenessProbe`: indicates whether the container is *live*, i.e. still running. The LivenessProbe hints to the kubelet when a container is unhealthy. If the LivenessProbe fails, the kubelet will kill the container and the container will be subjected to its [RestartPolicy](#restartpolicy). The default state of Liveness before the initial delay is `Success`. The state of Liveness for a container when no probe is provided is assumed to be `Success`.
|
||||
* `ReadinessProbe`: indicates whether the container is *ready* to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod's IP address from the endpoints of all services that match the pod. Thus, the ReadinessProbe is sometimes useful to signal to the endpoints controller that even though a pod may be running, it should not receive traffic from the proxy (e.g. the container has a long startup time before it starts listening or the container is down for maintenance). The default state of Readiness before the initial delay is `Failure`. The state of Readiness for a container when no probe is provided is assumed to be `Success`.
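
As a minimal sketch of how these probes are configured, here is a pod whose container is checked with both a liveness and a readiness probe. The image, paths, and timing values are illustrative assumptions for an HTTP server listening on port 80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-example            # illustrative name
spec:
  containers:
  - name: liveness-http
    image: nginx                 # illustrative image serving HTTP on port 80
    ports:
    - containerPort: 80
    livenessProbe:               # if this fails, the kubelet kills the container
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
    readinessProbe:              # if this fails, the pod's IP is removed from service endpoints
      httpGet:
        path: /
        port: 80
```
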
|
||||
|
||||
## Container Statuses
|
||||
|
||||
More detailed information about the current (and previous) container statuses can be found in [ContainerStatuses](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#PodStatus). The information reported depends on the current [ContainerState](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#ContainerState), which may be Waiting, Running, or Terminated.
|
||||
|
||||
## RestartPolicy
|
||||
|
||||
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the Kubelet on the same node. As discussed in the [pods document](pods.md#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
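
As a small sketch (image and command are illustrative), the restart policy is set once at the pod level and governs every container in the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oneshot                  # illustrative name
spec:
  restartPolicy: OnFailure       # applies to all containers; default is Always
  containers:
  - name: worker
    image: busybox               # illustrative image
    command: ["sh", "-c", "echo done"]
```
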
|
||||
|
||||
The only controller we have today is [`ReplicationController`](replication-controller.md). `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`. `ReplicationController` should refuse to instantiate any pod that has a different restart policy.
|
||||
|
||||
There is a legitimate need for a controller which keeps pods with other policies alive. Pods having any of the other policies (`OnFailure` or `Never`) eventually terminate, at which point the controller should stop recreating them. Because of this fundamental distinction, let's hypothesize a new controller, called [`JobController`](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624) for the sake of this document, which can implement this policy.
|
||||
|
||||
## Pod lifetime
|
||||
|
||||
In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.
|
||||
|
||||
If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.
|
||||
|
||||
## Examples
|
||||
|
||||
* Pod is `Running`, 1 container, container exits success
|
||||
* Log completion event
|
||||
* If RestartPolicy is:
|
||||
* Always: restart container, pod stays `Running`
|
||||
* OnFailure: pod becomes `Succeeded`
|
||||
* Never: pod becomes `Succeeded`
|
||||
|
||||
* Pod is `Running`, 1 container, container exits failure
|
||||
* Log failure event
|
||||
* If RestartPolicy is:
|
||||
* Always: restart container, pod stays `Running`
|
||||
* OnFailure: restart container, pod stays `Running`
|
||||
* Never: pod becomes `Failed`
|
||||
|
||||
* Pod is `Running`, 2 containers, container 1 exits failure
|
||||
* Log failure event
|
||||
* If RestartPolicy is:
|
||||
* Always: restart container, pod stays `Running`
|
||||
* OnFailure: restart container, pod stays `Running`
|
||||
* Never: pod stays `Running`
|
||||
* When container 2 exits...
|
||||
* Log failure event
|
||||
* If RestartPolicy is:
|
||||
* Always: restart container, pod stays `Running`
|
||||
* OnFailure: restart container, pod stays `Running`
|
||||
* Never: pod becomes `Failed`
|
||||
|
||||
* Pod is `Running`, container becomes OOM
|
||||
* Container terminates in failure
|
||||
* Log OOM event
|
||||
* If RestartPolicy is:
|
||||
* Always: restart container, pod stays `Running`
|
||||
* OnFailure: restart container, pod stays `Running`
|
||||
* Never: log failure event, pod becomes `Failed`
|
||||
|
||||
* Pod is `Running`, a disk dies
|
||||
* All containers are killed
|
||||
* Log appropriate event
|
||||
* Pod becomes `Failed`
|
||||
* If running under a controller, pod will be recreated elsewhere
|
||||
|
||||
* Pod is `Running`, its node is segmented out
|
||||
* NodeController waits for timeout
|
||||
* NodeController marks pod `Failed`
|
||||
* If running under a controller, pod will be recreated elsewhere
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
12
docs/user-guide/pod.yaml
Normal file
@@ -0,0 +1,12 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
97
docs/user-guide/pods.md
Normal file
@@ -0,0 +1,97 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Pods
|
||||
|
||||
In Kubernetes, rather than individual application containers, _pods_ are the smallest deployable units that can be created, scheduled, and managed.
|
||||
|
||||
## What is a _pod_?
|
||||
|
||||
A _pod_ (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context. Within that context, the applications may also have individual cgroup isolations applied. A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual host.
|
||||
|
||||
The context of the pod can be defined as the conjunction of several Linux namespaces:
|
||||
|
||||
* PID namespace (applications within the pod can see each other's processes)
|
||||
* network namespace (applications within the pod have access to the same IP and port space)
|
||||
* IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)
|
||||
* UTS namespace (applications within the pod share a hostname)
|
||||
|
||||
Applications within a pod also have access to shared volumes, which are defined at the pod level and made available in each application's filesystem. Additionally, a pod may define top-level cgroup isolations which form an outer bound to any individual isolation applied to constituent applications.
|
||||
|
||||
In terms of [Docker](https://www.docker.com/) constructs, a pod consists of a colocated group of Docker containers with shared [volumes](volumes.md). PID namespace sharing is not yet implemented with Docker.
|
||||
|
||||
Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. As discussed in [life of a pod](pod-states.md), pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced (see [replication controller](replication-controller.md) for more details). (In the future, a higher-level API may support pod migration.)
|
||||
|
||||
## Motivation for pods
|
||||
|
||||
### Resource sharing and communication
|
||||
|
||||
Pods facilitate data sharing and communication among their constituents.
|
||||
|
||||
The applications in the pod all use the same network namespace/IP and port space, and can find and communicate with each other using localhost. Each pod has an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. The hostname is set to the pod's Name for the application containers within the pod. [More details on networking](admin/networking.md).
|
||||
|
||||
In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
|
||||
|
||||
### Management
|
||||
|
||||
Pods also simplify application deployment and management by providing a higher-level abstraction than the raw, low-level container interface. Pods serve as units of deployment and horizontal scaling/replication. Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management are handled automatically.
|
||||
|
||||
## Uses of pods
|
||||
|
||||
Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as:
|
||||
|
||||
* content management systems, file and data loaders, local cache managers, etc.
|
||||
* log and checkpoint backup, compression, rotation, snapshotting, etc.
|
||||
* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
|
||||
* proxies, bridges, and adapters
|
||||
* controllers, managers, configurators, and updaters
|
||||
|
||||
Individual pods are not intended to run multiple instances of the same application, in general.
|
||||
|
||||
## Alternatives considered
|
||||
|
||||
_Why not just run multiple programs in a single (Docker) container?_
|
||||
|
||||
1. Transparency. Making the containers within the pod visible to the infrastructure enables the infrastructure to provide services to those containers, such as process management and resource monitoring. This facilitates a number of conveniences for users.
|
||||
2. Decoupling software dependencies. The individual containers may be rebuilt and redeployed independently. Kubernetes may even support live updates of individual containers someday.
|
||||
3. Ease of use. Users don't need to run their own process managers, worry about signal and exit-code propagation, etc.
|
||||
4. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighter weight.
|
||||
|
||||
_Why not support affinity-based co-scheduling of containers?_
|
||||
|
||||
That approach would provide co-location, but would not provide most of the benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and simplified management.
|
||||
|
||||
## Durability of pods (or lack thereof)
|
||||
|
||||
Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
|
||||
|
||||
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](replication-controller.md)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
|
||||
|
||||
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
|
||||
|
||||
Pod is exposed as a primitive in order to facilitate:
|
||||
|
||||
* scheduler and controller pluggability
|
||||
* support for pod-level operations without the need to "proxy" them via controller APIs
|
||||
* decoupling of pod lifetime from controller lifetime, such as for bootstrapping
|
||||
* decoupling of controllers and services — the endpoint controller just watches pods
|
||||
* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller"
|
||||
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](https://github.com/GoogleCloudPlatform/kubernetes/issues/3949)
|
||||
|
||||
The current best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. If you find this cumbersome, please comment on [issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
103
docs/user-guide/replication-controller.md
Normal file
@@ -0,0 +1,103 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Replication Controller
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Replication Controller](#replication-controller)
|
||||
- [What is a _replication controller_?](#what-is-a-_replication-controller_?)
|
||||
- [How does a replication controller work?](#how-does-a-replication-controller-work?)
|
||||
- [Pod template](#pod-template)
|
||||
- [Labels](#labels)
|
||||
- [Responsibilities of the replication controller](#responsibilities-of-the-replication-controller)
|
||||
- [Common usage patterns](#common-usage-patterns)
|
||||
- [Rescheduling](#rescheduling)
|
||||
- [Scaling](#scaling)
|
||||
- [Rolling updates](#rolling-updates)
|
||||
- [Multiple release tracks](#multiple-release-tracks)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## What is a _replication controller_?
|
||||
|
||||
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
|
||||
|
||||
As discussed in [life of a pod](pod-states.md), `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`. (Note: If `RestartPolicy` is not set, the default value is `Always`.) `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.
|
||||
|
||||
A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
|
||||
|
||||
## How does a replication controller work?
|
||||
|
||||
### Pod template
|
||||
|
||||
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170).
|
||||
|
||||
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive, as demonstrated by the use cases explained below.
|
||||
|
||||
Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself.
|
||||
|
||||
### Labels
|
||||
|
||||
The population of pods that a replication controller is monitoring is defined with a [label selector](labels.md#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that approach increases complexity of management operations, for both clients and the system.
|
||||
|
||||
The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets.
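
For reference, the `replication.yaml` included in this change illustrates the expected relationship between the selector and the template's labels:

```yaml
# Excerpt from replication.yaml: the label selector matches the labels
# set on the pod template, so the controller manages the pods it creates.
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx        # must match spec.selector
```
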
|
||||
|
||||
Note that replication controllers may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.
|
||||
|
||||
Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
|
||||
|
||||
Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](user-guide/kubectl/kubectl_stop.md) to delete both the replication controller and the pods it controls. However, there is no such operation in the API at the moment)
|
||||
|
||||
## Responsibilities of the replication controller
|
||||
|
||||
The replication controller simply ensures that the number of pods matching its label selector equals the desired count and that those pods are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
|
||||
|
||||
The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)).
|
||||
|
||||
The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
|
||||
|
||||
## Common usage patterns
|
||||
|
||||
### Rescheduling
|
||||
|
||||
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).
|
||||
|
||||
### Scaling
|
||||
|
||||
The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
|
||||
|
||||
### Rolling updates
|
||||
|
||||
The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.
|
||||
|
||||
As explained in [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
|
||||
|
||||
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
|
||||
|
||||
The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
|
||||
|
||||
Rolling update is implemented in the client tool
|
||||
[kubectl](user-guide/kubectl/kubectl_rolling-update.md).
|
||||
|
||||
### Multiple release tracks
|
||||
|
||||
In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
|
||||
|
||||
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a replication controller with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another replication controller with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the replication controllers separately to test things out, monitor the results, etc.
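
A rough sketch of the two replication controllers described above follows; the controller names and image tags are illustrative assumptions, while the labels and replica counts come from the example in the text. Because the service selects only on `tier` and `environment`, it covers both tracks:

```yaml
# Stable track: carries the bulk of the replicas
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable           # illustrative name
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:v1   # illustrative image
---
# Canary track: a single replica, differing only in the track label and image
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:v2   # illustrative image
```
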
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
19
docs/user-guide/replication.yaml
Normal file
@@ -0,0 +1,19 @@
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
171
docs/user-guide/resourcequota/README.md
Normal file
@@ -0,0 +1,171 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Resource Quota
|
||||
========================================
|
||||
This example demonstrates how resource quota and limits can be applied to a Kubernetes namespace.
|
||||
|
||||
This example assumes you have a functional Kubernetes setup.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called quota-example:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f namespace.yaml
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
quota-example <none> Active
|
||||
```
|
||||
|
||||
Step 2: Apply a quota to the namespace
|
||||
-----------------------------------------
|
||||
By default, a pod will run with unbounded CPU and memory limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
|
||||
|
||||
Users may want to restrict how much of the cluster resources a given namespace may consume
|
||||
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
|
||||
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
|
||||
and API resources (pods, services, etc.) that a namespace may consume.
|
||||
|
||||
Let's create a simple quota in our namespace:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f quota.yaml --namespace=quota-example
|
||||
```
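
For reference, the `quota.yaml` used above (included in this change) sets hard limits on node and API resources for the namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
```
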
|
||||
|
||||
Once your quota is applied to a namespace, the system will restrict any creation of content
|
||||
in the namespace until the quota usage has been calculated. This should happen quickly.
|
||||
|
||||
You can describe your current quota usage to see what resources are being consumed in your
|
||||
namespace.
|
||||
|
||||
```
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 0 20
|
||||
memory 0 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 0 10
|
||||
replicationcontrollers 0 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
```
|
||||
|
||||
Step 3: Applying default resource limits
|
||||
-----------------------------------------
|
||||
Pod authors rarely specify resource limits for their pods.
|
||||
|
||||
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
|
||||
cpu and memory by running an nginx container.
|
||||
|
||||
To demonstrate, let's create a replication controller that runs nginx:
|
||||
|
||||
```shell
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
nginx nginx nginx run=nginx 1
|
||||
```
|
||||
|
||||
Now let's look at the pods that were created.
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
```
|
||||
|
||||
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
|
||||
|
||||
```shell
|
||||
$ kubectl describe rc nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Image(s): nginx
|
||||
Selector: run=nginx
|
||||
Labels: run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Mon, 01 Jun 2015 22:49:31 -0400 Mon, 01 Jun 2015 22:52:22 -0400 7 {replication-controller } failedCreate Error creating: Pod "nginx-" is forbidden: Limited to 1Gi memory, but pod has no specified memory limit
|
||||
```
|
||||
|
||||
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
|
||||
do not specify any memory usage.
|
||||
|
||||
So let's set some default limits for the amount of cpu and memory a pod can consume:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f limits.yaml --namespace=quota-example
|
||||
limitranges/limits
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Default
|
||||
---- -------- --- --- ---
|
||||
Container memory - - 512Mi
|
||||
Container cpu - - 100m
|
||||
```
|
||||
|
||||
Now any time a pod is created in this namespace, if it has not specified any resource limits, the default
|
||||
amount of cpu and memory per container will be applied as part of admission control.
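
As a rough sketch (not literal API output), a container admitted with no resource limits ends up roughly equivalent to one that had specified the `LimitRange` defaults shown above:

```yaml
# Effective per-container resources filled in by admission control,
# using the defaults from limits.yaml:
resources:
  limits:
    cpu: 100m
    memory: 512Mi
```
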
|
||||
|
||||
Now that we have applied default limits for our namespace, our replication controller should be able to
|
||||
create its pods.
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-t9cap 1/1 Running 0 49s
|
||||
```
|
||||
|
||||
And if we print out our quota usage in the namespace:
|
||||
|
||||
```shell
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 100m 20
|
||||
memory 536870912 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 1 10
|
||||
replicationcontrollers 1 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
```
|
||||
|
||||
You can now see that the pod that was created is consuming explicit amounts of resources, and the usage is being
|
||||
tracked by the Kubernetes system properly.
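If you want to see exactly what admission control injected, you can inspect the pod spec directly (the pod name below is the one from the output above and will differ in your cluster):

```shell
# show the resource limits that admission control filled in from the
# LimitRange defaults
$ kubectl get pod nginx-t9cap -o yaml --namespace=quota-example | grep -A 3 resources
```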
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
|
||||
by the namespace quota.
|
||||
|
||||
Any action that consumes those resources can be adjusted, or can pick up namespace-level defaults, to
|
||||
meet your end goal.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
10
docs/user-guide/resourcequota/limits.yaml
Normal file
@@ -0,0 +1,10 @@
|
||||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: limits
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
cpu: 100m
|
||||
memory: 512Mi
|
||||
type: Container
|
4
docs/user-guide/resourcequota/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: quota-example
|
14
docs/user-guide/resourcequota/quota.yaml
Normal file
@@ -0,0 +1,14 @@
|
||||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: quota
|
||||
spec:
|
||||
hard:
|
||||
cpu: "20"
|
||||
memory: 1Gi
|
||||
persistentvolumeclaims: "10"
|
||||
pods: "10"
|
||||
replicationcontrollers: "20"
|
||||
resourcequotas: "1"
|
||||
secrets: "10"
|
||||
services: "5"
|
504
docs/user-guide/secrets.md
Normal file
@@ -0,0 +1,504 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Secrets
|
||||
|
||||
Objects of type `secret` are intended to hold sensitive information, such as
|
||||
passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
|
||||
is safer and more flexible than putting it verbatim in a `pod` definition or in
|
||||
a docker image.
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Secrets](#secrets)
|
||||
- [Overview of Secrets](#overview-of-secrets)
|
||||
- [Service Accounts Automatically Create and Use Secrets with API Credentials](#service-accounts-automatically-create-and-use-secrets-with-api-credentials)
|
||||
- [Creating a Secret Manually](#creating-a-secret-manually)
|
||||
- [Manually specifying a Secret to be Mounted on a Pod](#manually-specifying-a-secret-to-be-mounted-on-a-pod)
|
||||
- [Manually specifying an imagePullSecret](#manually-specifying-an-imagepullsecret)
|
||||
- [Automatic use of Manually Created Secrets](#automatic-use-of-manually-created-secrets)
|
||||
- [Details](#details)
|
||||
- [Restrictions](#restrictions)
|
||||
- [Consuming Secret Values](#consuming-secret-values)
|
||||
- [Secret and Pod Lifetime interaction](#secret-and-pod-lifetime-interaction)
|
||||
- [Use cases](#use-cases)
|
||||
- [Use-Case: Pod with ssh keys](#use-case:-pod-with-ssh-keys)
|
||||
- [Use-Case: Pods with prod / test credentials](#use-case:-pods-with-prod-/-test-credentials)
|
||||
- [Use-case: Secret visible to one container in a pod](#use-case:-secret-visible-to-one-container-in-a-pod)
|
||||
- [Security Properties](#security-properties)
|
||||
- [Protections](#protections)
|
||||
- [Risks](#risks)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Overview of Secrets
|
||||
|
||||
|
||||
Creation of secrets can be manual (done by the user) or automatic (done by
|
||||
automation built into the cluster).
|
||||
|
||||
A secret can be used with a pod in two ways: either as files in a volume mounted on one or more of
|
||||
its containers, or by the kubelet when pulling images for the pod.
|
||||
|
||||
To use a secret, a pod needs to reference the secret. This reference
|
||||
can likewise be added manually or automatically.
|
||||
|
||||
A single Pod may use various combinations of the above options.
|
||||
|
||||
### Service Accounts Automatically Create and Use Secrets with API Credentials
|
||||
|
||||
Kubernetes automatically creates secrets which contain credentials for
|
||||
accessing the API and it automatically modifies your pods to use this type of
|
||||
secret.
|
||||
|
||||
The automatic creation and use of API credentials can be disabled or overridden
|
||||
if desired. However, if all you need to do is securely access the apiserver,
|
||||
this is the recommended workflow.
|
||||
|
||||
See the [Service Account](service-accounts.md) documentation for more
|
||||
information on how Service Accounts work.
|
||||
|
||||
### Creating a Secret Manually
|
||||
|
||||
This is an example of a simple secret, in yaml format:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
password: dmFsdWUtMg0K
|
||||
username: dmFsdWUtMQ0K
|
||||
```
|
||||
|
||||
The data field is a map. Its keys must match
|
||||
[DNS_SUBDOMAIN](design/identifiers.md), except that leading dots are also
|
||||
allowed. The values are arbitrary data, encoded using base64. The values of
|
||||
username and password in the example above, before base64 encoding,
|
||||
are `value-1` and `value-2`, respectively, with carriage return and newline characters at the end.
|
||||
|
||||
Create the secret using [`kubectl create`](user-guide/kubectl/kubectl_create.md).
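For example, assuming the manifest above was saved as `secret.yaml` (the filename is only for illustration, and the exact output may vary by release):

```shell
$ kubectl create -f ./secret.yaml
secrets/mysecret
```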
|
||||
|
||||
Once the secret is created, you can:
|
||||
- create pods that automatically use it via a [Service Account](service-accounts.md).
|
||||
- modify your pod specification to use the secret
|
||||
|
||||
### Manually specifying a Secret to be Mounted on a Pod
|
||||
|
||||
This is an example of a pod that mounts a secret in a volume:
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "mypod",
|
||||
"namespace": "myns"
|
||||
},
|
||||
"spec": {
|
||||
"containers": [{
|
||||
"name": "mypod",
|
||||
"image": "redis",
|
||||
"volumeMounts": [{
|
||||
"name": "foo",
|
||||
"mountPath": "/etc/foo",
|
||||
"readOnly": true
|
||||
}]
|
||||
}],
|
||||
"volumes": [{
|
||||
"name": "foo",
|
||||
"secret": {
|
||||
"secretName": "mysecret"
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Each secret you want to use needs its own entry in `spec.volumes`.
|
||||
|
||||
If there are multiple containers in the pod, then each container needs its
|
||||
own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
|
||||
|
||||
You can package many files into one secret, or use many secrets,
|
||||
whichever is convenient.
|
||||
|
||||
### Manually specifying an imagePullSecret
|
||||
Use of imagePullSecrets is described in the [images documentation](images.md#specifying-imagepullsecrets-on-a-pod).
|
||||
### Automatic use of Manually Created Secrets
|
||||
|
||||
*This feature is planned but not implemented. See [issue
|
||||
9902](https://github.com/GoogleCloudPlatform/kubernetes/issues/9902).*
|
||||
|
||||
You can reference manually created secrets from a [service account](service-accounts.md).
|
||||
Then, pods which use that service account will have
|
||||
`volumeMounts` and/or `imagePullSecrets` added to them.
|
||||
The secrets will be mounted at **TBD**.
|
||||
|
||||
## Details
|
||||
### Restrictions
|
||||
Secret volume sources are validated to ensure that the specified object
|
||||
reference actually points to an object of type `Secret`. Therefore, a secret
|
||||
needs to be created before any pods that depend on it.
|
||||
|
||||
Secret API objects reside in a namespace. They can only be referenced by pods
|
||||
in that same namespace.
|
||||
|
||||
Individual secrets are limited to 1MB in size. This is to discourage creation
|
||||
of very large secrets which would exhaust apiserver and kubelet memory.
|
||||
However, creation of many smaller secrets could also exhaust memory. More
|
||||
comprehensive limits on memory usage due to secrets are a planned feature.
|
||||
|
||||
Kubelet only supports use of secrets for Pods it gets from the API server.
|
||||
This includes any pods created using kubectl, or indirectly via a replication
|
||||
controller. It does not include pods created via the kubelet's
|
||||
`--manifest-url` flag, its `--config` flag, or its REST API (these are
|
||||
not common ways to create pods).
|
||||
|
||||
### Consuming Secret Values
|
||||
|
||||
Inside the container that mounts a secret volume, the secret keys appear as
|
||||
files and the secret values are base-64 decoded and stored inside these files.
|
||||
This is the result of commands
|
||||
executed inside the container from the example above:
|
||||
|
||||
```
|
||||
$ ls /etc/foo/
|
||||
username
|
||||
password
|
||||
$ cat /etc/foo/username
|
||||
value-1
|
||||
$ cat /etc/foo/password
|
||||
value-2
|
||||
```
|
||||
|
||||
The program in a container is responsible for reading the secret(s) from the
|
||||
files. Currently, if a program expects a secret to be stored in an environment
|
||||
variable, then the user needs to modify the image to populate the environment
|
||||
variable from the file as a step before running the main program. Future
|
||||
versions of Kubernetes are expected to provide more automation for populating
|
||||
environment variables from files.
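As a rough sketch of that interim workaround (the variable name, file path, and application binary are made up for illustration), an image's entrypoint script can export the value before starting the real program:

```shell
#!/bin/sh
# illustrative entrypoint wrapper: read the mounted secret file into an
# environment variable, then hand off to the real application binary
export MY_SECRET_PASSWORD="$(cat /etc/foo/password)"
exec /usr/local/bin/my-app
```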
|
||||
|
||||
|
||||
### Secret and Pod Lifetime interaction
|
||||
|
||||
When a pod is created via the API, there is no check whether a referenced
|
||||
secret exists. Once a pod is scheduled, the kubelet will try to fetch the
|
||||
secret value. If the secret cannot be fetched because it does not exist or
|
||||
because of a temporary lack of connection to the API server, kubelet will
|
||||
periodically retry. It will report an event about the pod explaining the
|
||||
reason it is not started yet. Once the secret is fetched, the kubelet will
|
||||
create and mount a volume containing it. None of the pod's containers will
|
||||
start until all the pod's volumes are mounted.
|
||||
|
||||
Once the kubelet has started a pod's containers, its secret volumes will not
|
||||
change, even if the secret resource is modified. To change the secret used,
|
||||
the original pod must be deleted, and a new pod (perhaps with an identical
|
||||
`PodSpec`) must be created. Therefore, updating a secret follows the same
|
||||
workflow as deploying a new container image. The `kubectl rolling-update`
|
||||
command can be used ([man page](user-guide/kubectl/kubectl_rolling-update.md)).
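For example, assuming an existing replication controller named `my-rc` and an updated controller spec in `my-rc-v2.yaml` that references the new secret (both names are hypothetical):

```shell
# roll the pods over to the new controller spec, which points at the new secret
$ kubectl rolling-update my-rc -f my-rc-v2.yaml
```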
|
||||
|
||||
The [`resourceVersion`](api-conventions.md#concurrency-control-and-consistency)
|
||||
of the secret is not specified when it is referenced.
|
||||
Therefore, if a secret is updated at about the same time as pods are starting,
|
||||
then it is not defined which version of the secret will be used for the pod. It
|
||||
is not possible currently to check what resource version of a secret object was
|
||||
used when a pod was created. It is planned that pods will report this
|
||||
information, so that a replication controller can restart pods that are using an old
|
||||
`resourceVersion`. In the interim, if this is a concern, it is recommended to not
|
||||
update the data of existing secrets, but to create new ones with distinct names.
|
||||
|
||||
## Use cases
|
||||
|
||||
### Use-Case: Pod with ssh keys
|
||||
|
||||
To create a pod that uses an ssh key stored as a secret, we first need to create a secret:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "ssh-key-secret"
|
||||
},
|
||||
"data": {
|
||||
"id-rsa": "dmFsdWUtMg0KDQo=",
|
||||
"id-rsa.pub": "dmFsdWUtMQ0K"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** The serialized JSON and YAML values of secret data are encoded as
|
||||
base64 strings. Newlines are not valid within these strings and must be
|
||||
omitted.
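One way to produce such single-line payloads from real key files is with the `base64` tool; `-w 0` disables line wrapping in GNU coreutils (the flag may differ on other platforms), and the key paths below are only examples:

```shell
# base64-encode key material as a single line, suitable for the "data" map
$ base64 -w 0 ~/.ssh/id_rsa
$ base64 -w 0 ~/.ssh/id_rsa.pub
```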
|
||||
|
||||
Now we can create a pod which references the secret with the ssh key and
|
||||
consumes it in a volume:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "secret-test-pod",
|
||||
"labels": {
|
||||
"name": "secret-test"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"volumes": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"secret": {
|
||||
"secretName": "ssh-key-secret"
|
||||
}
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "ssh-test-container",
|
||||
"image": "mySshImage",
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"readOnly": true,
|
||||
"mountPath": "/etc/secret-volume"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
When the container's command runs, the pieces of the key will be available in:
|
||||
|
||||
/etc/secret-volume/id-rsa.pub
|
||||
/etc/secret-volume/id-rsa
|
||||
|
||||
The container is then free to use the secret data to establish an ssh connection.
|
||||
|
||||
### Use-Case: Pods with prod / test credentials
|
||||
|
||||
This example illustrates a pod which consumes a secret containing prod
|
||||
credentials and another pod which consumes a secret with test environment
|
||||
credentials.
|
||||
|
||||
The secrets:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
"items":
|
||||
[{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "prod-db-secret"
|
||||
},
|
||||
"data": {
|
||||
"password": "dmFsdWUtMg0KDQo=",
|
||||
"username": "dmFsdWUtMQ0K"
|
||||
}
|
||||
},
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "test-db-secret"
|
||||
},
|
||||
"data": {
|
||||
"password": "dmFsdWUtMg0KDQo=",
|
||||
"username": "dmFsdWUtMQ0K"
|
||||
}
|
||||
}]
|
||||
}
|
||||
```
|
||||
|
||||
The pods:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
"items":
|
||||
[{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "prod-db-client-pod",
|
||||
"labels": {
|
||||
"name": "prod-db-client"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"volumes": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"secret": {
|
||||
"secretName": "prod-db-secret"
|
||||
}
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "db-client-container",
|
||||
"image": "myClientImage",
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"readOnly": true,
|
||||
"mountPath": "/etc/secret-volume"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "test-db-client-pod",
|
||||
"labels": {
|
||||
"name": "test-db-client"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"volumes": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"secret": {
|
||||
"secretName": "test-db-secret"
|
||||
}
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "db-client-container",
|
||||
"image": "myClientImage",
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"readOnly": true,
|
||||
"mountPath": "/etc/secret-volume"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}]
|
||||
}
|
||||
```
|
||||
|
||||
Both containers will have the following files present on their filesystems:
|
||||
```
|
||||
/etc/secret-volume/username
|
||||
/etc/secret-volume/password
|
||||
```
|
||||
|
||||
Note how the specs for the two pods differ only in one field; this facilitates
|
||||
creating pods with different capabilities from a common pod config template.
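For example, if the prod pod spec above were saved as `prod-db-client-pod.json` (a hypothetical filename), the test variant could be generated from it instead of being maintained by hand:

```shell
# derive the test pod spec from the prod one; only the prod/test strings differ
$ sed 's/prod/test/g' prod-db-client-pod.json > test-db-client-pod.json
$ kubectl create -f test-db-client-pod.json
```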
|
||||
|
||||
You could further simplify the base pod specification by using two service accounts:
|
||||
one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
|
||||
`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "prod-db-client-pod",
|
||||
"labels": {
|
||||
"name": "prod-db-client"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"serviceAccount": "prod-db-client",
|
||||
"containers": [
|
||||
{
|
||||
"name": "db-client-container",
|
||||
"image": "myClientImage",
|
||||
}
|
||||
]
|
||||
  }
}
|
||||
```
|
||||
|
||||
### Use-case: Secret visible to one container in a pod
|
||||
<a name="use-case-two-containers"></a>
|
||||
|
||||
Consider a program that needs to handle HTTP requests, do some complex business
|
||||
logic, and then sign some messages with an HMAC. Because it has complex
|
||||
application logic, there might be an unnoticed remote file reading exploit in
|
||||
the server, which could expose the private key to an attacker.
|
||||
|
||||
This could be divided into two processes in two containers: a frontend container
|
||||
which handles user interaction and business logic, but which cannot see the
|
||||
private key; and a signer container that can see the private key, and responds
|
||||
to simple signing requests from the frontend (e.g. over localhost networking).
|
||||
|
||||
With this partitioned approach, an attacker now has to trick the application
|
||||
server into doing something rather arbitrary, which may be harder than getting
|
||||
it to read a file.
|
||||
|
||||
<!-- TODO: explain how to do this while still using automation. -->
|
||||
|
||||
## Security Properties
|
||||
|
||||
### Protections
|
||||
|
||||
Because `secret` objects can be created independently of the `pods` that use
|
||||
them, there is less risk of the secret being exposed during the workflow of
|
||||
creating, viewing, and editing pods. The system can also take additional
|
||||
precautions with `secret` objects, such as avoiding writing them to disk where
|
||||
possible.
|
||||
|
||||
A secret is only sent to a node if a pod on that node requires it. It is not
|
||||
written to disk. It is stored in a tmpfs. It is deleted once the pod that
|
||||
depends on it is deleted.
|
||||
|
||||
On most Kubernetes-project-maintained distributions, communication from the user
|
||||
to the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS.
|
||||
Secrets are protected when transmitted over these channels.
|
||||
|
||||
There may be secrets for several pods on the same node. However, only the
|
||||
secrets that a pod requests are potentially visible within its containers.
|
||||
Therefore, one Pod does not have access to the secrets of another pod.
|
||||
|
||||
There may be several containers in a pod. However, each container in a pod has
|
||||
to request the secret volume in its `volumeMounts` for it to be visible within
|
||||
the container. This can be used to construct useful [security partitions at the
|
||||
Pod level](#use-case-two-containers).
|
||||
|
||||
### Risks
|
||||
|
||||
- Applications still need to protect the value of the secret after reading it from the volume,
|
||||
such as not accidentally logging it or transmitting it to an untrusted party.
|
||||
- A user who can create a pod that uses a secret can also see the value of that secret. Even
|
||||
if apiserver policy does not allow that user to read the secret object, the user could
|
||||
run a pod which exposes the secret.
|
||||
- If multiple replicas of etcd are run, then the secrets will be shared between them.
|
||||
By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
|
||||
- It is not possible currently to control which users of a kubernetes cluster can
|
||||
access a secret. Support for this is planned.
|
||||
- Currently, anyone with root on any node can read any secret from the apiserver,
|
||||
by impersonating the kubelet. It is a planned feature to only send secrets to
|
||||
nodes that actually require them, to restrict the impact of a root exploit on a
|
||||
single node.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
78
docs/user-guide/secrets/README.md
Normal file
@@ -0,0 +1,78 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Secrets example
|
||||
|
||||
Following this example, you will create a secret and a pod that consumes that secret in a volume.
|
||||
You can learn more about secrets [here](../secrets.md).
|
||||
|
||||
## Step Zero: Prerequisites
|
||||
|
||||
This example assumes you have a Kubernetes cluster installed and running, and that you have
|
||||
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
|
||||
started guides](../../getting-started-guides/) for installation instructions for your platform.
|
||||
|
||||
## Step One: Create the secret
|
||||
|
||||
A secret contains a set of named byte arrays.
|
||||
|
||||
Use the [`docs/user-guide/secrets/secret.yaml`](secret.yaml) file to create a secret:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/secrets/secret.yaml
|
||||
```
|
||||
|
||||
You can use `kubectl` to see information about the secret:
|
||||
|
||||
```shell
|
||||
$ kubectl get secrets
|
||||
NAME TYPE DATA
|
||||
test-secret Opaque 2
|
||||
|
||||
$ kubectl describe secret test-secret
|
||||
Name: test-secret
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Type: Opaque
|
||||
|
||||
Data
|
||||
====
|
||||
data-1: 9 bytes
|
||||
data-2: 11 bytes
|
||||
```
|
||||
|
||||
## Step Two: Create a pod that consumes a secret
|
||||
|
||||
Pods consume secrets in volumes. Now that you have created a secret, you can create a pod that
|
||||
consumes it.
|
||||
|
||||
Use the [`docs/user-guide/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a Pod that consumes the secret.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
|
||||
```
|
||||
|
||||
This pod runs a binary that displays the content of one of the pieces of secret data in the secret
|
||||
volume:
|
||||
|
||||
```shell
|
||||
$ kubectl logs secret-test-pod
|
||||
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
18
docs/user-guide/secrets/secret-pod.yaml
Normal file
@@ -0,0 +1,18 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: secret-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: kubernetes/mounttest:0.1
|
||||
command: [ "/mt", "--file_content=/etc/secret-volume/data-1" ]
|
||||
volumeMounts:
|
||||
# name must match the volume name below
|
||||
- name: secret-volume
|
||||
mountPath: /etc/secret-volume
|
||||
volumes:
|
||||
- name: secret-volume
|
||||
secret:
|
||||
secretName: test-secret
|
||||
restartPolicy: Never
|
7
docs/user-guide/secrets/secret.yaml
Normal file
@@ -0,0 +1,7 @@
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: test-secret
|
||||
data:
|
||||
data-1: dmFsdWUtMQ0K
|
||||
data-2: dmFsdWUtMg0KDQo=
|
22
docs/user-guide/security-context.md
Normal file
@@ -0,0 +1,22 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Security Contexts
|
||||
|
||||
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. See [security context design](design/security_context.md) for more details.
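As a minimal sketch (the pod name, image, and uid are arbitrary, and the fields shown are the v1 container-level `securityContext`), a pod can ask for its container to run as a specific non-root user:

```shell
# a sketch only: run the container as uid 1000 via the container-level
# securityContext field
$ cat > security-context-demo.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: main
    image: nginx
    securityContext:
      runAsUser: 1000
EOF
$ kubectl create -f security-context-demo.yaml
```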
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
BIN
docs/user-guide/services-detail.png
Normal file
After Width: | Height: | Size: 67 KiB |
570
docs/user-guide/services-detail.svg
Normal file
@@ -0,0 +1,570 @@
|
||||
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
|
||||
<!-- Created with Inkscape (http://www.inkscape.org/) -->
|
||||
|
||||
<svg
|
||||
xmlns:dc="http://purl.org/dc/elements/1.1/"
|
||||
xmlns:cc="http://creativecommons.org/ns#"
|
||||
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
|
||||
xmlns:svg="http://www.w3.org/2000/svg"
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
|
||||
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
|
||||
width="744.09448819"
|
||||
height="1052.3622047"
|
||||
id="svg2"
|
||||
version="1.1"
|
||||
inkscape:version="0.48.3.1 r9886"
|
||||
sodipodi:docname="services_detail.svg"
|
||||
inkscape:export-filename="/usr/local/google/home/thockin/src/kubernetes/docs/services_overview.png"
|
||||
inkscape:export-xdpi="76.910004"
|
||||
inkscape:export-ydpi="76.910004">
|
||||
<defs
|
||||
id="defs4" />
|
||||
<sodipodi:namedview
|
||||
id="base"
|
||||
pagecolor="#ffffff"
|
||||
bordercolor="#666666"
|
||||
borderopacity="1.0"
|
||||
inkscape:pageopacity="0.0"
|
||||
inkscape:pageshadow="2"
|
||||
inkscape:zoom="0.99604166"
|
||||
inkscape:cx="436.19361"
|
||||
inkscape:cy="503.28586"
|
||||
inkscape:document-units="px"
|
||||
inkscape:current-layer="layer1"
|
||||
showgrid="false"
|
||||
inkscape:window-width="1228"
|
||||
inkscape:window-height="848"
|
||||
inkscape:window-x="364"
|
||||
inkscape:window-y="24"
|
||||
inkscape:window-maximized="0" />
|
||||
<metadata
|
||||
id="metadata7">
|
||||
<rdf:RDF>
|
||||
<cc:Work
|
||||
rdf:about="">
|
||||
<dc:format>image/svg+xml</dc:format>
|
||||
<dc:type
|
||||
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
|
||||
<dc:title />
|
||||
</cc:Work>
|
||||
</rdf:RDF>
|
||||
</metadata>
|
||||
<g
|
||||
inkscape:label="Layer 1"
|
||||
inkscape:groupmode="layer"
|
||||
id="layer1">
|
||||
<g
|
||||
transform="matrix(1,0,0,-1.1300076,-23.256225,1365.3668)"
|
||||
id="g4178-3-98">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.82215285;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3-7"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-8"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
id="g4324">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.99999976;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="M 340.43856,497.06486 C 238.47092,383.2788 238.47092,383.2788 238.47092,383.2788"
|
||||
id="path4174-3-2"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-9"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="matrix(0.74560707,-0.66638585,0.75302107,0.84254166,-563.80429,-49.094063)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(-1,0,0,1,718.68427,0.32076964)"
|
||||
id="g4324-8">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.99999976;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="M 340.43856,497.06486 C 238.47092,383.2788 238.47092,383.2788 238.47092,383.2788"
|
||||
id="path4174-3-2-7"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-9-3"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="matrix(0.74560707,-0.66638585,0.75302107,0.84254166,-563.80429,-49.094063)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(1,0,0,1.3566066,10.430689,-549.99231)"
|
||||
id="g4178-3-9">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.57569385;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3-8"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.57569408;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-5"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(1,0,0,0.83995083,5.8686441,145.11325)"
|
||||
id="g4178">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:3.27336383;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:3.27336407;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
id="g3937"
|
||||
transform="translate(-27.782873,191.54649)">
|
||||
<g
|
||||
transform="translate(0,6.5250001e-6)"
|
||||
id="g3868">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757"
|
||||
sodipodi:role="line">Backend Pod 1</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="g3868-7"
|
||||
transform="translate(246.07142,6.5250001e-6)">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-1"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861-9">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-3"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-5"
|
||||
sodipodi:role="line">Backend Pod 2</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855-6"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857-1"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859-9"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="g3868-3"
|
||||
transform="translate(492.14285,6.5250001e-6)">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-2"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861-3">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-5"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-2"
|
||||
sodipodi:role="line">Backend Pod 3</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855-4"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857-7"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859-7"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(-0.5569815,0.8305249,-0.93849945,-0.62939332,1043.1434,624.89979)"
|
||||
id="g4178-3-4">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.82215285;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3-9"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-1"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(1,0,0,1.1300076,19.868644,-230.41621)"
|
||||
id="g4178-3">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.82215285;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="translate(9.4642913,66)"
|
||||
id="g4090">
|
||||
<rect
|
||||
y="704.50507"
|
||||
x="221.78571"
|
||||
height="58.571419"
|
||||
width="224.28572"
|
||||
id="rect2985-4"
|
||||
style="fill:#f1cb85;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
|
||||
<g
|
||||
transform="translate(249.2817,652.74516)"
|
||||
id="g3861-6">
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="36.710861"
|
||||
y="91.845612"
|
||||
id="text3755-32"
|
||||
sodipodi:linespacing="125%"
|
||||
inkscape:transform-center-x="-70"
|
||||
inkscape:transform-center-y="-11.264"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan3757-9"
|
||||
x="36.710861"
|
||||
y="91.845612"
|
||||
style="font-size:32px;text-align:start;text-anchor:start">Client</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="translate(24.285715,159.42857)"
|
||||
id="g4114">
|
||||
<path
|
||||
inkscape:connector-curvature="0"
|
||||
style="fill:#ededed;fill-opacity:1;stroke:#000000;stroke-width:1.99999988;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 282.87054,438.5755 c -23.66935,0 -42.875,11.54365 -42.875,25.78345 0,1.69709 0.29232,3.36317 0.8125,4.96869 -5.77989,-1.60822 -12.0611,-2.49777 -18.65625,-2.49777 -28.00873,0 -50.71875,15.92203 -50.71875,35.58653 0,19.66449 22.71002,35.61339 50.71875,35.61339 9.72296,0 18.78316,-1.93319 26.5,-5.26412 10.70208,13.21239 35.10628,22.45308 63.5,22.45308 23.13948,0 43.60406,-6.13049 56.1875,-15.55064 12.16376,6.53313 29.85326,10.63567 49.53125,10.63567 36.68749,0 66.40625,-14.27678 66.40625,-31.90702 0,-17.63023 -29.71876,-31.93387 -66.40625,-31.93387 -0.61492,0 -1.23284,0.0189 -1.84375,0.0268 0.72778,-1.79609 1.125,-3.66107 1.125,-5.55955 0,-15.93503 -26.86291,-28.84524 -60,-28.84524 -12.3074,0 -23.75966,1.77775 -33.28125,4.8344 -5.31552,-10.60488 -21.63938,-18.34385 -41,-18.34385 z"
|
||||
id="path4096" />
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="270.39322"
|
||||
y="507.15195"
|
||||
id="text4108"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
x="270.39322"
|
||||
y="507.15195"
|
||||
id="tspan4112"
|
||||
style="font-size:22px">iptables</tspan></text>
|
||||
</g>
|
||||
<g
|
||||
transform="translate(167.67856,-111.42858)"
|
||||
id="g4168">
|
||||
<rect
|
||||
y="588.79077"
|
||||
x="50.714287"
|
||||
height="58.571419"
|
||||
width="250.00002"
|
||||
id="rect2985-4-0"
|
||||
style="fill:#b9f185;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
|
||||
<g
|
||||
transform="translate(58.491433,534.63087)"
|
||||
id="g3861-6-2">
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="36.710861"
|
||||
y="91.845612"
|
||||
id="text3755-32-8"
|
||||
sodipodi:linespacing="125%"
|
||||
inkscape:transform-center-x="-70"
|
||||
inkscape:transform-center-y="-11.264"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan3757-9-4"
|
||||
x="36.710861"
|
||||
y="91.845612"
|
||||
style="font-size:32px;text-align:start;text-anchor:start">kube-proxy</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="translate(-102.23193,-119.15421)"
|
||||
id="g4168-5">
|
||||
<g
|
||||
transform="translate(22.087429,-86.34177)"
|
||||
id="g4238">
|
||||
<rect
|
||||
style="fill:#edc1f8;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-4-0-6"
|
||||
width="191.76952"
|
||||
height="58.571419"
|
||||
x="51.869534"
|
||||
y="588.79077" />
|
||||
<g
|
||||
id="g3861-6-2-6"
|
||||
transform="translate(39.107429,534.26287)">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-32-8-8"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-9-4-1"
|
||||
sodipodi:role="line">apiserver</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="354.03052"
|
||||
y="752.17395"
|
||||
id="text4777"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan4779"
|
||||
x="354.03052"
|
||||
y="752.17395"
|
||||
style="font-size:22px">3) connect to 10.0.0.1:1234</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="381.81412"
|
||||
y="563.21899"
|
||||
id="text4777-1"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
x="381.81412"
|
||||
y="563.21899"
|
||||
style="font-size:22px"
|
||||
id="tspan4804">4) redirect to (random)</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
x="381.81412"
|
||||
y="590.71899"
|
||||
style="font-size:22px"
|
||||
id="tspan3060">proxy port</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="-11.495128"
|
||||
y="476.92422"
|
||||
id="text4777-1-3"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
x="-11.495128"
|
||||
y="476.92422"
|
||||
style="font-size:22px"
|
||||
id="tspan4804-8">1) watch Services </tspan><tspan
|
||||
sodipodi:role="line"
|
||||
x="-11.495128"
|
||||
y="504.42422"
|
||||
style="font-size:22px"
|
||||
id="tspan3056">and Endpoints</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="53.554245"
|
||||
y="557.18707"
|
||||
id="text4777-1-3-5"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
x="53.554245"
|
||||
y="557.18707"
|
||||
style="font-size:22px"
|
||||
id="tspan4804-8-5">2) open proxy port </tspan><tspan
|
||||
sodipodi:role="line"
|
||||
x="53.554245"
|
||||
y="584.68707"
|
||||
style="font-size:22px"
|
||||
id="tspan3058">and set portal rules</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="450.63913"
|
||||
y="442.09073"
|
||||
id="text4777-1-2"
|
||||
sodipodi:linespacing="125%"><tspan
|
||||
sodipodi:role="line"
|
||||
x="450.63913"
|
||||
y="442.09073"
|
||||
style="font-size:22px"
|
||||
id="tspan4804-9">5) proxy to a backend</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
x="450.63913"
|
||||
y="469.59073"
|
||||
style="font-size:22px"
|
||||
id="tspan3060-8" /></text>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 25 KiB |
BIN
docs/user-guide/services-overview.png
Normal file
After Width: | Height: | Size: 42 KiB |
417
docs/user-guide/services-overview.svg
Normal file
@@ -0,0 +1,417 @@
|
||||
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
|
||||
<!-- Created with Inkscape (http://www.inkscape.org/) -->
|
||||
|
||||
<svg
|
||||
xmlns:dc="http://purl.org/dc/elements/1.1/"
|
||||
xmlns:cc="http://creativecommons.org/ns#"
|
||||
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
|
||||
xmlns:svg="http://www.w3.org/2000/svg"
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
|
||||
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
|
||||
width="744.09448819"
|
||||
height="1052.3622047"
|
||||
id="svg2"
|
||||
version="1.1"
|
||||
inkscape:version="0.48.3.1 r9886"
|
||||
sodipodi:docname="services_overview.svg"
|
||||
inkscape:export-filename="/usr/local/google/home/thockin/src/kubernetes/docs/services_overview.png"
|
||||
inkscape:export-xdpi="76.910004"
|
||||
inkscape:export-ydpi="76.910004">
|
||||
<defs
|
||||
id="defs4" />
|
||||
<sodipodi:namedview
|
||||
id="base"
|
||||
pagecolor="#ffffff"
|
||||
bordercolor="#666666"
|
||||
borderopacity="1.0"
|
||||
inkscape:pageopacity="0.0"
|
||||
inkscape:pageshadow="2"
|
||||
inkscape:zoom="1.0318369"
|
||||
inkscape:cx="351.19865"
|
||||
inkscape:cy="624.90035"
|
||||
inkscape:document-units="px"
|
||||
inkscape:current-layer="g4090"
|
||||
showgrid="false"
|
||||
inkscape:window-width="1228"
|
||||
inkscape:window-height="848"
|
||||
inkscape:window-x="364"
|
||||
inkscape:window-y="24"
|
||||
inkscape:window-maximized="0" />
|
||||
<metadata
|
||||
id="metadata7">
|
||||
<rdf:RDF>
|
||||
<cc:Work
|
||||
rdf:about="">
|
||||
<dc:format>image/svg+xml</dc:format>
|
||||
<dc:type
|
||||
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
|
||||
<dc:title />
|
||||
</cc:Work>
|
||||
</rdf:RDF>
|
||||
</metadata>
|
||||
<g
|
||||
inkscape:label="Layer 1"
|
||||
inkscape:groupmode="layer"
|
||||
id="layer1">
|
||||
<g
|
||||
id="g4324">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.99999976;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="M 340.43856,497.06486 C 238.47092,383.2788 238.47092,383.2788 238.47092,383.2788"
|
||||
id="path4174-3-2"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-9"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="matrix(0.74560707,-0.66638585,0.75302107,0.84254166,-563.80429,-49.094063)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(-1,0,0,1,718.68427,0.32076964)"
|
||||
id="g4324-8">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.99999976;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="M 340.43856,497.06486 C 238.47092,383.2788 238.47092,383.2788 238.47092,383.2788"
|
||||
id="path4174-3-2-7"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-9-3"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="matrix(0.74560707,-0.66638585,0.75302107,0.84254166,-563.80429,-49.094063)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(1,0,0,1.3566066,10.430689,-549.99231)"
|
||||
id="g4178-3-9">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.57569385;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3-8"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.57569408;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-5"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
id="g3937"
|
||||
transform="translate(-27.782873,191.54649)">
|
||||
<g
|
||||
transform="translate(0,6.5250001e-6)"
|
||||
id="g3868">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757"
|
||||
sodipodi:role="line">Backend Pod 1</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="g3868-7"
|
||||
transform="translate(246.07142,6.5250001e-6)">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-1"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861-9">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-3"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-5"
|
||||
sodipodi:role="line">Backend Pod 2</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855-6"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857-1"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859-9"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="g3868-3"
|
||||
transform="translate(492.14285,6.5250001e-6)">
|
||||
<rect
|
||||
style="fill:#85bff1;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-2"
|
||||
width="224.28572"
|
||||
height="118.57142"
|
||||
x="30.000006"
|
||||
y="60.933609" />
|
||||
<g
|
||||
id="g3861-3">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-5"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-2"
|
||||
sodipodi:role="line">Backend Pod 3</tspan></text>
|
||||
<text
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3855-4"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:24px"
|
||||
y="130.93361"
|
||||
x="37.14286"
|
||||
id="tspan3857-7"
|
||||
sodipodi:role="line">labels: app=MyApp</tspan><tspan
|
||||
id="tspan3859-7"
|
||||
style="font-size:24px"
|
||||
y="160.93361"
|
||||
x="37.14286"
|
||||
sodipodi:role="line">port: 9376</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(-0.5569815,0.8305249,-0.93849945,-0.62939332,1043.1434,624.89979)"
|
||||
id="g4178-3-4">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.82215285;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3-9"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9-1"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="matrix(1,0,0,1.1300076,5.8686441,-230.41621)"
|
||||
id="g4178-3">
|
||||
<path
|
||||
style="fill:none;stroke:#000000;stroke-width:2.82215285;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
d="m 337.14286,757.95172 c 0,-71.30383 0,-71.30383 0,-71.30383"
|
||||
id="path4174-3"
|
||||
inkscape:connector-curvature="0" />
|
||||
<path
|
||||
sodipodi:type="star"
|
||||
style="fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:2.82215309;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="path4176-9"
|
||||
sodipodi:sides="3"
|
||||
sodipodi:cx="308.85715"
|
||||
sodipodi:cy="753.79077"
|
||||
sodipodi:r1="10"
|
||||
sodipodi:r2="5"
|
||||
sodipodi:arg1="2.6179939"
|
||||
sodipodi:arg2="3.6651914"
|
||||
inkscape:flatsided="true"
|
||||
inkscape:rounded="0"
|
||||
inkscape:randomized="0"
|
||||
d="m 300.19689,758.79077 8.66026,-15 8.66025,15 z"
|
||||
transform="translate(28.571429,-62.857143)"
|
||||
inkscape:transform-center-y="-2.5" />
|
||||
</g>
|
||||
<g
|
||||
transform="translate(11.472239,-104.6279)"
|
||||
id="g4090">
|
||||
<rect
|
||||
y="704.50507"
|
||||
x="221.78571"
|
||||
height="58.571419"
|
||||
width="224.28572"
|
||||
id="rect2985-4"
|
||||
style="fill:#f1cb85;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
|
||||
<g
|
||||
transform="translate(217.6177,652.82516)"
|
||||
id="g3861-6">
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="67.574867"
|
||||
y="91.765617"
|
||||
id="text3755-32"
|
||||
sodipodi:linespacing="125%"
|
||||
inkscape:transform-center-x="-70"
|
||||
inkscape:transform-center-y="-11.264"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan3757-9"
|
||||
x="67.574867"
|
||||
y="91.765617"
|
||||
style="font-size:32px;text-align:start;text-anchor:start">Client </tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="translate(167.67856,-111.42858)"
|
||||
id="g4168">
|
||||
<rect
|
||||
y="588.79077"
|
||||
x="50.714287"
|
||||
height="58.571419"
|
||||
width="250.00002"
|
||||
id="rect2985-4-0"
|
||||
style="fill:#b9f185;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
|
||||
<g
|
||||
transform="translate(34.747433,534.26287)"
|
||||
id="g3861-6-2">
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
x="60.454861"
|
||||
y="92.213608"
|
||||
id="text3755-32-8"
|
||||
sodipodi:linespacing="125%"
|
||||
inkscape:transform-center-x="-70"
|
||||
inkscape:transform-center-y="-11.264"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan3757-9-4"
|
||||
x="60.454861"
|
||||
y="92.213608"
|
||||
style="font-size:32px;text-align:start;text-anchor:start">kube-proxy</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
transform="translate(-102.23193,-119.15421)"
|
||||
id="g4168-5">
|
||||
<g
|
||||
transform="translate(22.087429,-86.34177)"
|
||||
id="g4238">
|
||||
<rect
|
||||
style="fill:#edc1f8;fill-opacity:1;stroke:#000000;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
|
||||
id="rect2985-4-0-6"
|
||||
width="191.76952"
|
||||
height="58.571419"
|
||||
x="51.869534"
|
||||
y="588.79077" />
|
||||
<g
|
||||
id="g3861-6-2-6"
|
||||
transform="translate(39.107429,534.26287)">
|
||||
<text
|
||||
inkscape:transform-center-y="-11.264"
|
||||
inkscape:transform-center-x="-70"
|
||||
sodipodi:linespacing="125%"
|
||||
id="text3755-32-8-8"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Ubuntu Mono;-inkscape-font-specification:Ubuntu Mono"
|
||||
xml:space="preserve"><tspan
|
||||
style="font-size:32px;text-align:start;text-anchor:start"
|
||||
y="91.845612"
|
||||
x="36.710861"
|
||||
id="tspan3757-9-4-1"
|
||||
sodipodi:role="line">apiserver</tspan></text>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 17 KiB |
512
docs/user-guide/services.md
Normal file
@@ -0,0 +1,512 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Services in Kubernetes
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
- [Services in Kubernetes](#services-in-kubernetes)
|
||||
- [Overview](#overview)
|
||||
- [Defining a service](#defining-a-service)
|
||||
- [Services without selectors](#services-without-selectors)
|
||||
- [Virtual IPs and service proxies](#virtual-ips-and-service-proxies)
|
||||
- [Multi-Port Services](#multi-port-services)
|
||||
- [Choosing your own IP address](#choosing-your-own-ip-address)
|
||||
- [Why not use round-robin DNS?](#why-not-use-round-robin-dns?)
|
||||
- [Discovering services](#discovering-services)
|
||||
- [Environment variables](#environment-variables)
|
||||
- [DNS](#dns)
|
||||
- [Headless services](#headless-services)
|
||||
- [<a name="external"></a>External services](#<a-name="external"></a>external-services)
|
||||
- [Type = NodePort](#type-=-nodeport)
|
||||
- [Type = LoadBalancer](#type-=-loadbalancer)
|
||||
- [Shortcomings](#shortcomings)
|
||||
- [Future work](#future-work)
|
||||
- [The gory details of virtual IPs](#the-gory-details-of-virtual-ips)
|
||||
- [Avoiding collisions](#avoiding-collisions)
|
||||
- [IPs and VIPs](#ips-and-vips)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Overview
|
||||
|
||||
Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they
|
||||
are not resurrected. [`ReplicationControllers`](replication-controller.md) in
|
||||
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
|
||||
or when doing rolling updates). While each `Pod` gets its own IP address, even
|
||||
those IP addresses cannot be relied upon to be stable over time. This leads to
|
||||
a problem: if some set of `Pods` (let's call them backends) provides
|
||||
functionality to other `Pods` (let's call them frontends) inside the Kubernetes
|
||||
cluster, how do those frontends find out and keep track of which backends are
|
||||
in that set?
|
||||
|
||||
Enter `Services`.
|
||||
|
||||
A Kubernetes `Service` is an abstraction which defines a logical set of `Pods`
|
||||
and a policy by which to access them - sometimes called a micro-service. The
|
||||
set of `Pods` targeted by a `Service` is (usually) determined by a [`Label
|
||||
Selector`](labels.md#label-selectors) (see below for why you might want a
|
||||
`Service` without a selector).
|
||||
|
||||
As an example, consider an image-processing backend which is running with 3
|
||||
replicas. Those replicas are fungible - frontends do not care which backend
|
||||
they use. While the actual `Pods` that compose the backend set may change, the
|
||||
frontend clients should not need to be aware of that or keep track of the list
|
||||
of backends themselves. The `Service` abstraction enables this decoupling.
|
||||
|
||||
For Kubernetes-native applications, Kubernetes offers a simple `Endpoints` API
|
||||
that is updated whenever the set of `Pods` in a `Service` changes. For
|
||||
non-native applications, Kubernetes offers a virtual-IP-based bridge to Services
|
||||
which redirects to the backend `Pods`.
|
||||
|
||||
## Defining a service
|
||||
|
||||
A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the
|
||||
REST objects, a `Service` definition can be POSTed to the apiserver to create a
|
||||
new instance. For example, suppose you have a set of `Pods` that each expose
|
||||
port 9376 and carry a label "app=MyApp".
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "my-service"
|
||||
},
|
||||
"spec": {
|
||||
"selector": {
|
||||
"app": "MyApp"
|
||||
},
|
||||
"ports": [
|
||||
{
|
||||
"protocol": "TCP",
|
||||
"port": 80,
|
||||
"targetPort": 9376
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This specification will create a new `Service` object named "my-service" which
|
||||
targets TCP port 9376 on any `Pod` with the "app=MyApp" label. This `Service`
|
||||
will also be assigned an IP address (sometimes called the "cluster IP"), which
|
||||
is used by the service proxies (see below). The `Service`'s selector will be
|
||||
evaluated continuously and the results will be POSTed to an `Endpoints` object
|
||||
also named "my-service".
|
||||
|
||||
Note that a `Service` can map an incoming port to any `targetPort`. By default
|
||||
the `targetPort` will be set to the same value as the `port` field. Perhaps
|
||||
more interesting is that `targetPort` can be a string, referring to the name of
|
||||
a port in the backend `Pods`. The actual port number assigned to that name can
|
||||
be different in each backend `Pod`. This offers a lot of flexibility for
|
||||
deploying and evolving your `Services`. For example, you can change the port
|
||||
number that pods expose in the next version of your backend software, without
|
||||
breaking clients.
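
As an illustration of a named target port, the sketch below (the port name `web-port` is hypothetical) points the `Service` at a name rather than a number; each backend `Pod` must then declare a `containerPort` with that name:

```bash
# A hypothetical variant of the service above whose targetPort is a port
# *name*; the matching pods declare a containerPort named "web-port".
kubectl create -f - <<EOF
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "my-service" },
    "spec": {
        "selector": { "app": "MyApp" },
        "ports": [
            { "protocol": "TCP", "port": 80, "targetPort": "web-port" }
        ]
    }
}
EOF
```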
|
||||
|
||||
Kubernetes `Services` support `TCP` and `UDP` for protocols. The default
|
||||
is `TCP`.
|
||||
|
||||
### Services without selectors
|
||||
|
||||
Services generally abstract access to Kubernetes `Pods`, but they can also
|
||||
abstract other kinds of backends. For example:
|
||||
|
||||
* You want to have an external database cluster in production, but in test
|
||||
you use your own databases.
|
||||
* You want to point your service to a service in another
|
||||
[`Namespace`](namespaces.md) or on another cluster.
|
||||
* You are migrating your workload to Kubernetes and some of your backends run
|
||||
outside of Kubernetes.
|
||||
|
||||
In any of these scenarios you can define a service without a selector:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "my-service"
|
||||
},
|
||||
"spec": {
|
||||
"ports": [
|
||||
{
|
||||
"protocol": "TCP",
|
||||
"port": 80,
|
||||
"targetPort": 9376
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Because this has no selector, the corresponding `Endpoints` object will not be
|
||||
created. You can manually map the service to your own specific endpoints:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Endpoints",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "my-service"
|
||||
},
|
||||
"subsets": [
|
||||
{
|
||||
"addresses": [
|
||||
{ "IP": "1.2.3.4" }
|
||||
],
|
||||
"ports": [
|
||||
{ "port": 80 }
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Accessing a `Service` without a selector works the same as if it had a selector.
The traffic will be routed to endpoints defined by the user (`1.2.3.4:80` in
this example).
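
A minimal sketch of wiring this up, assuming the two JSON snippets above are saved as `my-service.json` and `my-endpoints.json` (hypothetical file names):

```bash
kubectl create -f my-service.json
kubectl create -f my-endpoints.json
# The service should now show the manually supplied endpoint (1.2.3.4:80).
kubectl describe service my-service
```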
|
||||
|
||||
## Virtual IPs and service proxies
|
||||
|
||||
Every node in a Kubernetes cluster runs a `kube-proxy`. This application
|
||||
watches the Kubernetes master for the addition and removal of `Service`
|
||||
and `Endpoints` objects. For each `Service` it opens a port (randomly chosen)
|
||||
on the local node. Any connections made to that port will be proxied to one of
|
||||
the corresponding backend `Pods`. Which backend to use is decided based on the
|
||||
`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which
|
||||
capture traffic to the `Service`'s cluster IP (which is virtual) and `Port` and
|
||||
redirects that traffic to the previously described port.
|
||||
|
||||
The net result is that any traffic bound for the `Service` is proxied to an
|
||||
appropriate backend without the clients knowing anything about Kubernetes or
|
||||
`Services` or `Pods`.
|
||||
|
||||

|
||||
|
||||
By default, the choice of backend is random. Client-IP based session affinity
|
||||
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
|
||||
default is `"None"`).
|
||||
|
||||
As of Kubernetes 1.0, `Services` are a "layer 4" (TCP/UDP over IP) construct. We do not
yet have a concept of "layer 7" (HTTP) services.
|
||||
|
||||
## Multi-Port Services
|
||||
|
||||
Many `Services` need to expose more than one port. For this case, Kubernetes
|
||||
supports multiple port definitions on a `Service` object. When using multiple
|
||||
ports you must give all of your ports names, so that endpoints can be
|
||||
disambiguated. For example:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "my-service"
|
||||
},
|
||||
"spec": {
|
||||
"selector": {
|
||||
"app": "MyApp"
|
||||
},
|
||||
"ports": [
|
||||
{
|
||||
"name": "http",
|
||||
"protocol": "TCP",
|
||||
"port": 80,
|
||||
"targetPort": 9376
|
||||
},
|
||||
{
|
||||
"name": "https",
|
||||
"protocol": "TCP",
|
||||
"port": 443,
|
||||
"targetPort": 9377
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Choosing your own IP address
|
||||
|
||||
You can specify your own cluster IP address as part of a `Service` creation
request. To do this, set the `spec.clusterIP` field. You might do this, for
example, if you already have an existing DNS entry that you wish to replace, or
if you have legacy systems that are configured for a specific IP address and
are difficult to re-configure. The IP address must be a valid IP address within
the `service-cluster-ip-range` CIDR range that is specified by flag to the API
server. If the IP address value is invalid, the apiserver returns a 422 HTTP
status code to indicate that the value is invalid.
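
A minimal sketch of such a request (the address below is purely illustrative and must fall inside your cluster's configured range):

```bash
# Illustrative only: 10.0.171.240 must lie inside the cluster's
# service-cluster-ip-range, or the apiserver rejects the request with a 422.
kubectl create -f - <<EOF
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "my-service" },
    "spec": {
        "clusterIP": "10.0.171.240",
        "selector": { "app": "MyApp" },
        "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 9376 } ]
    }
}
EOF
```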
|
||||
|
||||
### Why not use round-robin DNS?
|
||||
|
||||
A question that pops up every now and then is why we do all this stuff with
|
||||
virtual IPs rather than just use standard round-robin DNS. There are a few
|
||||
reasons:
|
||||
|
||||
* There is a long history of DNS libraries not respecting DNS TTLs and
|
||||
caching the results of name lookups.
|
||||
* Many apps do DNS lookups once and cache the results.
|
||||
* Even if apps and libraries did proper re-resolution, the load of every
|
||||
client re-resolving DNS over and over would be difficult to manage.
|
||||
|
||||
We try to discourage users from doing things that hurt themselves. That said,
|
||||
if enough people ask for this, we may implement it as an alternative.
|
||||
|
||||
## Discovering services
|
||||
|
||||
Kubernetes supports 2 primary modes of finding a `Service` - environment
|
||||
variables and DNS.
|
||||
|
||||
### Environment variables
|
||||
|
||||
When a `Pod` is run on a `Node`, the kubelet adds a set of environment variables
|
||||
for each active `Service`. It supports both [Docker links
|
||||
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
|
||||
[makeLinkVariables](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubelet/envvars/envvars.go#L49))
|
||||
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
|
||||
where the Service name is upper-cased and dashes are converted to underscores.
|
||||
|
||||
For example, the Service "redis-master" which exposes TCP port 6379 and has been
|
||||
allocated cluster IP address 10.0.0.11 produces the following environment
|
||||
variables:
|
||||
|
||||
```
|
||||
REDIS_MASTER_SERVICE_HOST=10.0.0.11
|
||||
REDIS_MASTER_SERVICE_PORT=6379
|
||||
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
|
||||
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
|
||||
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
|
||||
REDIS_MASTER_PORT_6379_TCP_PORT=6379
|
||||
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
|
||||
```
|
||||
|
||||
*This does imply an ordering requirement* - any `Service` that a `Pod` wants to
|
||||
access must be created before the `Pod` itself, or else the environment
|
||||
variables will not be populated. DNS does not have this restriction.
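
To see these variables from inside the cluster, something like the following works (a minimal sketch; `my-pod` is a hypothetical pod name, so substitute one reported by `kubectl get pods`):

```bash
# Print the injected service-discovery variables inside a running pod.
kubectl exec my-pod -- sh -c 'env | grep REDIS_MASTER'
```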
|
||||
|
||||
### DNS
|
||||
|
||||
An optional (though strongly recommended) [cluster
|
||||
add-on](../cluster/addons/README.md) is a DNS server. The
|
||||
DNS server watches the Kubernetes API for new `Services` and creates a set of
|
||||
DNS records for each. If DNS has been enabled throughout the cluster then all
|
||||
`Pods` should be able to do name resolution of `Services` automatically.
|
||||
|
||||
For example, if you have a `Service` called "my-service" in Kubernetes
|
||||
`Namespace` "my-ns" a DNS record for "my-service.my-ns" is created. `Pods`
|
||||
which exist in the "my-ns" `Namespace` should be able to find it by simply doing
|
||||
a name lookup for "my-service". `Pods` which exist in other `Namespaces` must
|
||||
qualify the name as "my-service.my-ns". The result of these name lookups is the
|
||||
cluster IP.
|
||||
|
||||
Kubernetes also supports DNS SRV (service) records for named ports. If the
|
||||
"my-service.my-ns" `Service` has a port named "http" with protocol `TCP`, you
|
||||
can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port
|
||||
number for "http".
|
||||
|
||||
## Headless services
|
||||
|
||||
Sometimes you don't need or want load-balancing and a single service IP. In
|
||||
this case, you can create "headless" services by specifying `"None"` for the
|
||||
cluster IP (`spec.clusterIP`).
|
||||
|
||||
For such `Services`, a cluster IP is not allocated. DNS is configured to return
|
||||
multiple A records (addresses) for the `Service` name, which point directly to
|
||||
the `Pods` backing the `Service`. Additionally, the kube proxy does not handle
|
||||
these services and there is no load balancing or proxying done by the platform
|
||||
for them. The endpoints controller will still create `Endpoints` records in
|
||||
the API.
|
||||
|
||||
This option allows developers to reduce coupling to the Kubernetes system, if
|
||||
they desire, but leaves them freedom to do discovery in their own way.
|
||||
Applications can still use a self-registration pattern and adapters for other
|
||||
discovery systems could easily be built upon this API.
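
A minimal sketch of a headless service (the name `my-headless-service` is used only for this example):

```bash
# clusterIP "None" marks the service headless: DNS returns the backing
# pod IPs directly and no virtual IP is allocated.
kubectl create -f - <<EOF
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "my-headless-service" },
    "spec": {
        "clusterIP": "None",
        "selector": { "app": "MyApp" },
        "ports": [ { "port": 80, "targetPort": 9376 } ]
    }
}
EOF
```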
|
||||
|
||||
##<a name="external"></a>External services
|
||||
|
||||
For some parts of your application (e.g. frontends) you may want to expose a
|
||||
Service onto an external (outside of your cluster, maybe public internet) IP
|
||||
address. Kubernetes supports two ways of doing this: `NodePort`s and
|
||||
`LoadBalancer`s.
|
||||
|
||||
Every `Service` has a `type` field which defines how the `Service` can be
|
||||
accessed. Valid values for this field are:
|
||||
|
||||
* `ClusterIP`: use a cluster-internal IP only - this is the default and is
|
||||
discussed above
|
||||
* `NodePort`: use a cluster IP, but also expose the service on a port on each
|
||||
node of the cluster (the same port on each node)
|
||||
* `LoadBalancer`: use a ClusterIP and a NodePort, but also ask the cloud
|
||||
provider for a load balancer which forwards to the `Service`
|
||||
|
||||
Note that while `NodePort`s can be TCP or UDP, `LoadBalancer`s only support TCP
|
||||
as of Kubernetes 1.0.
|
||||
|
||||
### Type = NodePort
|
||||
|
||||
If you set the `type` field to `"NodePort"`, the Kubernetes master will
|
||||
allocate a port from a flag-configured range (default: 30000-32767), and each
|
||||
node will proxy that port (the same port number on every node) into your `Service`.
|
||||
That port will be reported in your `Service`'s `spec.ports[*].nodePort` field.
|
||||
|
||||
If you want a specific port number, you can specify a value in the `nodePort`
|
||||
field, and the system will allocate you that port or else the API transaction
|
||||
will fail. The value you specify must be in the configured range for node
|
||||
ports.
|
||||
|
||||
This gives developers the freedom to set up their own load balancers, to
|
||||
configure cloud environments that are not fully supported by Kubernetes, or
|
||||
even to just expose one or more nodes' IPs directly.
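
As a quick sketch of what this looks like in practice (using, for illustration, the `my-nginx` replication controller from the quickstart docs; `<node-ip>` and `<allocated-node-port>` are placeholders):

```bash
kubectl expose rc my-nginx --port=80 --type=NodePort
# Find the port that was allocated from the node-port range:
kubectl get service my-nginx -o yaml | grep nodePort
# From outside the cluster, firewall permitting:
curl http://<node-ip>:<allocated-node-port>/
```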
|
||||
|
||||
### Type = LoadBalancer
|
||||
|
||||
On cloud providers which support external load balancers, setting the `type`
|
||||
field to `"LoadBalancer"` will provision a load balancer for your `Service`.
|
||||
The actual creation of the load balancer happens asynchronously, and
|
||||
information about the provisioned balancer will be published in the `Service`'s
|
||||
`status.loadBalancer` field. For example:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "my-service"
|
||||
},
|
||||
"spec": {
|
||||
"selector": {
|
||||
"app": "MyApp"
|
||||
},
|
||||
"ports": [
|
||||
{
|
||||
"protocol": "TCP",
|
||||
"port": 80,
|
||||
"targetPort": 9376,
|
||||
"nodePort": 30061
|
||||
}
|
||||
],
|
||||
"clusterIP": "10.0.171.239",
|
||||
"type": "LoadBalancer"
|
||||
},
|
||||
"status": {
|
||||
"loadBalancer": {
|
||||
"ingress": [
|
||||
{
|
||||
"ip": "146.148.47.155"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Traffic from the external load balancer will be directed at the backend `Pods`,
|
||||
though exactly how that works depends on the cloud provider.
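
Once the cloud provider has finished provisioning, the ingress IP shows up in the service status and can be used directly; a minimal check looks like:

```bash
# Wait for status.loadBalancer.ingress to be populated, then hit it.
kubectl get service my-service -o yaml | grep -A 3 loadBalancer
curl http://146.148.47.155/   # the ingress IP reported in the example above
```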
|
||||
|
||||
## Shortcomings
|
||||
|
||||
We expect that using iptables and userspace proxies for VIPs will work at
|
||||
small to medium scale, but may not scale to very large clusters with thousands
|
||||
of Services. See [the original design proposal for
|
||||
portals](https://github.com/GoogleCloudPlatform/kubernetes/issues/1107) for more
|
||||
details.
|
||||
|
||||
Using the kube-proxy obscures the source-IP of a packet accessing a `Service`.
|
||||
This makes some kinds of firewalling impossible.
|
||||
|
||||
LoadBalancers only support TCP, not UDP.
|
||||
|
||||
The `Type` field is designed as nested functionality - each level adds to the
|
||||
previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
|
||||
not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
|
||||
but the current API requires it.
|
||||
|
||||
## Future work
|
||||
|
||||
In the future we envision that the proxy policy can become more nuanced than
|
||||
simple round robin balancing, for example master-elected or sharded. We also
|
||||
envision that some `Services` will have "real" load balancers, in which case the
|
||||
VIP will simply transport the packets there.
|
||||
|
||||
There's a
|
||||
[proposal](https://github.com/GoogleCloudPlatform/kubernetes/issues/3760) to
|
||||
eliminate userspace proxying in favor of doing it all in iptables. This should
|
||||
perform better and fix the source-IP obfuscation, though is less flexible than
|
||||
arbitrary userspace code.
|
||||
|
||||
We intend to have first-class support for L7 (HTTP) `Services`.
|
||||
|
||||
We intend to have more flexible ingress modes for `Services` which encompass
|
||||
the current `ClusterIP`, `NodePort`, and `LoadBalancer` modes and more.
|
||||
|
||||
## The gory details of virtual IPs
|
||||
|
||||
The previous information should be sufficient for many people who just want to
|
||||
use `Services`. However, there is a lot going on behind the scenes that may be
|
||||
worth understanding.
|
||||
|
||||
### Avoiding collisions
|
||||
|
||||
One of the primary philosophies of Kubernetes is that users should not be
|
||||
exposed to situations that could cause their actions to fail through no fault
|
||||
of their own. In this situation, we are looking at network ports - users
|
||||
should not have to choose a port number if that choice might collide with
|
||||
another user. That is an isolation failure.
|
||||
|
||||
In order to allow users to choose a port number for their `Services`, we must
|
||||
ensure that no two `Services` can collide. We do that by allocating each
|
||||
`Service` its own IP address.
|
||||
|
||||
To ensure each service receives a unique IP, an internal allocator atomically
updates a global allocation map in etcd prior to creating each service. The map
object must exist in the registry for services to get IPs, otherwise creations
will fail with a message indicating an IP could not be allocated. A background
controller is responsible for creating that map (to migrate from older versions
of Kubernetes that used in-memory locking) as well as checking for invalid
assignments due to administrator intervention and cleaning up any IPs
that were allocated but which no service currently uses.
|
||||
|
||||
### IPs and VIPs
|
||||
|
||||
Unlike `Pod` IP addresses, which actually route to a fixed destination,
|
||||
`Service` IPs are not actually answered by a single host. Instead, we use
|
||||
`iptables` (packet processing logic in Linux) to define virtual IP addresses
|
||||
which are transparently redirected as needed. When clients connect to the
|
||||
VIP, their traffic is automatically transported to an appropriate endpoint.
|
||||
The environment variables and DNS for `Services` are actually populated in
|
||||
terms of the `Service`'s VIP and port.
|
||||
|
||||
As an example, consider the image processing application described above.
|
||||
When the backend `Service` is created, the Kubernetes master assigns a virtual
|
||||
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
|
||||
`Service` is observed by all of the `kube-proxy` instances in the cluster.
|
||||
When a proxy sees a new `Service`, it opens a new random port, establishes an
|
||||
iptables redirect from the VIP to this new port, and starts accepting
|
||||
connections on it.
|
||||
|
||||
When a client connects to the VIP the iptables rule kicks in, and redirects
|
||||
the packets to the `Service proxy`'s own port. The `Service proxy` chooses a
|
||||
backend, and starts proxying traffic from the client to the backend.
|
||||
|
||||
This means that `Service` owners can choose any port they want without risk of
|
||||
collision. Clients can simply connect to an IP and port, without being aware
|
||||
of which `Pods` they are actually accessing.
|
||||
|
||||

|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
124
docs/user-guide/sharing-clusters.md
Normal file
@@ -0,0 +1,124 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Sharing Cluster Access
|
||||
|
||||
Client access to a running kubernetes cluster can be shared by copying
|
||||
the `kubectl` client config bundle ([.kubeconfig](kubeconfig-file.md)).
|
||||
This config bundle lives in `$HOME/.kube/config`, and is generated
|
||||
by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
|
||||
|
||||
**1. Create a cluster**
|
||||
```bash
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
**2. Copy `kubeconfig` to new host**
|
||||
```bash
|
||||
scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
|
||||
```
|
||||
|
||||
**3. On new host, make copied `config` available to `kubectl`**
|
||||
|
||||
* Option A: copy to default location
|
||||
```bash
|
||||
mv /path/to/.kube/config $HOME/.kube/config
|
||||
```
|
||||
* Option B: copy to working directory (from which kubectl is run)
|
||||
```bash
|
||||
mv /path/to/.kube/config $PWD
|
||||
```
|
||||
* Option C: manually pass `kubeconfig` location to `kubectl`
|
||||
```bash
|
||||
# via environment variable
|
||||
export KUBECONFIG=/path/to/.kube/config
|
||||
|
||||
# via commandline flag
|
||||
kubectl ... --kubeconfig=/path/to/.kube/config
|
||||
```
|
||||
|
||||
## Manually Generating `kubeconfig`
|
||||
|
||||
`kubeconfig` is generated by `kube-up` but you can generate your own
|
||||
using (any desired subset of) the following commands.
|
||||
|
||||
```bash
# create kubeconfig entry
kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# Or, if TLS is not needed, replace --certificate-authority and --embed-certs
# above with:
#   --insecure-skip-tls-verify=true

# create user entry
# (use either the bearer token, generated on the kube master, or
#  username/password -- not both)
kubectl config set-credentials $USER_NICK \
    --token=$token \
    --username=$username \
    --password=$password \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kubeconfig

# create context entry
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICK --user=$USER_NICK
```
|
||||
Notes:
|
||||
* The `--embed-certs` flag is needed to generate a standalone
|
||||
`kubeconfig`, that will work as-is on another host.
|
||||
* `--kubeconfig` is both the preferred file to load config from and the file to
save config to. In the above commands the `--kubeconfig` file could be
omitted if you first run
|
||||
```bash
|
||||
export KUBECONFIG=/path/to/standalone/.kube/config
|
||||
```
|
||||
* The ca_file, key_file, and cert_file referenced above are generated on the
|
||||
kube master at cluster turnup. They can be found on the master under
|
||||
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
|
||||
|
||||
For more details on `kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md),
|
||||
and/or run `kubectl config -h`.
|
||||
|
||||
## Merging `kubeconfig` Example
|
||||
|
||||
`kubectl` loads and merges config from the following locations (in order)
|
||||
|
||||
1. `--kubeconfig=path/to/.kube/config` commandline flag
|
||||
2. `KUBECONFIG=path/to/.kube/config` env variable
|
||||
3. `$PWD/.kubeconfig`
|
||||
4. `$HOME/.kube/config`
|
||||
|
||||
If you create clusters A, B on host1, and clusters C, D on host2, you can
|
||||
make all four clusters available on both hosts by running
|
||||
|
||||
```bash
|
||||
# on host2, copy host1's default kubeconfig, and merge it from env
|
||||
scp host1:/path/to/home1/.kube/config path/to/other/.kube/config
|
||||
|
||||
export KUBECONFIG=path/to/other/.kube/config
|
||||
|
||||
# on host1, copy host2's default kubeconfig and merge it from env
|
||||
scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
|
||||
|
||||
export KUBECONFIG=path/to/other/.kube/config
|
||||
```
|
||||
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](kubeconfig-file.md).
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
69
docs/user-guide/simple-nginx.md
Normal file
@@ -0,0 +1,69 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
## Running your first containers in Kubernetes
|
||||
|
||||
Ok, you've run one of the [getting started guides](../docs/getting-started-guides/) and you have
|
||||
successfully turned up a Kubernetes cluster. Now what? This guide will help you get oriented
|
||||
to Kubernetes and running your first containers on the cluster.
|
||||
|
||||
### Running a container (simple version)
|
||||
|
||||
From this point onwards, it is assumed that `kubectl` is on your path from one of the getting started guides.
|
||||
|
||||
The [`kubectl run`](../docs/user-guide/kubectl/kubectl_run.md) line below will create two [nginx](https://registry.hub.docker.com/_/nginx/) [pods](../docs/pods.md) listening on port 80. It will also create a [replication controller](../docs/replication-controller.md) named `my-nginx` to ensure that there are always two pods running.
|
||||
|
||||
```bash
|
||||
kubectl run my-nginx --image=nginx --replicas=2 --port=80
|
||||
```
|
||||
|
||||
Once the pods are created, you can list them to see what is up and running:
|
||||
```bash
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
You can also see the replication controller that was created:
|
||||
```bash
|
||||
kubectl get rc
|
||||
```
|
||||
|
||||
To stop the two replicated containers, stop the replication controller:
|
||||
```bash
|
||||
kubectl stop rc my-nginx
|
||||
```
|
||||
|
||||
### Exposing your pods to the internet
On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](../docs/services.md#external-services) for the pods. To do this, run:
|
||||
|
||||
```bash
|
||||
kubectl expose rc my-nginx --port=80 --type=LoadBalancer
|
||||
```
|
||||
|
||||
This should print the service that has been created, and map an external IP address to the service. Where to find this external IP address will depend on the environment you run in. For instance, for Google Compute Engine the external IP address is listed as part of the newly created service and can be retrieved by running
|
||||
|
||||
```bash
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a firewall to allow traffic on port 80.
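
On Google Compute Engine, for example, a firewall rule along these lines opens port 80 (the rule name here is hypothetical; pick one that fits your project, and scope it with target tags if you do not want it to apply to all instances):

```bash
# Hypothetical rule name; allows inbound TCP traffic on port 80.
gcloud compute firewall-rules create allow-nginx-http --allow=tcp:80
```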
|
||||
|
||||
### Next: Configuration files
|
||||
Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](simple-yaml.md)
|
||||
is given in a different document.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
100
docs/user-guide/simple-yaml.md
Normal file
@@ -0,0 +1,100 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
## Getting started with config files
In addition to the imperative-style commands described [elsewhere](simple-nginx.md), Kubernetes
supports declarative YAML or JSON configuration files. Oftentimes config files are preferable
to imperative commands, since they can be checked into version control and changes to the files
can be code reviewed, producing a more robust, reliable, and archivable system.
|
||||
|
||||
### Running a container from a pod configuration file
|
||||
|
||||
```bash
|
||||
cd kubernetes
|
||||
kubectl create -f pod.yaml
|
||||
```
|
||||
|
||||
Where pod.yaml contains something like:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
```
|
||||
|
||||
You can see your cluster's pods:
|
||||
|
||||
```bash
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
and delete the pod you just created:
|
||||
|
||||
```bash
|
||||
kubectl delete pods nginx
|
||||
```
|
||||
|
||||
### Running a replicated set of containers from a configuration file
|
||||
To run replicated containers, you need a [Replication Controller](../docs/replication-controller.md).
|
||||
A replication controller is responsible for ensuring that a specific number of pods exist in the
|
||||
cluster.
|
||||
|
||||
```bash
|
||||
cd kubernetes
|
||||
kubectl create -f replication.yaml
|
||||
```
|
||||
|
||||
Where ```replication.yaml``` contains:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
```
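
To confirm the controller is doing its job, a quick check such as the following works (a minimal sketch; the `-l` selector matches the `app: nginx` label in the config above):

```bash
kubectl get rc nginx               # desired vs. current replica count
kubectl get pods -l app=nginx      # the three pods the controller created
```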
|
||||
|
||||
To delete the replication controller (and the pods it created):
|
||||
```bash
|
||||
kubectl delete rc nginx
|
||||
```
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
58
docs/user-guide/ui.md
Normal file
@@ -0,0 +1,58 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
# Kubernetes User Interface
|
||||
Kubernetes has a web-based user interface that displays the current cluster state graphically.
|
||||
|
||||
## Accessing the UI
|
||||
By default, the Kubernetes UI is deployed as a cluster addon. To access it, visit `https://<kubernetes-master>/ui`, which redirects to `https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/`.
|
||||
|
||||
If you find that you're not able to access the UI, it may be because the kube-ui service has not been started on your cluster. In that case, you can start it manually with:
|
||||
```sh
|
||||
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
|
||||
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
|
||||
```
|
||||
Normally, this should be taken care of automatically by the [`kube-addons.sh`](../cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
|
||||
|
||||
## Using the UI
|
||||
The Kubernetes UI can be used to introspect your current cluster, such as checking how resources are used, or looking at error messages. You cannot, however, use the UI to modify your cluster.
|
||||
|
||||
### Node Resource Usage
|
||||
After accessing the Kubernetes UI, you'll see a homepage that dynamically lists all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file system usage.
|
||||

|
||||
|
||||
### Dashboard Views
|
||||
Click on the "Views" button in the top-right of the page to see other views available, which include: Explore, Pods, Nodes, Replication Controllers, Services, and Events.
|
||||
|
||||
#### Explore View
|
||||
The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
|
||||

|
||||
The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
|
||||

|
||||
You can also create filters by clicking on the down triangle of any listed resource instance and choosing which filters you want to add.
|
||||

|
||||
To see more details of each resource instance, simply click on it.
|
||||

|
||||
|
||||
### Other Views
|
||||
Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
|
||||

|
||||
|
||||
## More Information
|
||||
For more information, see the [Kubernetes UI development document](../www/README.md) in the www directory.
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
137
docs/user-guide/update-demo/README.md
Normal file
@@ -0,0 +1,137 @@
|
||||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
<!-- BEGIN STRIP_FOR_RELEASE -->
|
||||
|
||||
<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
|
||||
tree only. If you are using a released version of Kubernetes, you almost
|
||||
certainly want the docs that go with that version.</h1>
|
||||
|
||||
<strong>Documentation for specific releases can be found at
|
||||
[releases.k8s.io](http://releases.k8s.io).</strong>
|
||||
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
<!--
|
||||
Copyright 2014 Google Inc. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
||||
-->
|
||||
# Live update example
|
||||
This example demonstrates the usage of Kubernetes to perform a live update on a running group of [pods](../../docs/pods.md).
|
||||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/):
|
||||
|
||||
```bash
|
||||
$ cd kubernetes
|
||||
$ ./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
### Step One: Turn up the UX for the demo
|
||||
|
||||
You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly).
|
||||
This can sometimes spew output, so you may prefer to run it in a separate terminal. You have to run `kubectl proxy` in the root of the
Kubernetes repository; otherwise you will get "404 page not found" errors, as the paths will not match. You can find more information about `kubectl proxy`
[here](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_proxy.md).
|
||||
|
||||
```
|
||||
$ kubectl proxy --www=examples/update-demo/local/ &
|
||||
+ kubectl proxy --www=examples/update-demo/local/
|
||||
I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
|
||||
```
|
||||
|
||||
Now visit the [demo website](http://localhost:8001/static). You won't see much quite yet.
|
||||
|
||||
### Step Two: Run the replication controller
|
||||
Now we will turn up two replicas of an image. They all serve on internal port 80.
|
||||
|
||||
```bash
|
||||
$ kubectl create -f examples/update-demo/nautilus-rc.yaml
|
||||
```
|
||||
|
||||
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
|
||||
|
||||
### Step Three: Try scaling the replication controller
|
||||
|
||||
Now we will increase the number of replicas from two to four:
|
||||
|
||||
```bash
|
||||
$ kubectl scale rc update-demo-nautilus --replicas=4
|
||||
```
|
||||
|
||||
If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.
|
||||
|
||||
### Step Four: Update the docker image
|
||||
We will now update the docker image to serve a different image by doing a rolling update to a new Docker image.
|
||||
|
||||
```bash
|
||||
$ kubectl rolling-update update-demo-nautilus --update-period=10s -f examples/update-demo/kitten-rc.yaml
|
||||
```
|
||||
The rolling-update command in kubectl will do 2 things:
|
||||
|
||||
1. Create a new [replication controller](../../docs/replication-controller.md) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
|
||||
2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.
|
||||
|
||||
Watch the [demo website](http://localhost:8001/static/index.html), it will update one pod every 10 seconds until all of the pods have the new image.
|
||||
|
||||
### Step Five: Bring down the pods
|
||||
|
||||
```bash
|
||||
$ kubectl stop rc update-demo-kitten
|
||||
```
|
||||
|
||||
This first stops the replication controller by setting its target number of replicas to 0 and then deletes the controller.
|
||||
|
||||
### Step Six: Cleanup
|
||||
|
||||
To turn down a Kubernetes cluster:
|
||||
|
||||
```bash
|
||||
$ ./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
After you are done running this demo, make sure to kill the proxy that is still running in the background:
|
||||
|
||||
```bash
|
||||
$ jobs
|
||||
[1]+ Running ./kubectl proxy --www=local/ &
|
||||
$ kill %1
|
||||
[1]+ Terminated: 15 ./kubectl proxy --www=local/
|
||||
```
|
||||
|
||||
### Updating the Docker images
|
||||
|
||||
If you want to build your own docker images, you can set `$DOCKER_HUB_USER` to your Docker user id and run the included shell script. It can take a few minutes to download/upload stuff.
|
||||
|
||||
```bash
|
||||
$ export DOCKER_HUB_USER=my-docker-id
|
||||
$ ./examples/update-demo/build-images.sh
|
||||
```
|
||||
|
||||
To use your custom docker image in the above examples, you will need to change the image name in `examples/update-demo/nautilus-rc.yaml` and `examples/update-demo/kitten-rc.yaml`.
|
||||
|
||||
### Image Copyright
|
||||
|
||||
Note that the images included here are public domain.
|
||||
|
||||
* [kitten](http://commons.wikimedia.org/wiki/File:Kitten-stare.jpg)
|
||||
* [nautilus](http://commons.wikimedia.org/wiki/File:Nautilus_pompilius.jpg)
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
30
docs/user-guide/update-demo/build-images.sh
Executable file
@@ -0,0 +1,30 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Copyright 2014 The Kubernetes Authors All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# This script will build and push the images necessary for the demo.
|
||||
|
||||
set -o errexit
|
||||
set -o nounset
|
||||
set -o pipefail
|
||||
|
||||
DOCKER_HUB_USER=${DOCKER_HUB_USER:-kubernetes}
|
||||
|
||||
set -x
|
||||
|
||||
docker build -t "${DOCKER_HUB_USER}/update-demo:kitten" images/kitten
|
||||
docker build -t "${DOCKER_HUB_USER}/update-demo:nautilus" images/nautilus
|
||||
|
||||
docker push "${DOCKER_HUB_USER}/update-demo"
|
17
docs/user-guide/update-demo/images/kitten/Dockerfile
Normal file
@@ -0,0 +1,17 @@
|
||||
# Copyright 2014 Google Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
FROM kubernetes/test-webserver
|
||||
COPY html/kitten.jpg kitten.jpg
|
||||
COPY html/data.json data.json
|
3
docs/user-guide/update-demo/images/kitten/html/data.json
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"image": "kitten.jpg"
|
||||
}
|
BIN
docs/user-guide/update-demo/images/kitten/html/kitten.jpg
Normal file
After Width: | Height: | Size: 14 KiB |
17
docs/user-guide/update-demo/images/nautilus/Dockerfile
Normal file
@@ -0,0 +1,17 @@
|
||||
# Copyright 2014 Google Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
FROM kubernetes/test-webserver
|
||||
COPY html/nautilus.jpg nautilus.jpg
|
||||
COPY html/data.json data.json
|
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"image": "nautilus.jpg"
|
||||
}
|
BIN
docs/user-guide/update-demo/images/nautilus/html/nautilus.jpg
Normal file
After Width: | Height: | Size: 21 KiB |