Remove all docs which are moving to http://kubernetes.github.io

All .md files are now only pointers to their likely locations on the new
site.

All other files are untouched.
This commit is contained in:
Eric Paris 2016-03-04 12:49:17 -05:00
parent a20258efae
commit f334fc4179
134 changed files with 136 additions and 23433 deletions


@ -32,42 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Cluster Admin Guide
This file has moved to: http://kubernetes.github.io/docs/admin/README/
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md).
## Admin Guide Table of Contents
1. [Introduction](introduction.md)
1. [Components of a cluster](cluster-components.md)
1. [Cluster Management](cluster-management.md)
1. Kubernetes Master Components
1. [The kube-apiserver binary](kube-apiserver.md)
1. [Authorization](authorization.md)
1. [Authentication](authentication.md)
1. [Accessing the api](accessing-the-api.md)
1. [Admission Controllers](admission-controllers.md)
1. [Administrating Service Accounts](service-accounts-admin.md)
1. [Resource Quotas](resource-quota.md)
1. [The kube-scheduler binary](kube-scheduler.md)
1. [The kube-controller-manager binary](kube-controller-manager.md)
1. [Kubernetes Node Components](node.md)
1. [The kubelet binary](kubelet.md)
1. [Garbage Collection](garbage-collection.md)
1. [The kube-proxy binary](kube-proxy.md)
1. Cluster Addons
1. [DNS](dns.md)
1. [Networking](networking.md)
1. [OVS Networking](ovs-networking.md)
1. [Master <-> Node Communication](master-node-communication.md)
1. Example Configurations
1. [Multiple Clusters](multi-cluster.md)
1. [High Availability Clusters](high-availability.md)
1. [Large Clusters](cluster-large.md)
1. [Getting started from scratch](../getting-started-guides/scratch.md)
1. [Kubernetes's use of salt](salt.md)
1. [Troubleshooting](cluster-troubleshooting.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,72 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Configuring APIserver ports
This file has moved to: http://kubernetes.github.io/docs/admin/accessing-the-api/
This document describes what ports the Kubernetes apiserver
may serve on and how to reach them. The audience is
cluster administrators who want to customize their cluster
or understand the details.
Most questions about accessing the cluster are covered
in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
## Ports and IPs Served On
The Kubernetes API is served by the Kubernetes apiserver process. Typically,
there is one of these running on a single kubernetes-master node.
By default the Kubernetes apiserver serves HTTP on two ports (an example invocation follows this list):
1. Localhost Port
- serves HTTP
- default is port 8080, change with `--insecure-port` flag.
- default IP is localhost, change with `--insecure-bind-address` flag.
- no authentication or authorization checks in HTTP
- protected by the need to have access to the host
2. Secure Port
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- serves HTTPS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- uses token-file or client-certificate based [authentication](authentication.md).
- uses policy-based [authorization](authorization.md).
3. Removed: ReadOnly Port
- For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts.md) feature instead.
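Putting these flags together, a minimal sketch of an apiserver invocation might look like the following (the addresses, ports, and file paths are illustrative placeholders, not a recommended configuration):
```console
kube-apiserver \
  --insecure-bind-address=127.0.0.1 --insecure-port=8080 \
  --bind-address=0.0.0.0 --secure-port=6443 \
  --tls-cert-file=/srv/kubernetes/server.cert \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --token-auth-file=/srv/kubernetes/known_tokens.csv
```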
## Proxies and Firewall rules
Additionally, in some configurations there is a proxy (nginx) running
on the same machine as the apiserver process. The proxy serves HTTPS protected
by Basic Auth on port 443, and proxies to the apiserver on localhost:8080. In
these configurations the secure port is typically set to 6443.
A firewall rule is typically configured to allow external HTTPS access to port 443.
The above are defaults and reflect how Kubernetes is deployed to Google Compute Engine using
kube-up.sh. Other cloud providers may vary.
## Use Cases vs IP:Ports
There are three differently configured serving ports because there are a
variety of use cases:
1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
on a desktop machine. Currently, these access the Localhost Port via a proxy (nginx)
running on the `kubernetes-master` machine. The proxy can use cert-based
or token-based authentication.
2. Processes running in Containers on Kubernetes that need to read from
the apiserver. Currently, these can use a [service account](../user-guide/service-accounts.md).
3. Scheduler and Controller-manager processes, which need to do read-write
API operations. They can use service accounts to avoid the need to be co-located with the apiserver.
4. Kubelets, which need to do read-write API operations and are necessarily
on different machines than the apiserver. Kubelets use the Secure Port
to get their pods, to find the services that a pod can see, and to
write events. Credentials are distributed to kubelets at cluster
setup time. Kubelet and kube-proxy can use cert-based or token-based
authentication.
## Expected changes
- Policy will limit the actions kubelets can do via the authed port.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/accessing-the-api.md?pixel)]()


@ -32,170 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Admission Controllers
This file has moved to: http://kubernetes.github.io/docs/admin/admission-controllers/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Admission Controllers](#admission-controllers)
- [What are they?](#what-are-they)
- [Why do I need them?](#why-do-i-need-them)
- [How do I turn on an admission control plug-in?](#how-do-i-turn-on-an-admission-control-plug-in)
- [What does each plug-in do?](#what-does-each-plug-in-do)
- [AlwaysAdmit](#alwaysadmit)
- [AlwaysPullImages](#alwayspullimages)
- [AlwaysDeny](#alwaysdeny)
- [DenyExecOnPrivileged (deprecated)](#denyexeconprivileged-deprecated)
- [DenyEscalatingExec](#denyescalatingexec)
- [ServiceAccount](#serviceaccount)
- [SecurityContextDeny](#securitycontextdeny)
- [ResourceQuota](#resourcequota)
- [LimitRanger](#limitranger)
- [InitialResources (experimental)](#initialresources-experimental)
- [NamespaceExists (deprecated)](#namespaceexists-deprecated)
- [NamespaceAutoProvision (deprecated)](#namespaceautoprovision-deprecated)
- [NamespaceLifecycle](#namespacelifecycle)
- [Is there a recommended set of plug-ins to use?](#is-there-a-recommended-set-of-plug-ins-to-use)
<!-- END MUNGE: GENERATED_TOC -->
## What are they?
An admission control plug-in is a piece of code that intercepts requests to the Kubernetes
API server prior to persistence of the object, but after the request is authenticated
and authorized. The plug-in code is in the API server process
and must be compiled into the binary in order to be used at this time.
Each admission control plug-in is run in sequence before a request is accepted into the cluster. If
any of the plug-ins in the sequence reject the request, the entire request is rejected immediately
and an error is returned to the end-user.
Admission control plug-ins may mutate the incoming object in some cases to apply system configured
defaults. In addition, admission control plug-ins may mutate related resources as part of request
processing to do things like increment quota usage.
## Why do I need them?
Many advanced features in Kubernetes require an admission control plug-in to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission control plug-ins is an incomplete server and will not
support all the features you expect.
## How do I turn on an admission control plug-in?
The Kubernetes API server supports a flag, `admission-control`, that takes a comma-delimited,
ordered list of admission control choices to invoke prior to modifying objects in the cluster.
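For example, a hypothetical invocation enabling just two plug-ins, in order, might pass:
```
--admission-control=NamespaceLifecycle,ServiceAccount
```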
## What does each plug-in do?
### AlwaysAdmit
Use this plug-in by itself to pass all requests through.
### AlwaysPullImages
This plug-in modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this plug-in, once an image has been pulled to a
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this plug-in
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
### AlwaysDeny
Rejects all requests. Used for testing.
### DenyExecOnPrivileged (deprecated)
This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container.
If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec
commands in those containers, we strongly encourage enabling this plug-in.
This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
### DenyEscalatingExec
This plug-in will deny exec and attach commands to pods that run with escalated privileges that
allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
have access to the host PID namespace.
If your cluster supports containers that run with escalated privileges, and you want to
restrict the ability of end-users to exec commands in those containers, we strongly encourage
enabling this plug-in.
### ServiceAccount
This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts.md).
We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.
### SecurityContextDeny
This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.md) that defines options that were not available on the `Container`.
### ResourceQuota
This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](resourcequota/) for more details.
It is strongly encouraged that this plug-in be configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
### LimitRanger
This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the `default` namespace.
See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](limitrange/) for more details.
### InitialResources (experimental)
This plug-in observes pod creation requests. If a container omits compute resource requests and limits,
then the plug-in auto-populates a compute resource request based on historical usage of containers running the same image.
If there is not enough data to make a decision, the request is left unchanged.
When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.
See the [InitialResources proposal](../proposals/initial-resources.md) for more details.
### NamespaceExists (deprecated)
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and reject the request if the `Namespace` was not previously created. We strongly recommend running
this plug-in to ensure integrity of your data.
The functionality of this admission controller has been merged into `NamespaceLifecycle`.
### NamespaceAutoProvision (deprecated)
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and create a new `Namespace` if one does not already exist.
We strongly recommend `NamespaceLifecycle` over `NamespaceAutoProvision`.
### NamespaceLifecycle
This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it,
and ensures that requests in a non-existent `Namespace` are rejected.
A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.
## Is there a recommended set of plug-ins to use?
Yes.
For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):
```
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,138 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Authentication Plugins
This file has moved to: http://kubernetes.github.io/docs/admin/authentication/
Kubernetes uses client certificates, tokens, or HTTP basic auth to authenticate users for API calls.
**Client certificate authentication** is enabled by passing the `--client-ca-file=SOMEFILE`
option to apiserver. The referenced file must contain one or more certificate authorities
to use to validate client certificates presented to the apiserver. If a client certificate
is presented and verified, the common name of the subject is used as the user name for the
request.
**Token File** is enabled by passing the `--token-auth-file=SOMEFILE` option
to apiserver. Currently, tokens last indefinitely, and the token list cannot
be changed without restarting apiserver.
The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...`
and is a CSV file with a minimum of 3 columns: token, user name, and user UID, followed by
optional group names. Note that if you have more than one group, the column must be double-quoted, e.g.
```
token,user,uid,"group1,group2,group3"
```
When using token authentication from an http client the apiserver expects an `Authorization`
header with a value of `Bearer SOMETOKEN`.
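For example, a hypothetical request against the secure port with such a token might look like (the master address and token are placeholders):
```console
curl -k -H "Authorization: Bearer SOMETOKEN" https://${MASTER_IP}:6443/api
```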
**OpenID Connect ID Token** is enabled by passing the following options to the apiserver:
- `--oidc-issuer-url` (required) tells the apiserver where to connect to the OpenID provider. Only the HTTPS scheme is accepted.
- `--oidc-client-id` (required) is used by apiserver to verify the audience of the token.
A valid [ID token](http://openid.net/specs/openid-connect-core-1_0.html#IDToken) MUST have this
client-id in its `aud` claims.
- `--oidc-ca-file` (optional) is used by apiserver to establish and verify the secure connection
to the OpenID provider.
- `--oidc-username-claim` (optional, experimental) specifies which OpenID claim to use as the user name. By default, `sub`
will be used, which should be unique and immutable under the issuer's domain. Cluster administrators can
choose other claims, such as `email`, to use as the user name, but their uniqueness and immutability are not guaranteed.
- `--oidc-groups-claim` (optional, experimental) specifies the name of a custom OpenID Connect claim to read user groups from. The claim
value is expected to be an array of strings.
Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus further changes are possible.
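As a sketch, the OIDC flags might be combined like this (the issuer URL, client ID, and CA path are placeholders):
```console
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-ca-file=/srv/kubernetes/oidc-ca.pem \
  --oidc-username-claim=email
```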
Currently, the ID token will be obtained by some third-party app. This means the app and apiserver
MUST share the `--oidc-client-id`.
Like **Token File**, when using token authentication from an http client the apiserver expects
an `Authorization` header with a value of `Bearer SOMETOKEN`.
**Basic authentication** is enabled by passing the `--basic-auth-file=SOMEFILE`
option to apiserver. Currently, the basic auth credentials last indefinitely,
and the password cannot be changed without restarting apiserver. Note that basic
authentication is currently supported for convenience while we finish making the
more secure modes described above easier to use.
The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...`
and is a CSV file with 3 columns: password, user name, and user ID.
When using basic authentication from an http client, the apiserver expects an `Authorization` header
with a value of `Basic BASE64ENCODED(USER:PASSWORD)`.
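For example, a hypothetical request using basic auth (curl computes the base64-encoded header from `-u`; the user, password, and master address are placeholders):
```console
curl -k -u user:password https://${MASTER_IP}:6443/api
```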
**Keystone authentication** is enabled by passing the `--experimental-keystone-url=<AuthURL>`
option to the apiserver during startup. The plugin is implemented in
`plugin/pkg/auth/authenticator/request/keystone/keystone.go`.
For details on how to use keystone to manage projects and users, refer to the
[Keystone documentation](http://docs.openstack.org/developer/keystone/). Please note that
this plugin is still experimental, which means it is subject to change.
Please refer to the [discussion](https://github.com/kubernetes/kubernetes/pull/11798#issuecomment-129655212)
and the [blueprint](https://github.com/kubernetes/kubernetes/issues/11626) for more details.
## Plugin Development
We plan for the Kubernetes API server to issue tokens
after the user has been (re)authenticated by a *bedrock* authentication
provider external to Kubernetes. We plan to make it easy to develop modules
that interface between Kubernetes and a bedrock authentication provider (e.g.
github.com, google.com, enterprise directory, kerberos, etc.)
## APPENDIX
### Creating Certificates
When using client certificate authentication, you can generate certificates manually or
by using an existing deployment script.
**Deployment script** is implemented at
`cluster/saltbase/salt/generate-cert/make-ca-cert.sh`.
Execute this script with two parameters: the first is the IP address of the apiserver, the second is
a list of subject alternate names in the form `IP:<ip-address>` or `DNS:<dns-name>`.
The script will generate three files: ca.crt, server.crt, and server.key.
Finally, add these parameters
`--client-ca-file=/srv/kubernetes/ca.crt`
`--tls-cert-file=/srv/kubernetes/server.cert`
`--tls-private-key-file=/srv/kubernetes/server.key`
to the apiserver's start parameters.
**easyrsa** can be used to manually generate certificates for your cluster.
1. Download, unpack, and initialize the patched version of easyrsa3.
curl -L -O https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
1. Generate a CA. (`--batch` sets automatic mode; `--req-cn` sets the default CN to use.)
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
1. Generate server certificate and key.
(build-server-full [filename]: Generate a keypair and sign locally for a client or server)
./easyrsa --subject-alt-name="IP:${MASTER_IP}" build-server-full kubernetes-master nopass
1. Copy `pki/ca.crt`, `pki/issued/kubernetes-master.crt`, and
`pki/private/kubernetes-master.key` to your directory.
1. Remember to fill in the parameters
`--client-ca-file=/yourdirectory/ca.crt`
`--tls-cert-file=/yourdirectory/server.cert`
`--tls-private-key-file=/yourdirectory/server.key`
and add these to the apiserver start parameters.
**openssl** can also be used to manually generate certificates for your cluster.
1. Generate a 2048-bit ca.key:
`openssl genrsa -out ca.key 2048`
1. Using ca.key, generate a ca.crt (`-days` sets how long the certificate is valid):
`openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt`
1. Generate a 2048-bit server.key:
`openssl genrsa -out server.key 2048`
1. Using server.key, generate a server.csr:
`openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr`
1. Using ca.key, ca.crt, and server.csr, generate the server.crt:
`openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000`
1. View the certificate:
`openssl x509 -noout -text -in ./server.crt`
Finally, do not forget to fill in the same parameters and add them to the apiserver start parameters.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/authentication.md?pixel)]()


@ -32,269 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Authorization Plugins
This file has moved to: http://kubernetes.github.io/docs/admin/authorization/
In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.md) for an
overview of authentication.
Authorization applies to all HTTP accesses on the main (secure) apiserver port.
The authorization check for any request compares attributes of the context of
the request (such as user, resource, and namespace) with access
policies. An API call must be allowed by some policy in order to proceed.
The following implementations are available, and are selected by flag:
- `--authorization-mode=AlwaysDeny`
- `--authorization-mode=AlwaysAllow`
- `--authorization-mode=ABAC`
- `--authorization-mode=Webhook`
`AlwaysDeny` blocks all requests (used in tests).
`AlwaysAllow` allows all requests; use if you don't need authorization.
`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.
`Webhook` allows for authorization to be driven by a remote service using REST.
## ABAC Mode
### Request Attributes
A request has the following attributes that can be considered for authorization:
- user (the user-string which a user was authenticated as).
- group (the list of group names the authenticated user is a member of).
- whether the request is for an API resource.
- the request path.
- allows authorizing access to miscellaneous endpoints like `/api` or `/healthz` (see [kubectl](#kubectl)).
- the request verb.
- API verbs like `get`, `list`, `create`, `update`, `watch`, `delete`, and `deletecollection` are used for API requests
- HTTP verbs like `get`, `post`, `put`, and `delete` are used for non-API requests
- what resource is being accessed (for API requests only)
- the namespace of the object being accessed (for namespaced API requests only)
- the API group being accessed (for API requests only)
We anticipate adding more attributes to allow finer grained access control and
to assist in policy management.
### Policy File Format
For mode `ABAC`, also specify `--authorization-policy-file=SOME_FILENAME`.
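For example, a sketch combining the two flags (the policy file path is a placeholder):
```console
kube-apiserver --authorization-mode=ABAC --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
```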
The file format is [one JSON object per line](http://jsonlines.org/). There should be no enclosing list or map, just
one map per line.
Each line is a "policy object". A policy object is a map with the following properties:
- Versioning properties:
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning and conversion of the policy format.
- `kind`, type string; valid values are "Policy". Allows versioning and conversion of the policy format.
- `spec` property set to a map with the following properties:
- Subject-matching properties:
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the username of the authenticated user. `*` matches all requests.
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `*` matches all requests.
- `readonly`, type boolean, when true, means that the policy only applies to get, list, and watch operations.
- Resource-matching properties:
- `apiGroup`, type string; an API group, such as `extensions`. `*` matches all API groups.
- `namespace`, type string; a namespace string. `*` matches all resource requests.
- `resource`, type string; a resource, such as `pods`. `*` matches all resource requests.
- Non-resource-matching properties:
- `nonResourcePath`, type string; matches the non-resource request paths (like `/version` and `/apis`). `*` matches all non-resource requests. `/foo/*` matches `/foo/` and all of its subpaths.
An unset property is the same as a property set to the zero value for its type (e.g. empty string, 0, false).
However, unset should be preferred for readability.
In the future, policies may be expressed in a JSON format, and managed via a REST interface.
### Authorization Algorithm
A request has attributes which correspond to the properties of a policy object.
When a request is received, the attributes are determined. Unknown attributes
are set to the zero value of their type (e.g. empty string, 0, false).
A property set to "*" will match any value of the corresponding attribute.
The tuple of attributes is checked for a match against every policy in the policy file.
If at least one line matches the request attributes, then the request is authorized (but may fail later validation).
To permit any user to do something, write a policy with the user property set to "*".
To permit a user to do anything, write a policy with the apiGroup, namespace, resource, and nonResourcePath properties set to "*".
### Kubectl
Kubectl uses the `/api` and `/apis` endpoints of api-server to negotiate client/server versions. To validate objects sent to the API by create/update operations, kubectl queries certain swagger resources. For API version `v1` those would be `/swaggerapi/api/v1` & `/swaggerapi/experimental/v1`.
When using ABAC authorization, those special resources have to be explicitly exposed via the `nonResourcePath` property in a policy (see [examples](#examples) below):
* `/api`, `/api/*`, `/apis`, and `/apis/*` for API version negotiation.
* `/version` for retrieving the server version via `kubectl version`.
* `/swaggerapi/*` for create/update operations.
To inspect the HTTP calls involved in a specific kubectl operation you can turn up the verbosity:
kubectl --v=8 version
### Examples
1. Alice can do anything to all resources: `{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}`
2. Kubelet can read any pods: `{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}`
3. Kubelet can read and write events: `{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}`
4. Bob can just read pods in namespace "projectCaribou": `{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}`
5. Anyone can make read-only requests to all non-API paths: `{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "*", "readonly": true, "nonResourcePath": "*"}}`
[Complete file example](http://releases.k8s.io/HEAD/pkg/auth/authorizer/abac/example_policy_file.jsonl)
### A quick note on service accounts
A service account automatically generates a user. The user's name is generated according to the naming convention:
```
system:serviceaccount:<namespace>:<serviceaccountname>
```
Creating a new namespace also causes a new service account to be created, of this form:
```
system:serviceaccount:<namespace>:default
```
For example, if you wanted to grant the default service account in the kube-system namespace full privileges to the API, you would add this line to your policy file:
```json
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}}
```
The apiserver will need to be restarted to pick up the new policy lines.
## Webhook Mode
When specified, mode `Webhook` causes Kubernetes to query an outside REST service when determining user privileges.
### Configuration File Format
Mode `Webhook` requires a file for HTTP configuration, specified by the `--authorization-webhook-config-file=SOME_FILENAME` flag.
The configuration file uses the [kubeconfig](../user-guide/kubeconfig-file.md) file format. Within the file "users" refers to the API Server webhook and "clusters" refers to the remote service.
A configuration example which uses HTTPS client auth:
```yaml
# clusters refers to the remote service.
clusters:
- name: name-of-remote-authz-service
cluster:
certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
server: https://authz.example.com/authorize # URL of remote service to query. Must use 'https'.
# users refers to the API Server's webhook configuration.
users:
- name: name-of-api-server
user:
client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
client-key: /path/to/key.pem # key matching the cert
```
### Request Payloads
When faced with an authorization decision, the API Server POSTs a JSON-serialized api.authorization.v1beta1.SubjectAccessReview object describing the action. This object contains fields describing the user attempting to make the request, and either details about the resource being accessed or request attributes.
Note that webhook API objects are subject to the same [versioning compatibility rules](../api.md) as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for beta objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the `authorization.k8s.io/v1beta1` API extensions group (`--runtime-config=authorization.k8s.io/v1beta1=true`).
An example request body:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"namespace": "kittensandponies",
"verb": "GET",
"group": "*",
"resource": "pods"
},
"user": "jane",
"group": [
"group1",
"group2"
]
}
}
```
The remote service is expected to fill the SubjectAccessReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"allowed": true
}
}
```
To disallow access, the remote service would return:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"allowed": false,
"reason": "user does not have read access to the namespace"
}
}
```
Access to non-resource paths is sent as:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"nonResourceAttributes": {
"path": "/debug",
"verb": "GET"
},
"user": "jane",
"group": [
"group1",
"group2"
]
}
}
```
Non-resource paths include: `/api`, `/apis`, `/metrics`, `/resetMetrics`, `/logs`, `/debug`, `/healthz`, `/swagger-ui/`, `/swaggerapi/`, `/ui`, and `/version`. Clients require access to `/api`, `/api/*/`, `/apis/`, `/apis/*`, `/apis/*/*`, and `/version` to discover what resources and versions are present on the server. Access to other non-resource paths can be disallowed without restricting access to the REST API.
For further documentation refer to the authorization.v1beta1 API objects and plugin/pkg/auth/authorizer/webhook/webhook.go.
## Plugin Development
Other implementations can be developed fairly easily.
The APIserver calls the Authorizer interface:
```go
type Authorizer interface {
    // Authorize makes an authorization decision based on the request
    // attributes; returning a nil error allows the request.
    Authorize(a Attributes) error
}
```
to determine whether or not to allow each API action.
An authorization plugin is a module that implements this interface.
Authorization plugin code goes in `pkg/auth/authorizer/$MODULENAME`.
An authorization module can be completely implemented in Go, or can call out
to a remote authorization service. Authorization modules can implement
their own caching to reduce the cost of repeated authorization calls with the
same or similar arguments. Developers should then consider the interaction between
caching and revocation of permissions.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,173 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Cluster Admin Guide: Cluster Components
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-components/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes Cluster Admin Guide: Cluster Components](#kubernetes-cluster-admin-guide-cluster-components)
- [Master Components](#master-components)
- [kube-apiserver](#kube-apiserver)
- [etcd](#etcd)
- [kube-controller-manager](#kube-controller-manager)
- [kube-scheduler](#kube-scheduler)
- [addons](#addons)
- [DNS](#dns)
- [User interface](#user-interface)
- [Container Resource Monitoring](#container-resource-monitoring)
- [Cluster-level Logging](#cluster-level-logging)
- [Node components](#node-components)
- [kubelet](#kubelet)
- [kube-proxy](#kube-proxy)
- [docker](#docker)
- [rkt](#rkt)
- [supervisord](#supervisord)
- [fluentd](#fluentd)
<!-- END MUNGE: GENERATED_TOC -->
This document outlines the various binary components that need to run to
deliver a functioning Kubernetes cluster.
## Master Components
Master components are those that provide the cluster's control plane. For
example, master components are responsible for making global decisions about the
cluster (e.g., scheduling), and detecting and responding to cluster events
(e.g., starting up a new pod when a replication controller's 'replicas' field is
unsatisfied).
Master components could in theory be run on any node in the cluster. However,
for simplicity, current setup scripts typically start all master components on
the same VM, and do not run user containers on this VM. See
[high-availability.md](high-availability.md) for an example multi-master-VM setup.
Even in the future, when Kubernetes is fully self-hosting, it will probably be
wise to only allow master components to schedule on a subset of nodes, to limit
co-running with user-run pods, reducing the possible scope of a
node-compromising security exploit.
### kube-apiserver
[kube-apiserver](kube-apiserver.md) exposes the Kubernetes API; it is the front-end for the
Kubernetes control plane. It is designed to scale horizontally (i.e., one scales
it by running more of them; see [high-availability.md](high-availability.md)).
### etcd
[etcd](etcd.md) is used as Kubernetes' backing store. All cluster data is stored here.
Proper administration of a Kubernetes cluster includes a backup plan for etcd's
data.
### kube-controller-manager
[kube-controller-manager](kube-controller-manager.md) is a binary that runs controllers, which are the
background threads that handle routine tasks in the cluster. Logically, each
controller is a separate process, but to reduce the number of moving pieces in
the system, they are all compiled into a single binary and run in a single
process.
These controllers include:
* Node Controller
* Responsible for noticing & responding when nodes go down.
* Replication Controller
* Responsible for maintaining the correct number of pods for every replication
controller object in the system.
* Endpoints Controller
* Populates the Endpoints object (i.e., joins Services & Pods).
* Service Account & Token Controllers
* Create default accounts and API access tokens for new namespaces.
* ... and others.
### kube-scheduler
[kube-scheduler](kube-scheduler.md) watches newly created pods that have no node assigned, and
selects a node for them to run on.
### addons
Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services do run on the master VM. See:
[kube-master-addons](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
Addon objects are created in the "kube-system" namespace.
#### DNS
While the other addons are not strictly required, all Kubernetes
clusters should have [cluster DNS](dns.md), as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your
environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server
in their DNS searches.
#### User interface
The kube-ui provides a read-only overview of the cluster state. Access
[the UI using kubectl proxy](../user-guide/connecting-to-applications-proxy.md#connecting-to-the-kube-ui-service-from-your-local-workstation)
#### Container Resource Monitoring
[Container Resource Monitoring](../user-guide/monitoring.md) records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
#### Cluster-level Logging
[Container Logging](../user-guide/monitoring.md) saves container logs
to a central log store with search/browsing interface. There are two
implementations:
* [Cluster-level logging to Google Cloud Logging](
docs/user-guide/logging.md#cluster-level-logging-to-google-cloud-logging)
* [Cluster-level Logging with Elasticsearch and Kibana](
docs/user-guide/logging.md#cluster-level-logging-with-elasticsearch-and-kibana)
## Node components
Node components run on every node, maintaining running pods and providing them
the Kubernetes runtime environment.
### kubelet
[kubelet](kubelet.md) is the primary node agent. It:
* Watches for pods that have been assigned to its node (either by apiserver
or via local configuration file) and:
* Mounts the pod's required volumes
* Downloads the pod's secrets
* Runs the pod's containers via docker (or, experimentally, rkt).
* Periodically executes any requested container liveness probes.
* Reports the status of the pod back to the rest of the system, by creating a
"mirror pod" if necessary.
* Reports the status of the node back to the rest of the system.
### kube-proxy
[kube-proxy](kube-proxy.md) enables the Kubernetes service abstraction by maintaining
network rules on the host and performing connection forwarding.
### docker
`docker` is of course used for actually running containers.
### rkt
`rkt` is supported experimentally as an alternative to docker.
### supervisord
`supervisord` is a lightweight process babysitting system for keeping kubelet and docker
running.
### fluentd
`fluentd` is a daemon which helps provide [cluster-level logging](#cluster-level-logging).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-components.md?pixel)]()


@ -32,96 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Large Cluster
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-large/
## Support
At v1.2, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
* No more than 1000 nodes
* No more than 30000 total pods
* No more than 60000 total containers
* No more than 100 pods per node
## Setup
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/HEAD/cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
When setting up a large Kubernetes cluster, the following issues must be considered.
### Quota Issues
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
* Increasing the quota for things like CPU, IPs, etc.
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
* CPUs
* VM instances
* Total persistent disk reserved
* In-use IP addresses
* Firewall Rules
* Forwarding rules
* Routes
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
### Etcd storage
To improve performance of large clusters, we store events in a separate dedicated etcd instance.
When creating a cluster, existing salt scripts:
* start and configure an additional etcd instance
* configure the api-server to use it for storing events
### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
For [example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
```yaml
containers:
- name: fluentd-cloud-logging
image: gcr.io/google_containers/fluentd-gcp:1.16
resources:
limits:
cpu: 100m
memory: 200Mi
```
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
* [InfluxDB and Grafana](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/HEAD/cluster/addons/dns/skydns-rc.yaml.in)
* [Kibana](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
* [elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185) and [#21258](http://issue.k8s.io/21258)). If you find that Heapster is running
out of resources, you should adjust the formulas that compute Heapster's memory request (see those PRs for details).
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.md#troubleshooting).
In the [future](http://issue.k8s.io/13048), we anticipate setting all cluster addon resource limits based on cluster size, and dynamically adjusting them if you grow or shrink your cluster.
We welcome PRs that implement those features.
### Allowing minor node failure at startup
For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details) running
`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly.
Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or before
running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable
with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
reason for the failure, those additional nodes may join later or the cluster may remain at a size of
`NUM_NODES - ALLOWED_NOTREADY_NODES`.
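For example, a sketch that tolerates up to 5 of 1000 nodes failing to come up (the values are illustrative):
```console
export NUM_NODES=1000
export ALLOWED_NOTREADY_NODES=5
cluster/kube-up.sh
```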
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-large.md?pixel)]()


@ -32,185 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Management
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-management/
This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
upgrading your cluster's
master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
running cluster.
## Creating and configuring a Cluster
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.md) depending on your environment.
## Upgrading a cluster
The current state of cluster upgrades is provider dependent.
### Upgrading Google Compute Engine clusters
Google Compute Engine Open Source (GCE-OSS) supports master upgrades by deleting and
recreating the master, while maintaining the same Persistent Disk (PD) to ensure that data is retained across the
upgrade.
Node upgrades for GCE use a [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/); each node
is sequentially destroyed and then recreated with new software. Any Pods that are running on that node need to be
controlled by a Replication Controller, or manually re-created after the rollout.
Upgrades on open source Google Compute Engine (GCE) clusters are controlled by the `cluster/gce/upgrade.sh` script.
Get its usage by running `cluster/gce/upgrade.sh -h`.
For example, to upgrade just your master to a specific version (v1.0.2):
```console
cluster/gce/upgrade.sh -M v1.0.2
```
Alternatively, to upgrade your entire cluster to the latest stable release:
```console
cluster/gce/upgrade.sh release/stable
```
### Upgrading Google Container Engine (GKE) clusters
Google Container Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest
version. It also handles upgrading the operating system and other components that the master runs on.
The node upgrade process is user-initiated and is described in the [GKE documentation.](https://cloud.google.com/container-engine/docs/clusters/upgrade)
### Upgrading clusters on other platforms
The `cluster/kube-push.sh` script will do a rudimentary update. This process is still quite experimental; we
recommend testing the upgrade on an experimental cluster before performing the update on a production cluster.
## Resizing a cluster
If your cluster runs short on resources, you can easily add more machines to it if it is running in [Node self-registration mode](node.md#self-registration-of-nodes).
If you're using GCE or GKE, this is done by resizing the Instance Group managing your Nodes. It can be accomplished by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:
```
gcloud compute instance-groups managed resize kubernetes-minion-group --size 42 --zone $ZONE
```
The Instance Group will take care of putting the appropriate image on new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.
In other environments you may need to configure the machine yourself and tell the Kubelet on which machine the API server is running.
### Horizontal auto-scaling of nodes (GCE)
If you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on:
* CPU and memory utilization.
* Amount of CPU and memory requested by the pods (also called reservation).
Before setting up the cluster with `kube-up.sh`, you can set the `KUBE_ENABLE_NODE_AUTOSCALER` environment variable to `true` and export it.
The script will create an autoscaler for the instance group managing your nodes.
The autoscaler will try to maintain the average CPU/memory utilization and reservation of nodes within the cluster close to the target value.
The target value can be configured with the `KUBE_TARGET_NODE_UTILIZATION` environment variable (default: 0.7) for `kube-up.sh` when creating the cluster.
Node utilization is the total node's CPU/memory usage (OS + k8s + user load) divided by the node's capacity.
Node reservation is the total CPU/memory requested by pods that are running on the node divided by the node's capacity.
If the desired numbers of nodes in the cluster resulting from CPU/memory utilization/reservation are different,
the autoscaler will choose the bigger number. The number of nodes in the cluster set by the autoscaler will be limited from `KUBE_AUTOSCALER_MIN_NODES` (default: 1)
to `KUBE_AUTOSCALER_MAX_NODES` (default: the initial number of nodes in the cluster).
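As a sketch, enabling node autoscaling before cluster creation might look like this (the values are illustrative):
```console
export KUBE_ENABLE_NODE_AUTOSCALER=true
export KUBE_TARGET_NODE_UTILIZATION=0.8
export KUBE_AUTOSCALER_MIN_NODES=3
export KUBE_AUTOSCALER_MAX_NODES=10
cluster/kube-up.sh
```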
The autoscaler is implemented as a Compute Engine Autoscaler.
The initial values of the autoscaler parameters set by `kube-up.sh` and some more advanced options can be tweaked on
`Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com)
or using gcloud CLI:
```
gcloud alpha compute autoscaler --zone $ZONE <command>
```
Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring.
To make the metrics accessible, you need to create your cluster with `KUBE_ENABLE_CLUSTER_MONITORING`
equal to `google` or `googleinfluxdb` (`googleinfluxdb` is the default value). Please also make sure
that you have Google Cloud Monitoring API enabled in Google Developer Console.
## Maintenance on a Node
If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
If you want more control over the upgrading process, you may use the following workflow:
Mark the node to be rebooted as unschedulable:
```console
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
```
This keeps new pods from landing on the node while you are trying to get them off.
Get the pods off the machine, via any of the following strategies:
* Wait for finite-duration pods to complete.
* Delete pods with:
```console
kubectl delete pods $PODNAME
```
For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
Perform maintenance work on the node.
Make the node schedulable again:
```console
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
```
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node.md) for more details.
## Advanced Topics
### Upgrading to a different API version
When a new API version is released, you may need to upgrade a cluster to support the new API version (e.g. switching from 'v1' to 'v2' when 'v2' is launched).
This is an infrequent event, but it requires careful management. There is a sequence of steps to upgrade to a new API version.
1. Turn on the new API version.
1. Upgrade the cluster's storage to use the new version.
1. Upgrade all config files. Identify users of the old API version endpoints.
1. Update existing objects in the storage to new version by running `cluster/update-storage-objects.sh`.
1. Turn off the old API version.
### Turn on or off an API version for your cluster
Specific API versions can be turned on or off by passing the `--runtime-config=api/<version>` flag while bringing up the API server. For example, to turn off the v1 API, pass `--runtime-config=api/v1=false`.
`runtime-config` also supports two special keys, `api/all` and `api/legacy`, to control all and legacy APIs respectively.
For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`.
For the purposes of these flags, _legacy_ APIs are those APIs which have been explicitly deprecated (e.g. `v1beta3`).
### Switching your cluster's storage API version
The objects that are stored to disk for a cluster's internal representation of the Kubernetes resources active in the cluster are written using a particular version of the API.
When the supported API changes, these objects may need to be rewritten in the newer API. Failure to do this will eventually result in resources that are no longer decodable or usable
by the Kubernetes API server.
The `KUBE_API_VERSIONS` environment variable for the `kube-apiserver` binary controls the API versions that are supported in the cluster. The first version in the list is used as the cluster's storage version. Hence, to set a specific version as the storage version, bring it to the front of the list of versions in the value of `KUBE_API_VERSIONS`. You need to restart the `kube-apiserver` binary
for changes to this variable to take effect.
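For example, a hypothetical setting that makes `v1` the storage version while still serving the deprecated `v1beta3` (the version list is illustrative):
```console
KUBE_API_VERSIONS=v1,v1beta3 kube-apiserver <other flags>
```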
### Switching your config files to a new API version
You can use the `kubectl convert` command to convert config files between different API versions.
```console
$ kubectl convert -f pod.yaml --output-version v1
```
For more options, please refer to the usage of the [kubectl convert](../user-guide/kubectl/kubectl_convert.md) command.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-management.md?pixel)]()


@ -32,114 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Troubleshooting This file has moved to: http://kubernetes.github.io/docs/admin/cluster-troubleshooting/
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit [troubleshooting document](../troubleshooting.md) for more information.
## Listing your cluster
The first thing to debug in your cluster is whether your nodes are all registered correctly.
Run
```sh
kubectl get nodes
```
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
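If a node is missing or not `Ready`, you can inspect it further (assuming `$NODENAME` is a name from the listing above):

```sh
kubectl describe node $NODENAME
```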
## Looking at logs
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (Note that on systemd-based systems, you may need to use `journalctl` instead, as shown below.)
### Master
* /var/log/kube-apiserver.log - API Server, responsible for serving the API
* /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
* /var/log/kube-controller-manager.log - Controller that manages replication controllers
### Worker Nodes
* /var/log/kubelet.log - Kubelet, responsible for running containers on the node
* /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
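On systemd-based nodes, the same logs are typically available through `journalctl` rather than files; for example (unit names may vary by distro):

```sh
journalctl -u kubelet
```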
## A general overview of cluster failure modes
This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.
Root causes:
- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, e.g. misconfigured Kubernetes software or application software
Specific scenarios:
- Apiserver VM shutdown or apiserver crashing
- Results
- unable to stop, update, or start new pods, services, or replication controllers
- existing pods and services should continue to work normally, unless they depend on the Kubernetes API
- Apiserver backing storage lost
- Results
- apiserver should fail to come up
- kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying
- manual recovery or recreation of apiserver state necessary before apiserver is restarted
- Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes
- currently those are colocated with the apiserver, and their unavailability has similar consequences as that of the apiserver
- in future, these will be replicated as well and may not be co-located
- they do not have their own persistent state
- Individual node (VM or physical machine) shuts down
- Results
- pods on that Node stop running
- Network partition
- Results
- partition A thinks the nodes in partition B are down; partition B thinks the apiserver is down. (Assuming the master VM ends up in partition A.)
- Kubelet software fault
- Results
- crashing kubelet cannot start new pods on the node
- the kubelet might or might not delete the pods
- node marked unhealthy
- replication controllers start new pods elsewhere
- Cluster operator error
- Results
- loss of pods, services, etc.
- loss of apiserver backing store
- users unable to read API
- etc.
Mitigations:
- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes
- Action: use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost
- Action: Use (experimental) [high-availability](high-availability.md) configuration
- Mitigates: Master VM shutdown or master components (scheduler, API server, controller manager) crashing
- Will tolerate one or more simultaneous node or component failures
- Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
- Assuming you used clustered etcd.
- Action: Snapshot apiserver PDs/EBS-volumes periodically
- Mitigates: Apiserver backing storage lost
- Mitigates: Some cases of operator error
- Mitigates: Some cases of Kubernetes software fault
- Action: use replication controller and services in front of pods
- Mitigates: Node shutdown
- Mitigates: Kubelet software fault
- Action: applications (containers) designed to tolerate unexpected restarts
- Mitigates: Node shutdown
- Mitigates: Kubelet software fault
- Action: [Multiple independent clusters](multi-cluster.md) (and avoid making risky changes to all clusters at once)
- Mitigates: Everything listed above.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,187 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Daemon Sets This file has moved to: http://kubernetes.github.io/docs/admin/daemons/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Daemon Sets](#daemon-sets)
- [What is a _Daemon Set_?](#what-is-a-daemon-set)
- [Writing a DaemonSet Spec](#writing-a-daemonset-spec)
- [Required Fields](#required-fields)
- [Pod Template](#pod-template)
- [Pod Selector](#pod-selector)
- [Running Pods on Only Some Nodes](#running-pods-on-only-some-nodes)
- [How Daemon Pods are Scheduled](#how-daemon-pods-are-scheduled)
- [Communicating with DaemonSet Pods](#communicating-with-daemonset-pods)
- [Updating a DaemonSet](#updating-a-daemonset)
- [Alternatives to Daemon Set](#alternatives-to-daemon-set)
- [Init Scripts](#init-scripts)
- [Bare Pods](#bare-pods)
- [Static Pods](#static-pods)
- [Replication Controller](#replication-controller)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _Daemon Set_?
A _Daemon Set_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the
cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage
collected. Deleting a Daemon Set will clean up the pods it created.
Some typical uses of a Daemon Set are:
- running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
- running a logs collection daemon on every node, such as `fluentd` or `logstash`.
- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`.
In a simple case, one Daemon Set, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon,
but with different flags and/or different memory and cpu requests for different hardware types.
## Writing a DaemonSet Spec
### Required Fields
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [deploying applications](../user-guide/deploying-applications.md),
[configuring containers](../user-guide/configuring-containers.md), and [working with resources](../user-guide/working-with-resources.md) documents.
A DaemonSet also needs a [`.spec`](../devel/api-conventions.md#spec-and-status) section.
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](../user-guide/replication-controller.md#pod-template).
It has exactly the same schema as a [pod](../user-guide/pods.md), except
it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a pod, a pod template in a DaemonSet has to specify appropriate
labels (see [pod selector](#pod-selector)).
A pod template in a DaemonSet must have a [`RestartPolicy`](../user-guide/pod-states.md)
equal to `Always`, or be unspecified, which defaults to `Always`.
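To make the required fields concrete, here is a minimal sketch of a DaemonSet manifest (the name, labels, and image are made up for illustration; `extensions/v1beta1` assumes the DaemonSet API group of this release):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: gcr.io/google_containers/fluentd-elasticsearch:1.20
        # restartPolicy is left unspecified, defaulting to Always as required
```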
### Pod Selector
The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
a [Job](../user-guide/jobs.md) or other new resources.
The `spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](../user-guide/replication-controller.md)
* `matchExpressions` - allows you to build more sophisticated selectors by specifying a key,
a list of values, and an operator that relates the key and values.
When both are specified, the result is ANDed.
If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`. If not
specified, they are defaulted to be equal. Config with these not matching will be rejected by the API.
Also you should not normally create any pods whose labels match this selector, either directly, via
another DaemonSet, or via another controller such as a ReplicationController. Otherwise, the DaemonSet
controller will think that those pods were created by it. Kubernetes will not stop you from doing
this. One case where you might want to do this is to manually create a pod with a different value on
a node for testing.
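For illustration, a selector stanza combining both fields could look like this (label keys and values are made up):

```yaml
selector:
  matchLabels:
    app: fluentd
  matchExpressions:
  - {key: tier, operator: In, values: [logging]}
```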
### Running Pods on Only Some Nodes
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
selector](../user-guide/node-selection/README.md).
If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on all nodes.
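For example, a hypothetical node selector restricting daemon pods to nodes labeled `disktype=ssd`:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```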
## How Daemon Pods are Scheduled
Normally, the machine that a pod runs on is selected by the Kubernetes scheduler. However, pods
created by the Daemon controller have the machine already selected (`.spec.nodeName` is specified
when the pod is created, so it is ignored by the scheduler). Therefore:
- the [`unschedulable`](node.md#manual-node-administration) field of a node is not respected
by the daemon set controller.
- the daemon set controller can create pods even when the scheduler has not been started, which can help with cluster
bootstrapping.
## Communicating with DaemonSet Pods
Some possible patterns for communicating with pods in a DaemonSet are:
- **Push**: Pods in the Daemon Set are configured to send updates to another service, such
as a stats database. They do not have clients.
- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
via the node IPs. Clients know the list of node IPs somehow, and know the port by convention.
- **DNS**: Create a [headless service](../user-guide/services.md#headless-services) with the same pod selector,
and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
DNS.
- **Service**: Create a service with the same pod selector, and use the service to reach a
daemon on a random node. (No way to reach specific node.)
## Updating a DaemonSet
If node labels are changed, the DaemonSet will promptly add pods to newly matching nodes and delete
pods from newly not-matching nodes.
You can modify the pods that a DaemonSet creates. However, pods do not allow all
fields to be updated. Also, the DaemonSet controller will use the original template the next
time a node (even with the same name) is created.
You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the pods
will be left on the nodes. You can then create a new DaemonSet with a different template.
The new DaemonSet with the different template will recognize all the existing pods as having
matching labels. It will not modify or delete them despite a mismatch in the pod template.
You will need to force new pod creation by deleting the pod or deleting the node.
You cannot update a DaemonSet.
Support for updating DaemonSets and controlled updating of nodes is planned.
## Alternatives to Daemon Set
### Init Scripts
It is certainly possible to run daemon processes by directly starting them on a node (e.g using
`init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to
running such processes via a DaemonSet:
- Ability to monitor and manage logs for daemons in the same way as applications.
- Same config language and tools (e.g. pod templates, `kubectl`) for daemons and applications.
- Future versions of Kubernetes will likely support integration between DaemonSet-created
pods and node upgrade workflows.
- Running daemons in containers with resource limits increases isolation between daemons and app
containers. However, this can also be accomplished by running the daemons in a container but not in a pod
(e.g. start directly via Docker).
### Bare Pods
It is possible to create pods directly which specify a particular node to run on. However,
a Daemon Set replaces pods that are deleted or terminated for any reason, such as in the case of
node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you should
use a Daemon Set rather than creating individual pods.
### Static Pods
It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
are called [static pods](static-pods.md).
Unlike DaemonSet, static pods cannot be managed with kubectl
or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
in cluster bootstrapping cases. Also, static pods may be deprecated in the future.
### Replication Controller
Daemon Sets are similar to [Replication Controllers](../user-guide/replication-controller.md) in that
they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers,
storage servers).
Use a replication controller for stateless services, like frontends, where scaling up and down the
number of replicas and rolling out updates are more important than controlling exactly which host
the pod runs on. Use a Daemon Controller when it is important that a copy of a pod always run on
all or certain hosts, and when it needs to start before other pods.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,44 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# DNS Integration with Kubernetes This file has moved to: http://kubernetes.github.io/docs/admin/dns/
As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
Every Service defined in the cluster (including the DNS server itself) will be
assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
in namespace `bar` can look up this service by simply doing a DNS query for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
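You can verify this from inside a pod with any DNS tool available in the container image; for example (illustrative, and assumes `nslookup` is installed):

```sh
# from a pod in namespace "quux"
nslookup foo.bar
# from a pod in namespace "bar"
nslookup foo
```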
The cluster DNS server ([SkyDNS](https://github.com/skynetservices/skydns))
supports forward lookups (A records) and service lookups (SRV records).
## How it Works
The running DNS pod holds 4 containers - skydns, etcd (a private instance which skydns uses),
a Kubernetes-to-skydns bridge called kube2sky, and a health check called healthz. The kube2sky process
watches the Kubernetes master for changes in Services, and then writes the
information to etcd, which skydns reads. This etcd instance is not linked to
any other etcd clusters that might exist, including the Kubernetes master.
## Issues
The skydns service is reachable directly from Kubernetes nodes (outside
of any container) and DNS resolution works if the skydns service is targeted
explicitly. However, nodes are not configured to use the cluster DNS service or
to search the cluster's DNS domain by default. This may be resolved at a later
time.
## For more information
See [the docs for the DNS cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,51 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# etcd This file has moved to: http://kubernetes.github.io/docs/admin/etcd/
[etcd](https://coreos.com/etcd/docs/2.2.1/) is a highly-available key value
store which Kubernetes uses for persistent storage of all of its REST API
objects.
## Configuration: high-level goals
Access Control: give *only* kube-apiserver read/write access to etcd. You do not
want apiserver's etcd exposed to every node in your cluster (or worse, to the
internet at large), because access to etcd is equivalent to root in your
cluster.
Data Reliability: for reasonable safety, either etcd needs to be run as a
[cluster](high-availability.md#clustering-etcd) (multiple machines each running
etcd) or etcd's data directory should be located on durable storage (e.g., GCE's
persistent disk). In either case, if high availability is required--as it might
be in a production cluster--the data directory ought to be [backed up
periodically](https://coreos.com/etcd/docs/2.2.1/admin_guide.html#disaster-recovery),
to reduce downtime in case of corruption.
## Default configuration
The default setup scripts use kubelet's file-based static pods feature to run etcd in a
[pod](http://releases.k8s.io/HEAD/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
be run on master VMs. The default location that kubelet scans for manifests is
`/etc/kubernetes/manifests/`.
## Kubernetes's usage of etcd
By default, Kubernetes objects are stored under the `/registry` key in etcd.
This path can be prefixed by using the [kube-apiserver](kube-apiserver.md) flag
`--etcd-prefix="/foo"`.
`etcd` is the only place that Kubernetes keeps state.
## Troubleshooting
To test whether `etcd` is running correctly, you can try writing a value to a
test key. On your master VM (or somewhere with firewalls configured such that
you can talk to your cluster's etcd), try:
```sh
curl -fs -X PUT "http://${host}:${port}/v2/keys/_test" -d value="test"
```
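You can then read the value back to confirm the write (illustrative):

```sh
curl -fs "http://${host}:${port}/v2/keys/_test"
```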
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,58 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Garbage Collection This file has moved to: http://kubernetes.github.io/docs/admin/garbage-collection/
- [Introduction](#introduction)
- [Image Collection](#image-collection)
- [Container Collection](#container-collection)
- [User Configuration](#user-configuration)
### Introduction
Garbage collection is a helpful function of kubelet that will clean up unreferenced images and unused containers. kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
### Image Collection
Kubernetes manages the lifecycle of all images through its imageManager, with the cooperation
of cadvisor.
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used images until the low
threshold has been met.
### Container Collection
The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers any single
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and by setting `MaxPerPodContainer` and `MaxContainers` to less than zero.
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). In this situation, `MaxPerPodContainer` would be adjusted: a worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
Containers that are not managed by kubelet are not subject to container garbage collection.
### User Configuration
Users can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 90%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
We also allow users to customize the garbage collection policy through the following kubelet flags (see the combined example after this list):
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 1 minute.
2. `maximum-dead-containers-per-container`, maximum number of old instances to retain
per container. Default is 2.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is 100.
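For illustration, a kubelet invocation setting all five knobs to their documented defaults would look like this (all other required flags elided):

```sh
kubelet --image-gc-high-threshold=90 --image-gc-low-threshold=80 \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100 ...
```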
Containers can potentially be garbage collected before their usefulness has expired. These containers can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for `maximum-dead-containers-per-container` is highly recommended to allow at least 2 dead containers to be retained per expected container. A higher value for `maximum-dead-containers` is also recommended for a similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,252 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# High Availability Kubernetes Clusters This file has moved to: http://kubernetes.github.io/docs/admin/high-availability/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [High Availability Kubernetes Clusters](#high-availability-kubernetes-clusters)
- [Introduction](#introduction)
- [Overview](#overview)
- [Initial set-up](#initial-set-up)
- [Reliable nodes](#reliable-nodes)
- [Establishing a redundant, reliable data storage layer](#establishing-a-redundant-reliable-data-storage-layer)
- [Clustering etcd](#clustering-etcd)
- [Validating your cluster](#validating-your-cluster)
- [Even more reliable storage](#even-more-reliable-storage)
- [Replicated API Servers](#replicated-api-servers)
- [Installing configuration files](#installing-configuration-files)
- [Starting the API Server](#starting-the-api-server)
- [Load balancing](#load-balancing)
- [Master elected components](#master-elected-components)
- [Installing configuration files](#installing-configuration-files)
- [Running the podmaster](#running-the-podmaster)
- [Conclusion](#conclusion)
<!-- END MUNGE: GENERATED_TOC -->
## Introduction
PLEASE NOTE: the podmaster implementation is obsoleted by https://github.com/kubernetes/kubernetes/pull/16830,
which provides a primitive for leader election in the experimental kubernetes API.
Nevertheless, the concepts and implementation in this document are still valid, as is the podmaster implementation itself.
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),
or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
## Overview
Setting up a truly reliable, highly available distributed system requires a number of steps; it is akin to
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
of these steps in detail, but a summary is given here to help guide and orient the user.
The steps involved are as follows:
* [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
* [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
* [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
* [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
Here's what the system should look like when it's finished:
![High availability Kubernetes diagram](high-availability/ha.png)
Ready? Let's get started.
## Initial set-up
The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
on bare metal.
The easiest way to implement an HA Kubernetes cluster is to start with an existing single-master cluster. The
instructions at [https://get.k8s.io](https://get.k8s.io)
describe easy installation for single-master clusters on a variety of platforms.
## Reliable nodes
On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
[high-availability/monit-docker](high-availability/monit-docker) configs.
On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
## Establishing a redundant, reliable data storage layer
The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
done.
Clustered etcd already replicates your storage to all master instances in your cluster. This means that to lose data, all three nodes would need
to have their physical (or virtual) disks fail at the same time. The probability that this occurs is relatively low, so for many people
running a replicated etcd cluster is likely reliable enough. You can add additional reliability by increasing the
size of the cluster from three to five nodes. If that is still insufficient, you can add
[even more redundancy to your storage layer](#even-more-reliable-storage).
### Clustering etcd
The full details of clustering etcd are beyond the scope of this document; lots of details are given on the
[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
a simple cluster set up, using etcd's built in discovery to build our cluster.
First, hit the etcd discovery service to create a new token:
```sh
curl https://discovery.etcd.io/new?size=3
```
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
server from the definition of the pod specified in `etcd.yaml`.
Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
and you should substitute a different name (e.g. `node-1`) for ${NODE_NAME} and the correct IP address
for `${NODE_IP}` on each machine.
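One way to do the substitution is with `sed` on each machine (a sketch; the token URL, node name, and IP shown are placeholders you must fill in):

```sh
sed -i -e "s|\${DISCOVERY_TOKEN}|https://discovery.etcd.io/<your-token>|g" \
       -e "s|\${NODE_NAME}|node-1|g" \
       -e "s|\${NODE_IP}|10.0.0.1|g" \
       /etc/kubernetes/manifests/etcd.yaml
```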
#### Validating your cluster
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
```sh
etcdctl member list
```
and
```sh
etcdctl cluster-health
```
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
### Even more reliable storage
Of course, if you are interested in increased data reliability, there are further options which make the place where etcd
stores its data even more reliable than regular disks (belts *and* suspenders, ftw!).
If you use a cloud provider, then they usually provide this
for you, for example [Persistent Disk](https://cloud.google.com/compute/docs/disks/persistent-disks) on the Google Cloud Platform. These
are block-device persistent storage that can be mounted onto your virtual machine. Other cloud providers provide similar solutions.
If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface.
Alternatively, you can run a clustered file system like Gluster or Ceph. Finally, you can also run a RAID array on each physical machine.
Regardless of how you choose to implement it, if you chose to use one of these options, you should make sure that your storage is mounted
to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`.
## Replicated API Servers
Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.
### Installing configuration files
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
```sh
touch /var/log/kube-apiserver.log
```
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
* kubecfg.crt - Client certificate, public key
* kubecfg.key - Client certificate, private key
* server.cert - Server certificate, public key
* server.key - Server certificate, private key
The easiest way to create this directory may be to copy it from the master node of a working cluster, or you can manually generate these files yourself.
### Starting the API Server
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
in the file.
### Load balancing
At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
up a load balancer will depend on the specifics of your platform; for example, instructions for the Google Cloud
Platform can be found [here](https://cloud.google.com/compute/docs/load-balancing/).
Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
in addition to the IP addresses of the individual nodes.
For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.
For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
them to talk to the external load balancer's IP address.
## Master elected components
So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. Its job is to implement a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.md)
### Installing configuration files
First, create empty log files on each node, so that Docker will mount the files rather than creating new directories:
```sh
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
```
Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
directory.
### Running the podmaster
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
Now you will have one instance of the scheduler process running on a single master node, and likewise one
controller-manager process running on a single (possibly different) master node. If either of these processes fail,
the kubelet will restart them. If any of these nodes fail, the process will move to a different instance of a master
node.
## Conclusion
At this point, you are done (yay!) with the master components, but you still need to add worker nodes (boo!).
If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
restarting the kubelets on each node.
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the `--apiserver` flag to your replicated endpoint.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/high-availability.md?pixel)]() [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/high-availability.md?pixel)]()


@ -32,80 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Cluster Admin Guide This file has moved to: http://kubernetes.github.io/docs/admin/introduction/
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md).
## Planning a cluster
There are many different examples of how to setup a kubernetes cluster. Many of them are listed in this
[matrix](../getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*.
Before choosing a particular guide, here are some things to consider:
- Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
models are supported, but some distros are better for one case or the other.
- Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
one up yourself?
- Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We
recommend setting up multiple clusters rather than spanning distant locations.
- Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros.
- Do you just want to run a cluster, or do you expect to do active development of kubernetes project code? If the
latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but
offer a greater variety of choices.
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
Kubernetes.
- If you are configuring kubernetes on-premises, you will need to consider what [networking
model](networking.md) fits best.
- If you are designing for very high-availability, you may want [clusters in multiple zones](multi-cluster.md).
- You may want to familiarize yourself with the various
[components](cluster-components.md) needed to run a cluster.
## Setting up a cluster
Pick one of the Getting Started Guides from the [matrix](../getting-started-guides/README.md) and follow it.
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.
One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking.md)), which
uses OpenVSwitch to set up networking between pods across
Kubernetes nodes.
If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
project](salt.md).
## Managing a cluster, including upgrades
[Managing a cluster](cluster-management.md).
## Managing nodes
[Managing nodes](node.md).
## Optional Cluster Services
* **DNS Integration with SkyDNS** ([dns.md](dns.md)):
Resolving a DNS name directly to a Kubernetes service.
* **Logging** with [Kibana](../user-guide/logging.md)
## Multi-tenant support
* **Resource Quota** ([resource-quota.md](resource-quota.md))
## Security
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](../user-guide/container-environment.md)):
Describes the environment for Kubelet managed containers on a Kubernetes
node.
* **Securing access to the API Server** [accessing the api](accessing-the-api.md)
* **Authentication** [authentication](authentication.md)
* **Authorization** [authorization](authorization.md)
* **Admission Controllers** [admission_controllers](admission-controllers.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -31,201 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Limit Range
========================================
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
for a variety of reasons.
For example:
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that runs production and development workloads
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
the cluster operator may want to require that each pod consume at least 20% of the memory and cpu of the
average node size in order to provide for more uniform scheduling and to limit waste.
This example demonstrates how limits can be applied to a Kubernetes namespace to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See [LimitRange design doc](../../design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md)
Step 0: Prerequisites
-----------------------------------------
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../../docs/getting-started-guides/) for how to get started.
Change to the `<kubernetes>` directory if you're not already there.
Step 1: Create a namespace
-----------------------------------------
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```console
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
$ kubectl get namespaces
NAME            LABELS    STATUS    AGE
default         <none>    Active    5m
limit-example   <none>    Active    53s
```
Step 2: Apply a limit to the namespace
-----------------------------------------
Let's create a simple limit in our namespace.
```console
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```console
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
Type       Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---   ---  ---------------  -------------  -----------------------
Pod        cpu       200m  2    -                -              -
Pod        memory    6Mi   1Gi  -                -              -
Container  cpu       100m  2    200m             300m           -
Container  memory    3Mi   1Gi  100Mi            200Mi          -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
must be specified for that resource across all containers. Failure to specify a limit will result in
a validation error when attempting to create the pod. Note that a default value of limit is set by
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
request must be specified for that resource across all containers. Failure to specify a request will
result in a validation error when attempting to create the pod. Note that a default value of request is
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2.
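For reference, a `LimitRange` spec producing the values described above might look like the following sketch (inferred from the describe output; the `limits.yaml` file shipped with this example is authoritative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
    default:
      cpu: 300m
      memory: 200Mi
```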
Step 3: Enforcing limits at point of creation
-----------------------------------------
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster. If you change the limits to a different value range, it does not affect pods that
were previously created in a namespace.
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.
```console
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
$ kubectl get pods --namespace=limit-example
NAME          READY     STATUS    RESTARTS   AGE
nginx-aq0mf   1/1       Running   0          35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
```
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
uid: 51be42a7-7156-11e5-9921-286ed488f785
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
resources:
limits:
cpu: 300m
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by having it have a container that requests 3 cpu cores.
```console
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```console
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
```
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
containers:
- image: gcr.io/google_containers/serve_hostname
imagePullPolicy: IfNotPresent
name: kubernetes-serve-hostname
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: "1"
memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
```
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=true ...
```
Step 4: Cleanup
----------------------------
To remove the resources used by this example, you can just delete the limit-example namespace.
```console
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME      LABELS    STATUS    AGE
default   <none>    Active    20m
```
Summary
----------------------------
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.
This file has moved to: http://kubernetes.github.io/docs/admin/limitrange/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -27,98 +27,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Master <-> Node Communication This file has moved to: http://kubernetes.github.io/docs/admin/master-node-communication/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Master <-> Node Communication](#master---node-communication)
- [Summary](#summary)
- [Cluster -> Master](#cluster---master)
- [Master -> Cluster](#master---cluster)
- [SSH Tunnels](#ssh-tunnels)
<!-- END MUNGE: GENERATED_TOC -->
## Summary
This document catalogs the communication paths between the master (really the
apiserver) and the Kubernetes cluster. The intent is to allow users to
customize their installation to harden the network configuration such that
the cluster can be run on an untrusted network (or on fully public IPs on a
cloud provider).
## Cluster -> Master
All communication paths from the cluster to the master terminate at the
apiserver (none of the other master components are designed to expose remote
services). In a typical deployment, the apiserver is configured to listen for
remote connections on a secure HTTPS port (443) with one or more forms of
client [authentication](authentication.md) enabled.
Nodes should be provisioned with the public root certificate for the cluster
such that they can connect securely to the apiserver along with valid client
credentials. For example, on a default GCE deployment, the client credentials
provided to the kubelet are in the form of a client certificate. Pods that
wish to connect to the apiserver can do so securely by leveraging a service
account so that Kubernetes will automatically inject the public root
certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in all namespaces) is configured with a virtual IP
address that is redirected (via kube-proxy) to the HTTPS endpoint on the
apiserver.
The master components communicate with the cluster apiserver over the
insecure (not encrypted or authenticated) port. This port is typically only
exposed on the localhost interface of the master machine, so that the master
components, all running on the same machine, can communicate with the
cluster apiserver. Over time, the master components will be migrated to use
the secure port with authentication and authorization (see
[#13598](https://github.com/kubernetes/kubernetes/issues/13598)).
As a result, the default operating mode for connections from the cluster
(nodes and pods running on the nodes) to the master is secured by default
and can run over untrusted and/or public networks.
## Master -> Cluster
There are two primary communication paths from the master (apiserver) to the
cluster. The first is from the apiserver to the kubelet process which runs on
each node in the cluster. The second is from the apiserver to any node, pod,
or service through the apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for fetching logs
for pods, attaching (through kubectl) to running pods, and using the kubelet's
port-forwarding functionality. These connections terminate at the kubelet's
HTTPS endpoint, which is typically using a self-signed certificate, and
ignore the certificate presented by the kubelet (although you can override this
behavior by specifying the `--kubelet-certificate-authority`,
`--kubelet-client-certificate`, and `--kubelet-client-key` flags when starting
the cluster apiserver). By default, these connections **are not currently safe**
to run over untrusted and/or public networks as they are subject to
man-in-the-middle attacks.
The connections from the apiserver to a node, pod, or service default to plain
HTTP connections and are therefore neither authenticated nor encrypted. They
can be run over a secure HTTPS connection by prefixing `https:` to the node,
pod, or service name in the API URL, but they will not validate the certificate
provided by the HTTPS endpoint nor provide client credentials so while the
connection will be encrypted, it will not provide any guarantees of integrity.
These connections **are not currently safe** to run over untrusted and/or
public networks.
### SSH Tunnels
[Google Container Engine](https://cloud.google.com/container-engine/docs/) uses
SSH tunnels to protect the Master -> Cluster communication paths. In this
configuration, the apiserver initiates an SSH tunnel to each node in the
cluster (connecting to the ssh server listening on port 22) and passes all
traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the private
GCE network in which the cluster is running.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -32,68 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Considerations for running multiple Kubernetes clusters This file has moved to: http://kubernetes.github.io/docs/admin/multi-cluster/
You may want to set up multiple Kubernetes clusters, both to
have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
This document describes some of the issues to consider when making a decision about doing so.
Note that at present,
Kubernetes does not offer a mechanism to aggregate multiple clusters into a single virtual cluster. However,
we [plan to do this in the future](../proposals/federation.md).
## Scope of a single cluster
On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a
[zone](https://cloud.google.com/compute/docs/zones) or [availability
zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html).
We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
- compared to having a single global Kubernetes cluster, there are fewer single-points of failure
- compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a
single-zone cluster.
- when the Kubernetes developers are designing the system (e.g. making assumptions about latency, bandwidth, or
correlated failures) they are assuming all the machines are in a single data center, or otherwise closely connected.
It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
Reasons to prefer fewer clusters are:
- improved bin packing of Pods in some cases with more nodes in one cluster (less resource fragmentation)
- reduced operational overhead (though the advantage is diminished as ops tooling and processes matures)
- reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
of overall cluster cost for medium to large clusters).
Reasons to have multiple clusters include:
- strict security policies requiring isolation of one class of work from another (but, see Partitioning Clusters
below).
- test clusters to canary new Kubernetes releases or other cluster software.
## Selecting the right number of clusters
The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally.
By contrast, the number of nodes in a cluster and the number of pods in a service may change frequently according to
load and growth.
To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
Call the number of regions to be in `R`.
Second, decide how many clusters should be able to be unavailable at the same time, while still being available. Call
the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.
If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g. you want to ensure low latency for all
users in the event of a cluster failure), then you need to have `R * (U + 1)` clusters
(`U + 1` in each of `R` regions). In any case, try to put each cluster in a different zone.
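For example, suppose you need to be in two regions (`R = 2`) and want to tolerate one unavailable cluster (`U = 1`): with cross-region failover you need max(2, 1 + 1) = 2 clusters, whereas with in-region failover you need 2 * (1 + 1) = 4 clusters, two per region.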
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters. Kubernetes v1.0 currently supports clusters up to 100 nodes in size, but we are targeting
1000-node clusters by early 2016.
## Working with multiple clusters
When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer) spanning all of them, so that
failures of a single cluster are not visible to end users.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,152 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Namespaces

This file has moved to: http://kubernetes.github.io/docs/admin/namespaces/
## Abstract
A Namespace is a mechanism to partition resources created by users into
a logically named group.
## Motivation
A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community').
Each user community wants to be able to work in isolation from other communities.
Each user community has its own:
1. resources (pods, services, replication controllers, etc.)
2. policies (who can or cannot perform actions in their community)
3. constraints (this community is allowed this much quota, etc.)
A cluster operator may create a Namespace for each unique user community.
The Namespace provides a unique scope for:
1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
3. ability to limit community resource consumption
## Use cases
1. As a cluster operator, I want to support multiple user communities on a single cluster.
2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
in those communities.
3. As a cluster operator, I want to limit the amount of resources each community can consume in order
to limit the impact to other communities using the cluster.
4. As a cluster user, I want to interact with resources that are pertinent to my user community in
isolation of what other user communities are doing on the cluster.
## Usage
Look [here](namespaces/) for an in-depth example of namespaces.
### Viewing namespaces
You can list the current namespaces in a cluster using:
```console
$ kubectl get namespaces
NAME          LABELS    STATUS
default       <none>    Active
kube-system   <none>    Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
You can also get the summary of a specific namespace using:
```console
$ kubectl get namespaces <name>
```
Or you can get detailed information with:
```console
$ kubectl describe namespaces <name>
Name:     default
Labels:   <none>
Status:   Active

No resource quota.

Resource Limits
 Type       Resource  Min  Max  Default
 ----       --------  ---  ---  -------
 Container  cpu       -    -    100m
```
Note that these details show both resource quota (if present) as well as resource limit ranges.
Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators
to define *Hard* resource usage limits that a *Namespace* may consume.
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.
See [Admission control: Limit Range](../design/admission_control_limit_range.md).
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects
See the [design doc](../design/namespaces.md#phases) for more details.
### Creating a new namespace
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```
Note that the name of your namespace must be a DNS-compatible label.
More information on the `finalizers` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).
Then run:
```console
$ kubectl create -f ./my-namespace.yaml
```
### Working in namespaces
See [Setting the namespace for a request](../../docs/user-guide/namespaces.md#setting-the-namespace-for-a-request)
and [Setting the namespace preference](../../docs/user-guide/namespaces.md#setting-the-namespace-preference).
### Deleting a namespace
You can delete a namespace with
```console
$ kubectl delete namespaces <insert-some-namespace-name>
```
**WARNING, this deletes _everything_ under the namespace!**
This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.
## Namespaces and DNS
When you create a [Service](../../docs/user-guide/services.md), it creates a corresponding [DNS entry](dns.md).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
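As a sketch (the service name and client pod are hypothetical), resolution from a pod in the `development` namespace might look like:

```console
$ kubectl exec busybox -- nslookup my-service
# short name: resolves within the local namespace,
# i.e. to my-service.development.svc.cluster.local

$ kubectl exec busybox -- nslookup my-service.production.svc.cluster.local
# FQDN: reaches the service of the same name in the production namespace
```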
## Design
Details of the design of namespaces in Kubernetes, including a [detailed example](../design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](../design/namespaces.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,256 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Kubernetes Namespaces

This file has moved to: http://kubernetes.github.io/docs/admin/namespaces/README/
Kubernetes _[namespaces](../../../docs/admin/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster.
It does this by providing the following:
1. A scope for [Names](../../user-guide/identifiers.md).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
### Step Zero: Prerequisites
This example assumes the following:
1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods.md)_, _[services](../../user-guide/services.md)_, and _[replication controllers](../../user-guide/replication-controller.md)_.
### Step One: Understand the default namespace
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
```console
$ kubectl get namespaces
NAME      LABELS
default   <none>
```
### Step Two: Create new namespaces
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication controllers
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
pods, services, and replication controllers that run the production site.
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
```json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}
```
[Download example](namespace-dev.json?raw=true)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
Create the development namespace using kubectl.
```console
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
```
Then let's create the production namespace using kubectl.
```console
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```console
$ kubectl get namespaces
NAME          LABELS             STATUS
default       <none>             Active
development   name=development   Active
production    name=production    Active
```
### Step Three: Create pods in each namespace
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
Users interacting with one namespace do not see the content in another namespace.
To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
We first check the current context:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
```console
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands created two request contexts you can switch between, depending on which
namespace you wish to work in.
Let's switch to operate in the development namespace.
```console
$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```console
$ kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: development
    user: lithe-cocoa-92103_kubernetes
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: production
    user: lithe-cocoa-92103_kubernetes
  name: prod
current-context: dev
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
```console
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica count of 2, running a pod called snowflake whose single container simply serves the hostname.
```console
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR        REPLICAS
snowflake    snowflake      kubernetes/serve_hostname   run=snowflake   2
$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
snowflake-8w0qn   1/1       Running   0          22s
snowflake-jrpzb   1/1       Running   0          22s
```
This is great: developers can do what they want without worrying about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```console
$ kubectl config use-context prod
```
The production namespace should be empty.
```console
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
```
Production likes to run cattle, so let's create some cattle pods.
```console
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR     REPLICAS
cattle       cattle         kubernetes/serve_hostname   run=cattle   5
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
cattle-97rva   1/1       Running   0          12s
cattle-i9ojn   1/1       Running   0          12s
cattle-qj3yv   1/1       Running   0          12s
cattle-yc7vn   1/1       Running   0          12s
cattle-zz7ea   1/1       Running   0          12s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -27,44 +27,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Network Plugins

This file has moved to: http://kubernetes.github.io/docs/admin/network-plugins/
__Disclaimer__: Network plugins are in alpha, and their details will change rapidly.
Network plugins in Kubernetes come in a few flavors:
* Plain vanilla exec plugins - deprecated in favor of CNI plugins.
* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
## Installation
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it found, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `network-plugin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `network-plugin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
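For example, a kubelet could be started with either style of plugin roughly as follows (the plugin directory and plugin name are illustrative):

```sh
# exec-style plugin probed from a custom directory
kubelet --network-plugin-dir=/usr/lib/kubernetes --network-plugin=bridge ...

# or select the CNI driver
kubelet --network-plugin=cni ...
```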
## Network Plugin Requirements
Besides providing the [`NetworkPlugin` interface](../../pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like docker with a bridge) work correctly with the iptables proxy.
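For example, a bridge-based plugin might apply the sysctl on each node like this (a sketch; the dotted form is equivalent to the `net/bridge/bridge-nf-call-iptables` path):

```sh
sysctl net.bridge.bridge-nf-call-iptables=1
```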
### Exec
Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e. if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](../../pkg/kubelet/network/exec/exec.go) for more details.
### CNI
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads the first CNI configuration file from `--network-plugin-dir` and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/appc/cni/blob/master/SPEC.md), and any required CNI plugins referenced by the configuration must be present in `/opt/cni/bin`.
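As a sketch, a minimal CNI configuration file using the standard `bridge` and `host-local` plugins might look like the following (the network name, bridge name, and subnet are illustrative):

```json
{
  "name": "k8s-pod-network",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```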
### kubenet
The Linux-only kubenet plugin provides functionality similar to the `--configure-cbr0` kubelet command-line option. It creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node through either configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host. The kubenet plugin is currently mutually exclusive with, and will eventually replace, the `--configure-cbr0` option. It is also currently incompatible with the flannel experimental overlay.
The plugin requires a few things:
* The standard CNI `bridge` and `host-local` plugins to be placed in `/opt/cni/bin`.
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
* Kubelet must also be run with the `--reconcile-cidr` argument to ensure the IP subnet assigned to the node by configuration or the controller-manager is propagated to the plugin
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
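Putting that together, a kubenet setup might be launched roughly as follows (the CIDRs are illustrative):

```sh
# on each node
kubelet --network-plugin=kubenet --reconcile-cidr ...

# on the master, if the controller-manager allocates node CIDRs
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
```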
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,201 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Networking in Kubernetes

This file has moved to: http://kubernetes.github.io/docs/admin/networking/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Networking in Kubernetes](#networking-in-kubernetes)
  - [Summary](#summary)
  - [Docker model](#docker-model)
  - [Kubernetes model](#kubernetes-model)
  - [How to achieve this](#how-to-achieve-this)
    - [Google Compute Engine (GCE)](#google-compute-engine-gce)
    - [L2 networks and linux bridging](#l2-networks-and-linux-bridging)
    - [Flannel](#flannel)
    - [OpenVSwitch](#openvswitch)
    - [Weave](#weave)
    - [Calico](#calico)
  - [Other reading](#other-reading)
<!-- END MUNGE: GENERATED_TOC -->
Kubernetes approaches networking somewhat differently than Docker does by
default. There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications: this is solved by
[pods](../user-guide/pods.md) and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [services](../user-guide/services.md).
4. External-to-Service communications: this is covered by [services](../user-guide/services.md).
## Summary
Kubernetes assumes that pods can communicate with other pods, regardless of
which host they land on. We give every pod its own IP address so you do not
need to explicitly create links between pods and you almost never need to deal
with mapping container ports to host ports. This creates a clean,
backwards-compatible model where pods can be treated much like VMs or physical
hosts from the perspectives of port allocation, naming, service discovery, load
balancing, application configuration, and migration.
To achieve this we must impose some requirements on how you set up your cluster
networking.
## Docker model
Before discussing the Kubernetes approach to networking, it is worthwhile to
review the "normal" way that networking works with Docker. By default, Docker
uses host-private networking. It creates a virtual bridge, called `docker0` by
default, and allocates a subnet from one of the private address blocks defined
in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each
container that Docker creates, it allocates a virtual ethernet device (called
`veth`) which is attached to the bridge. The veth is mapped to appear as `eth0`
in the container, using Linux namespaces. The in-container `eth0` interface is
given an IP address from the bridge's address range.
The result is that Docker containers can talk to other containers only if they
are on the same machine (and thus the same virtual bridge). Containers on
different machines cannot reach each other - in fact they may end up with the
exact same network ranges and IP addresses.
In order for Docker containers to communicate across nodes, they must be
allocated ports on the machine's own IP address, which are then forwarded or
proxied to the containers. This obviously means that containers must either
coordinate which ports they use very carefully or else be allocated ports
dynamically.
## Kubernetes model
Coordinating ports across multiple developers is very difficult to do at
scale and exposes users to cluster-level issues outside of their control.
Dynamic port allocation brings a lot of complications to the system - every
application has to take ports as flags, the API servers have to know how to
insert dynamic port numbers into configuration blocks, services have to know
how to find each other, etc. Rather than deal with this, Kubernetes takes a
different approach.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* all containers can communicate with all other containers without NAT
* all nodes can communicate with all containers (and vice-versa) without NAT
* the IP that a container sees itself as is the same IP that others see it as
What this means in practice is that you cannot just take two computers
running Docker and expect Kubernetes to work. You must ensure that the
fundamental requirements are met.
This model is not only less complex overall, but it is principally compatible
with the desire for Kubernetes to enable low-friction porting of apps from VMs
to containers. If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.
Until now this document has talked about containers. In reality, Kubernetes
applies IP addresses at the `Pod` scope - containers within a `Pod` share their
network namespaces - including their IP address. This means that containers
within a `Pod` can all reach each other's ports on `localhost`. This does imply
that containers within a `Pod` must coordinate port usage, but this is no
different than processes in a VM. We call this the "IP-per-pod" model. This
is implemented in Docker as a "pod container" which holds the network namespace
open while "app containers" (the things the user specified) join that namespace
with Docker's `--net=container:<id>` function.
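For illustration, a rough sketch of this pattern with plain Docker (the application image name is hypothetical):

```sh
# "pod container": holds the shared network namespace open
docker run -d --name pod-infra gcr.io/google_containers/pause:0.8.0

# "app container": joins that namespace and shares its IP
docker run -d --net=container:pod-infra my-app-image
```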
As with Docker, it is possible to request host ports, but this is reduced to a
very niche operation. In this case a port will be allocated on the host `Node`
and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the
existence or non-existence of host ports.
## How to achieve this
There are a number of ways that this network model can be implemented. This
document is not an exhaustive study of the various methods, but hopefully serves
as an introduction to various technologies and serves as a jumping-off point.
If some techniques become vastly preferable to others, we might detail them more
here.
### Google Compute Engine (GCE)
For the Google Compute Engine cluster configuration scripts, we use [advanced
routing](https://developers.google.com/compute/docs/networking#routing) to
assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that
subnet will be routed directly to the VM by the GCE network fabric. This is in
addition to the "main" IP address assigned to the VM, which is NAT'ed for
outbound internet access. A linux bridge (called `cbr0`) is configured to exist
on that subnet, and is passed to docker's `--bridge` flag.
We start Docker with:
```sh
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
This bridge is created by Kubelet (controlled by the `--configure-cbr0=true`
flag) according to the `Node`'s `spec.podCIDR`.
Docker will now allocate IPs from the `cbr-cidr` block. Containers can reach
each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable
within the GCE project network.
GCE itself does not know anything about these IPs, though, so it will not NAT
them for outbound internet traffic. To achieve that we use an iptables rule to
masquerade (aka SNAT - to make it seem as if packets came from the `Node`
itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).
```sh
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
```
Lastly we enable IP forwarding in the kernel (so the kernel will process
packets for bridged containers):
```sh
sysctl net.ipv4.ip_forward=1
```
The result of all this is that all `Pods` can reach each other and can egress
traffic to the internet.
### L2 networks and linux bridging
If you have a "dumb" L2 network, such as a simple switch in a "bare-metal"
environment, you should be able to do something similar to the above GCE setup.
Note that these instructions have only been tried very casually - it seems to
work, but has not been thoroughly tested. If you use this technique and
perfect the process, please let us know.
Follow the "With Linux Bridge devices" section of [this very nice
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Lars Kellogg-Stedman.
### Flannel
[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay
network that satisfies the Kubernetes requirements. It installs in minutes and
should get you up and running if the above techniques are not working. Many
people have reported success with Flannel and Kubernetes.
### OpenVSwitch
[OpenVSwitch](ovs-networking.md) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
### Weave
[Weave](https://github.com/zettio/weave) is yet another way to build an overlay
network, primarily aiming at Docker integration.
### Calico
[Calico](https://github.com/projectcalico/calico-containers) uses BGP to enable real container
IPs.
## Other reading
The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design
document](../design/networking.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,239 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Node

This file has moved to: http://kubernetes.github.io/docs/admin/node/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Node](#node)
  - [What is a node?](#what-is-a-node)
  - [Node Status](#node-status)
    - [Node Addresses](#node-addresses)
    - [Node Phase](#node-phase)
    - [Node Condition](#node-condition)
    - [Node Capacity](#node-capacity)
    - [Node Info](#node-info)
  - [Node Management](#node-management)
    - [Node Controller](#node-controller)
    - [Self-Registration of Nodes](#self-registration-of-nodes)
      - [Manual Node Administration](#manual-node-administration)
    - [Node capacity](#node-capacity)
  - [API Object](#api-object)
<!-- END MUNGE: GENERATED_TOC -->
## What is a node?
`Node` is a worker machine in Kubernetes, previously known as `Minion`. A node
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [Pods](../user-guide/pods.md) and is managed by the master
components. The services on a node include docker, kubelet and network proxy. See
[The Kubernetes Node](../design/architecture.md#the-kubernetes-node) section in the
architecture design doc for more details.
## Node Status
Node status describes the current status of a node. For now, there are the following
pieces of information:
### Node Addresses
The usage of these fields varies depending on your cloud provider or bare metal configuration.
* HostName: Generally not used
* ExternalIP: Generally the IP address of the node that is externally routable (available from outside the cluster)
* InternalIP: Generally the IP address of the node that is routable only within the cluster
### Node Phase
Node Phase is the current lifecycle phase of a node, one of `Pending`,
`Running` and `Terminated`.
* Pending: New nodes are created in this state. A node stays in this state until it is configured.
* Running: Node has been configured and the Kubernetes components are running
* Terminated: Node has been removed from the cluster. It will not receive any scheduling requests,
and any running pods will be removed from the node.
A node in the `Running` phase is a necessary but not sufficient condition for
scheduling Pods. For a node to be considered a scheduling candidate, it
must have appropriate conditions, see below.
### Node Condition
Node Condition describes the conditions of `Running` nodes. Currently the only
node condition is Ready. The Status of this condition can be True, False, or
Unknown. True means the Kubelet is healthy and ready to accept pods.
False means the Kubelet is not healthy and is not accepting pods. Unknown
means the Node Controller, which manages node lifecycle and is responsible for
setting the Status of the condition, has not heard from the
node recently (currently 40 seconds).
Node condition is represented as a json object. For example,
the following conditions mean the node is in a sane state:
```json
"conditions": [
{
"kind": "Ready",
"status": "True",
},
]
```
If the Status of the Ready condition
is Unknown or False for more than five minutes, then all of the Pods on the node are terminated by the Node Controller.
### Node Capacity
Describes the resources available on the node: CPUs, memory and the maximum
number of pods that can be scheduled onto the node.
### Node Info
General information about the node, for instance kernel version, Kubernetes version
(kubelet version, kube-proxy version), docker version (if used), OS name.
The information is gathered by Kubelet from the node.
## Node Management
Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
created by Kubernetes: it is either taken from cloud providers like Google Compute Engine,
or from your pool of physical or virtual machines. What this means is that when
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}
```
Kubernetes will create a Node object internally (the representation), and
validate the node by health checking based on the `metadata.name` field: we
assume `metadata.name` can be resolved. If the node is valid, i.e. all necessary
services are running, it is eligible to run a Pod; otherwise, it will be
ignored for any cluster activity, until it becomes valid. Note that Kubernetes
will keep the object for the invalid node unless it is explicitly deleted by the client, and it will keep
checking to see if it becomes valid.
Currently, there are three components that interact with the Kubernetes node interface: Node Controller, Kubelet, and kubectl.
### Node Controller
Node controller is a component in Kubernetes master which manages Node
objects. It performs two major functions: cluster-wide node synchronization
and single node life-cycle management.
Node controller has a sync loop that deletes Nodes from Kubernetes
based on all matching VM instances listed from the cloud provider. The sync period
can be controlled via flag `--node-sync-period`. If a new VM instance
gets created, Node Controller creates a representation for it. If an existing
instance gets deleted, Node Controller deletes the representation. Note however,
that Node Controller is unable to provision the node for you, i.e. it won't install
any binary; therefore, to
join a node to a Kubernetes cluster, you as an admin need to make sure proper services are
running in the node. In the future, we plan to automatically provision some node
services.
In general, node controller is responsible for updating the NodeReady condition of node
status to ConditionUnknown when a node becomes unreachable (e.g. due to the node being down),
and then later evicting all the pods from the node (using graceful termination) if the node
continues to be unreachable. (The current timeouts for those are 40s and 5m, respectively.)
It also allocates CIDR blocks to the new nodes.
### Self-Registration of Nodes
When kubelet flag `--register-node` is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options:
- `--api-servers=` tells the kubelet the location of the apiserver.
- `--kubeconfig` tells kubelet where to find credentials to authenticate itself to the apiserver.
- `--cloud-provider=` tells the kubelet how to talk to a cloud provider to read metadata about itself.
- `--register-node` tells the kubelet to create its own node resource.
Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies
its own. (In the future, we plan to limit authorization to only allow a kubelet to modify its own Node resource.)
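For example, a self-registering kubelet might be started roughly like this (the addresses and paths are illustrative):

```sh
kubelet --api-servers=https://kubernetes-master:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cloud-provider=gce \
  --register-node=true
```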
#### Manual Node Administration
A cluster administrator can create and modify Node objects.
If the administrator wishes to create node objects manually, set kubelet flag
`--register-node=false`.
The administrator can modify Node resources (regardless of the setting of `--register-node`).
Modifications include setting labels on the Node, and marking it unschedulable.
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
e.g. to constrain a Pod to only be eligible to run on a subset of the nodes.
Making a node unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:
```sh
kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": true}}'
```
Note that pods which are created by a daemonSet controller bypass the Kubernetes scheduler,
and do not respect the unschedulable attribute on a node. The assumption is that daemons belong on
the machine even if it is being drained of applications in preparation for a reboot.
### Node capacity
The capacity of the node (number of cpus and amount of memory) is part of the node resource.
Normally, nodes register themselves and report their capacity when creating the node resource. If
you are doing [manual node administration](#manual-node-administration), then you need to set node
capacity when adding a node.
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the limits of containers on the node is no greater than the node capacity. It
includes all containers started by kubelet, but not containers started directly by docker, nor
processes not in containers.
If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver
spec:
  containers:
  - name: sleep-forever
    image: gcr.io/google_containers/pause:0.8.0
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
```
Set the `cpu` and `memory` values to the amount of resources you want to reserve.
Place the file in the manifest directory (`--config=DIR` flag of kubelet). Do this
on each kubelet where you want to reserve resources.
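For example (the directory path is illustrative):

```sh
# kubelet reads static pod manifests, including the placeholder pod, from this directory
kubelet --config=/etc/kubernetes/manifests ...
```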
## API Object
Node is a top-level resource in the kubernetes REST API. More details about the
API object can be found at: [Node API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_node).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,20 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes OpenVSwitch GRE/VxLAN networking

This file has moved to: http://kubernetes.github.io/docs/admin/ovs-networking/
This document describes how OpenVSwitch is used to setup networking between pods across nodes.
The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network.
![ovs-networking](ovs-networking.png "OVS Networking")
The vagrant setup in Kubernetes does the following:
The docker bridge is replaced with a brctl-generated Linux bridge (kbr0) with a 256-address subnet. Basically, a node gets a 10.244.x.0/24 subnet and docker is configured to use that bridge instead of the default docker0 bridge.
Also, an OVS bridge is created (obr0) and added as a port to the kbr0 bridge. All OVS bridges across all nodes are linked with GRE tunnels. So, each node has an outgoing GRE tunnel to all other nodes. It does not strictly need to be a complete mesh; the meshier the better. STP (spanning tree) mode is enabled in the bridges to prevent loops.
Routing rules enable any 10.244.0.0/16 target to become reachable via the OVS bridge connected with the tunnels.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,156 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Resource Quotas

This file has moved to: http://kubernetes.github.io/docs/admin/resource-quota/
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern. Resource quotas
work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates a Resource Quota for each namespace.
- Users put compute resource requests on their pods. The sum of all resource requests across
all pods in the same namespace must not exceed any hard resource limit in any Resource Quota
document for the namespace. Note that we used to verify Resource Quota by taking the sum of
resource limits of the pods, but this was altered to use resource requests. Backwards compatibility
for those pods previously created is preserved because pods that only specify a resource limit have
their resource requests defaulted to match their defined limits. The user is only charged for the
resources they request in the Resource Quota versus their limits because the request is the minimum
amount of resource guaranteed by the cluster during scheduling. For more information on overcommit,
see [compute-resources](../user-guide/compute-resources.md).
- If creating a pod would cause the namespace to exceed any of the limits specified in
the Resource Quota for that namespace, then the request will fail with HTTP status
code `403 FORBIDDEN`.
- If quota is enabled in a namespace and the user does not specify *requests* on the pod for each
of the resources for which quota is enabled, then the POST of the pod will fail with HTTP
status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default
values of *limits* (then resource *requests* would be equal to *limits* by default, see
[admission controller](admission-controllers.md)) before the quota is checked to avoid this problem.
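As a sketch of that hint, a LimitRange supplying default container limits (values are illustrative) might look like:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 512Mi
```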
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM and 16 cores, let team A use 20 GiB and 10 cores,
let team B use 10 GiB and 4 cores, and hold 2 GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already-running pods.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as
one of its arguments.
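For example (the exact list of admission controllers is deployment-specific):

```sh
kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota ...
```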
Resource Quota is enforced in a particular namespace when there is a
`ResourceQuota` object in that namespace. There should be at most one
`ResourceQuota` object in a namespace.
## Compute Resource Quota
The total sum of [compute resources](../user-guide/compute-resources.md) requested by pods
in a namespace can be limited. The following compute resource types are supported:
| ResourceName | Description |
| ------------ | ----------- |
| cpu | Total cpu requests of containers |
| memory | Total memory requests of containers |
For example, `cpu` quota sums up the `resources.requests.cpu` fields of every
container of every pod in the namespace, and enforces a maximum on that sum.
## Object Count Quota
The number of objects of a given type can be restricted. The following types
are supported:
| ResourceName | Description |
| ------------ | ----------- |
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of [resource quotas](admission-controllers.md#resourcequota) |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of [persistent volume claims](../user-guide/persistent-volumes.md#persistentvolumeclaims) |
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.
You might want to set a pods quota on a namespace
to avoid the case where a user creates many small pods and exhausts the cluster's
supply of Pod IPs.
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```console
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "quota"
  },
  "spec": {
    "hard": {
      "memory": "1Gi",
      "cpu": "20",
      "pods": "10",
      "services": "5",
      "replicationcontrollers": "20",
      "resourcequotas": "1"
    }
  }
}
EOF
$ kubectl create -f ./quota.json
$ kubectl get quota
NAME
quota
$ kubectl describe quota quota
Name:                   quota
Resource                Used    Hard
--------                ----    ----
cpu                     0m      20
memory                  0       1Gi
pods                    5       10
replicationcontrollers  5       20
resourcequotas          1       1
services                3       5
```
## Quota and Cluster Capacity
Resource Quota objects are independent of the Cluster Capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- proportionally divide total cluster resources among several teams.
- allow each tenant to grow resource usage as needed, but have a generous
limit to prevent accidental resource exhaustion.
- detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.
Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.
## Example
See a [detailed example of how to use resource quota](resourcequota/).
## Read More
See [ResourceQuota design doc](../design/admission_control_resource_quota.md) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -31,165 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Resource Quota
========================================
This example demonstrates how [resource quota](../../admin/admission-controllers.md#resourcequota) and
[limit ranger](../../admin/admission-controllers.md#limitranger) can be applied to a Kubernetes namespace.
See [ResourceQuota design doc](../../design/admission_control_resource_quota.md) for more information.
This example assumes you have a functional Kubernetes setup.

This file has moved to: http://kubernetes.github.io/docs/admin/resourcequota/README/
Step 1: Create a namespace
-----------------------------------------
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called quota-example:
```console
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
$ kubectl get namespaces
NAME            LABELS    STATUS    AGE
default         <none>    Active    2m
quota-example   <none>    Active    39s
```
Step 2: Apply a quota to the namespace
-----------------------------------------
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
system will be able to consume as much CPU and memory as is available on the node that executes it.
Users may want to restrict how much of the cluster resources a given namespace may consume
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace.
Let's create a simple quota in our namespace:
```console
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
You can describe your current quota usage to see what resources are being consumed in your
namespace.
```console
$ kubectl describe quota quota --namespace=quota-example
Name:                   quota
Namespace:              quota-example
Resource                Used    Hard
--------                ----    ----
cpu                     0       20
memory                  0       1Gi
persistentvolumeclaims  0       10
pods                    0       10
replicationcontrollers  0       20
resourcequotas          1       1
secrets                 1       10
services                0       5
Step 3: Applying default resource requests and limits
-----------------------------------------
Pod authors rarely specify resource requests and limits for their pods.
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
cpu and memory by creating an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
```console
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
```
Now let's look at the pods that were created.
```console
$ kubectl get pods --namespace=quota-example
NAME      READY     STATUS    RESTARTS   AGE
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
```console
$ kubectl describe rc nginx --namespace=quota-example
Name:         nginx
Namespace:    quota-example
Image(s):     nginx
Selector:     run=nginx
Labels:       run=nginx
Replicas:     0 current / 1 desired
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen  LastSeen  Count  From                      SubobjectPath  Type     Reason        Message
  ─────────  ────────  ─────  ────                      ─────────────  ────     ──────        ───────
  12s        12s       2      {replication-controller }                Warning  FailedCreate  Error creating: Pod "nginx-" is forbidden: memory is limited by quota, must make explicit request.
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
```console
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
Name:       limits
Namespace:  quota-example
Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---  ---  ---------------  -------------  -----------------------
Container  memory    -    -    256Mi            512Mi          -
Container  cpu       -    -    100m             200m           -
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
Now that we have applied default resource *request* for our namespace, our replication controller should be able to
create its pods.
```console
$ kubectl get pods --namespace=quota-example
NAME          READY     STATUS    RESTARTS   AGE
nginx-fca65   1/1       Running   0          1m
```
And if we print out our quota usage in the namespace:
```console
$ kubectl describe quota quota --namespace=quota-example
Name:                   quota
Namespace:              quota-example
Resource                Used       Hard
--------                ----       ----
cpu                     100m       20
memory                  268435456  1Gi
persistentvolumeclaims  0          10
pods                    1          10
replicationcontrollers  1          20
resourcequotas          1          1
secrets                 1          10
services                0          5
```
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
and the usage is being tracked by the Kubernetes system properly.
Summary
----------------------------
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
by the namespace quota. The resource consumption is measured by resource *request* in pod specification.
Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to
meet your end goal.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -32,105 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Using Salt to configure Kubernetes

This file has moved to: http://kubernetes.github.io/docs/admin/salt/
The Kubernetes cluster can be configured using Salt.
The Salt scripts are shared across multiple hosting providers. Depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations, so it's important to understand this background before making Salt changes in order to avoid breaking Kubernetes hosting for other providers.
## Salt cluster setup
The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
```console
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```
The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes.
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
## Standalone Salt Configuration on GCE
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
## Salt security
*(Not applicable on default GCE setup.)*
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
```console
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True
```
## Salt minion configuration
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
An example file is presented below using the Vagrant based environment.
```console
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
  etcd_servers: $MASTER_IP
  cloud: vagrant
  roles:
    - kubernetes-master
```
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
Key | Value
------------- | -------------
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
`node_ip` | (Optional) The IP address to use to address this node
`hostname_override` | (Optional) Mapped to the kubelet hostname-override
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
These keys may be leveraged by the Salt sls files to branch behavior.
In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following.
```jinja
{% if grains['os_family'] == 'RedHat' %}
# something specific to a Red Hat environment (CentOS, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
# something specific to a Debian environment (apt-get, initd)
{% endif %}
```
## Best Practices
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
## Future enhancements (Networking)
Per-pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox them, as not all providers use the same mechanisms (iptables, openvswitch, etc.)
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
## Further reading
The [cluster/saltbase](http://releases.k8s.io/HEAD/cluster/saltbase/) tree has more details on the current SaltStack configuration.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,98 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Admin Guide to Service Accounts

This file has moved to: http://kubernetes.github.io/docs/admin/service-accounts-admin/
*This is a Cluster Administrator guide to service accounts. It assumes knowledge of
the [User Guide to Service Accounts](../user-guide/service-accounts.md).*
*Support for authorization and user accounts is planned but incomplete. Sometimes
incomplete features are referred to in order to better describe service accounts.*
## User accounts vs service accounts
Kubernetes distinguishes between the concept of a user account and a service account
for a number of reasons:
- User accounts are for humans. Service accounts are for processes, which
run in pods.
- User accounts are intended to be global. Names must be unique across all
namespaces of a cluster; the future user resource will not be namespaced.
Service accounts are namespaced.
- Typically, a cluster's User accounts might be synced from a corporate
database, where new user account creation requires special privileges and
is tied to complex business processes. Service account creation is intended
to be more lightweight, allowing cluster users to create service accounts for
specific tasks (i.e. principle of least privilege).
- Auditing considerations for humans and service accounts may differ.
- A config bundle for a complex system may include definition of various service
accounts for components of that system. Because service accounts can be created
ad-hoc and have namespaced names, such config is portable.
## Service account automation
Three separate components cooperate to implement the automation around service accounts:
- A Service account admission controller
- A Token controller
- A Service account controller
### Service Account Admission Controller
The modification of pods is implemented via a plugin
called an [Admission Controller](admission-controllers.md). It is part of the apiserver.
It acts synchronously to modify pods as they are created or updated. When this plugin is active
(and it is by default on most distributions), it does the following when a pod is created or modified:
1. If the pod does not have a `ServiceAccount` set, it sets the `ServiceAccount` to `default`.
2. It ensures that the `ServiceAccount` referenced by the pod exists, and rejects the pod otherwise.
3. If the pod does not contain any `ImagePullSecrets`, then the `ImagePullSecrets` of the
`ServiceAccount` are added to the pod.
4. It adds a `volume` to the pod which contains a token for API access.
5. It adds a `volumeSource` to each container of the pod, mounted at `/var/run/secrets/kubernetes.io/serviceaccount`.
### Token Controller
TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets.
- observes secret addition, ensures the referenced ServiceAccount exists, and adds a token to the secret if needed.
- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed.
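You can watch the Token Controller's work from the command line; both commands below are standard `kubectl` reads (the names will differ in your cluster):

```sh
# List service accounts in the current namespace
kubectl get serviceaccounts
# List secrets; token secrets generated by the controller have type kubernetes.io/service-account-token
kubectl get secrets
```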
#### To create additional API tokens
A controller loop ensures a secret with an API token exists for each service
account. To create additional API tokens for a service account, create a secret
of type `ServiceAccountToken` with an annotation referencing the service
account, and the controller will update it with a generated token:
secret.json:

```json
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "mysecretname",
"annotations": {
"kubernetes.io/service-account.name": "myserviceaccount"
}
},
"type": "kubernetes.io/service-account-token"
}
```
```sh
kubectl create -f ./secret.json
kubectl describe secret mysecretname
```
#### To delete/invalidate a service account token
```sh
kubectl delete secret mysecretname
```
### Service Account Controller
The Service Account Controller manages ServiceAccounts inside namespaces, and ensures
a ServiceAccount named "default" exists in every active namespace.
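You can see this controller's effect directly; every active namespace should already contain a `default` service account:

```sh
# Each namespace listed should include a "default" service account
kubectl get serviceaccounts --all-namespaces
```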
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,131 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Static pods

This file has moved to: http://kubernetes.github.io/docs/admin/static-pods/
**If you are running clustered Kubernetes and are using static pods to run a pod on every node, you should probably be using a [DaemonSet](daemons.md)!**
*Static pods* are managed directly by the kubelet daemon on a specific node, without the API server observing them. A static pod does not have any associated replication controller; the kubelet daemon itself watches it and restarts it when it crashes. There is no health check, though. Static pods are always bound to one kubelet daemon and always run on the same node as it.
The kubelet automatically creates a so-called *mirror pod* on the Kubernetes API server for each static pod, so the pods are visible there, but they cannot be controlled from the API server.
## Static pod creation
Static pods can be created in two ways: either by using configuration file(s) or by HTTP.
### Configuration files
The configuration files are just standard pod definitions in JSON or YAML format in a specific directory. Use `kubelet --config=<the directory>` to start the kubelet daemon, which periodically scans the directory and creates/deletes static pods as YAML/JSON files appear/disappear there.
For example, this is how to start a simple web server as a static pod:
1. Choose a node where we want to run the static pod. In this example, it's `my-node1`.
```console
[joe@host ~] $ ssh my-node1
```
2. Choose a directory, say `/etc/kubelet.d`, and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
```console
[root@my-node1 ~] $ mkdir /etc/kubelet.d/
[root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF
```
3. Configure your kubelet daemon on the node to use this directory by running it with the `--config=/etc/kubelet.d/` argument. On Fedora 21 with Kubernetes 0.17, edit `/etc/kubernetes/kubelet` to include this line:
```
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --config=/etc/kubelet.d/"
```
Instructions for other distributions or Kubernetes installations may vary.
4. Restart the kubelet. On Fedora 21, this is:
```console
[root@my-node1 ~] $ systemctl restart kubelet
```
## Pods created via HTTP
The kubelet periodically downloads a file specified by the `--manifest-url=<URL>` argument and interprets it as a JSON/YAML file with a pod definition. It works the same as `--config=<directory>`, i.e. the file is re-read every now and then and changes are applied to running static pods (see below).
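A sketch of such an invocation (the URL is illustrative, and all other kubelet flags are omitted for brevity):

```sh
# Serve the pod manifest over HTTP instead of reading it from a local directory
kubelet --manifest-url=http://my-config-server/static-web.yaml
```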
## Behavior of static pods
When the kubelet starts, it automatically starts all pods defined in the directory specified by the `--config=` or `--manifest-url=` argument, i.e. our static-web pod. (It may take some time to pull the nginx image, so be patient…):
```console
[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
```console
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
static-web-my-node1 172.17.0.3 my-node1/192.168.100.71 role=myrole Running 11 minutes
web nginx Running 11 minutes
```
Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
Notice that we cannot delete the pod via the API server (e.g. using the [`kubectl`](../user-guide/kubectl/kubectl.md) command); the kubelet simply won't remove it.
```console
[joe@my-master ~] $ kubectl delete pod static-web-my-node1
pods/static-web-my-node1
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST ...
static-web-my-node1 172.17.0.3 my-node1/192.168.100.71 ...
```
Back on our `my-node1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:
```console
[joe@host ~] $ ssh my-node1
[joe@my-node1 ~] $ docker stop f6d05272b57e
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
```
## Dynamic addition and removal of static pods
The running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
```console
[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
// no nginx container is running
[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,201 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
Creating a Kubernetes Cluster
----------------------------------------

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/README/
Kubernetes can run on a range of platforms, from your laptop, to VMs on a cloud provider, to a rack of
bare metal servers. The effort required to set up a cluster varies from running a single command to
crafting your own customized cluster. We'll guide you in picking a solution that fits your needs.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Picking the Right Solution](#picking-the-right-solution)
- [Local-machine Solutions](#local-machine-solutions)
- [Hosted Solutions](#hosted-solutions)
- [Turn-key Cloud Solutions](#turn-key-cloud-solutions)
- [Custom Solutions](#custom-solutions)
- [Cloud](#cloud)
- [On-Premises VMs](#on-premises-vms)
- [Bare Metal](#bare-metal)
- [Integrations](#integrations)
- [Table of Solutions](#table-of-solutions)
<!-- END MUNGE: GENERATED_TOC -->
## Picking the Right Solution
If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](docker.md) solution.
The local Docker-based solution is one of several [Local cluster](#local-machine-solutions) solutions
that are quick to set up, but are limited to running on one machine.
When you are ready to scale up to more machines and higher availability, a [Hosted](#hosted-solutions)
solution is the easiest to create and maintain.
[Turn-key cloud solutions](#turn-key-cloud-solutions) require only a few commands to create
and cover a wider range of cloud providers.
[Custom solutions](#custom-solutions) require more effort to set up, but they cover a wider range of
environments and vary from step-by-step instructions to general advice for setting up
a Kubernetes cluster from scratch.
### Local-machine Solutions
Local-machine solutions create a single cluster with one or more Kubernetes nodes on a single
physical machine. Setup is completely automated and doesn't require a cloud provider account.
But their size and availability are limited to that of a single machine.
The local-machine solutions are:
- [Local Docker-based](docker.md) (recommended starting point)
- [Vagrant](vagrant.md) (works on any platform with Vagrant: Linux, MacOS, or Windows.)
- [No-VM local cluster](../devel/running-locally.md) (Linux only)
### Hosted Solutions
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.
### Turn-key Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
few commands, and have active community support.
- [GCE](gce.md)
- [AWS](aws.md)
- [Azure](coreos/azure/README.md)
### Custom Solutions
Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many
base operating systems.
If you can find a guide below that matches your needs, use it. It may be a little out of date, but
it will be easier than starting from scratch. If you do want to start from scratch because you
have special requirements or just because you want to understand what is underneath a Kubernetes
cluster, try the [Getting Started from Scratch](scratch.md) guide.
If you are interested in supporting Kubernetes on a new platform, check out our [advice for
writing a new solution](../../docs/devel/writing-a-getting-started-guide.md).
#### Cloud
These solutions are combinations of cloud provider and OS not covered by the above solutions.
- [AWS + CoreOS](coreos.md)
- [GCE + CoreOS](coreos.md)
- [AWS + Ubuntu](juju.md)
- [Joyent + Ubuntu](juju.md)
- [Rackspace + CoreOS](rackspace.md)
#### On-Premises VMs
- [Vagrant](coreos.md) (uses CoreOS and flannel)
- [CloudStack](cloudstack.md) (uses Ansible, CoreOS and flannel)
- [Vmware](vsphere.md) (uses Debian)
- [Juju](juju.md) (uses Juju, Ubuntu and flannel)
- [Vmware](coreos.md) (uses CoreOS and flannel)
- [libvirt CoreOS](libvirt-coreos.md) (uses CoreOS)
- [oVirt](ovirt.md)
- [libvirt](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel)
- [KVM](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel)
#### Bare Metal
- [Offline](coreos/bare_metal_offline.md) (no internet required. Uses CoreOS and Flannel)
- [Fedora (Ansible)](fedora/fedora_ansible_config.md)
- [Fedora single node](fedora/fedora_manual_config.md)
- [Fedora multi node](fedora/flannel_multi_node_cluster.md)
- [Centos](centos/centos_manual_config.md)
- [Ubuntu](ubuntu.md)
- [Docker Multi Node](docker-multinode.md)
#### Integrations
These solutions provide integration with 3rd party schedulers, resource managers, and/or lower level platforms.
- [Kubernetes on Mesos](mesos.md)
- Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters
- [Kubernetes on DCOS](dcos.md)
- Community Edition DCOS uses AWS
- Enterprise Edition DCOS supports cloud hosting, on-premise VMs, and bare metal
## Table of Solutions
Here are all the solutions mentioned above in table form.
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | [✓][3] | Commercial
Vagrant | Saltstack | Fedora | flannel | [docs](vagrant.md) | [✓][2] | Project
GCE | Saltstack | Debian | GCE | [docs](gce.md) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](coreos/azure/README.md) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Docker Single Node | custom | N/A | local | [docs](docker.md) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](fedora/fedora_ansible_config.md) | | Project
Bare-metal | custom | Fedora | _none_ | [docs](fedora/fedora_manual_config.md) | | Project
Bare-metal | custom | Fedora | flannel | [docs](fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM | custom | Fedora | flannel | [docs](fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
Mesos/Docker | custom | Ubuntu | Docker | [docs](mesos-docker.md) | [✓][4] | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
Mesos/GCE | | | | [docs](mesos.md) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
DCOS | Marathon | CoreOS/Alpine | custom | [docs](dcos.md) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
AWS | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community
GCE | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ([@pires](https://github.com/pires))
Vagrant | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](coreos/bare_metal_offline.md) | | Community ([@jeffbean](https://github.com/jeffbean))
Bare-metal | CoreOS | CoreOS | Calico | [docs](coreos/bare_metal_calico.md) | [✓][5] | Community ([@caseydavenport](https://github.com/caseydavenport))
CloudStack | Ansible | CoreOS | flannel | [docs](cloudstack.md) | | Community ([@runseb](https://github.com/runseb))
Vmware | | Debian | OVS | [docs](vsphere.md) | | Community ([@pietern](https://github.com/pietern))
Bare-metal | custom | CentOS | _none_ | [docs](centos/centos_manual_config.md) | | Community ([@coolsvap](https://github.com/coolsvap))
AWS | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Joyent | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
AWS | Saltstack | Ubuntu | OVS | [docs](aws.md) | | Community ([@justinsb](https://github.com/justinsb))
Bare-metal | custom | Ubuntu | Calico | [docs](ubuntu-calico.md) | | Community ([@djosborne](https://github.com/djosborne))
Bare-metal | custom | Ubuntu | flannel | [docs](ubuntu.md) | | Community ([@resouer](https://github.com/resouer), [@dalanlan](https://github.com/dalanlan), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](libvirt-coreos.md) | | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt | | | | [docs](ovirt.md) | | Community ([@simon3z](https://github.com/simon3z))
Rackspace | CoreOS | CoreOS | flannel | [docs](rackspace.md) | | Community ([@doublerr](https://github.com/doublerr))
any | any | any | any | [docs](scratch.md) | | Community ([@erictune](https://github.com/erictune))
*Note*: The above table is ordered by version tested/used in notes, followed by support level.
Definition of columns:
- **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on.
- **OS** is the base operating system of the nodes.
- **Config. Mgmt** is the configuration management system that helps install and maintain Kubernetes software on the
nodes.
- **Networking** is what implements the [networking model](../../docs/admin/networking.md). Those with networking type
_none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
- **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
tests for supporting the API and base features of Kubernetes v1.0.0.
- Support Levels
- **Project**: Kubernetes Committers regularly use this configuration, so it usually works with the latest release
of Kubernetes.
- **Commercial**: A commercial offering with its own support arrangements.
- **Community**: Actively supported by community contributions. May not work with more recent releases of Kubernetes.
- **Inactive**: No active maintainer. Not recommended for first-time Kubernetes users, and may be deleted soon.
- **Notes** is relevant information such as the version of Kubernetes used.
<!-- reference style links below here -->
<!-- GCE conformance test result -->
[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
<!-- Vagrant conformance test result -->
[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
<!-- GKE conformance test result -->
[3]: https://gist.github.com/erictune/2f39b22f72565365e59b
<!-- Mesos/Docker conformance test result -->
[4]: https://gist.github.com/sttts/d27f3b879223895494d4
<!-- Calico/CoreOS conformance test result -->
[5]: https://gist.github.com/caseydavenport/98ca87e709b21f03d195
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -31,158 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on AWS EC2
--------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/aws/

**Table of Contents**
- [Prerequisites](#prerequisites)
- [Cluster turnup](#cluster-turnup)
- [Supported procedure: `get-kube`](#supported-procedure-get-kube)
- [Alternatives](#alternatives)
- [Getting started with your cluster](#getting-started-with-your-cluster)
- [Command line administration tool: `kubectl`](#command-line-administration-tool-kubectl)
- [Examples](#examples)
- [Tearing down the cluster](#tearing-down-the-cluster)
- [Further reading](#further-reading)
## Prerequisites
1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started
2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli)
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use via the `AWS_DEFAULT_PROFILE` environment variable:
```bash
export AWS_DEFAULT_PROFILE=myawsprofile
```
## Cluster turnup
### Supported procedure: `get-kube`
```bash
#Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
#Using cURL
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/HEAD/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/HEAD/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh).
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written to `~/.kube/config`; they will be necessary to use the CLI or HTTP basic auth.
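Once the script finishes, you can inspect what it wrote (assumes the default kubeconfig location):

```bash
# Show the cluster, user, and context entries generated by kube-up
kubectl config view
```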
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with EC2 instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh) to change this behavior as follows:
```bash
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
```
If you don't specify master and minion sizes, the scripts will attempt to guess
the correct size of the master and worker nodes based on `${NUM_NODES}`. In
version 1.2 these defaults are:
* For the master, for clusters of less than 150 nodes it will use an
`m3.medium`, for clusters of greater than 150 nodes it will use an
`m3.large`.
* For worker nodes, for clusters less than 50 nodes it will use a `t2.micro`,
for clusters between 50 and 150 nodes it will use a `t2.small` and for
clusters with greater than 150 nodes it will use a `t2.medium`.
WARNING: beware that `t2` instances receive a limited number of CPU credits per hour and might not be suitable for clusters where the CPU is used
consistently. As a rough estimation, consider 15 pods/node the absolute limit a `t2.large` instance can handle before it starts exhausting its CPU credits
steadily, although this number depends heavily on the usage.
In prior versions of Kubernetes, we defaulted the master node to a t2-class
instance, but found that this sometimes gave hard-to-diagnose problems when the
master ran out of memory or CPU credits. If you are running a test cluster
and want to save money, you can specify `export MASTER_SIZE=t2.micro`, but if
your master pauses, do check the CPU credits in the AWS console.
For production usage, we recommend at least `export MASTER_SIZE=m3.medium` and
`export NODE_SIZE=m3.medium`. And once you get above a handful of nodes, be
aware that one m3.large instance has more storage than two m3.medium instances,
for the same price.
We generally recommend the m3 instances over the m4 instances, because the m3
instances include local instance storage. Historically local instance storage
has been more reliable than AWS EBS, and performance should be more consistent.
The ephemeral nature of this storage is also a good match for ephemeral container
workloads.
If you use an m4 instance, or another instance type which does not have local
instance storage, you may want to increase the `NODE_ROOT_DISK_SIZE` value,
although the default value of 32 is probably sufficient for the smaller
instance types in the m4 family.
The script will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` environment variable to point to your private key.
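For example (the path is illustrative; use wherever your key actually lives):

```bash
# Point the cluster scripts at the private key for the existing "kubernetes" keypair
export AWS_SSH_KEY=$HOME/.ssh/kubernetes_id_rsa
```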
### Alternatives
CoreOS maintains [a CLI tool](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html), `kube-aws`, that will create and manage a Kubernetes cluster based on [CoreOS](http://www.coreos.com), using AWS tools: EC2, CloudFormation and Autoscaling.
## Getting started with your cluster
### Command line administration tool: `kubectl`
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
Next, add the appropriate binary folder to your `PATH` to access kubectl:
```bash
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
An up-to-date documentation page for this tool is available here: [kubectl manual](../../docs/user-guide/kubectl/kubectl.md)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](../../docs/user-guide/kubeconfig-file.md)
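To verify that `kubectl` can reach the new cluster, a quick smoke test:

```bash
# Both commands read credentials from ~/.kube/config by default
kubectl cluster-info
kubectl get nodes
```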
### Examples
See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](../../examples/guestbook/)
For more complete applications, please look in the [examples directory](../../examples/)
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
```bash
cluster/kube-down.sh
```
## Further reading
Please see the [Kubernetes docs](../../docs/) for more details on administering
and using a Kubernetes cluster.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,10 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on Microsoft Azure
----------------------------------

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/azure/
Check out the [CoreOS Azure getting started guide](coreos/azure/README.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,29 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting a Binary Release

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/binary_release/
You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.
### Prebuilt Binary Release
The list of binary releases is available for download from the [GitHub Kubernetes repo release page](https://github.com/kubernetes/kubernetes/releases).
Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud.
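For example, assuming the downloaded tarball is named `kubernetes.tar.gz`:

```bash
# Unpack the release and enter the created directory
tar -xzf kubernetes.tar.gz
cd kubernetes
```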
### Building from source
Get the Kubernetes source. If you are simply building a release from source, there is no need to set up a full Go environment, as all building happens in a Docker container.
Building a release is simple.
```bash
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](http://releases.k8s.io/HEAD/build/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -31,162 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on [CentOS](http://centos.org)
----------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/centos/centos_manual_config/

**Table of Contents**
- [Prerequisites](#prerequisites)
- [Starting a cluster](#starting-a-cluster)
## Prerequisites
You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration, so that you understand all the underlying packages, services, ports, etc.
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the node and run kubelet, proxy, cadvisor and docker.
**System Information:**
Hosts:
```
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create a virt7-docker-common-release repo on all hosts - centos-{master,minion} - with the following information (e.g. in `/etc/yum.repos.d/virt7-docker-common-release.repo`).
```
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```sh
yum -y install --enablerepo=virt7-docker-common-release kubernetes
```
* Add the master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS)
```sh
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain:
```sh
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```sh
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
```sh
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```sh
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
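At this point the apiserver should be answering on port 8080. A quick sanity check (assumes no firewall in the way):

```sh
curl http://centos-master:8080/version
```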
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet and start the kubelet and proxy***
* Edit /etc/kubernetes/kubelet to appear as such:
```sh
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=centos-minion"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
```sh
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
```console
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](../../../docs/user-guide/walkthrough/README.md)!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -31,90 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on [CloudStack](http://cloudstack.apache.org)
------------------------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/cloudstack/

**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Clone the playbook](#clone-the-playbook)
- [Create a Kubernetes cluster](#create-a-kubernetes-cluster)
### Introduction
CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. [Exoscale](http://exoscale.ch) for instance makes a [CoreOS](http://coreos.com) template available, therefore instructions to deploy Kubernetes on CoreOS can be used. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](coreos/coreos_multinode_cluster.md).
This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
### Prerequisites
```sh
$ sudo apt-get install -y python-pip
$ sudo pip install ansible
$ sudo pip install cs
```
[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
Set your CloudStack endpoint, API keys and HTTP method used.
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
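For example (all values are placeholders):

```sh
export CLOUDSTACK_ENDPOINT=<your cloudstack api endpoint>
export CLOUDSTACK_KEY=<your api access key>
export CLOUDSTACK_SECRET=<your api secret key>
export CLOUDSTACK_METHOD=post
```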
Or create a `~/.cloudstack.ini` file:
```
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
```
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
### Clone the playbook
```sh
$ git clone --recursive https://github.com/runseb/ansible-kubernetes.git
$ cd ansible-kubernetes
```
The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is setup in this repository as a submodule, hence the `--recursive`.
### Create a Kubernetes cluster
You simply need to run the playbook.
```sh
$ ansible-playbook k8s.yml
```
Some variables can be edited in the `k8s.yml` file.
```yaml
vars:
  ssh_key: k8s
  k8s_num_nodes: 2
  k8s_security_group_name: k8s
  k8s_node_prefix: k8s2
  k8s_template: Linux CoreOS alpha 435 64-bit 10GB Disk
  k8s_instance_type: Tiny
```
This will start a Kubernetes master node and a number of compute nodes (by default 2).
The `instance_type` and `template` are specific to [exoscale](http://exoscale.ch) by default; edit them to specify the template and instance type (i.e. service offering) of your CloudStack cloud.
Check the tasks and templates in `roles/k8s` if you want to modify anything.
Once the playbook has finished, it will print out the IP of the Kubernetes master:

```
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
```
SSH to it using the key that was created, as the _core_ user, and you can list the machines in your cluster:

```console
$ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
$ fleetctl list-machines
MACHINE      IP              METADATA
a017c422...  <node #1 IP>    role=node
ad13bf84...  <master IP>     role=master
e9af8293...  <node #2 IP>    role=node
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,85 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting Started on [CoreOS](https://coreos.com)

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/
There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/):
### Official CoreOS Guides
These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
[**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
Guide and CLI tool for setting up a multi-node cluster on AWS. CloudFormation is used to set up a master and multiple workers in auto-scaling groups.
<hr/>
[**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
Guide to setting up a multi-node cluster on Vagrant. The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.
<hr/>
[**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
The quickest way to set up a Kubernetes development environment locally. As easy as `git clone`, `vagrant up` and configuring `kubectl`.
<hr/>
[**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS. Repeat the master or worker steps to configure more machines of that role.
### Community Guides
These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.
[**Multi-node Cluster**](coreos/coreos_multinode_cluster.md)
Set up a single master, multi-worker cluster on your choice of platform: AWS, GCE, or VMware Fusion.
<hr/>
[**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).
<hr/>
[**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
Configure a Vagrant-based cluster of 3 machines with networking provided by Weave.
<hr/>
[**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware
<hr/>
[**Single-node cluster using a small OS X App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
Guide to running a solo cluster (master + worker) controlled by an OS X menubar application. Uses xhyve + CoreOS under the hood.
<hr/>
[**Multi-node cluster with Vagrant and fleet units using a small OS X App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
Guide to running a single master, multi-worker cluster controlled by an OS X menubar application. Uses Vagrant under the hood.
<hr/>
[**Resizable multi-node cluster on Azure with Weave**](coreos/azure/README.md)
Guide to running an HA etcd cluster with a single master on Azure. Uses the Azure node.js CLI to resize the cluster.
<hr/>
[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
Configure a single master, single worker cluster on VMware ESXi.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos.md?pixel)]()

@@ -31,245 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Kubernetes on Azure with CoreOS and [Weave](http://weave.works)
---------------------------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/azure/README/

**Table of Contents**
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Let's go!](#lets-go)
- [Deploying the workload](#deploying-the-workload)
- [Scaling](#scaling)
- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
- [Next steps](#next-steps)
- [Tear down...](#tear-down)
## Introduction
In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking in a transparent yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
1. You need an Azure account.
## Let's go!
To get started, you need to checkout the code:
```sh
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with:
```sh
npm install
```
Now, all you need to do is:
```sh
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, plus 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
If you need to pass Azure-specific options to the creation script, you can do so via additional environment variables, e.g.
```
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```
![VMs in Azure](initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:
```console
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's log in to the master node like so:
```sh
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
```console
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```sh
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Pending` to `Running`.
```sh
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```console
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).
First, let's set the size of the new VMs:
```sh
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```console
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00',
'etcd-01',
'etcd-02',
'kube-00',
'kube-01',
'kube-02',
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```console
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```console
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```console
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```console
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You will now have more instances of front-end Guestbook apps and Redis slaves; and if you look up all pods labeled `name=frontend`, you should see one running on each node.
```console
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
```
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
info: vm endpoint create command OK
info: Executing command vm endpoint show
+ Getting virtual machines
data: Name : tcp-80-31605
data: Local port : 31605
data: Protocol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
You now have a full-blown cluster running in Azure, congrats!
You should probably try deploying other [example apps](../../../../examples/) or write your own ;)
## Tear down...
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
```sh
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,203 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
Bare Metal Kubernetes on CoreOS with Calico Networking
------------------------------------------

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_calico/
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
- 1 Kubernetes Master
- 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
## Download CoreOS
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set Permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
4. Restart the kubelet to pick up the changes:
```
sudo systemctl restart kubelet
```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment; a scripted substitution sketch follows these steps.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
- `<CA_CERT>`: Complete contents of `ca.pem`
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> -----BEGIN CERTIFICATE-----
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
> ...<snip>...
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
> -----END CERTIFICATE-----
> ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
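The scalar placeholders from step 3 can be scripted the same way; the hostname and master address below are illustrative examples only, not values from this guide. The certificate blocks from step 4 are easiest to paste in by hand so their indentation stays correct:

```
# example values only -- substitute your own hostname and master IP
sed -i "s|<HOSTNAME>|kube-node1|;s|<SSH_PUBLIC_KEY>|$(cat ~/.ssh/id_rsa.pub)|;s|<KUBERNETES_MASTER>|10.0.0.1|" node-config.yaml
```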
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`.
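If everything is wired up, your two nodes should report as `Ready`. For this era of kubectl, the output looks roughly like the following; the names and labels here are illustrative and will differ in your cluster:

```
NAME           LABELS                                STATUS
172.18.0.101   kubernetes.io/hostname=172.18.0.101   Ready
172.18.0.102   kubernetes.io/hostname=172.18.0.102   Ready
```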
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed by using `kubectl` to create the following manifest file.
```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../../examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
[![Analytics](https://ga-beacon.appspot.com/UA-52125893-3/kubernetes/docs/getting-started-guides/coreos/bare_metal_calico.md?pixel)](https://github.com/igrigorik/ga-beacon)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_calico.md?pixel)]()
View File
@ -31,676 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Bare Metal CoreOS with Kubernetes (OFFLINE)
------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_offline/

Deploy a CoreOS running Kubernetes environment. This particular guide is made to help those on an OFFLINE system, whether for testing a POC before the real deal, or because you are restricted to being totally offline for your applications.

**Table of Contents**
- [Prerequisites](#prerequisites)
- [High Level Design](#high-level-design)
- [This Guide's variables](#this-guides-variables)
- [Setup PXELINUX CentOS](#setup-pxelinux-centos)
- [Adding CoreOS to PXE](#adding-coreos-to-pxe)
- [DHCP configuration](#dhcp-configuration)
- [Kubernetes](#kubernetes)
- [Cloud Configs](#cloud-configs)
- [master.yml](#masteryml)
- [node.yml](#nodeyml)
- [New pxelinux.cfg file](#new-pxelinuxcfg-file)
- [Specify the pxelinux targets](#specify-the-pxelinux-targets)
- [Creating test pod](#creating-test-pod)
- [Helpful commands for debugging](#helpful-commands-for-debugging)
## Prerequisites
1. A *CentOS 6* installation to act as the PXE server
2. At least two bare metal nodes to work with
## High Level Design
1. Manage the tftp directory
* /tftpboot/(coreos)(centos)(RHEL)
* /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
2. Update the pxelinux link for each new install
3. Update the DHCP config to reflect the host needing deployment
4. Set up the nodes to deploy CoreOS, creating an etcd cluster.
5. Work without access to the public [etcd discovery tool](https://discovery.etcd.io/).
6. Install the CoreOS slaves to become Kubernetes nodes.
## This Guide's variables
| Node Description | MAC | IP |
| :---------------------------- | :---------------: | :---------: |
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
## Setup PXELINUX CentOS
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
1. Install packages needed on CentOS
sudo yum install tftp-server dhcp syslinux
2. `vi /etc/xinetd.d/tftp` to enable the tftp service, changing `disable` to 'no'
disable = no
3. Copy over the syslinux images we will need.
su -
mkdir -p /tftpboot
cd /tftpboot
cp /usr/share/syslinux/pxelinux.0 /tftpboot
cp /usr/share/syslinux/menu.c32 /tftpboot
cp /usr/share/syslinux/memdisk /tftpboot
cp /usr/share/syslinux/mboot.c32 /tftpboot
cp /usr/share/syslinux/chain.c32 /tftpboot
/sbin/service dhcpd start
/sbin/service xinetd start
/sbin/chkconfig tftp on
4. Setup default boot menu
mkdir /tftpboot/pxelinux.cfg
touch /tftpboot/pxelinux.cfg/default
5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`
default menu.c32
prompt 0
timeout 15
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
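One quick sanity check is to fetch `pxelinux.0` over tftp from another host on the subnet. This assumes a tftp client is installed, and uses the PXE server address from the DHCP section later in this guide:

    # fetch the boot loader from the PXE server to confirm tftp is serving
    tftp 10.20.30.242 -c get pxelinux.0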
## Adding CoreOS to PXE
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
1. Find or create the TFTP root directory that everything will be based off of.
* For this document we will assume `/tftpboot/` is our root directory.
2. Once we have our tftp root directory, we will create a new directory structure for our CoreOS images.
3. Download the CoreOS PXE files provided by the CoreOS team.
MY_TFTPROOT_DIR=/tftpboot
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
cd $MY_TFTPROOT_DIR/images/coreos/
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig
4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
MENU BEGIN CoreOS Menu
LABEL coreos-master
MENU LABEL CoreOS Master
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
LABEL coreos-slave
MENU LABEL CoreOS Slave
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
MENU END
This configuration file will now boot from local drive but have the option to PXE image CoreOS.
## DHCP configuration
This section covers configuring the DHCP server to hand out our new images. Here we assume that other servers on the network will continue to boot other images alongside these.
1. Add the `filename` to the _host_ or _subnet_ sections.
filename "/tftpboot/pxelinux.0";
2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments.
subnet 10.20.30.0 netmask 255.255.255.0 {
next-server 10.20.30.242;
option broadcast-address 10.20.30.255;
filename "<other default image>";
...
# http://www.syslinux.org/wiki/index.php/PXELINUX
host core_os_master {
hardware ethernet d0:00:67:13:0d:00;
option routers 10.20.30.1;
fixed-address 10.20.30.40;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave {
hardware ethernet d0:00:67:13:0d:01;
option routers 10.20.30.1;
fixed-address 10.20.30.41;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave2 {
hardware ethernet d0:00:67:13:0d:02;
option routers 10.20.30.1;
fixed-address 10.20.30.42;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
...
}
We will be specifying the node configuration later in the guide.
## Kubernetes
To deploy our configuration we need to create an `etcd` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. We have two options here:
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
2. Run a service discovery protocol in our stack to do auto-discovery.
For this demo we just create a single static `etcd` server to host our Kubernetes and `etcd` master services.
Since we are OFFLINE here, most of the helper processes in CoreOS and Kubernetes are unavailable, so we will have to download and serve up the Kubernetes binaries ourselves in our local environment.
An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines.
To get this up and running we are going to set up a simple `apache` server to serve the binaries needed to bootstrap Kubernetes.
This is on the PXE server from the previous section:
rm /etc/httpd/conf.d/welcome.conf
cd /var/www/html/
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
This provides the binaries we need to run Kubernetes. In the future this would need to be enhanced to download updates from the Internet.
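Note that this assumes Apache is actually running and serving `/var/www/html`; on CentOS 6 you can start it and spot-check one binary with something like:

    /sbin/service httpd start
    /sbin/chkconfig httpd on
    # verify a binary is reachable over http
    curl -I http://localhost/kubectl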
Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml)
To make the setup work, you need to replace a few placeholders:
- Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
- Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
- If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
- If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
- Add your own SSH public key(s) to the cloud config at the end
### master.yml
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.
#cloud-config
---
write_files:
- path: /opt/bin/waiter.sh
owner: root
content: |
#! /usr/bin/bash
until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
- path: /opt/bin/kubernetes-download.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
chmod +x /opt/bin/*
- path: /etc/profile.d/opt-path.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
PATH=$PATH:/opt/bin
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: get-kube-tools.service
runtime: true
command: start
content: |
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStart=/opt/bin/kubernetes-download.sh
RemainAfterExit=yes
Type=oneshot
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: etcd.service
command: start
content: |
[Unit]
Description=etcd
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd \
--name ${DEFAULT_IPV4} \
--addr ${DEFAULT_IPV4}:4001 \
--bind-addr 0.0.0.0 \
--cluster-active-size 1 \
--data-dir /var/lib/etcd \
--http-read-timeout 86400 \
--peer-addr ${DEFAULT_IPV4}:7001 \
--snapshot true
Restart=always
RestartSec=10s
- name: fleet.socket
command: start
content: |
[Socket]
ListenStream=/var/run/fleet.sock
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=etcd.service
After=etcd.service
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
Environment="FLEET_METADATA=role=master"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: etcd-waiter.service
command: start
content: |
[Unit]
Description=etcd waiter
Wants=network-online.target
Wants=etcd.service
After=etcd.service
After=network-online.target
Before=flannel.service
Before=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
RemainAfterExit=true
Type=oneshot
- name: flannel.service
command: start
content: |
[Unit]
Wants=etcd-waiter.service
After=etcd-waiter.service
Requires=etcd.service
After=etcd.service
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
ExecStart=/opt/bin/flanneld
- name: kube-apiserver.service
command: start
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
Requires=etcd.service
After=etcd.service
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--address=0.0.0.0 \
--port=8080 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd-servers=http://127.0.0.1:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-scheduler.service
command: start
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always
RestartSec=10
- name: kube-register.service
command: start
content: |
[Unit]
Description=Kubernetes Registration Service
Documentation=https://github.com/kelseyhightower/kube-register
Requires=kube-apiserver.service
After=kube-apiserver.service
Requires=fleet.service
After=fleet.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
ExecStart=/opt/bin/kube-register \
--metadata=role=node \
--fleet-endpoint=unix:///var/run/fleet.sock \
--healthz-port=10248 \
--api-endpoint=http://127.0.0.1:8080
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
### node.yml
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
#cloud-config
---
write_files:
- path: /etc/default/docker
content: |
DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: etcd.service
mask: true
- name: docker.service
drop-ins:
- name: 50-insecure-registry.conf
content: |
[Service]
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
Environment="FLEET_METADATA=role=node"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: flannel.service
command: start
content: |
[Unit]
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
- name: docker.service
command: start
content: |
[Unit]
After=flannel.service
Wants=flannel.service
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
EnvironmentFile=-/etc/default/docker
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
[Install]
WantedBy=multi-user.target
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
ExecStart=/opt/bin/kube-proxy \
--etcd-servers=http://<MASTER_SERVER_IP>:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname-override=${DEFAULT_IPV4} \
--api-servers=<MASTER_SERVER_IP>:8080 \
--healthz-bind-address=0.0.0.0 \
--healthz-port=10248 \
--logtostderr=true
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
## New pxelinux.cfg file
Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
default coreos
prompt 1
timeout 15
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`
default coreos
prompt 1
timeout 15
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
## Specify the pxelinux targets
Now that we have our new targets set up for master and slave, we want to configure the specific hosts to use those targets. We will do this by using the pxelinux mechanism of mapping a specific MAC address to a specific pxelinux.cfg file.
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
cd /tftpboot/pxelinux.cfg
ln -s coreos-node-master 01-d0-00-67-13-0d-00
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
Reboot these servers to get the images PXEd and ready for running containers!
## Creating test pod
Now that CoreOS with Kubernetes installed is up and running, let's spin up some Kubernetes pods to demonstrate the system.
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples/).
## Helpful commands for debugging
List all keys in etcd:
etcdctl ls --recursive
List fleet machines:
fleetctl list-machines
Check system status of services on master:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kube-register
Check system status of services on a node:
systemctl status kube-kubelet
systemctl status docker.service
List Kubernetes pods and nodes:
kubectl get pods
kubectl get nodes
Kill all pods:
for i in `kubectl get pods | tail -n +2 | awk '{print $1}'`; do kubectl delete pod $i; done
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,203 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# CoreOS Multinode Cluster

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/coreos_multinode_cluster/
Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.
> **Attention**: This requires at least CoreOS version **[695.0.0][coreos695]**, which includes `etcd2`.
[coreos695]: https://coreos.com/releases/#695.0.0
## Overview
* Provision the master node
* Capture the master node private IP address
* Edit node.yaml
* Provision one or more worker nodes
### AWS
*Attention:* Replace `<ami_image_id>` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
#### Provision the Master
```sh
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
```sh
aws ec2 run-instances \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://master.yaml
```
#### Capture the private IP address
```sh
aws ec2 describe-instances --instance-id <master-instance-id>
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Provision worker nodes
```sh
aws ec2 run-instances \
--count 1 \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://node.yaml
```
### Google Compute Engine (GCE)
*Attention:* Replace `<gce_image_id>` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
#### Provision the Master
```sh
gcloud compute instances create master \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=master.yaml
```
#### Capture the private IP address
```sh
gcloud compute instances list
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Provision worker nodes
```sh
gcloud compute instances create node1 \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=node.yaml
```
#### Establish network connectivity
Next, set up an ssh tunnel to the master so you can run kubectl from your local host.
In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
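With the tunnel up, `kubectl` on your local host can reach the master through the forwarded port, for example:

```sh
kubectl -s http://localhost:8080 get nodes
```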
### OpenStack
These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard.
These instructions were tested on the Ice House release on a Metacloud distribution of OpenStack but should be similar if not the same across other versions/distributions of OpenStack.
#### Make sure you can connect with OpenStack
Make sure the environment variables are set for OpenStack such as:
```sh
OS_TENANT_ID
OS_PASSWORD
OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
```
Test this works with something like:
```
nova list
```
#### Get a Suitable CoreOS Image
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html)
Once you download that, upload it to glance. An example is shown below:
```sh
glance image-create --name CoreOS723 \
--container-format bare --disk-format qcow2 \
--file coreos_production_openstack_image.img \
--is-public True
```
#### Create security group
```sh
nova secgroup-create kubernetes "Kubernetes Security Group"
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
```
#### Provision the Master
```sh
nova boot \
--image <image_name> \
--key-name <my_key> \
--flavor <flavor id> \
--security-group kubernetes \
--user-data files/master.yaml \
kube-master
```
```<image_name>``` is the CoreOS image name. In our example we can use the image we created in the previous step, 'CoreOS723'.
```<my_key>``` is the keypair name that you already generated to access the instance.
```<flavor_id>``` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. On the system this was tested with, ID 3 corresponds to the m1.large size.
The important part is to ensure you have files/master.yaml available, as this is what does all the post-boot configuration. This path is relative, so in this example we assume you are running the nova command in a directory with a subdirectory called files containing the master.yaml file. Absolute paths also work.
Next, assign it a public IP address:
```
nova floating-ip-list
```
Get an IP address that's free and run:
```
nova floating-ip-associate kube-master <ip address>
```
where ```<ip address>``` is the IP address that was available from the ```nova floating-ip-list``` command.
#### Provision Worker Nodes
Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master```, assuming you named your instance kube-master. This is not the floating IP address you just assigned.
```sh
nova boot \
--image <image_name> \
--key-name <my_key> \
--flavor <flavor id> \
--security-group kubernetes \
--user-data files/node.yaml \
minion01
```
This is basically the same as for the master node, but with the node.yaml post-boot script instead of master.yaml.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]()
View File
@ -32,153 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started with Kubernetes on DCOS
----------------------------------------

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/dcos/
This guide will walk you through installing [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) on [Datacenter Operating System (DCOS)](https://mesosphere.com/product/) with the [DCOS CLI](https://github.com/mesosphere/dcos-cli) and operating Kubernetes with the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [About Kubernetes on DCOS](#about-kubernetes-on-dcos)
- [Resources](#resources)
- [Prerequisites](#prerequisites)
- [Install](#install)
- [Uninstall](#uninstall)
<!-- END MUNGE: GENERATED_TOC -->
## About Kubernetes on DCOS
DCOS is system software that manages computer cluster hardware and software resources and provides common services for distributed applications. Among other services, it provides [Apache Mesos](http://mesos.apache.org/) as its cluster kernel and [Marathon](https://mesosphere.github.io/marathon/) as its init system. With DCOS CLI, Mesos frameworks like [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) can be installed with a single command.
Another feature of the DCOS CLI is that it allows plugins like the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl). This allows for easy access to a version-compatible Kubectl without having to manually download or install.
Further information about the benefits of installing Kubernetes on DCOS can be found in the [Kubernetes-Mesos documentation](../../contrib/mesos/README.md).
For more details about the Kubernetes DCOS packaging, see the [Kubernetes-Mesos project](https://github.com/mesosphere/kubernetes-mesos).
Since Kubernetes-Mesos is still alpha, it is a good idea to familiarize yourself with the [current known issues](../../contrib/mesos/docs/issues.md) which may limit or modify the behavior of Kubernetes on DCOS.
If you have problems completing the steps below, please [file an issue against the kubernetes-mesos project](https://github.com/mesosphere/kubernetes-mesos/issues).
## Resources
Explore the following resources for more information about Kubernetes, Kubernetes on Mesos/DCOS, and DCOS itself.
- [DCOS Documentation](https://docs.mesosphere.com/)
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
- [Kubernetes Examples](../../examples/README.md)
- [Kubernetes on Mesos Documentation](../../contrib/mesos/README.md)
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
## Prerequisites
- A running [DCOS cluster](https://mesosphere.com/product/)
- [DCOS Community Edition](https://docs.mesosphere.com/install/) is currently available on [AWS](https://mesosphere.com/amazon/).
- [DCOS Enterprise Edition](https://mesosphere.com/product/) can be deployed on virtual or bare metal machines. Contact sales@mesosphere.com for more info and to set up an engagement.
- [DCOS CLI](https://docs.mesosphere.com/install/cli/) installed locally
## Install
1. Configure and validate the [Mesosphere Multiverse](https://github.com/mesosphere/multiverse) as a package source repository
```
$ dcos config prepend package.sources https://github.com/mesosphere/multiverse/archive/version-1.x.zip
$ dcos package update --validate
```
2. Install etcd
By default, the Kubernetes DCOS package starts a single-node etcd. In order to avoid state loss in the event of Kubernetes component container failure, install an HA [etcd-mesos](https://github.com/mesosphere/etcd-mesos) cluster on DCOS.
```
$ dcos package install etcd
```
3. Verify that etcd is installed and healthy
The etcd cluster takes a short while to deploy. Verify that `/etcd` is healthy before going on to the next step.
```
$ dcos marathon app list
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
/etcd 128 0.2 1/1 1/1 --- DOCKER None
```
4. Create Kubernetes installation configuration
Configure Kubernetes to use the HA etcd installed on DCOS.
```
$ cat >/tmp/options.json <<EOF
{
"kubernetes": {
"etcd-mesos-framework-name": "etcd"
}
}
EOF
```
5. Install Kubernetes
```
$ dcos package install --options=/tmp/options.json kubernetes
```
6. Verify that Kubernetes is installed and healthy
The Kubernetes cluster takes a short while to deploy. Verify that `/kubernetes` is healthy before going on to the next step.
```
$ dcos marathon app list
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
/etcd 128 0.2 1/1 1/1 --- DOCKER None
/kubernetes 768 1 1/1 1/1 --- DOCKER None
```
7. Verify that Kube-DNS & Kube-UI are deployed, running, and ready
```
$ dcos kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-v8-tjxk9 4/4 Running 0 1m
kube-ui-v2-tjq7b 1/1 Running 0 1m
```
Names and ages may vary.
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](../../examples/README.md) or the [Kubernetes User Guide](../user-guide/README.md).
## Uninstall
1. Stop and delete all replication controllers and pods in each namespace:
Before uninstalling Kubernetes, destroy all the pods and replication controllers. The uninstall process will try to do this itself, but by default it times out quickly and may leave your cluster in a dirty state.
```
$ dcos kubectl delete rc,pods --all --namespace=default
$ dcos kubectl delete rc,pods --all --namespace=kube-system
```
2. Validate that all pods have been deleted
```
$ dcos kubectl get pods --all-namespaces
```
3. Uninstall Kubernetes
```
$ dcos package uninstall kubernetes
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -31,100 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Running Multi-Node Kubernetes Using Docker
------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/

_Note_: These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are interested in just starting to explore Kubernetes, we recommend that you start there.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Overview](#overview)
- [Bootstrap Docker](#bootstrap-docker)
- [Master Node](#master-node)
- [Adding a worker node](#adding-a-worker-node)
- [Deploy a DNS](#deploy-a-dns)
- [Testing your cluster](#testing-your-cluster)
## Prerequisites
The only thing you need is a machine with **Docker 1.7.1 or higher**
## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-docker.png)
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
You can specify the version on every node before install:
```sh
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
Otherwise, we'll use the latest `hyperkube` image as the default k8s version.
## Master Node
The first step in the process is to initialize the master node.
The MASTER_IP step here is optional; it defaults to the first value of `hostname -I`.
Clone the Kubernetes repo, and run [master.sh](docker-multinode/master.sh) on the master machine _with root_:
```console
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./master.sh
```
`Master done!`
See [here](docker-multinode/master.md) for a detailed explanation of these instructions.
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
Clone the Kubernetes repo, and run [worker.sh](docker-multinode/worker.sh) on the worker machine _with root_:
```console
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./worker.sh
```
`Worker done!`
See [here](docker-multinode/worker.md) for a detailed explanation of these instructions.
## Deploy a DNS
See [here](docker-multinode/deployDNS.md) for instructions.
## Testing your cluster
Once your cluster has been created you can [test it out](docker-multinode/testing.md).
For more complete applications, please look in the [examples directory](../../examples/).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,41 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Deploy DNS on `docker` and `docker-multinode`

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/deployDNS/
### Get the template file
First of all, download the dns template
[skydns template](skydns.yaml.in)
### Set environment variables
Then you need to set `DNS_REPLICAS`, `DNS_DOMAIN` and `DNS_SERVER_IP` envs
```console
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
$ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns` for containerized kubelet
```
### Replace the corresponding value in the template and create the pod
```console
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml
# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create -f ./kube-system.yaml
$ kubectl create -f ./skydns.yaml
```
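Before the functional test below, you can verify that the DNS pods were created and are coming up:

```console
$ kubectl get pods --namespace=kube-system
```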
### Test if DNS works
Follow [this link](../../../cluster/addons/dns/#how-do-i-test-if-it-is-working) to check it out.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/deployDNS.md?pixel)]()
View File
@ -32,247 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Installing a Kubernetes Master Node via Docker

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/master/
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine
is `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.7"
Environment variables used:
```sh
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
There are two main phases to installing the master:
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
## Setting up flanneld and etcd
_Note_:
This guide expects **Docker 1.7.1 or higher**.
### Setup Docker Bootstrap
We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
`--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```sh
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
### Startup etcd for flannel and the API server to use
Run:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
--advertise-client-urls=http://${MASTER_IP}:4001 \
--data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
Flannel re-configures the bridge that Docker uses for networking. As a result, we need to stop Docker, reconfigure its networking, and then restart Docker.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Stopping Docker is system dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or
```sh
sudo service docker stop
```
or it may be something else.
#### Run flannel
Now run flanneld itself:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
--ip-masq=${FLANNEL_IPMASQ} \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash: the container id. Copy this hash.
Now get the subnet settings from flannel:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
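The file holds the settings flannel chose for this host. The values below are only an example of its shape; your network, subnet, and MTU will differ:

```sh
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```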
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
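For example, on a Debian-style system this might mean a line like the following in `/etc/default/docker`, shown here with the illustrative values from the `subnet.env` example above (substitute your own):

```sh
DOCKER_OPTS="--bip=10.1.42.1/24 --mtu=1450"
```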
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
Again this is system dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
it may be:
```sh
systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
```sh
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary for `${K8S_VERSION}` (look at the URL in the following links) and make it available by editing your PATH environment variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/arm/kubectl))
For example, OS X:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Now you can list the nodes:
```sh
kubectl get nodes
```
This should print something like:
```console
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of the node is `NotReady` or `Unknown` please check that all of the containers you created are successfully running.
If all else fails, ask questions on [Slack](../../troubleshooting.md#slack).
### Next steps
Move on to [adding one or more workers](worker.md) or [deploy a dns](deployDNS.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,73 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Testing your Kubernetes cluster.

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/testing/
To validate that your node(s) have been added, run:
```sh
kubectl get nodes
```
That should show something like:
```console
NAME LABELS STATUS
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of any node is `Unknown` or `NotReady`, your cluster is broken. Double check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.md#slack).
### Run an application
```sh
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```sh
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs: the first is the internal CLUSTER_IP, and the second is the external load-balanced IP.
```sh
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```sh
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
```sh
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### Scaling
Now try to scale up the nginx you created before:
```sh
kubectl scale rc nginx --replicas=3
```
And list the pods
```sh
kubectl get pods
```
You should see pods landing on the newly added machine.
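If your kubectl version supports it, `-o wide` also shows which node each pod landed on:

```sh
kubectl get pods -o wide
```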
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,184 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Adding a Kubernetes worker node via Docker.

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/worker/
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master.md). We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.6"
Environment variables used:
```sh
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
For each worker node, there are three steps:
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
_Note_:
This guide expects **Docker 1.7.1 or higher**.
#### Set up a bootstrap docker
As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```sh
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart, or systemd so that it is restarted
across reboots and failures.
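As a rough sketch (assuming Docker 1.8+ and systemd; the unit name `docker-bootstrap.service` is illustrative, and the flags mirror the command above):
```sh
sudo tee /etc/systemd/system/docker-bootstrap.service <<'EOF'
[Unit]
Description=Bootstrap Docker daemon for flannel
After=network.target

[Service]
ExecStart=/usr/bin/docker daemon -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
    --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable docker-bootstrap
sudo systemctl start docker-bootstrap
```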
#### Bring down Docker
To re-configure Docker to use flannel, we need to stop Docker, run flannel, and then restart Docker.
Stopping Docker is system dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq=${FLANNEL_IPMASQ} \
--etcd-endpoints=http://${MASTER_IP}:4001 \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash: the container ID. Copy this hash.
Now get the subnet settings from flannel:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
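One way to expand these to concrete values is to evaluate the `subnet.env` you read in the previous step (a sketch, reusing the container hash placeholder from above):
```sh
eval "$(sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env)"
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```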
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
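For example, depending on your distribution:
```sh
sudo apt-get install bridge-utils   # Debian/Ubuntu
sudo yum install bridge-utils       # Fedora/CentOS
```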
#### Restart Docker
Again, this is system dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
or it may be:
```sh
systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
Again, this is similar to the above, but the `--api-servers` flag now points to the master we set up at the beginning.
```sh
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://${MASTER_IP}:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
```sh
sudo docker run -d \
--net=host \
--privileged \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
```
### Next steps
Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,198 +31,9 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Running Kubernetes locally via Docker
-------------------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker/
- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Run it](#run-it)
- [Download kubectl](#download-kubectl)
- [Test it out](#test-it-out)
- [Run an application](#run-an-application)
- [Expose it as a service](#expose-it-as-a-service)
- [Deploy a DNS](#deploy-a-dns)
- [A note on turning down your cluster](#a-note-on-turning-down-your-cluster)
- [Troubleshooting](#troubleshooting)
### Overview
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
### Prerequisites
1. You need to have docker installed on one machine.
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
a released version of Kubernetes >= "1.2.0-alpha.7"
### Run it
```sh
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
> If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md) that contains the other master components.
### Download `kubectl`
At this point you should have a running Kubernetes cluster. You can test this
by downloading the kubectl binary for `${K8S_VERSION}` (look at the URL in the
following links) and making it available by adding it to your PATH environment
variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/arm/kubectl))
For example, OS X:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Create configuration:
```
$ kubectl config set-cluster test-doc --server=http://localhost:8080
$ kubectl config set-context test-doc --cluster=test-doc
$ kubectl config use-context test-doc
```
For Mac OS X users, instead of `localhost` you will have to use the IP address of your Docker machine,
which you can find by running `docker-machine env <machinename>` (see the [documentation](https://docs.docker.com/machine/reference/env/)
for details).
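For example (a sketch; `default` is a common machine name, yours may differ):
```sh
docker-machine env default   # prints the machine's environment, including its IP
docker-machine ip default    # prints just the IP
```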
### Test it out
List the nodes in your cluster by running:
```sh
kubectl get nodes
```
This should print:
```console
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
### Run an application
```sh
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```sh
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP (if a LoadBalancer is configured).
```sh
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```sh
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
```sh
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
## Deploy a DNS
See [here](docker-multinode/deployDNS.md) for instructions.
### A note on turning down your cluster
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
You may use `docker kill $(docker ps -aq)`; note that this kills _all_ containers running under Docker, so use it with caution.
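One possible teardown sequence (a sketch; it assumes the kubelet container's command line contains `kubelet`, as in the run command above):
```sh
# Stop the kubelet first so it cannot restart the other containers
docker kill $(docker ps | grep kubelet | awk '{print $1}')
# Then kill everything else still running under Docker
docker kill $(docker ps -q)
```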
### Troubleshooting
#### Node is in `NotReady` state
If you see your node as `NotReady`, it's possible that your OS does not have memcg and swap accounting enabled.
1. Your kernel should support memory and swap accounting. Ensure that the
following configs are turned on in your linux kernel:
```console
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
```
2. Enable the memory and swap accounting in the kernel, at boot, as command line
parameters as follows:
```console
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
NOTE: The above is specifically for GRUB2.
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```console
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory swapaccount=1
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]() [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]()

View File

@ -31,236 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Configuring Kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
-------------------------------------------------------------------------------------------------------
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/fedora_ansible_config/
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Architecture of the cluster](#architecture-of-the-cluster)
- [Setting up ansible access to your nodes](#setting-up-ansible-access-to-your-nodes)
- [Setting up the cluster](#setting-up-the-cluster)
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
## Prerequisites
1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git)
2. A Fedora 21+ host to act as cluster master
3. As many Fedora 21+ hosts as you would like, that act as cluster nodes
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.
## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```console
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
- git
- python-netaddr
If not, install them:
```sh
yum install -y ansible git python-netaddr
```
**Now clone down the Kubernetes repository**
```sh
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
```console
[masters]
kube-master.example.com
[etcd]
kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
```
## Setting up ansible access to your nodes
If you are already running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step.
*Otherwise*, set up ssh on the machines like so (you will need to know the root password for all machines in the cluster).
edit: ~/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair; if not, generate one:
```sh
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```sh
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
```
## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
edit: ~/contrib/ansible/group_vars/all.yml
**Configure access to kubernetes packages**
Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml
source_type: packageManager
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
```
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
**Managing add on services in your cluster**
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
```
Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml
cluster_monitoring: true
```
Turn `dns_setup` to true (recommended) or false to enable or disable whole DNS configuration.
```yaml
dns_setup: true
```
**Tell ansible to get to work!**
This will finally setup your whole Kubernetes cluster for you.
```sh
cd ~/contrib/ansible/
./setup.sh
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
**Show kubernetes nodes**
Run the following on the kube-master:
```sh
kubectl get nodes
```
**Show services running on masters and nodes**
```sh
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes**
```sh
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "fedoraapache",
"labels": {
"name": "fedoraapache"
}
},
"spec": {
"containers": [
{
"name": "fedoraapache",
"image": "fedora/apache",
"ports": [
{
"hostPort": 80,
"containerPort": 80
}
]
}
]
}
}
```
```sh
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```sh
kubectl get pods
```
**Check Docker status on nodes**
```sh
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node**
```sh
curl http://localhost
```
That's it!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,211 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on [Fedora](http://fedoraproject.org)
-----------------------------------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/fedora_manual_config/
- [Prerequisites](#prerequisites)
- [Instructions](#instructions)
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Instructions
This is a getting started guide for Fedora. It is a manual configuration, so you understand all the underlying packages / services / ports, etc.
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
**System Information:**
Hosts:
```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```sh
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```sh
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```sh
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
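For example, to verify connectivity from fed-master:
```sh
ping -c 3 fed-node
```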
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```sh
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```sh
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```sh
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of 127.0.0.1; otherwise you will get an error like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001).
```sh
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Create /var/run/kubernetes on master:
```sh
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```sh
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Addition of nodes:
* Create following node.json file on Kubernetes master node:
```json
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "fed-node",
"labels":{ "name": "fed-node-label"}
},
"spec": {
"externalID": "fed-node"
}
}
```
Now create a node object internally in your Kubernetes cluster by running:
```console
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from Kubernetes master node. This guide will discuss how to provision
a Kubernetes node (fed-node) below.
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet on the node.***
* Edit /etc/kubernetes/kubelet to appear as such:
```sh
###
# Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```sh
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```console
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (please do not actually do it now; it is just for information):
```sh
kubectl delete -f ./node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](../../../docs/user-guide/walkthrough/README.md)!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,188 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Kubernetes multiple nodes cluster with flannel on Fedora
--------------------------------------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Master Setup](#master-setup)
- [Node Setup](#node-setup)
- [**Test the cluster and flannel configuration**](#test-the-cluster-and-flannel-configuration)
## Introduction
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the fedora [getting started guide](fedora_manual_config.md) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on the Kubernetes nodes. flannel runs on each node to configure the overlay network that docker uses, setting up a unique class-C container network on each host.
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Master Setup
**Perform following commands on the Kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
```json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
* Add the configuration to the etcd server on fed-master.
```sh
etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```sh
etcdctl get /coreos.com/network/config
```
## Node Setup
**Perform following commands on all Kubernetes nodes**
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```sh
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://fed-master:4001"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line.
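For example, to bind flannel to a hypothetical second interface:
```sh
FLANNEL_OPTIONS="-iface=eth1"
```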
* Enable the flannel service.
```sh
systemctl enable flanneld
```
* If docker is not running, then starting the flannel service is enough; skip the next step.
```sh
systemctl start flanneld
```
* If docker is already running, then stop docker, delete the docker bridge (docker0), start flanneld, and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```sh
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
```
***
## **Test the cluster and flannel configuration**
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```console
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
```
* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
```sh
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```
```json
{
"node": {
"key": "/coreos.com/network/subnets",
{
"key": "/coreos.com/network/subnets/18.16.29.0-24",
"value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
},
{
"key": "/coreos.com/network/subnets/18.16.83.0-24",
"value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
},
{
"key": "/coreos.com/network/subnets/18.16.90.0-24",
"value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
}
}
}
```
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```console
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
```console
# docker run -it fedora:latest bash
bash-4.3#
```
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```console
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
* Now note the IP address on the first node:
```console
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
```
* And also note the IP address on the other node:
```console
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
```
* Now ping from the first node to the other node:
```console
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
* The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,236 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on Google Compute Engine
----------------------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/gce/
- [Before you start](#before-you-start)
- [Prerequisites](#prerequisites)
- [Starting a cluster](#starting-a-cluster)
- [Installing the Kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
- [Getting started with your cluster](#getting-started-with-your-cluster)
- [Inspect your cluster](#inspect-your-cluster)
- [Run some examples](#run-some-examples)
- [Tearing down the cluster](#tearing-down-the-cluster)
- [Customizing](#customizing)
- [Troubleshooting](#troubleshooting)
- [Project settings](#project-settings)
- [Cluster initialization hang](#cluster-initialization-hang)
- [SSH](#ssh)
- [Networking](#networking)
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
### Before you start
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
### Prerequisites
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Enable the [Compute Engine Instance Group Manager API](https://developers.google.com/console/help/new/#activatingapis) in the [Google Cloud developers console](https://console.developers.google.com).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running ` gcloud auth login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
### Starting a cluster
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
```bash
curl -sS https://get.k8s.io | bash
```
or
```bash
wget -q -O - https://get.k8s.io | bash
```
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging.md), while `heapster` provides [monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
```bash
cd kubernetes
cluster/kube-up.sh
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
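For example, to start a smaller cluster you can override variables from that file in the environment before starting (a sketch; variable names such as `NUM_NODES` come from `config-default.sh` and may differ between releases):
```bash
export NUM_NODES=2
cluster/kube-up.sh
```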
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](../troubleshooting.md#slack).
The next few steps will show you:
1. how to set up the command line client on your workstation to manage the cluster
1. examples of how to use the cluster
1. how to delete the cluster
1. how to start clusters with non-default options (like larger clusters)
### Installing the Kubernetes command line tools on your workstation
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The next step is to make sure the `kubectl` tool is in your path.
The [kubectl](../user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.
Add the appropriate binary folder to your `PATH` to access kubectl:
```bash
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However, the gcloud-bundled kubectl version may be older than the one downloaded by the
get.k8s.io install script. We recommend you use the downloaded binary to avoid
potential issues with client/server version skew.
#### Enabling bash completion of the Kubernetes command line tools
You may find it useful to enable `kubectl` bash completion:
```
$ source ./contrib/completions/bash/kubectl
```
**Note**: This will last for the duration of your bash session. If you want to make this permanent, you need to add this line to your bash profile.
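For example (a sketch; adjust the path to wherever you unpacked Kubernetes):
```bash
echo "source <path/to/kubernetes-directory>/contrib/completions/bash/kubectl" >> ~/.bash_profile
```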
Alternatively, on most linux distributions you can also move the completions file to your bash_completions.d like this:
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
but then you have to update it when you update kubectl.
### Getting started with your cluster
#### Inspect your cluster
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
```console
$ kubectl get --all-namespaces services
```
should show a set of [services](../user-guide/services.md) that look something like this:
```console
NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
default kubernetes 10.0.0.1 <none> 443/TCP <none> 1d
kube-system kube-dns 10.0.0.2 <none> 53/TCP,53/UDP k8s-app=kube-dns 1d
kube-system kube-ui 10.0.0.3 <none> 80/TCP k8s-app=kube-ui 1d
...
```
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
You can do this via the
```console
$ kubectl get --all-namespaces pods
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):
```console
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
#### Run some examples
Then, see [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples/). The [guestbook example](../../examples/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
```bash
cd kubernetes
cluster/kube-down.sh
```
Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
### Customizing
The script above relies on Google Storage to stage the Kubernetes release. It
will then start (by default) a single master VM along with 4 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
### Troubleshooting
#### Project settings
You need to have the Google Cloud Storage API, and the Google Cloud Storage
JSON API enabled. It is activated by default for new projects. Otherwise, it
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.
Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and that you can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
#### Cluster initialization hang
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
**Once you fix the issue, you should run `kube-down.sh` to clean up** after the partial cluster creation, before running `kube-up.sh` to try again.
#### SSH
If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22`
Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.
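For example (a sketch; `~/.ssh/google_compute_engine` is the key that gcloud generates by default):
```bash
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine
```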
#### Networking
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network or if you change the network
being used in `cluster/config-default.sh` create a new rule with the following
field values:
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
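For example, such a rule could be created with gcloud (a sketch; substitute your network name):
```bash
gcloud compute firewall-rules create default-allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```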
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,241 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started with Juju This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/juju/
-------------------------
[Juju](https://jujucharms.com/docs/stable/about-juju) makes it easy to deploy
Kubernetes by provisioning, installing and configuring all the systems in
the cluster. Once deployed the cluster can easily scale up with one command
to increase the cluster size.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [On Ubuntu](#on-ubuntu)
- [With Docker](#with-docker)
- [Launch Kubernetes cluster](#launch-kubernetes-cluster)
- [Exploring the cluster](#exploring-the-cluster)
- [Run some containers!](#run-some-containers)
- [Scale out cluster](#scale-out-cluster)
- [Launch the "k8petstore" example app](#launch-the-k8petstore-example-app)
- [Tear down cluster](#tear-down-cluster)
- [More Info](#more-info)
- [Cloud compatibility](#cloud-compatibility)
## Prerequisites
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch Kubernetes Cluster](#launch-kubernetes-cluster)
### On Ubuntu
[Install the Juju client](https://jujucharms.com/get-started) on your
local Ubuntu system:
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-quickstart juju-deployer
### With Docker
If you are not using Ubuntu or prefer the isolation of Docker, you may
run the following:
mkdir ~/.juju
sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti jujusolutions/jujubox:latest
At this point from either path you will have access to the `juju
quickstart` command.
To set up the credentials for your chosen cloud, run:
juju quickstart --constraints="mem=3.75G" -i
> The `constraints` flag is optional, it changes the size of virtual machines
> that Juju will generate when it requests a new machine. Larger machines
> will run faster but cost more money than smaller machines.
Follow the dialogue and choose `save` and `use`. Quickstart will now
bootstrap the juju root node and setup the juju web based user
interface.
## Launch Kubernetes cluster
You will need to export the `KUBERNETES_PROVIDER` environment variable before
bringing up the cluster.
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
If this is your first time running the `kube-up.sh` script, it will install
the required dependencies to get started with Juju, additionally it will
launch a curses based configuration utility allowing you to select your cloud
provider and enter the proper access credentials.
Next it will deploy the kubernetes master, etcd, 2 nodes with flannel based
Software Defined Networking (SDN) so containers on different hosts can
communicate with each other.
## Exploring the cluster
The `juju status` command provides information about each unit in the cluster:
$ juju status --format=oneline
- docker/0: 52.4.92.78 (started)
- flannel-docker/0: 52.4.92.78 (started)
- kubernetes/0: 52.4.92.78 (started)
- docker/1: 52.6.104.142 (started)
- flannel-docker/1: 52.6.104.142 (started)
- kubernetes/1: 52.6.104.142 (started)
- etcd/0: 52.5.216.210 (started) 4001/tcp
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
You can use `juju ssh` to access any of the units:
juju ssh kubernetes-master/0
## Run some containers!
`kubectl` is available on the Kubernetes master node. We'll ssh in to
launch some containers, but one could use `kubectl` locally by setting
`KUBERNETES_MASTER` to point at the IP address of "kubernetes-master/0".
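For example, from your workstation (a sketch; substitute the master's address from `juju status`):
```sh
export KUBERNETES_MASTER=http://<kubernetes-master-ip>:8080
kubectl get pods
```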
No pods will be available before starting a container:
kubectl get pods
NAME READY STATUS RESTARTS AGE
kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
```json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "hello",
"labels": {
"name": "hello",
"environment": "testing"
}
},
"spec": {
"containers": [{
"name": "hello",
"image": "quay.io/kelseyhightower/hello",
"ports": [{
"containerPort": 80,
"hostPort": 80
}]
}]
}
}
```
Create the pod with kubectl:
kubectl create -f pod.json
Get info on the pod:
kubectl get pods
To test the hello app, we need to locate which node is hosting
the container. Better tooling for using Juju to introspect containers
is in the works, but we can use `juju run` and `juju status` to find
our hello app.
Exit out of our ssh session and run:
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello....
We see "kubernetes/1" has our container, we can open port 80:
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
Finally delete the pod:
juju ssh kubernetes-master/0
kubectl delete pods hello
## Scale out cluster
We can add node units like so:
juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
## Launch the "k8petstore" example app
The [k8petstore example](../../examples/k8petstore/) is available as a
[juju action](https://jujucharms.com/docs/devel/actions).
juju action do kubernetes-master/0
> Note: this example includes curl statements to exercise the app, which
> automatically generates "petstore" transactions written to redis, and allows
> you to visualize the throughput in your browser.
## Tear down cluster
./kube-down.sh
or destroy your current Juju environment (using the `juju env` command):
juju destroy-environment --force `juju env`
## More Info
The Kubernetes charms and bundles can be found in the `kubernetes` project on
github.com:
- [Bundle Repository](http://releases.k8s.io/HEAD/cluster/juju/bundles)
* [Kubernetes master charm](../../cluster/juju/charms/trusty/kubernetes-master/)
* [Kubernetes node charm](../../cluster/juju/charms/trusty/kubernetes/)
- [More about Juju](https://jujucharms.com)
### Cloud compatibility
Juju runs natively against a variety of public cloud providers. Juju currently
works with [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws),
[Windows Azure](https://jujucharms.com/docs/stable/config-azure),
[DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean),
[Google Compute Engine](https://jujucharms.com/docs/stable/config-gce),
[HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud),
[Joyent](https://jujucharms.com/docs/stable/config-joyent),
[LXC](https://jujucharms.com/docs/stable/config-LXC), any
[OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment,
[Vagrant](https://jujucharms.com/docs/stable/config-vagrant), and
[Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware).
If you do not see your favorite cloud provider listed many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).
The Kubernetes bundle has been tested on GCE and AWS and found to work with
version 1.0.0.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,305 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started with libvirt CoreOS
-----------------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/libvirt-coreos/
- [Highlights](#highlights)
- [Warnings about `libvirt-coreos` use case](#warnings-about-libvirt-coreos-use-case)
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Interacting with your Kubernetes cluster with the `kube-*` scripts.](#interacting-with-your-kubernetes-cluster-with-the-kube--scripts)
- [Troubleshooting](#troubleshooting)
- [!!! Cannot find kubernetes-server-linux-amd64.tar.gz](#-cannot-find-kubernetes-server-linux-amd64targz)
- [Can't find virsh in PATH, please fix and retry.](#cant-find-virsh-in-path-please-fix-and-retry)
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-no-such-file-or-directory)
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-permission-denied)
- [error: Out of memory initializing network (virsh net-create...)](#error-out-of-memory-initializing-network-virsh-net-create)
### Highlights
* Super-fast cluster boot-up (few seconds instead of several minutes for vagrant)
* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write)
* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt)
### Warnings about `libvirt-coreos` use case
The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.
In order to achieve that goal, its deployment is very different from the “standard production deployment” method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.
The `libvirt-coreos` cluster provider doesn't aim to be a production look-alike.
Another difference is that no security is enforced on `libvirt-coreos` at all. For example,
* Kube API server is reachable via a clear-text connection (no SSL);
* Kube API server requires no credentials;
* etcd access is not protected;
* Kubernetes secrets are not protected as securely as they are on production environments;
* etc.
So, a k8s application developer should not validate their application's interaction with Kubernetes on `libvirt-coreos`, because they might technically succeed in doing things that are prohibited in a production environment, such as:
* un-authenticated access to Kube API server;
* Access to Kubernetes private data structures inside etcd;
* etc.
On the other hand, `libvirt-coreos` might be useful for people investigating low level implementation of Kubernetes because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on `libvirt-coreos` than on a production deployment.
### Prerequisites
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
2. Install [ebtables](http://ebtables.netfilter.org/)
3. Install [qemu](http://wiki.qemu.org/Main_Page)
4. Install [libvirt](http://libvirt.org/)
5. Install [openssl](http://openssl.org/)
6. Enable and start the libvirt daemon, e.g:
* ``systemctl enable libvirtd && systemctl start libvirtd`` # for systemd-based systems
* ``/etc/init.d/libvirt-bin start`` # for init.d-based systems
7. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
8. Check that your $HOME is accessible to the qemu user²
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
You can test it with the following command:
```sh
virsh -c qemu:///system pool-list
```
If you have access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html .
In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`
```sh
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
subject.user == "$USER") {
polkit.log("action=" + action);
polkit.log("subject=" + subject);
return polkit.Result.YES;
}
});
EOF
```
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
```console
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
$ usermod -a -G libvirtd $USER
# $USER needs to logout/login to have the new group be taken into account
```
(Replace `$USER` with your login name)
#### ² Qemu will run with a specific user. It must have access to the VMs drives
All the disk drive resources needed by the VM (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
As we're using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.
If your `$HOME` is world readable, everything is fine. If your $HOME is private, `cluster/kube-up.sh` will fail with an error message like:
```console
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
* backed by a filesystem with a lot of free disk space
* writable by your user;
* accessible by the qemu user.
* Grant the qemu user access to the storage pool.
* Edit `/etc/libvirt/qemu.conf` so that qemu runs as a user that has access to the storage pool (not recommended for production usage).
On Arch:
```sh
setfacl -m g:kvm:--x ~
```
### Setup
By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
To start your local cluster, open a shell and run:
```sh
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
You can check that your machines are there and running with:
```console
$ virsh -c qemu:///system list
 Id    Name                 State
----------------------------------------------------
 15    kubernetes_master    running
 16    kubernetes_node-01   running
 17    kubernetes_node-02   running
 18    kubernetes_node-03   running
```
You can check that the Kubernetes cluster is working with:
```console
$ kubectl get nodes
NAME           LABELS    STATUS
192.168.10.2   <none>    Ready
192.168.10.3   <none>    Ready
192.168.10.4   <none>    Ready
```
The VMs are running [CoreOS](https://coreos.com/).
Your SSH keys have already been pushed to the VMs (`kube-up` looks for `~/.ssh/id_*.pub`).
The user to use to connect to the VM is `core`.
The IP to connect to the master is 192.168.10.1.
The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
```sh
ssh core@192.168.10.1
```
Connect to `kubernetes_node-01`:
```sh
ssh core@192.168.10.2
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
```sh
export KUBERNETES_PROVIDER=libvirt-coreos
```
Bring up a libvirt-CoreOS cluster of 5 nodes
```sh
NUM_NODES=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster
```sh
cluster/kube-down.sh
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
```sh
cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
```sh
KUBE_PUSH=local cluster/kube-push.sh
```
Interact with the cluster
```sh
kubectl ...
```
### Troubleshooting
#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz
Build the release tarballs:
```sh
make release
```
#### Can't find virsh in PATH, please fix and retry.
Install libvirt
On Arch:
```sh
pacman -S qemu libvirt
```
On Ubuntu 14.04.1:
```sh
aptitude install qemu-system-x86 libvirt-bin
```
On Fedora 21:
```sh
yum install qemu libvirt
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
Start the libvirt daemon
On Arch:
```sh
systemctl start libvirtd
```
On Ubuntu 14.04.1:
```sh
service libvirt-bin start
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
Fix libvirt access permission (Remember to adapt `$USER`)
On Arch and Fedora 21:
```sh
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```
On Ubuntu:
```sh
sudo usermod -a -G libvirtd $USER
```
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
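On systemd-based systems, for example:

```sh
sudo systemctl restart libvirtd
```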
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,236 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Level Logging with Elasticsearch and Kibana This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/logging-elasticsearch/
On the Google Compute Engine (GCE) platform the default cluster level logging support targets
[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging.md) getting
started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
alternative to Google Cloud Logging.
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
```console
KUBE_LOGGING_DESTINATION=elasticsearch
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
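Both are ordinary shell environment variables read by the cluster scripts; for example:

```console
$ export KUBE_ENABLE_NODE_LOGGING=true
$ export KUBE_LOGGING_DESTINATION=elasticsearch
```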
Now when you create a cluster a message will indicate that the Fluentd node-level log collectors
will target Elasticsearch:
```console
$ cluster/kube-up.sh
...
Project: kubernetes-satnam
Zone: us-central1-b
... calling kube-up
Project: kubernetes-satnam
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
Looking for already existing resources
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```
The node-level Fluentd collector pods, the Elasticsearch pods used to ingest the cluster logs, and the pod for the Kibana
viewer should be running in the kube-system namespace soon after the cluster comes to life.
```console
$ kubectl get pods --namespace=kube-system
NAME                                         READY     REASON    RESTARTS   AGE
elasticsearch-logging-v1-78nog               1/1       Running   0          2h
elasticsearch-logging-v1-nj2nb               1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-5oq0   1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-6896   1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-l1ds   1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-lz9j   1/1       Running   0          2h
kibana-logging-v1-bhpo8                      1/1       Running   0          2h
kube-dns-v3-7r1l9                            3/3       Running   0          2h
monitoring-heapster-v4-yl332                 1/1       Running   1          2h
monitoring-influx-grafana-v1-o79xf           2/2       Running   0          2h
```
Here we see that for a four-node cluster there is a `fluentd-elasticsearch` pod running on each node, gathering
the Docker container logs and sending them to Elasticsearch. The Fluentd collector communicates with
a Kubernetes service that maps requests to specific Elasticsearch pods. Similarly, Kibana can also be
accessed via a Kubernetes service definition.
```console
$ kubectl get services --namespace=kube-system
NAME                    LABELS                                                                                              SELECTOR                        IP(S)          PORT(S)
elasticsearch-logging   k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch   k8s-app=elasticsearch-logging   10.0.222.57    9200/TCP
kibana-logging          k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana                 k8s-app=kibana-logging          10.0.193.226   5601/TCP
kube-dns                k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS                      k8s-app=kube-dns                10.0.0.10      53/UDP
                                                                                                                                                                           53/TCP
kubernetes              component=apiserver,provider=kubernetes                                                             <none>                          10.0.0.1       443/TCP
monitoring-grafana      kubernetes.io/cluster-service=true,kubernetes.io/name=Grafana                                       k8s-app=influxGrafana           10.0.167.139   80/TCP
monitoring-heapster     kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster                                      k8s-app=heapster                10.0.208.221   80/TCP
monitoring-influxdb     kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB                                      k8s-app=influxGrafana           10.0.188.57    8083/TCP
```
By default two Elasticsearch replicas are created and one Kibana replica is created.
```console
$ kubectl get rc --namespace=kube-system
CONTROLLER                     CONTAINER(S)            IMAGE(S)                                          SELECTOR                                   REPLICAS
elasticsearch-logging-v1       elasticsearch-logging   gcr.io/google_containers/elasticsearch:1.4        k8s-app=elasticsearch-logging,version=v1   2
kibana-logging-v1              kibana-logging          gcr.io/google_containers/kibana:1.3               k8s-app=kibana-logging,version=v1          1
kube-dns-v3                    etcd                    gcr.io/google_containers/etcd:2.0.9               k8s-app=kube-dns,version=v3                1
                               kube2sky                gcr.io/google_containers/kube2sky:1.9
                               skydns                  gcr.io/google_containers/skydns:2015-03-11-001
monitoring-heapster-v4         heapster                gcr.io/google_containers/heapster:v0.14.3         k8s-app=heapster,version=v4                1
monitoring-influx-grafana-v1   influxdb                gcr.io/google_containers/heapster_influxdb:v0.3   k8s-app=influxGrafana,version=v1           1
                               grafana                 gcr.io/google_containers/heapster_grafana:v0.7
```
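If you need more ingest capacity, the Elasticsearch replication controller can be scaled like any other; a sketch using the controller name from the listing above:

```console
$ kubectl scale rc elasticsearch-logging-v1 --replicas=3 --namespace=kube-system
```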
The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead,
they can be accessed via the service proxy running at the master. The URLs for accessing Elasticsearch
and Kibana via the service proxy can be found using the `kubectl cluster-info` command.
```console
$ kubectl cluster-info
Kubernetes master is running at https://146.148.94.154
Elasticsearch is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Kibana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
the `admin` password for the cluster using `kubectl config view`.
```console
$ kubectl config view
...
- name: kubernetes-satnam_kubernetes-basic-auth
  user:
    password: 7GlspJ9Q43OnGIJO
    username: admin
...
```
The first time you try to access the cluster from a browser a dialog box appears asking for the username and password.
Use the username `admin` and provide the basic auth password reported by `kubectl config view` for the
cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the
status page for Elasticsearch.
![Elasticsearch Status](es-browser.png)
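The same basic-auth credentials work from the command line too; a sketch using the sample address and password shown here (substitute your own):

```console
$ curl --user admin:7GlspJ9Q43OnGIJO --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
```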
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
from your local machine using `curl` but first you need to know what your bearer token is:
```console
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.94.154
  name: kubernetes-satnam_kubernetes
contexts:
- context:
    cluster: kubernetes-satnam_kubernetes
    user: kubernetes-satnam_kubernetes
  name: kubernetes-satnam_kubernetes
current-context: kubernetes-satnam_kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-satnam_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
```
Now you can issue requests to Elasticsearch:
```console
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
{
  "status" : 200,
  "name" : "Vance Astrovik",
  "cluster_name" : "kubernetes-logging",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
```console
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "failed" : 0
  },
  "hits" : {
    "total" : 123711,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : ".kibana",
      "_type" : "config",
      "_id" : "4.0.2",
      "_score" : 1.0,
      "_source":{"buildNum":6004,"defaultIndex":"logstash-*"}
    }, {
      ...
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxtx",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 31: 2015-06-22 20:35:33.597918073+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:33+00:00"}
    }, {
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxt2",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 36: 2015-06-22 20:35:34.108780133+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:34+00:00"}
    } ]
  }
}
```
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html) which can be used to extract the required logs.
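For example, a sketch of a URI search that returns only the five most recent stdout entries (same token and address as above):

```console
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure "https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=stream:stdout&size=5&sort=@timestamp:desc&pretty=true"
```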
Alternatively you can view the ingested logs using Kibana. The first time you visit the Kibana URL you will be
presented with a page that asks you to configure your view of the ingested logs. Select the option for
time series values and select `@timestamp`. On the following page select the `Discover` tab and then you
should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs
regularly refreshed. Here is a typical view of ingested logs from the Kibana viewer.
![Kibana logs](kibana-logs.png)
Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
a local proxy to the remote master:
```console
$ kubectl proxy
Starting to serve on localhost:8001
```
Now you can visit the URL [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging) to contact Elasticsearch and [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging) to access the Kibana viewer.
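Because `kubectl proxy` authenticates to the apiserver on your behalf, no bearer token is needed on this path; for example:

```console
$ curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
```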
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,226 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Level Logging to Google Cloud Logging This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/logging/
A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.
Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod's container images, the lifetime of the pod, or even the cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to Google Cloud Logging. After a cluster has been created you will have a collection of system pods running in the `kube-system` namespace that support monitoring,
logging and DNS resolution for names of Kubernetes services:
```console
$ kubectl get pods --namespace=kube-system
NAME                                         READY     REASON    RESTARTS   AGE
fluentd-cloud-logging-kubernetes-node-0f64   1/1       Running   0          32m
fluentd-cloud-logging-kubernetes-node-27gf   1/1       Running   0          32m
fluentd-cloud-logging-kubernetes-node-pk22   1/1       Running   0          31m
fluentd-cloud-logging-kubernetes-node-20ej   1/1       Running   0          31m
kube-dns-v3-pk22                             3/3       Running   0          32m
monitoring-heapster-v1-20ej                  0/1       Running   9          32m
```
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
![Cluster](../../examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
To help explain how cluster level logging works, let's start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
           'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
This pod specification has one container which runs a bash script when the container starts. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default
namespace.
```console
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
```
We can observe the running pod:
```console
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
counter   1/1       Running   0          5m
```
This step may take a few minutes to download the `ubuntu:14.04` image, during which the pod status will be shown as `Pending`.
One of the nodes is now running the counter pod:
![Counter Pod](../../examples/blog-logging/diagrams/27gf-counter.png)
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
```console
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```
This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
```console
$ kubectl exec -i counter bash
ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  17976  2888 ?        Ss   00:02   0:00 bash -c for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done
root       468  0.0  0.0  17968  2904 ?        Ss   00:05   0:00 bash
root       479  0.0  0.0   4348   812 ?        S    00:05   0:00 sleep 1
root       480  0.0  0.0  15572  2212 ?        R    00:05   0:00 ps aux
```
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's delete the currently running counter.
```console
$ kubectl delete pod counter
pods/counter
```
Now let's restart the counter.
```console
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
```
Let's wait for the container to restart and get the log lines again.
```console
$ kubectl logs counter
0: Tue Jun 2 21:51:40 UTC 2015
1: Tue Jun 2 21:51:41 UTC 2015
2: Tue Jun 2 21:51:42 UTC 2015
3: Tue Jun 2 21:51:43 UTC 2015
4: Tue Jun 2 21:51:44 UTC 2015
5: Tue Jun 2 21:51:45 UTC 2015
6: Tue Jun 2 21:51:46 UTC 2015
7: Tue Jun 2 21:51:47 UTC 2015
8: Tue Jun 2 21:51:48 UTC 2015
```
We've lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don't fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested using a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or into Elasticsearch and viewed with Kibana.
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
This log collection pod has a specification which looks something like this:
<!-- BEGIN MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  dnsPolicy: Default
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.17
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
    env:
    - name: FLUENTD_ARGS
      value: -q
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  terminationGracePeriodSeconds: 30
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```
[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.17`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
![Cloud Logging Console](cloud-logging-console.png)
When we view the logs in the Developer Console we observe the logs for both invocations of the container.
![Both Logs](all-lines.png)
Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.
Logs ingested into Google Cloud Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to. You can also follow this link to the
[settings tab](https://pantheon.corp.google.com/project/_/logs/settings).
We could query the ingested logs from BigQuery using the SQL query which reports the counter log lines showing the newest lines first:
```console
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```
Here is some sample output:
![BigQuery](bigquery-logging.png)
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.
```console
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
```
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
```console
$ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"0: Thu Jun 11 21:39:38 UTC 2015\n"
"1: Thu Jun 11 21:39:39 UTC 2015\n"
"2: Thu Jun 11 21:39:40 UTC 2015\n"
"3: Thu Jun 11 21:39:41 UTC 2015\n"
"4: Thu Jun 11 21:39:42 UTC 2015\n"
"5: Thu Jun 11 21:39:43 UTC 2015\n"
"6: Thu Jun 11 21:39:44 UTC 2015\n"
"7: Thu Jun 11 21:39:45 UTC 2015\n"
...
```
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files, as described at the page [Collecting log files within containers with Fluentd](https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-gcp), and send them to the Google Cloud Logging service.
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,327 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting Started With Kubernetes on Mesos on Docker This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/mesos-docker/
----------------------------------------
The mesos/docker provider uses docker-compose to launch Kubernetes as a Mesos framework, running in docker with its
dependencies (etcd & mesos).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Cluster Goals](#cluster-goals)
- [Cluster Topology](#cluster-topology)
- [Prerequisites](#prerequisites)
- [Install on Mac (Homebrew)](#install-on-mac-homebrew)
- [Install on Linux](#install-on-linux)
- [Docker Machine Config (Mac)](#docker-machine-config-mac)
- [Walkthrough](#walkthrough)
- [Addons](#addons)
- [KubeUI](#kubeui)
- [End To End Testing](#end-to-end-testing)
- [Kubernetes CLI](#kubernetes-cli)
- [Helpful scripts](#helpful-scripts)
- [Build Locally](#build-locally)
<!-- END MUNGE: GENERATED_TOC -->
## Cluster Goals
- kubernetes development
- pod/service development
- demoing
- fast deployment
- minimal hardware requirements
- minimal configuration
- entry point for exploration
- simplified networking
- fast end-to-end tests
- local deployment
Non-Goals:
- high availability
- fault tolerance
- remote deployment
- production usage
- monitoring
- long running
- state persistence across restarts
## Cluster Topology
The cluster consists of several docker containers linked together by docker-managed hostnames:
| Component | Hostname | Description |
|-------------------------------|-----------------------------|-----------------------------------------------------------------------------------------|
| docker-grand-ambassador | | Proxy to allow circular hostname linking in docker |
| etcd | etcd | Key/Value store used by Mesos |
| Mesos Master | mesosmaster1 | REST endpoint for interacting with Mesos |
| Mesos Slave (x2) | mesosslave1<br/>mesosslave2 | Mesos agents that offer resources and run framework executors (e.g. Kubernetes Kubelets) |
| Kubernetes API Server | apiserver | REST endpoint for interacting with Kubernetes |
| Kubernetes Controller Manager | controller | |
| Kubernetes Scheduler | scheduler | Schedules container deployment by accepting Mesos offers |
## Prerequisites
Required:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - version control system
- [Docker CLI](https://docs.docker.com/) - container management command line client
- [Docker Engine](https://docs.docker.com/) - container management daemon
- On Mac, use [Docker Machine](https://docs.docker.com/machine/install-machine/)
- [Docker Compose](https://docs.docker.com/compose/install/) - multi-container application orchestration
Optional:
- [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
- Free x86 virtualization engine with a Docker Machine driver
- [Golang](https://golang.org/doc/install) - Go programming language
- Required to build Kubernetes locally
- [Make](https://en.wikipedia.org/wiki/Make_(software)) - Utility for building executables from source
- Required to build Kubernetes locally with make
### Install on Mac (Homebrew)
It's possible to install all of the above via [Homebrew](http://brew.sh/) on a Mac.
Some steps print instructions for configuring or launching. Make sure each is properly set up before continuing to the next step.
```
brew install git
brew install caskroom/cask/brew-cask
brew cask install virtualbox
brew install docker
brew install docker-machine
brew install docker-compose
```
### Install on Linux
Most of the above are available via apt and yum, but depending on your distribution, you may have to install via other
means to get the latest versions.
It is recommended to use Ubuntu, simply because it best supports AUFS, used by docker to mount volumes. Alternate file
systems may not fully support docker-in-docker.
In order to build Kubernetes, the current user must be in a docker group with sudo privileges.
See the docker docs for [instructions](https://docs.docker.com/installation/ubuntulinux/#create-a-docker-group).
#### Docker Machine Config (Mac)
If on a mac using docker-machine, the following steps will make the docker IPs (in the virtualbox VM) reachable from the
host machine (mac).
1. Create VM
```
docker-machine create --driver virtualbox kube-dev
eval "$(docker-machine env kube-dev)"
```
2. Set the VM's host-only network to "promiscuous mode":
```
docker-machine stop kube-dev
VBoxManage modifyvm kube-dev --nicpromisc2 allow-all
docker-machine start kube-dev
```
This allows the VM to accept packets that were sent to a different IP.
Since the host-only network routes traffic between VMs and the host, other VMs will also be able to access the docker
IPs, if they have the following route.
1. Route traffic to docker through the docker-machine IP:
```
sudo route -n add -net 172.17.0.0 $(docker-machine ip kube-dev)
```
Since the docker-machine IP can change when the VM is restarted, this route may need to be updated over time.
To delete the route later: `sudo route delete 172.17.0.0`
## Walkthrough
1. Checkout source
```
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
```
By default, that will get you the bleeding edge of master branch.
You may want a [release branch](https://github.com/kubernetes/kubernetes/releases) instead,
if you have trouble with master.
1. Build binaries
You'll need to build kubectl (CLI) for your local architecture and operating system and the rest of the server binaries for linux/amd64.
Building a new release covers both cases:
```
KUBERNETES_CONTRIB=mesos build/release.sh
```
For developers, it may be faster to [build locally](#build-locally).
1. [Optional] Build docker images
The following docker images are built as part of `./cluster/kube-up.sh`, but it may make sense to build them manually the first time because it may take a while.
1. Test image includes all the dependencies required for running e2e tests.
```
./cluster/mesos/docker/test/build.sh
```
In the future, this image may be available to download. It doesn't contain anything specific to the current release, except its build dependencies.
1. Kubernetes-Mesos image includes the compiled linux binaries.
```
./cluster/mesos/docker/km/build.sh
```
This image needs to be built every time you recompile the server binaries.
1. [Optional] Configure Mesos resources
By default, the mesos-slaves are configured to offer a fixed amount of resources (cpus, memory, disk, ports).
If you want to customize these values, update the `MESOS_RESOURCES` environment variables in `./cluster/mesos/docker/docker-compose.yml`.
If you delete the `MESOS_RESOURCES` environment variables, the resource amounts will be auto-detected based on the host resources, which will over-provision by > 2x.
If the configured resources are not available on the host, you may want to increase the resources available to Docker Engine.
You may have to increase your VM disk, memory, or cpu allocation. See the Docker Machine docs for details
([Virtualbox](https://docs.docker.com/machine/drivers/virtualbox))
1. Configure provider
```
export KUBERNETES_PROVIDER=mesos/docker
```
This tells cluster scripts to use the code within `cluster/mesos/docker`.
1. Create cluster
```
./cluster/kube-up.sh
```
If you manually built all the above docker images, you can skip that step during kube-up:
```
MESOS_DOCKER_SKIP_BUILD=true ./cluster/kube-up.sh
```
After deploying the cluster, `~/.kube/config` will be created or updated to configure kubectl to target the new cluster.
1. Explore examples
To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the
[Kubernetes Walkthrough](../user-guide/walkthrough/).
To skip to a more advanced example, see the [Guestbook Example](../../examples/guestbook/)
1. Destroy cluster
```
./cluster/kube-down.sh
```
## Addons
The `kube-up` for the mesos/docker provider will automatically deploy KubeDNS and KubeUI addons as pods/services.
Check their status with:
```
./cluster/kubectl.sh get pods --namespace=kube-system
```
### KubeUI
The web-based Kubernetes UI is accessible in a browser through the API Server proxy: `https://<apiserver>:6443/ui/`.
By default, basic-auth is configured with user `admin` and password `admin`.
The IP of the API Server can be found using `./cluster/kubectl.sh cluster-info`.
## End To End Testing
Warning: e2e tests can take a long time to run. You may not want to run them immediately if you're just getting started.
While your cluster is up, you can run the end-to-end tests:
```
./cluster/test-e2e.sh
```
Notable parameters:
- Increase the logging verbosity: `-v=2`
- Run only a subset of the tests (regex matching): `-ginkgo.focus=<pattern>`
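For example, combining the two flags above (the focus pattern is illustrative):

```
./cluster/test-e2e.sh -v=2 -ginkgo.focus="Guestbook"
```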
To build, deploy, test, and destroy, all in one command (plus unit & integration tests):
```
make test_e2e
```
## Kubernetes CLI
When compiling from source, it's simplest to use the `./cluster/kubectl.sh` script, which detects your platform &
architecture and proxies commands to the appropriate `kubectl` binary.
ex: `./cluster/kubectl.sh get pods`
## Helpful scripts
- Kill all docker containers
```
docker ps -q -a | xargs docker rm -f
```
- Clean up unused docker volumes
```
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
```
## Build Locally
The steps above tell you how to build in a container, for minimal local dependencies. But if you have Go and Make installed you can build locally much faster:
```
KUBERNETES_CONTRIB=mesos make
```
However, if you're not on linux, you'll still need to compile the linux/amd64 server binaries:
```
KUBERNETES_CONTRIB=mesos build/run.sh hack/build-go.sh
```
The above two steps should be significantly faster than cross-compiling a whole new release for every supported platform (which is what `./build/release.sh` does).
Breakdown:
- `KUBERNETES_CONTRIB=mesos` - enables building of the contrib/mesos binaries
- `hack/build-go.sh` - builds the Go binaries for the current architecture (linux/amd64 when in a docker container)
- `make` - delegates to `hack/build-go.sh`
- `build/run.sh` - executes a command in the build container
- `build/release.sh` - cross compiles Kubernetes for all supported architectures and operating systems (slow)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,348 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started with Kubernetes on Mesos This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/mesos/
----------------------------------------
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [About Kubernetes on Mesos](#about-kubernetes-on-mesos)
- [Prerequisites](#prerequisites)
- [Deploy Kubernetes-Mesos](#deploy-kubernetes-mesos)
- [Deploy etcd](#deploy-etcd)
- [Start Kubernetes-Mesos Services](#start-kubernetes-mesos-services)
- [Validate KM Services](#validate-km-services)
- [Spin up a pod](#spin-up-a-pod)
- [Launching kube-dns](#launching-kube-dns)
- [What next?](#what-next)
<!-- END MUNGE: GENERATED_TOC -->
## About Kubernetes on Mesos
<!-- TODO: Update, clean up. -->
Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3].
Mesos also ensures applications from different frameworks running on your cluster are isolated and that resources are allocated fairly among them.
Mesos clusters can be deployed on nearly every IaaS cloud provider infrastructure or in your own physical datacenter. Kubernetes on Mesos runs on top of that and therefore allows you to easily move Kubernetes workloads from one of these environments to the other.
This tutorial will walk you through setting up Kubernetes on a Mesos cluster.
It provides a step-by-step walkthrough of adding Kubernetes to a Mesos cluster and starting your first pod with an nginx webserver.
**NOTE:** There are [known issues with the current implementation][7] and support for centralized logging and monitoring is not yet available.
Please [file an issue against the kubernetes-mesos project][8] if you have problems completing the steps below.
Further information is available in the Kubernetes on Mesos [contrib directory][13].
### Prerequisites
- Understanding of [Apache Mesos][6]
- A running [Mesos cluster on Google Compute Engine][5]
- A [VPN connection][10] to the cluster
- A machine in the cluster which should become the Kubernetes *master node* with:
- Go (see [here](../devel/development.md#go-versions) for required versions)
- make (i.e. build-essential)
- Docker
**Note**: You *can*, but you *don't have to* deploy Kubernetes-Mesos on the same machine the Mesos master is running on.
### Deploy Kubernetes-Mesos
Log into the future Kubernetes *master node* over SSH, replacing the placeholder below with the correct IP address.
```bash
ssh jclouds@${ip_address_of_master_node}
```
Build Kubernetes-Mesos.
```bash
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
export KUBERNETES_CONTRIB=mesos
make
```
Set some environment variables.
The internal IP address of the master may be obtained via `hostname -i`.
```bash
export KUBERNETES_MASTER_IP=$(hostname -i)
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
```
Note that `KUBERNETES_MASTER` is used as the API endpoint. If you have an existing `~/.kube/config` that points to another endpoint, you need to add the option `--server=${KUBERNETES_MASTER}` to kubectl in later steps.
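For example:

```bash
kubectl --server=${KUBERNETES_MASTER} get pods
```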
### Deploy etcd
Start etcd and verify that it is running:
```bash
sudo docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.2.1 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
```
```console
$ sudo docker ps
CONTAINER ID   IMAGE                        COMMAND   CREATED   STATUS   PORTS                 NAMES
fd7bac9e2301   quay.io/coreos/etcd:v2.2.1   "/etcd"   5s ago    Up 3s    2379/tcp, 2380/...    etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it:
```bash
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
```
If connectivity is OK, you will see an output of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Update your PATH to more easily run the Kubernetes-Mesos binaries:
```bash
export PATH="$(pwd)/_output/local/go/bin:$PATH"
```
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos-master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
```bash
export MESOS_MASTER=<host:port or zk:// url>
```
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
```console
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
mesos-master = ${MESOS_MASTER}
EOF
```
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
```console
$ km apiserver \
--address=${KUBERNETES_MASTER_IP} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--service-cluster-ip-range=10.10.10.0/24 \
--port=8888 \
--cloud-provider=mesos \
--cloud-config=mesos-cloud.conf \
--secure-port=0 \
--v=1 >apiserver.log 2>&1 &
$ km controller-manager \
--master=${KUBERNETES_MASTER_IP}:8888 \
--cloud-provider=mesos \
--cloud-config=./mesos-cloud.conf \
--v=1 >controller.log 2>&1 &
$ km scheduler \
--address=${KUBERNETES_MASTER_IP} \
--mesos-master=${MESOS_MASTER} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--mesos-user=root \
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=2 >scheduler.log 2>&1 &
```
Disown your background jobs so that they'll stay running if you log out.
```bash
disown -a
```
#### Validate KM Services
Interact with the kubernetes-mesos framework via `kubectl`:
```console
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
```
```console
# NOTE: your service IPs will likely differ
$ kubectl get services
NAME             LABELS                                    SELECTOR   IP(S)          PORT(S)
k8sm-scheduler   component=scheduler,provider=k8sm         <none>     10.10.10.113   10251/TCP
kubernetes       component=apiserver,provider=kubernetes   <none>     10.10.10.1     443/TCP
```
Lastly, look for Kubernetes in the Mesos web GUI by pointing your browser to
`http://<mesos-master-ip:port>`. Make sure you have an active VPN connection.
Go to the Frameworks tab, and look for an active framework named "Kubernetes".
## Spin up a pod
Write a pod description to a local file:
```bash
cat <<EOPOD >nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOPOD
```
Send the pod description to Kubernetes using the `kubectl` CLI:
```console
$ kubectl create -f ./nginx.yaml
pods/nginx
```
Wait a minute or two while `dockerd` downloads the image layers from the internet.
We can use the `kubectl` interface to monitor the status of our pod:
```console
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          14s
```
Verify that the pod task is running in the Mesos web GUI. Click on the
Kubernetes framework. The next screen should show the running Mesos task that
started the Kubernetes pod.
## Launching kube-dns
Kube-dns is an addon for Kubernetes which adds DNS-based service discovery to the cluster. For a detailed explanation see [DNS in Kubernetes][4].
The kube-dns addon runs as a pod inside the cluster. The pod consists of three co-located containers:
- a local etcd instance
- the [skydns][11] DNS server
- the kube2sky process to glue skydns to the state of the Kubernetes cluster.
The skydns container offers DNS service on port 53 to the cluster. Communication with the co-located etcd instance happens locally over 127.0.0.1.
We assume that kube-dns will use
- the service IP `10.10.10.10`
- and the `cluster.local` domain.
Note that we have already passed these two values as parameters to the scheduler above.
A template for a replication controller spinning up the pod with the three containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
- replace `{{ pillar['dns_replicas'] }}` with `1`
- replace `{{ pillar['dns_domain'] }}` with `cluster.local.`
- add `--kube_master_url=${KUBERNETES_MASTER}` parameter to the kube2sky container command.
In addition the service template at [cluster/addons/dns/skydns-svc.yaml.in][12] needs the following replacement:
- `{{ pillar['dns_server'] }}` with `10.10.10.10`.
To do this automatically:
```bash
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
```
Now the kube-dns pod and service are ready to be launched:
```bash
kubectl create -f ./skydns-rc.yaml
kubectl create -f ./skydns-svc.yaml
```
Check with `kubectl get pods --namespace=kube-system` that 3/3 containers of the pods are eventually up and running. Note that the kube-dns pods run in the `kube-system` namespace, not in `default`.
To check that the new DNS service in the cluster works, we start a busybox pod and use that to do a DNS lookup. First create the `busybox.yaml` pod spec:
```bash
cat <<EOF >busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
```
Then start the pod:
```bash
kubectl create -f ./busybox.yaml
```
When the pod is up and running, start a lookup for the Kubernetes master service, made available on 10.10.10.1 by default:
```bash
kubectl exec busybox -- nslookup kubernetes
```
If everything works fine, you will get this output:
```console
Server: 10.10.10.10
Address 1: 10.10.10.10
Name: kubernetes
Address 1: 10.10.10.1
```
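As a further check, you can try resolving a service in another namespace; a sketch (assuming the standard cluster search domains, with the `kube-dns` service in `kube-system`):

```bash
kubectl exec busybox -- nslookup kube-dns.kube-system
```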
## What next?
Try out some of the standard [Kubernetes examples][9].
Read about Kubernetes on Mesos' architecture in the [contrib directory][13].
**NOTE:** Some examples require Kubernetes DNS to be installed on the cluster.
Future work will add instructions to this guide to enable support for Kubernetes DNS.
**NOTE:** Please be aware that there are [known issues with the current Kubernetes-Mesos implementation][7].
[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer
[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos
[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos
[4]: ../../cluster/addons/dns/README.md
[5]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/
[6]: http://mesos.apache.org/
[7]: ../../contrib/mesos/docs/issues.md
[8]: https://github.com/mesosphere/kubernetes-mesos/issues
[9]: ../../examples/
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: ../../cluster/addons/dns/skydns-rc.yaml.in
[12]: ../../cluster/addons/dns/skydns-svc.yaml.in
[13]: ../../contrib/mesos/README.md
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,60 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on oVirt
------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ovirt/
- [What is oVirt](#what-is-ovirt)
- [oVirt Cloud Provider Deployment](#ovirt-cloud-provider-deployment)
- [Using the oVirt Cloud Provider](#using-the-ovirt-cloud-provider)
- [oVirt Cloud Provider Screencast](#ovirt-cloud-provider-screencast)
## What is oVirt
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
## oVirt Cloud Provider Deployment
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines
[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates
[install the ovirt-guest-agent]: http://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
## Using the oVirt Cloud Provider
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:

```
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin
```
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:

```
[filters]
# Search query used to find nodes
vms = tag=kubernetes
```
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:

```
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
```
## oVirt Cloud Provider Screencast
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,77 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE --> <!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started on Rackspace
----------------------------
**Table of Contents** This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rackspace/
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Provider: Rackspace](#provider-rackspace)
- [Build](#build)
- [Cluster](#cluster)
- [Some notes:](#some-notes)
- [Network Design](#network-design)
## Introduction
* Supported Version: v0.18.1
In general, the dev-build-and-up.sh workflow for Rackspace is similar to that for Google Compute Engine. The specific implementation differs due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification.
NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration.
The current cluster design is inspired by:
- [corekube](https://github.com/metral/corekube)
- [Angus Lees](https://github.com/anguslees/kube-openstack)
## Prerequisites
1. Python 2.7
2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into.
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details.
## Provider: Rackspace
- To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run `bash hack/dev-build-and-up.sh`
- To install the latest released version of Kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
  - Note: The get.k8s.io install method is not working yet for our scripts.
## Build
1. The Kubernetes binaries will be built via the common build scripts in `build/`.
2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
3. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted.
## Cluster
There is a specific `cluster/rackspace` directory with the scripts for the following steps:
1. A cloud network will be created and all instances will be attached to this network.
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
4. We then boot as many nodes as defined via `$NUM_NODES`.
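Putting it together, a sketch of a complete run (assumes the OpenStack credentials from the prerequisites are exported):

```sh
export KUBERNETES_PROVIDER=rackspace
NUM_NODES=3 bash hack/dev-build-and-up.sh
```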
## Some notes
- The scripts expect `eth2` to be the cloud network that the containers will communicate across.
- A number of the items in `config-default.sh` are overridable via environment variables.
- For older versions please either:
* Sync back to `v0.9` with `git checkout v0.9`
* Download a [snapshot of `v0.9`](https://github.com/kubernetes/kubernetes/archive/v0.9.tar.gz)
* Sync back to `v0.3` with `git checkout v0.3`
* Download a [snapshot of `v0.3`](https://github.com/kubernetes/kubernetes/archive/v0.3.tar.gz)
## Network Design
- eth0 - Public Interface used for servers/containers to reach the internet
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc.) happens via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,223 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Run Kubernetes with rkt This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rkt/README/
This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime.
We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the experience with rkt wonderful, please stay tuned!
### **Prerequisite**
- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on the machine and should be enabled. The minimum version required at this moment (2015/09/01) is 219
*(Note that systemd is not required by rkt itself, we are using it here to monitor and manage the pods launched by kubelet.)*
- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
The minimum version required for now is [v0.8.0](https://github.com/coreos/rkt/releases/tag/v0.8.0).
- Note that for rkt versions later than v0.7.0, the `metadata service` is not required for running pods in private networks, so rkt pods will no longer register with the metadata service by default.
- Since release [v1.2.0-alpha.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0-alpha.5),
the [rkt API service](https://github.com/coreos/rkt/blob/master/api/v1alpha/README.md)
must be running on the node.
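How you run the API service is up to you; a minimal sketch using systemd (the unit name is illustrative):

```console
$ sudo systemd-run --unit=rkt-api-service rkt api-service
```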
### Network Setup
rkt uses the [Container Network Interface (CNI)](https://github.com/appc/cni)
to manage container networking. By default, all pods attempt to join a network
called `rkt.kubernetes.io`, which is currently defined [in `rkt.go`](https://github.com/kubernetes/kubernetes/blob/v1.2.0-alpha.6/pkg/kubelet/rkt/rkt.go#L91).
In order for pods to get correct IP addresses, the CNI config file must be
edited to add this `rkt.kubernetes.io` network:
#### Using flannel
In addition to the basic prerequisites above, each node must be running
a [flannel](https://github.com/coreos/flannel) daemon. This implies
that a flannel-supporting etcd service must be available to the cluster
as well, apart from the Kubernetes etcd, which will not yet be
available at flannel configuration time. Once it's running, flannel can
be set up with a CNI config like:
```console
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
{
"name": "rkt.kubernetes.io",
"type": "flannel"
}
EOF
```
While `k8s_cluster.conf` is a rather arbitrary name for the config file itself,
and can be adjusted to suit local conventions, the keys and values should be exactly
as shown above. `name` must be `rkt.kubernetes.io` and `type` should be `flannel`.
More details about the flannel CNI plugin can be found
[in the CNI documentation](https://github.com/appc/cni/blob/master/Documentation/flannel.md).
#### On GCE
Each VM on GCE has an additional 256 IP addresses routed to it, so
it is possible to forego flannel in smaller clusters. This makes the
necessary CNI config file a bit more verbose:
```console
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
{
"name": "rkt.kubernetes.io",
"type": "bridge",
"bridge": "cbr0",
"isGateway": true,
"ipam": {
"type": "host-local",
"subnet": "10.255.228.1/24",
"gateway": "10.255.228.1"
},
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
EOF
```
This example creates a `bridge` plugin configuration for the CNI network, specifying
the bridge name `cbr0`. It also specifies the CIDR, in the `ipam` field.
Creating these files for any moderately-sized cluster is at best inconvenient.
Work is in progress to
[enable Kubernetes to use the CNI by default](https://github.com/kubernetes/kubernetes/pull/18795/files).
As that work matures, such manual CNI config munging will become unnecessary
for primary use cases. For early adopters, an initial example shows one way to
[automatically generate these CNI configurations](https://gist.github.com/yifan-gu/fbb911db83d785915543)
for rkt.
### Local cluster
To use rkt as the container runtime, we need to supply the following flags to kubelet:
- `--container-runtime=rkt` chooses the container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.
- `--rkt-path=$PATH_TO_RKT_BINARY` sets the path of the rkt binary. Leave empty to use the first rkt in $PATH.
- `--rkt-stage1-image` sets the path of the stage1 image. Local paths and http/https URLs are supported. Leave empty to use the 'stage1.aci' located in the same directory as the rkt binary.
If you are using the [hack/local-up-cluster.sh](../../../hack/local-up-cluster.sh) script to launch the local cluster, then you can set the environment variables `CONTAINER_RUNTIME`, `RKT_PATH`, and `RKT_STAGE1_IMAGE` to supply these flags:
```console
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
```
Then we can launch the local cluster using the script:
```console
$ hack/local-up-cluster.sh
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
```console
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_NODE_IMAGE=<image_id>
$ export KUBE_GCE_NODE_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```console
$ export KUBE_RKT_VERSION=0.15.0
```
Then you can launch the cluster by:
```console
$ cluster/kube-up.sh
```
Note that we are still working on making all the containerized master components run smoothly in rkt. Until that work is complete, the master node cannot run with rkt.
### CoreOS cluster on AWS
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
```console
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```console
$ export KUBE_RKT_VERSION=0.8.0
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```console
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```console
$ cluster/kube-up.sh
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
### Getting started with your cluster
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples/).
### Different UX with rkt container runtime
rkt and Docker have very different designs, as do the ACI and Docker image formats. Users might have a somewhat different experience when switching from one to the other. More information can be found [here](notes.md).
### Debugging
Here are several tips for you when you run into any issues.
##### Check logs
By default, the log verbosity level is 2. In order to see more logs related to rkt, we can set the verbosity level to 4.
For local cluster, we can set the environment variable: `LOG_LEVEL=4`.
If the cluster is using salt, we can edit the [logging.sls](../../../cluster/saltbase/pillar/logging.sls) in the saltbase.
##### Check rkt pod status
To check the pods' status, we can use rkt commands such as `rkt list`, `rkt status`, `rkt image list`, etc.
More information about the rkt command line can be found [here](https://github.com/coreos/rkt/blob/master/Documentation/commands.md).
##### Check journal logs
As we use systemd to launch rkt pods (by creating service files which run `rkt run-prepared`), we can check the pods' logs
using `journalctl`:
- Check the running state of the systemd service:
```console
$ sudo journalctl -u $SERVICE_FILE
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod:
```console
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
##### Check Kubernetes events and logs
Besides the tricks above, Kubernetes also provides handy tools for debugging pods. More information can be found [here](../../../docs/user-guide/application-troubleshooting.md)
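
For example (`$POD_NAME` here is a placeholder for the pod you are debugging):

```console
$ kubectl describe pod $POD_NAME
$ kubectl logs $POD_NAME
$ kubectl get events
```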
# Notes on Different UX with rkt container runtime

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rkt/notes/
### Doesn't support ENTRYPOINT + CMD feature
To run a Docker image, rkt will convert it into [App Container Image (ACI) format](https://github.com/appc/spec/blob/master/SPEC.md) first.
However, during the conversion, the `ENTRYPOINT` and `CMD` are concatenated to construct the ACI's `Exec` field.
This means that after the conversion, we are not able to replace only `ENTRYPOINT` or `CMD` without touching the other.
So for now, users are recommended to specify the **executable path** in `Command` and **arguments** in `Args`.
(This has the same effect as specifying the **executable path + arguments** in `Command` or `Args` alone.)
For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
The above pod YAML is valid, as it specifies neither `Command` nor `Args`, so the image's default `ENTRYPOINT` and `CMD` will be used.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
name: busybox
spec:
containers:
- name: busybox
image: busybox
command:
- /bin/sleep
- "1000"
```
```yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
name: busybox
spec:
containers:
- name: busybox
image: busybox
command:
- /bin/sleep
args:
- "1000"
```
```yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
name: busybox
spec:
containers:
- name: busybox
image: busybox
args:
- /bin/sleep
- "1000"
```
All three examples above are valid, as they contain both the executable path and the arguments.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
name: busybox
spec:
containers:
- name: busybox
image: busybox
args:
- "1000"
```
The last example is invalid, as we cannot override just the `CMD` of the image alone.
Getting started from Scratch
----------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/scratch/

This guide is for people who want to craft a custom Kubernetes cluster. If you
can find an existing Getting Started Guide that meets your needs on [this
list](README.md), then we recommend using it, as you will be able to benefit
from the experience of others. However, if you have specific IaaS, networking,
configuration management, or operating system requirements not met by any of
those guides, then this guide will provide an outline of the steps you need to
take. Note that it requires considerably more effort than using one of the
pre-defined guides.
This guide is also useful for those wanting to understand at a high level some of the
steps that existing cluster setup scripts take.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Designing and Preparing](#designing-and-preparing)
- [Learning](#learning)
- [Cloud Provider](#cloud-provider)
- [Nodes](#nodes)
- [Network](#network)
- [Cluster Naming](#cluster-naming)
- [Software Binaries](#software-binaries)
- [Downloading and Extracting Kubernetes Binaries](#downloading-and-extracting-kubernetes-binaries)
- [Selecting Images](#selecting-images)
- [Security Models](#security-models)
- [Preparing Certs](#preparing-certs)
- [Preparing Credentials](#preparing-credentials)
- [Configuring and Installing Base Software on Nodes](#configuring-and-installing-base-software-on-nodes)
- [Docker](#docker)
- [rkt](#rkt)
- [kubelet](#kubelet)
- [kube-proxy](#kube-proxy)
- [Networking](#networking)
- [Other](#other)
- [Using Configuration Management](#using-configuration-management)
- [Bootstrapping the Cluster](#bootstrapping-the-cluster)
- [etcd](#etcd)
- [Apiserver, Controller Manager, and Scheduler](#apiserver-controller-manager-and-scheduler)
- [Apiserver pod template](#apiserver-pod-template)
- [Cloud Providers](#cloud-providers)
- [Scheduler pod template](#scheduler-pod-template)
- [Controller Manager Template](#controller-manager-template)
- [Starting and Verifying Apiserver, Scheduler, and Controller Manager](#starting-and-verifying-apiserver-scheduler-and-controller-manager)
- [Starting Cluster Services](#starting-cluster-services)
- [Troubleshooting](#troubleshooting)
- [Running validate-cluster](#running-validate-cluster)
- [Inspect pods and services](#inspect-pods-and-services)
- [Try Examples](#try-examples)
- [Running the Conformance Test](#running-the-conformance-test)
- [Networking](#networking)
- [Getting Help](#getting-help)
<!-- END MUNGE: GENERATED_TOC -->
## Designing and Preparing
### Learning
1. You should be familiar with using Kubernetes already. We suggest you set
up a temporary cluster by following one of the other Getting Started Guides.
This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../user-guide/pods.md), [services](../user-guide/services.md), etc.) first.
1. You should have `kubectl` installed on your desktop. This will happen as a side
effect of completing one of the other Getting Started Guides. If not, follow the instructions
[here](../user-guide/prereqs.md).
### Cloud Provider
Kubernetes has the concept of a Cloud Provider, which is a module which provides
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
create a custom cluster without implementing a cloud provider (for example if using
bare-metal), and not all parts of the interface need to be implemented, depending
on how flags are set on various components.
### Nodes
- You can use virtual or physical machines.
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
need at least 4 nodes.
- Many Getting-started-guides make a distinction between the master node and regular nodes. This
is not strictly necessary.
- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible
to run on other OSes and Architectures, but this guide does not try to assist with that.
- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes.
Larger or more active clusters may benefit from more cores.
- Other nodes can have any reasonable amount of memory and any number of cores. They need not
have identical configurations.
### Network
Kubernetes has a distinctive [networking model](../admin/networking.md).
Kubernetes allocates an IP address to each pod. When creating a cluster, you
need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
approach is to allocate a different block of IPs to each node in the cluster as
the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:
- Configure network to route Pod IPs
- Harder to set up from scratch.
- Google Compute Engine ([GCE](gce.md)) and [AWS](aws.md) guides use this approach.
- Need to make the Pod IPs routable by programming routers, switches, etc.
- Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
- Generally highest performance.
- Create an Overlay network
- Easier to set up
- Traffic is encapsulated, so per-pod IPs are routable.
- Examples:
- [Flannel](https://github.com/coreos/flannel)
- [Weave](http://weave.works/)
- [Open vSwitch (OVS)](http://openvswitch.org/)
- Does not require "Routes" portion of Cloud Provider module.
- Reduced performance (exactly how much depends on your solution).
You need to select an address range for the Pod IPs.
- Various approaches:
- GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
Kubernetes cluster from that space, which leaves room for several clusters.
Each node gets a further subdivision of this space.
- AWS: use one VPC for whole organization, carve off a chunk for each
cluster, or use different VPC for different clusters.
- IPv6 is not supported yet.
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
from which smaller CIDRs are automatically allocated to each node (if nodes
are dynamically added).
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
node supports 254 pods per machine and is a common choice. If IPs are
scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
- e.g. use `10.10.0.0/16` as the range for the cluster, with up to 256 nodes
using `10.10.0.0/24` through `10.10.255.0/24`, respectively.
- Need to make these routable or connect with overlay.
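
As a quick sketch of that arithmetic (assuming the `10.10.0.0/16` example above with one `/24` per node), the per-node ranges can be enumerated in shell:

```console
$ for i in 0 1 2; do echo "node-$i: 10.10.$i.0/24"; done
node-0: 10.10.0.0/24
node-1: 10.10.1.0/24
node-2: 10.10.2.0/24
```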
Kubernetes also allocates an IP to each [service](../user-guide/services.md). However,
service IPs do not necessarily need to be routable. The kube-proxy takes care
of translating Service IPs to Pod IPs before traffic leaves the node. You do
need to allocate a block of IPs for services. Call this
`SERVICE_CLUSTER_IP_RANGE`. For example, you could set
`SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to
be active at once. Note that you can grow the end of this range, but you
cannot move it without disrupting the services and pods that already use it.
Also, you need to pick a static IP for the master node.
- Call this `MASTER_IP`.
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`
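
For example, to enable forwarding immediately and persist it across reboots (the persistent config path may vary by distro):

```sh
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
```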
### Cluster Naming
You should pick a name for your cluster. Pick a short name for each cluster
that will not collide with future cluster names. This will be used in several ways:
- by kubectl to distinguish between various clusters you have access to. You will probably want a
second one sometime later, such as for testing new Kubernetes releases, running in a different
region of the world, etc.
- Kubernetes clusters can create cloud provider resources (e.g. AWS ELBs) and different clusters
need to distinguish which resources each created. Call this `CLUSTERNAME`.
### Software Binaries
You will need binaries for:
- etcd
- A container runner, one of:
- docker
- rkt
- Kubernetes
- kubelet
- kube-proxy
- kube-apiserver
- kube-controller-manager
- kube-scheduler
#### Downloading and Extracting Kubernetes Binaries
A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
[Developer Documentation](../devel/README.md). Only using a binary release is covered in this guide.
Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it.
Then locate `./kubernetes/server/kubernetes-server-linux-amd64.tar.gz` and unzip *that*.
Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, which contains
all the necessary binaries.
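
Concretely, assuming the release tarball was saved as `kubernetes.tar.gz` (a sketch):

```console
$ tar xzf kubernetes.tar.gz
$ tar xzf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
$ ls kubernetes/server/bin
```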
#### Selecting Images
You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
we recommend that you run these as containers, so you need an image to be built.
You have several choices for Kubernetes images:
- Use images hosted on Google Container Registry (GCR):
- e.g `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest
release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
- Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
- The [hyperkube](../../cmd/hyperkube/) binary is an all in one binary
- `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc.
- Build your own images.
- Useful if you are using a private registry.
- The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
can be converted into docker images using a command like
`docker load -i kube-apiserver.tar`
- You can verify if the image is loaded successfully with the right repository and tag using
command like `docker images`
For etcd, you can:
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.2.1`
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.1`
- Use etcd binary included in your OS distro.
- Build your own image
- You can do: `cd kubernetes/cluster/images/etcd; make`
We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
were tested extensively with this version of etcd and not with any other version.
The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
- `HYPERKUBE_IMAGE=gcr.io/google_containers/hyperkube:$TAG`
- `ETCD_IMAGE=gcr.io/google_containers/etcd:$ETCD_VERSION`
### Security Models
There are two main options for security:
- Access the apiserver using HTTP.
- Use a firewall for security.
- This is easier to set up.
- Access the apiserver using HTTPS
- Use https with certs, and credentials for user.
- This is the recommended approach.
- Configuring certs can be tricky.
If following the HTTPS approach, you will need to prepare certs and credentials.
#### Preparing Certs
You need to prepare several certs:
- The master needs a cert to act as an HTTPS server.
- The kubelets optionally need certs to identify themselves as clients of the master, and when
serving their own API over HTTPS.
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs.
- see function `create-certs` in `cluster/gce/util.sh`
- see also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
`cluster/saltbase/salt/generate-cert/make-cert.sh`
You will end up with the following files (we will use these variables later on):
- `CA_CERT`
  - put on the node where the apiserver runs, e.g. in `/srv/kubernetes/ca.crt`.
- `MASTER_CERT`
  - signed by CA_CERT
  - put on the node where the apiserver runs, e.g. in `/srv/kubernetes/server.crt`
- `MASTER_KEY`
  - put on the node where the apiserver runs, e.g. in `/srv/kubernetes/server.key`
- `KUBELET_CERT`
  - optional
- `KUBELET_KEY`
  - optional
#### Preparing Credentials
The admin user (and any users) need:
- a token or a password to identify them.
- tokens are just long alphanumeric strings, e.g. 32 chars. See
- `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)`
Your tokens and passwords need to be stored in a file for the apiserver
to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
The format for this file is described in the [authentication documentation](../admin/authentication.md).
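
A minimal `known_tokens.csv` might look like the following (the token, user, and uid values here are made-up placeholders):

```console
$ cat /var/lib/kube-apiserver/known_tokens.csv
xQY7dDEWCUSHPZPaEJVWSR2JsfFmaltC,admin,admin
```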
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
into a [kubeconfig file](../user-guide/kubeconfig-file.md).
The kubeconfig file for the administrator can be created as follows:
- If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started
Guide), you will already have a `$HOME/.kube/config` file.
- You need to add certs, keys, and the master IP to the kubeconfig file:
- If using the firewall-only security option, set the apiserver this way:
- `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true`
- Otherwise, do this to set the apiserver ip, client certs, and user credentials.
- `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP`
- `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN`
- Set your cluster as the default cluster to use:
- `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER`
- `kubectl config use-context $CONTEXT_NAME`
Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
many distinct files to make:
1. Use the same credential as the admin
- This is simplest to set up.
1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin.
- This mirrors what is done on GCE today
1. Different credentials for every kubelet, etc.
- We are working on this but all the pieces are not ready yet.
You can make the files by copying the `$HOME/.kube/config`, by following the code
in `cluster/gce/configure-vm.sh` or by using the following template:
```yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
user:
token: ${KUBELET_TOKEN}
clusters:
- name: local
cluster:
certificate-authority-data: ${CA_CERT_BASE64_ENCODED}
contexts:
- context:
cluster: local
user: kubelet
name: service-account-context
current-context: service-account-context
```
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
`/var/lib/kubelet/kubeconfig`.
## Configuring and Installing Base Software on Nodes
This section discusses how to configure machines to be Kubernetes nodes.
You should run three daemons on every node:
- docker or rkt
- kubelet
- kube-proxy
You will also need to do assorted other configuration on top of a
base OS install.
Tip: One possible starting point is to set up a cluster using an existing Getting
Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that
cluster, and then modify them for use on your custom cluster.
### Docker
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
If you previously had Docker installed on a node without setting Kubernetes-specific
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
as follows before proceeding to configure Docker for Kubernetes.
```sh
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
```
The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network.
Some suggested docker options:
- create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker.
- set `--iptables=false` so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions)
so that kube-proxy can manage iptables instead of docker.
- `--ip-masq=false`
- if you have set up PodIPs to be routable, then you want this false; otherwise, docker will
rewrite the PodIP source-address to a NodeIP.
- some environments (e.g. GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
- if you are using an overlay network, consult those instructions.
- `--mtu=`
- may be required when using Flannel, because of the extra packet size due to udp encapsulation
- `--insecure-registry $CLUSTER_SUBNET`
- to connect to a private registry, if you set one up, without using SSL.
You may want to increase the number of open files for docker:
- `DOCKER_NOFILE=1000000`
Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`.
Ensure docker is working correctly on your system before proceeding with the rest of the
installation, by following examples given in the Docker documentation.
### rkt
[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.
The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
minimum version required to match rkt v0.5.6 is
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking.md) is also required
for rkt networking support. You can start the rkt metadata service with a command like
`sudo systemd-run rkt metadata-service`
Then you need to configure your kubelet with flag:
- `--container-runtime=rkt`
### kubelet
All nodes should run the kubelet. See [Software Binaries](#software-binaries).
Arguments to consider:
- If following the HTTPS security approach:
- `--api-servers=https://$MASTER_IP`
- `--kubeconfig=/var/lib/kubelet/kubeconfig`
- Otherwise, if taking the firewall-based security approach
- `--api-servers=http://$MASTER_IP`
- `--config=/etc/kubernetes/manifests`
- `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).)
- `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses.
- `--docker-root=`
- `--root-dir=`
- `--configure-cbr0=` (described below, under [Networking](#networking))
- `--register-node` (described in [Node](../admin/node.md) documentation.)
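
Putting several of these together, a kubelet invocation might look like the following sketch (assuming the HTTPS approach, a binary installed at `/usr/bin/kubelet`, and a made-up DNS service IP of `10.0.0.10`; adjust the flags to your cluster):

```sh
/usr/bin/kubelet \
  --api-servers=https://$MASTER_IP \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local
```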
### kube-proxy
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
strictly required, but being consistent is easier.) Obtain a binary as described for
kubelet.
Arguments to consider:
- If following the HTTPS security approach:
- `--api-servers=https://$MASTER_IP`
- `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
- Otherwise, if taking the firewall-based security approach
- `--api-servers=http://$MASTER_IP`
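
A corresponding kube-proxy invocation sketch, using the flags listed above (the binary path is an assumption):

```sh
/usr/bin/kube-proxy \
  --api-servers=https://$MASTER_IP \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig
```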
### Networking
Each node needs to be allocated its own CIDR range for pod networking.
Call this `NODE_X_POD_CIDR`.
A bridge called `cbr0` needs to be created on each node. The bridge is explained
further in the [networking documentation](../admin/networking.md). The bridge itself
needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
because of how this is used later.
- Recommended, automatic approach:
1. Set `--configure-cbr0=true` option in kubelet init script and restart kubelet service. Kubelet will configure cbr0 automatically.
It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not set up the apiserver and node controller
yet, the bridge will not be set up immediately.
- Alternate, manual approach:
1. Set `--configure-cbr0=false` on kubelet and restart.
1. Create a bridge
- `brctl addbr cbr0`.
1. Set appropriate MTU. NOTE: the actual value of MTU will depend on your network environment
- `ip link set dev cbr0 mtu 1460`
1. Add the node's network to the bridge (docker will go on other side of bridge).
- `ip addr add $NODE_X_BRIDGE_ADDR dev cbr0`
1. Turn it on
- `ip link set dev cbr0 up`
If you have turned off Docker's IP masquerading to allow pods to talk to each
other, then you may need to do masquerading just for destination IPs outside
the cluster network. For example:
```sh
iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
```
This will rewrite the source address from
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
[connection tracking](http://www.iptables.info/en/connection-state.html)
will ensure that responses destined to the node still reach
the pod.
NOTE: This is environment specific. Some environments will not need
any masquerading at all. Others, such as GCE, will not allow pod IPs to send
traffic to the internet, but have no problem with them inside your GCE Project.
### Other
- Enable auto-upgrades for your OS package manager, if desired.
- Configure log rotation for all node components (e.g. using [logrotate](http://linux.die.net/man/8/logrotate)).
- Set up liveness monitoring (e.g. using [supervisord](http://supervisord.org/)).
- Set up volume plugin support (optional)
- Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS
volumes.
### Using Configuration Management
The previous steps all involved "conventional" system administration techniques for setting up
machines. You may want to use a Configuration Management system to automate the node configuration
process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
various Getting Started Guides.
## Bootstrapping the Cluster
While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using
traditional system administration/automation approaches, the remaining *master* components of Kubernetes are
all configured and managed *by Kubernetes*:
- their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or
systemd unit.
- they are kept running by Kubernetes rather than by init.
### etcd
You will need to run one or more instances of etcd.
- Recommended approach: run one etcd instance, with its log written to a directory backed
by durable storage (RAID, GCE PD)
- Alternative: run 3 or 5 etcd instances.
- Log can be written to non-durable storage because storage is replicated.
- run a single apiserver which connects to one of the etcd nodes.
See [cluster-troubleshooting](../admin/cluster-troubleshooting.md) for more discussion on factors affecting cluster
availability.
To run an etcd instance:
1. copy `cluster/saltbase/salt/etcd/etcd.manifest`
1. make any modifications needed
1. start the pod by putting it into the kubelet manifest directory
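
For example, assuming the kubelet's manifest directory is `/etc/kubernetes/manifests`:

```console
$ cp cluster/saltbase/salt/etcd/etcd.manifest /etc/kubernetes/manifests/
```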
### Apiserver, Controller Manager, and Scheduler
The apiserver, controller manager, and scheduler will each run as a pod on the master node.
For each of these components, the steps to start them running are similar:
1. Start with a provided template for a pod.
1. Set the `HYPERKUBE_IMAGE` to the values chosen in [Selecting Images](#selecting-images).
1. Determine which flags are needed for your cluster, using the advice below each template.
1. Set the flags to be individual strings in the command array (e.g. $ARGN below)
1. Start the pod by putting the completed template into the kubelet manifest directory.
1. Verify that the pod is started.
#### Apiserver pod template
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-apiserver",
"image": "${HYPERKUBE_IMAGE}",
"command": [
"/hyperkube",
"apiserver",
"$ARG1",
"$ARG2",
...
"$ARGN"
],
"ports": [
{
"name": "https",
"hostPort": 443,
"containerPort": 443
},
{
"name": "local",
"hostPort": 8080,
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "srvkube",
"mountPath": "/srv/kubernetes",
"readOnly": true
},
{
"name": "etcssl",
"mountPath": "/etc/ssl",
"readOnly": true
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"volumes": [
{
"name": "srvkube",
"hostPath": {
"path": "/srv/kubernetes"
}
},
{
"name": "etcssl",
"hostPath": {
"path": "/etc/ssl"
}
}
]
}
}
```
Here are some apiserver flags you may need to set:
- `--cloud-provider=` see [cloud providers](#cloud-providers)
- `--cloud-config=` see [cloud providers](#cloud-providers)
- `--address=${MASTER_IP}` *or* `--bind-address=127.0.0.1` and `--address=127.0.0.1` if you want to run a proxy on the master node.
- `--cluster-name=$CLUSTER_NAME`
- `--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE`
- `--etcd-servers=http://127.0.0.1:4001`
- `--tls-cert-file=/srv/kubernetes/server.cert`
- `--tls-private-key-file=/srv/kubernetes/server.key`
- `--admission-control=$RECOMMENDED_LIST`
- See [admission controllers](../admin/admission-controllers.md) for recommended arguments.
- `--allow-privileged=true`, only if you trust your cluster user to run pods as root.
If you are following the firewall-only security approach, then use these arguments:
- `--token-auth-file=/dev/null`
- `--insecure-bind-address=$MASTER_IP`
- `--advertise-address=$MASTER_IP`
If you are using the HTTPS approach, then set:
- `--client-ca-file=/srv/kubernetes/ca.crt`
- `--token-auth-file=/srv/kubernetes/known_tokens.csv`
- `--basic-auth-file=/srv/kubernetes/basic_auth.csv`
This pod mounts several node file system directories using the `hostPath` volumes. Their purposes are:
- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can
authenticate external services, such as a cloud provider.
- This is not required if you do not use a cloud provider (e.g. bare-metal).
- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
- Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.
*TODO* document proxy-ssh setup.
##### Cloud Providers
Apiserver supports several cloud providers.
- options for `--cloud-provider` flag are `aws`, `gce`, `mesos`, `openshift`, `ovirt`, `rackspace`, `vagrant`, or unset.
- unset used for e.g. bare metal setups.
- support for new IaaS is added by contributing code [here](../../pkg/cloudprovider/providers/)
Some cloud providers require a config file. If so, you need to put config file into apiserver image or mount through hostPath.
- `--cloud-config=` set if cloud provider requires a config file.
- Used by `aws`, `gce`, `mesos`, `openshift`, `ovirt` and `rackspace`.
- You must put config file into apiserver image or mount through hostPath.
- Cloud config file syntax is [Gcfg](https://code.google.com/p/gcfg/).
- AWS format defined by type [AWSCloudConfig](../../pkg/cloudprovider/providers/aws/aws.go)
- There is a similar type in the corresponding file for other cloud providers.
- GCE example: search for `gce.conf` in [this file](../../cluster/gce/configure-vm.sh)
#### Scheduler pod template
Complete this template for the scheduler pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-scheduler"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-scheduler",
"image": "$HYBERKUBE_IMAGE",
"command": [
"/hyperkube",
"scheduler",
"--master=127.0.0.1:8080",
"$SCHEDULER_FLAG1",
...
"$SCHEDULER_FLAGN"
],
"livenessProbe": {
"httpGet": {
"host" : "127.0.0.1",
"path": "/healthz",
"port": 10251
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
]
}
}
```
Typically, no additional flags are required for the scheduler.
Optionally, you may want to mount `/var/log` as well and redirect output there.
#### Controller Manager Template
Template for controller manager pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-controller-manager"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-controller-manager",
"image": "$HYPERKUBE_IMAGE",
"command": [
"/hyperkube",
"controller-manager",
"$CNTRLMNGR_FLAG1",
...
"$CNTRLMNGR_FLAGN"
],
"volumeMounts": [
{
"name": "srvkube",
"mountPath": "/srv/kubernetes",
"readOnly": true
},
{
"name": "etcssl",
"mountPath": "/etc/ssl",
"readOnly": true
}
],
"livenessProbe": {
"httpGet": {
"host": "127.0.0.1",
"path": "/healthz",
"port": 10252
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"volumes": [
{
"name": "srvkube",
"hostPath": {
"path": "/srv/kubernetes"
}
},
{
"name": "etcssl",
"hostPath": {
"path": "/etc/ssl"
}
}
]
}
}
```
Flags to consider using with controller manager:
- `--cluster-name=$CLUSTER_NAME`
- `--cluster-cidr=`
- *TODO*: explain this flag.
- `--allocate-node-cidrs=`
- *TODO*: explain when you want controller to do this and when you want to do it another way.
- `--cloud-provider=` and `--cloud-config` as described in apiserver section.
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](../user-guide/service-accounts.md) feature.
- `--master=127.0.0.1:8080`
#### Starting and Verifying Apiserver, Scheduler, and Controller Manager
Place each completed pod template into the kubelet config dir
(whatever `--config=` argument of kubelet is set to, typically
`/etc/kubernetes/manifests`). The order does not matter: scheduler and
controller manager will retry reaching the apiserver until it is up.
Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
```console
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```
Then try to connect to the apiserver:
```console
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
{
"versions": [
"v1"
]
}
```
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
Otherwise, you will need to manually create node objects.
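
A minimal sketch of the manual route (the node name `node1` is a placeholder; see the [Node](../admin/node.md) documentation for the full object schema):

```console
$ cat <<EOF | kubectl create -f -
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "node1",
    "labels": {"name": "node1"}
  }
}
EOF
```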
### Starting Cluster Services
You will want to complete your Kubernetes clusters by adding cluster-wide
services. These are sometimes called *addons*, and [an overview of their
purpose is in the admin guide](../../docs/admin/cluster-components.md#addons).
Notes for setting up each cluster service are given below:
* Cluster DNS:
* required for many kubernetes examples
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/dns/)
* [Admin Guide](../admin/dns.md)
* Cluster-level Logging
* Multiple implementations with different storage backends and UIs.
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/)
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/).
* Both require running fluentd on each node.
* [User Guide](../user-guide/logging.md)
* Container Resource Monitoring
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/)
* GUI
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/)
## Troubleshooting
### Running validate-cluster
**TODO** explain how to use `cluster/validate-cluster.sh`
### Inspect pods and services
Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](gce.md#inspect-your-cluster).
You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started.
### Try Examples
At this point you should be able to run through one of the basic examples, such as the [nginx example](../../examples/simple-nginx.md).
### Running the Conformance Test
You may want to try to run the [Conformance test](http://releases.k8s.io/HEAD/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
### Networking
The nodes must be able to connect to each other using their private IP. Verify this by
pinging or SSH-ing from one node to another.
### Getting Help
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](../troubleshooting.md#slack).
Bare Metal Kubernetes with Calico Networking
------------------------------------------------

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ubuntu-calico/

## Introduction
This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.
## Prerequisites and Assumptions
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively as do a number of other Linux distributions.
- All machines should have Docker >= 1.7.0 installed.
- To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
- All machines should have connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
- This guide will set up a secure, TLS-authenticated API server.
## Set up the master
### Configure TLS
The master requires the root CA public key, `ca.pem`; the apiserver certificate, `apiserver.pem` and its private key, `apiserver-key.pem`.
1. Create the file `openssl.cnf` with the following contents.
```
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}
```
> Replace ${MASTER_IPV4} with the Master's IP address on which the Kubernetes API will be accessible.
2. Generate the necessary TLS assets.
```
# Generate the root CA.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
# Generate the API server keypair.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```
3. You should now have the following three files: `ca.pem`, `apiserver.pem`, and `apiserver-key.pem`. Send the three files to your master host (using `scp` for example).
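
For example (a sketch; the user and host are placeholders):

```
scp ca.pem apiserver.pem apiserver-key.pem user@<MASTER_IPV4>:
```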
4. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
### Install Kubernetes on the Master
We'll use the `kubelet` to bootstrap the Kubernetes master.
1. Download and install the `kubelet` and `kubectl` binaries:
```
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
```
2. Install the `kubelet` systemd unit file and start the `kubelet`:
```
# Install the unit file
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service
# Enable the unit file so that it runs on boot
sudo systemctl enable /etc/systemd/kubelet.service
# Start the kubelet service
sudo systemctl start kubelet.service
```
3. Download and install the master manifest file, which will start the Kubernetes master services automatically:
```
sudo mkdir -p /etc/kubernetes/manifests
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
```
4. Check the progress by running `docker ps`. After a while, you should see the `etcd`, `apiserver`, `controller-manager`, `scheduler`, and `kube-proxy` containers running.
> Note: it may take some time for all the containers to start. Don't worry if `docker ps` doesn't show any containers for a while or if some containers start before others.
### Install Calico's etcd on the master
Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.
> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicity.
1. Download the template manifest file:
```
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
```
2. Replace all instances of `<MASTER_IPV4>` in the `calico-etcd.manifest` file with your master's IP address.
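
For example, assuming your master's IP address is exported as `MASTER_IPV4` (a sketch):

```
sed -i "s/<MASTER_IPV4>/${MASTER_IPV4}/g" calico-etcd.manifest
```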
3. Then, move the file to the `/etc/kubernetes/manifests` directory:
```
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
```
### Install Calico on the master
We need to install Calico on the master. This allows the master to route packets to the pods on other nodes.
1. Install the `calicoctl` tool:
```
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```
2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):
```
sudo docker pull calico/node:v0.15.0
```
3. Download the `network-environment` template from the `calico-kubernetes` repository:
```
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
```
4. Edit `network-environment` to represent this node's settings:
- Replace `<KUBERNETES_MASTER>` with the IP address of the master. This should be the source IP address used to reach the Kubernetes worker nodes.
5. Move `network-environment` into `/etc`:
```
sudo mv -f network-environment /etc
```
6. Install, enable, and start the `calico-node` service:
```
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
## Set up the nodes
The following steps should be run on each Kubernetes node.
### Configure TLS
Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`. We've already generated
`ca.pem` for use on the Master. The worker public/private keypair should be generated for each Kubernetes node.
1. Create the file `worker-openssl.cnf` with the following contents.
```
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
```
2. Generate the necessary TLS assets for this worker. This relies on the worker's IP address, and the `ca.pem` file generated earlier in the guide.
```
# Export this worker's IP address.
export WORKER_IP=<WORKER_IPV4>
```
```
# Generate keys.
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
```
3. Send the three files (`ca.pem`, `worker.pem`, and `worker-key.pem`) to the host (using scp, for example).
4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:
```
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
```
### Configure the kubelet worker
1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:
```
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: https://<KUBERNETES_MASTER>:443
certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
user:
client-certificate: /etc/kubernetes/ssl/worker.pem
client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-context
current-context: kubelet-context
```
### Install Calico on the node
On your compute nodes, it is important that you install Calico before Kubernetes. We'll install Calico using the provided `calico-node.service` systemd unit file:
1. Install the `calicoctl` binary:
```
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```
2. Fetch the calico/node container:
```
sudo docker pull calico/node:v0.15.0
```
3. Download the `network-environment` template from the `calico-cni` repository:
```
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template
```
4. Edit `network-environment` to represent this node's settings:
- Replace `<DEFAULT_IPV4>` with the IP address of the node.
- Replace `<KUBERNETES_MASTER>` with the IP or hostname of the master.
5. Move `network-environment` into `/etc`:
```
sudo mv -f network-environment /etc
```
6. Install the `calico-node` service:
```
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
7. Install the Calico CNI plugins:
```
sudo mkdir -p /opt/cni/bin/
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
```
8. Create a CNI network configuration file, which tells Kubernetes to create a network named `calico-k8s-network` and to use the calico plugins for that network. Create file `/etc/cni/net.d/10-calico.conf` with the following contents, replacing `<KUBERNETES_MASTER>` with the IP of the master (this file should be the same on each node):
```
# Make the directory structure.
mkdir -p /etc/cni/net.d
# Make the network configuration file
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
"name": "calico-k8s-network",
"type": "calico",
"etcd_authority": "<KUBERNETES_MASTER>:6666",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
}
}
EOF
```
Since this is the only network we create, it will be used by default by the kubelet.
9. Verify that Calico started correctly:
```
calicoctl status
```
should show that Felix (Calico's per-node agent) is running, and there should be a BGP status line for each other node you've configured and for the master. The "Info" column should show "Established":
```
$ calicoctl status
calico-node container is running. Status: Up 15 hours
Running felix version 1.3.0rc5
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| Peer address | Peer type | State | Since | Info |
+---------------+-------------------+-------+----------+-------------+
| 172.18.203.41 | node-to-node mesh | up | 17:32:26 | Established |
| 172.18.203.42 | node-to-node mesh | up | 17:32:25 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
```
If the "Info" column shows "Active" or some other value then Calico is having difficulty connecting to the other host. Check the IP address of the peer is correct and check that Calico is using the correct local IP address (set in the `network-environment` file above).
### Install Kubernetes on the Node
1. Download and Install the kubelet binary:
```
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet
```
2. Install the `kubelet` systemd unit file:
```
# Download the unit file.
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service
# Enable and start the unit files so that they run on boot
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl start kubelet.service
```
3. Download the `kube-proxy` manifest:
```
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest
```
4. In that file, replace `<KUBERNETES_MASTER>` with your master's IP (the `sed` approach sketched earlier works here too). Then move it into place:
```
sudo mkdir -p /etc/kubernetes/manifests/
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
```
## Configure kubectl remote access
To administer your cluster from a separate host (e.g. your laptop), you will need the root CA generated earlier, as well as an admin public/private keypair (`ca.pem`, `admin.pem`, `admin-key.pem`). Run the following steps on the machine which you will use to control your cluster.
1. Download the kubectl binary.
```
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo chmod +x /usr/bin/kubectl
```
2. Generate the admin public/private keypair (an `openssl` sketch appears after this list).
3. Export the necessary variables, substituting in correct values for your machine.
```
# Export the appropriate paths.
export CA_CERT_PATH=<PATH_TO_CA_PEM>
export ADMIN_CERT_PATH=<PATH_TO_ADMIN_PEM>
export ADMIN_KEY_PATH=<PATH_TO_ADMIN_KEY_PEM>
# Export the Master's IP address.
export MASTER_IPV4=<MASTER_IPV4>
```
4. Configure your host `kubectl` with the admin credentials:
```
kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}
kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
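A minimal sketch of step 2 above, assuming the CA certificate and its key from earlier are available in the working directory as `ca.pem` and `ca-key.pem` (the key file name is an assumption):
```
# Generate the admin private key and a certificate signing request.
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
# Sign the request with the cluster CA to produce the admin certificate.
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
```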
Check your work with `kubectl get nodes`, which should succeed and display the nodes.
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. This step makes use of the kubectl configuration made above.
```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
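Once created, you can check that the DNS pods come up (this assumes they run in the standard `kube-system` namespace):
```
kubectl get pods --namespace=kube-system
```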
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed by using `kubectl` to create the following manifest file.
```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
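As a concrete usage example, with the default pool and a hypothetical master at `172.18.203.40`:
```
ETCD_AUTHORITY=172.18.203.40:6666 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```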
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
[![Analytics](https://ga-beacon.appspot.com/UA-52125893-3/kubernetes/docs/getting-started-guides/ubuntu_calico.md?pixel)](https://github.com/igrigorik/ga-beacon)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ubuntu-calico.md?pixel)]()

View File

@ -31,293 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Kubernetes Deployment On Bare-metal Ubuntu Nodes
------------------------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ubuntu/
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Starting a Cluster](#starting-a-cluster)
- [Set up working directory](#set-up-working-directory)
- [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
- [Test it out](#test-it-out)
- [Deploy addons](#deploy-addons)
- [Troubleshooting](#troubleshooting)
- [Upgrading a Cluster](#upgrading-a-cluster)
- [Test it out](#test-it-out-ii)
## Introduction
This document describes how to deploy Kubernetes on Ubuntu nodes; the given examples use 1 master and 3 nodes.
You can scale to **any number of nodes** by changing a few settings with ease.
The original idea was heavily inspired by @jainvipin's single-node Ubuntu
work, which has been merged into this document.
The scripting referenced here can be used to deploy Kubernetes with
networking based either on Flannel or on a CNI plugin that you supply.
This document is focused on the Flannel case. See
`kubernetes/cluster/ubuntu/config-default.sh` for remarks on how to
use a CNI plugin instead.
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
## Prerequisites
1. The nodes have Docker version 1.2+ and bridge-utils installed to manipulate the Linux bridge.
2. All machines can communicate with each other. The master node needs to be connected to the
Internet to download the necessary files, while the worker nodes do not.
3. This guide is tested on Ubuntu 14.04 LTS 64-bit server; it does not work with
Ubuntu 15, which uses systemd instead of upstart.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.4; it may work with higher versions.
5. All the remote servers can be logged into via ssh without a password, using key authentication.
## Starting a Cluster
### Set up working directory
Clone the kubernetes github repo locally
```console
$ git clone https://github.com/kubernetes/kubernetes.git
```
### Configure and start the Kubernetes cluster
The startup process will first download all the required binaries automatically.
By default the etcd version is 2.2.1, the flannel version is 0.5.5 and the k8s version is 1.1.4.
You can customize the etcd, flannel and k8s versions by changing the corresponding
`ETCD_VERSION`, `FLANNEL_VERSION` and `KUBE_VERSION` variables, as follows.
```console
$ export KUBE_VERSION=1.0.5
$ export FLANNEL_VERSION=0.5.0
$ export ETCD_VERSION=2.2.0
```
**Note**
For users who want to bring up a cluster with k8s version v1.1.1, the `controller manager` may fail to start
due to [a known issue](https://github.com/kubernetes/kubernetes/issues/17109). You can bring it
up manually by using the following command on the remote master server. Note that
you should do this only after the `api-server` is up. This issue is fixed in v1.1.2 and later.
```console
$ sudo service kube-controller-manager start
```
Note that we use flannel here to set up the overlay network, but it's optional. You can build up the k8s
cluster natively, or use flannel, Open vSwitch or any other SDN tool you like.
An example cluster is listed below:
| IP Address | Role |
|-------------|----------|
|10.10.103.223| node |
|10.10.103.162| node |
|10.10.103.250| both master and node|
First, configure the cluster information in cluster/ubuntu/config-default.sh. The following is a simple sample:
```sh
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="ai i i"
export NUM_NODES=${NUM_NODES:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
```
The first variable `nodes` defines all your cluster nodes, with the master node first,
separated by spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.
The `role` variable then defines the role of each machine above, in the same order: "ai" means the machine
acts as both master and node, "a" means master, and "i" means node.
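For example, a cluster with a dedicated master (not also acting as a node) and two workers would be configured as follows, reusing the sample addresses above (`NUM_NODES`, described next, would then be 2):
```sh
# Dedicated master on the first machine, workers on the other two.
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="a i i"
```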
The `NUM_NODES` variable defines the total number of nodes.
The `SERVICE_CLUSTER_IP_RANGE` variable defines the Kubernetes service IP range. Please make sure
that you define a valid private IP range here, because some IaaS providers may reserve private IPs.
You can use one of the three private network ranges from RFC 1918 below; just don't choose one
that conflicts with your own private network range.
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network;
it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
You can optionally provide additional Flannel network configuration
through `FLANNEL_OTHER_NET_CONFIG`, as explained in `cluster/ubuntu/config-default.sh`.
**Note:** When deploying, the master needs to be connected to the Internet to download the necessary files.
If your machines are located in a private network that needs a proxy setting to reach the Internet,
you can set the config `PROXY_SETTING` in cluster/ubuntu/config-default.sh, such as:
PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"
After all the above variables have been set correctly, we can use the following command in the `cluster/` directory to
bring up the whole cluster.
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`
The scripts automatically copy binaries and config files to all the machines via `scp` and start the kubernetes
services on them. The only thing you need to do is type the sudo password when prompted.
```console
Deploying node on machine 10.10.103.223
...
[sudo] password to start node:
```
If everything works correctly, you will see the following message on the console, indicating that the k8s cluster is up.
```console
Cluster validation succeeded
```
### Test it out
You can use the `kubectl` command to check whether the newly created cluster is working correctly.
The `kubectl` binary is under the `cluster/ubuntu/binaries` directory.
Add it to your PATH so you can use the commands below conveniently.
For example, use `$ kubectl get nodes` to see if all of your nodes are ready.
```console
$ kubectl get nodes
NAME LABELS STATUS
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the Kubernetes [guestbook example](../../examples/guestbook/) to build a Redis-backed cluster.
### Deploy addons
Assuming you have a running cluster now, this section will tell you how to deploy addons like DNS
and the UI onto the existing cluster.
DNS is configured in cluster/ubuntu/config-default.sh:
```sh
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```
The `DNS_SERVER_IP` variable defines the IP of the DNS server, which must be within the `SERVICE_CLUSTER_IP_RANGE`.
The `DNS_REPLICAS` variable describes how many DNS pods run in the cluster.
By default, we also take care of the kube-ui addon:
```sh
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
```
After all the above variables have been set, just type the following commands:
```console
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.
### Ongoing work
We are working on the following features, which we'd like to let everybody know about:
1. Running kubernetes binaries in Docker using [kube-in-docker](https://github.com/ZJU-SEL/kube-in-docker/tree/baremetal-kube),
to eliminate OS-distro differences.
2. Tear-down scripts: clear and re-create the whole stack with one click.
### Troubleshooting
Generally, what this approach does is quite simple:
1. Download and copy binaries and configuration files to the proper directories on every node.
2. Configure `etcd` on the master node using IPs based on input from the user.
3. Create and start the flannel network for the worker nodes.
So if you encounter a problem, check the etcd configuration of the master node first.
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries.
2. You may find the following commands useful; the former brings down the cluster, while the latter starts it again.
```console
$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
3. You can also customize your own settings in `/etc/default/{component_name}` and restart it via
`$ sudo service {component_name} restart`.
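For example, to pick up a (hypothetical) edit to `/etc/default/kubelet`, restart just that component:
```console
$ sudo service kubelet restart
```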
## Upgrading a Cluster
If you already have a kubernetes cluster and want to upgrade to a new version,
you can use the following command in the `cluster/` directory to update the whole cluster
or a specified node to a new version.
```console
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
```
It can be done for all components (by default), the master (`-m`), or a specified node (`-n`).
Upgrading a single node is currently experimental.
If the version is not specified, the script will try to use local binaries. You should ensure all
the binaries are prepared in the expected directory path, cluster/ubuntu/binaries.
```console
$ tree cluster/ubuntu/binaries
binaries/
├── kubectl
├── master
│   ├── etcd
│   ├── etcdctl
│   ├── flanneld
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
└── minion
├── flanneld
├── kubelet
└── kube-proxy
```
You can use the following command to get help:
```console
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
```
Here are some examples:
* upgrade master to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5`
* upgrade node `vcap@10.10.103.223` to version 1.0.5 : `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -n 10.10.103.223 1.0.5`
* upgrade master and all nodes to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh 1.0.5`
The script will not delete any resources of your cluster; it just replaces the binaries.
### Test it out
You can use the `kubectl` command to check whether the newly upgraded kubernetes cluster is working correctly. See
also [test-it-out](ubuntu.md#test-it-out).
To make sure the version of the upgraded cluster is what you expect, you will find these commands helpful.
* upgrade all components or master: `$ kubectl version`. Check the *Server Version*.
* upgrade node `vcap@10.10.103.223`: `$ ssh -t vcap@10.10.103.223 'cd /opt/bin && sudo ./kubelet --version'`
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,397 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Getting started with Vagrant
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/vagrant/
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Interacting with your Kubernetes cluster with Vagrant.](#interacting-with-your-kubernetes-cluster-with-vagrant)
- [Authenticating with your master](#authenticating-with-your-master)
- [Running containers](#running-containers)
- [Troubleshooting](#troubleshooting)
- [I keep downloading the same (large) box all the time!](#i-keep-downloading-the-same-large-box-all-the-time)
- [I am getting timeouts when trying to curl the master from my host!](#i-am-getting-timeouts-when-trying-to-curl-the-master-from-my-host)
- [I just created the cluster, but I am getting authorization errors!](#i-just-created-the-cluster-but-i-am-getting-authorization-errors)
- [I just created the cluster, but I do not see my container running!](#i-just-created-the-cluster-but-i-do-not-see-my-container-running)
- [I want to make changes to Kubernetes code!](#i-want-to-make-changes-to-kubernetes-code)
- [I have brought Vagrant up but the nodes cannot validate!](#i-have-brought-vagrant-up-but-the-nodes-cannot-validate)
- [I want to change the number of nodes!](#i-want-to-change-the-number-of-nodes)
- [I want my VMs to have more memory!](#i-want-my-vms-to-have-more-memory)
- [I ran vagrant suspend and nothing works!](#i-ran-vagrant-suspend-and-nothing-works)
- [I want vagrant to sync folders via nfs!](#i-want-vagrant-to-sync-folders-via-nfs)
### Prerequisites
1. Install the latest version (>= 1.7.4) of Vagrant from http://www.vagrantup.com/downloads.html
2. Install one of:
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
5. libvirt with KVM and hardware virtualisation support enabled, as well as [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official rpm, so it is possible to use `yum install vagrant-libvirt`.
### Setup
Setting up a cluster is as simple as running:
```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
```sh
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
```sh
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
```sh
vagrant ssh master
vagrant ssh node-1
```
If you are running more than one node, you can access the others by:
```sh
vagrant ssh node-2
vagrant ssh node-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
To view the service status and/or logs on the kubernetes-master:
```console
[vagrant@kubernetes-master ~] $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the nodes:
```console
[vagrant@kubernetes-master ~] $ vagrant ssh node-1
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant.
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push updates to new Kubernetes code after making source changes:
```sh
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```sh
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```sh
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
```console
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```sh
cat ~/.kubernetes_vagrant_auth
```
```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
```sh
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
```console
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs.
Before starting a container, there will be no pods, services or replication controllers:
```console
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas:
```console
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in the `Pending` state:
```console
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 0/1 Pending 0 10s
my-nginx-gr3hh 0/1 Pending 0 10s
my-nginx-xql4j 0/1 Pending 0 10s
```
You need to wait for the provisioning to complete; you can monitor the nodes by doing:
```console
$ vagrant ssh node-1 -c 'sudo docker images'
kubernetes-node-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```console
$ vagrant ssh node-1 -c 'sudo docker ps'
kubernetes-node-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```console
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 1m
my-nginx-gr3hh 1/1 Running 0 1m
my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
my-nginx my-nginx nginx run=my-nginx 3 1m
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
You can already play with scaling the replicas with:
```console
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
```
Congratulations!
### Troubleshooting
#### I keep downloading the same (large) box all the time!
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
```sh
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I am getting timeouts when trying to curl the master from my host!
During provision of the cluster, you may see the following message:
```sh
Validating node-1
.............
Waiting for each node to be registered with cloud provider
error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
```
Some users have reported that VPNs may prevent traffic from being routed from the host machine into the virtual machine network.
To debug, first verify that the master is binding to the proper IP address:
```
$ vagrant ssh master
$ ifconfig | grep eth1 -C 2
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.245.1.2 netmask
255.255.255.0 broadcast 10.245.1.255
```
Then verify that your host machine has a network connection to a bridge that can serve that address:
```sh
$ ifconfig | grep 10.245.1 -C 2
vboxnet5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255
inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20<link>
ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet)
```
If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.
If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
```sh
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh, make sure that the correct credentials are set:
```sh
cat ~/.kubernetes_vagrant_auth
```
```json
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
#### I want to make changes to Kubernetes code!
To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).
#### I have brought Vagrant up but the nodes cannot validate!
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
#### I want to change the number of nodes!
You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1 like so:
```sh
export NUM_NODES=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```sh
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```sh
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_NODE_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
#### I want vagrant to sync folders via nfs!
You can ensure that vagrant uses nfs to sync folders with virtual machines by setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is faster than virtualbox or vmware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring nfs on the host. This setting will have no effect on the libvirt provider, which uses nfs by default. For example:
```sh
export KUBERNETES_VAGRANT_USE_NFS=true
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -31,106 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Getting started with vSphere
-------------------------------
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/vsphere/
The example below creates a Kubernetes cluster with 4 worker node Virtual
Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This
cluster is set up and controlled from your workstation (or wherever you find
convenient).
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Starting a cluster](#starting-a-cluster)
- [Extra: debugging deployment failure](#extra-debugging-deployment-failure)
### Prerequisites
1. You need administrator credentials to an ESXi machine or vCenter instance.
2. You must have Go (see [here](../devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
```sh
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
```sh
go get github.com/vmware/govmomi/govc
```
5. Get or build a [binary release](binary_release.md)
### Setup
Download a prebuilt Debian 8.2 VMDK that we'll use as a base image:
```sh
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2016-01-08/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
```sh
export GOVC_URL='hostname' # hostname of the vc
export GOVC_USERNAME='username' # username for logging into the vsphere.
export GOVC_PASSWORD='password' # password for the above username
export GOVC_NETWORK='Network Name' # Name of the network the vms should join. Many times it could be "VM Network"
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```sh
govc datastore.ls ./kube/
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
### Starting a cluster
Now, let's continue with deploying Kubernetes.
This process takes roughly 20-30 minutes depending on your network.
#### From extracted binary release
```sh
cd kubernetes
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
#### Build from source
```sh
cd kubernetes
make release
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes
deployment works just as any other one!
**Enjoy!**
### Extra: debugging deployment failure
The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (find yourself authorized with your SSH key, or use the password
`kube` otherwise).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,103 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications
This file has moved to: http://kubernetes.github.io/docs/user-guide/README/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications](#kubernetes-user-guide-managing-applications)
- [Quick walkthrough](#quick-walkthrough)
- [Thorough walkthrough](#thorough-walkthrough)
- [Concept guide](#concept-guide)
- [Further reading](#further-reading)
<!-- END MUNGE: GENERATED_TOC -->
The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](../../docs/admin/README.md). The [Developer Guide](../../docs/devel/README.md) is for anyone wanting to either write code which directly accesses the Kubernetes API, or to contribute directly to the Kubernetes project.
Please ensure you have completed the [prerequisites for running examples from the user guide](prereqs.md).
## Quick walkthrough
1. [Kubernetes 101](walkthrough/README.md)
1. [Kubernetes 201](walkthrough/k8s201.md)
## Thorough walkthrough
If you don't have much familiarity with Kubernetes, we recommend you read the following sections in order:
1. [Quick start: launch and expose an application](quick-start.md)
1. [Configuring and launching containers: configuring common container parameters](configuring-containers.md)
1. [Deploying continuously running applications](deploying-applications.md)
1. [Connecting applications: exposing applications to clients and users](connecting-applications.md)
1. [Working with containers in production](production-pods.md)
1. [Managing deployments](managing-deployments.md)
1. [Application introspection and debugging](introspection-and-debugging.md)
1. [Using the Kubernetes web user interface](ui.md)
1. [Logging](logging.md)
1. [Monitoring](monitoring.md)
1. [Getting into containers via `exec`](getting-into-containers.md)
1. [Connecting to containers via proxies](connecting-to-applications-proxy.md)
1. [Connecting to containers via port forwarding](connecting-to-applications-port-forward.md)
## Concept guide
[**Overview**](overview.md)
: A brief overview of Kubernetes concepts.
[**Cluster**](../admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.
[**Node**](../admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
[**Pod**](pods.md)
: A pod is a co-located group of containers and volumes.
[**Label**](labels.md)
: A label is a key/value pair that is attached to a resource, such as a pod, to convey a user-defined identifying attribute. Labels can be used to organize and to select subsets of resources (a short sketch follows this concept list).
[**Selector**](labels.md#label-selectors)
: A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.
[**Replication Controller**](replication-controller.md)
: A replication controller ensures that a specified number of pod replicas are running at any one time. It both allows for easy scaling of replicated systems and handles re-creation of a pod when the machine it is on reboots or otherwise fails.
[**Service**](services.md)
: A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.
[**Volume**](volumes.md)
: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the volume directory and/or device.
[**Secret**](secrets.md)
: A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.
[**Name**](identifiers.md)
: A user- or client-provided name for a resource.
[**Namespace**](namespaces.md)
: A namespace is like a prefix to the name of a resource. Namespaces help different projects, teams, or customers to share a cluster, such as by preventing name collisions between unrelated teams.
[**Annotation**](annotations.md)
: A key/value pair that can hold larger (compared to a label), and possibly not human-readable, data, intended to store non-identifying auxiliary data, especially data manipulated by tools and system extensions. Efficient filtering by annotation values is not supported.
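To make the label and selector entries above concrete, here is a minimal sketch using `kubectl` (the pod name and label values are hypothetical): attach a label to a pod, then select pods by that label.
```console
$ kubectl label pods my-pod tier=frontend
$ kubectl get pods -l tier=frontend
```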
## Further reading
* API resources
* [Working with resources](working-with-resources.md)
* Pods and containers
* [Pod lifecycle and restart policies](pod-states.md)
* [Lifecycle hooks](container-environment.md)
* [Compute resources, such as cpu and memory](compute-resources.md)
* [Specifying commands and requesting capabilities](containers.md)
* [Downward API: accessing system configuration from a pod](downward-api.md)
* [Images and registries](images.md)
* [Migrating from docker-cli to kubectl](docker-cli-to-kubectl.md)
* [Configuration Best Practices and Tips](config-best-practices.md)
* [Assign pods to selected nodes](node-selection/)
* [Perform a rolling update on a running group of pods](update-demo/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,298 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# User Guide to Accessing the Cluster
This file has moved to: http://kubernetes.github.io/docs/user-guide/accessing-the-cluster/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [User Guide to Accessing the Cluster](#user-guide-to-accessing-the-cluster)
- [Accessing the cluster API](#accessing-the-cluster-api)
- [Accessing for the first time with kubectl](#accessing-for-the-first-time-with-kubectl)
- [Directly accessing the REST API](#directly-accessing-the-rest-api)
- [Using kubectl proxy](#using-kubectl-proxy)
- [Without kubectl proxy](#without-kubectl-proxy)
- [Programmatic access to the API](#programmatic-access-to-the-api)
- [Accessing the API from a Pod](#accessing-the-api-from-a-pod)
- [Accessing services running on the cluster](#accessing-services-running-on-the-cluster)
- [Ways to connect](#ways-to-connect)
- [Discovering builtin services](#discovering-builtin-services)
- [Manually constructing apiserver proxy URLs](#manually-constructing-apiserver-proxy-urls)
- [Examples](#examples)
- [Using web browsers to access services running on the cluster](#using-web-browsers-to-access-services-running-on-the-cluster)
- [Requesting redirects](#requesting-redirects)
- [So Many Proxies](#so-many-proxies)
<!-- END MUNGE: GENERATED_TOC -->
## Accessing the cluster API
### Accessing for the first time with kubectl
When accessing the Kubernetes API for the first time, we suggest using the
Kubernetes CLI, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set up when you work
through a [Getting started guide](../getting-started-guides/README.md),
or someone else set up the cluster and provided you with credentials and a location.
Check the location and credentials that kubectl knows about with this command:
```console
$ kubectl config view
```
Many of the [examples](../../examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl.md).
### Directly accessing the REST API
Kubectl handles locating and authenticating to the apiserver.
If you want to directly access the REST API with an http client like
curl or wget, or a browser, there are several ways to locate and authenticate:
- Run kubectl in proxy mode.
- Recommended approach.
- Uses stored apiserver location.
- Verifies identity of apiserver using self-signed cert. No MITM possible.
- Authenticates to apiserver.
- In future, may do intelligent client-side load-balancing and failover.
- Provide the location and credentials directly to the http client.
- Alternate approach.
- Works with some types of client code that are confused by using a proxy.
- Need to import a root cert into your browser to protect against MITM.
#### Using kubectl proxy
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
locating the apiserver and authenticating.
Run it like this:
```console
$ kubectl proxy --port=8080 &
```
See [kubectl proxy](kubectl/kubectl_proxy.md) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
```console
$ curl http://localhost:8080/api/
{
"versions": [
"v1"
]
}
```
#### Without kubectl proxy
It is also possible to avoid using kubectl proxy by passing an authentication token
directly to the apiserver, like this:
```console
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
"versions": [
"v1"
]
}
```
The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
may take special configuration to get your http client to use the root
certificate.
On some clusters, the apiserver does not require authentication; it may serve
on localhost, or be protected by a firewall. There is not a standard
for this. [Configuring Access to the API](../admin/accessing-the-api.md)
describes how a cluster admin can configure this. Such approaches may conflict
with future high-availability support.
### Programmatic access to the API
There are [client libraries](../devel/client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](http://releases.k8s.io/HEAD/pkg/client/)
client library can use the same [kubeconfig file](kubeconfig-file.md)
as the kubectl CLI does to locate and authenticate to the apiserver.
See documentation for other libraries for how they authenticate.
### Accessing the API from a Pod
When accessing the API from a pod, locating and authenticating
to the api server are somewhat different.
The recommended way to locate the apiserver within the pod is with
the `kubernetes` DNS name, which resolves to a Service IP which in turn
will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a
[service account](service-accounts.md) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the apiserver.
Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
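As an illustration of how these pieces fit together (a minimal sketch, assuming the `kubernetes` DNS name resolves as described above), a process in a container could query the apiserver directly:
```sh
# Read the service account token mounted into the container.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Query the apiserver, verifying its serving certificate with the CA bundle.
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes/api
```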
From within a pod, the recommended ways to connect to the API are:
- run a kubectl proxy as one of the containers in the pod, or as a background
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](../../examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
In each case, the credentials of the pod are used to communicate securely with the apiserver.
## Accessing services running on the cluster
The previous section was about connecting to the Kubernetes API server. This section is about
connecting to other services running on Kubernetes cluster. In Kubernetes, the
[nodes](../admin/node.md), [pods](pods.md) and [services](services.md) all have
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
routable, so they will not be reachable from a machine outside the cluster,
such as your desktop machine.
### Ways to connect
You have several options for connecting to nodes, pods and services from outside the cluster:
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](services.md) and
[kubectl expose](kubectl/kubectl_expose.md) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
place a unique label on the pod and create a new service which selects this label.
- In most cases, it should not be necessary for an application developer to directly access
nodes via their node IPs.
- Access services, nodes, or pods using the Proxy Verb.
- Does apiserver authentication and authorization prior to accessing the remote service.
Use this if the services are not secure enough to expose to the internet, or to gain
access to ports on the node IP, or for debugging.
- Proxies may cause problems for some web applications.
- Only works for HTTP/HTTPS.
- Described [here](#discovering-builtin-services).
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](kubectl/kubectl_exec.md).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
### Discovering builtin services
Typically, there are several services which are started on a cluster by default. Get a list of these
with the `kubectl cluster-info` command:
```console
$ kubectl cluster-info
Kubernetes master is running at https://104.197.5.247
elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kibana-logging
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
```
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
`http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/`.
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/proxy/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
##### Examples
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy url into the address bar of a browser. However:
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. The apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept it.
- Some web apps may not work, particularly those with client side javascript that construct urls in a
way that is unaware of the proxy path prefix.
## Requesting redirects
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
## So Many Proxies
There are several different proxies you may encounter when using Kubernetes:
1. The [kubectl proxy](#directly-accessing-the-rest-api):
- runs on a user's desktop or in a pod
- proxies from a localhost address to the Kubernetes apiserver
- client to proxy uses HTTP
- proxy to apiserver uses HTTPS
- locates apiserver
- adds authentication headers
1. The [apiserver proxy](#discovering-builtin-services):
- is a bastion built into the apiserver
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
- runs in the apiserver processes
- client to proxy uses HTTPS (or http if apiserver so configured)
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
- can be used to reach a Node, Pod, or Service
- does load balancing when used to reach a Service
1. The [kube proxy](services.md#ips-and-vips):
- runs on each node
- proxies UDP and TCP
- does not understand HTTP
- provides load balancing
- is just used to reach services
1. A Proxy/Load-balancer in front of apiserver(s):
- existence and implementation varies from cluster to cluster (e.g. nginx)
- sits between all clients and one or more apiservers
- acts as load balancer if there are several apiservers.
1. Cloud Load Balancers on external services:
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
- use UDP/TCP only
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are setup correctly.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,32 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Annotations
This file has moved to: http://kubernetes.github.io/docs/user-guide/annotations/
We have [labels](labels.md) for identifying metadata.
It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels.
Like labels, annotations are key-value maps.
```json
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
```
Possible information that could be recorded in annotations:
* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.)
* pointers to logging/monitoring/analytics/audit repos
* client library/tool information (e.g. for debugging purposes -- name, version, build info)
* other user and/or tool/system provenance info, such as URLs of related objects from other ecosystem components
* lightweight rollout tool metadata (config and/or checkpoints)
* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website
Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,210 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Application Troubleshooting
This file has moved to: http://kubernetes.github.io/docs/user-guide/application-troubleshooting/
This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
This is *not* a guide for people who want to debug their cluster. For that you should check out
[this guide](../admin/cluster-troubleshooting.md).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Application Troubleshooting](#application-troubleshooting)
- [FAQ](#faq)
- [Diagnosing the problem](#diagnosing-the-problem)
- [Debugging Pods](#debugging-pods)
- [My pod stays pending](#my-pod-stays-pending)
- [My pod stays waiting](#my-pod-stays-waiting)
- [My pod is crashing or otherwise unhealthy](#my-pod-is-crashing-or-otherwise-unhealthy)
- [My pod is running but not doing what I told it to do](#my-pod-is-running-but-not-doing-what-i-told-it-to-do)
- [Debugging Replication Controllers](#debugging-replication-controllers)
- [Debugging Services](#debugging-services)
- [My service is missing endpoints](#my-service-is-missing-endpoints)
- [Network traffic is not forwarded](#network-traffic-is-not-forwarded)
- [More information](#more-information)
<!-- END MUNGE: GENERATED_TOC -->
## FAQ
Users are highly encouraged to check out our [FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)
## Diagnosing the problem
The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
your Service?
* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
* [Debugging Services](#debugging-services)
### Debugging Pods
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
```console
$ kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
Continue debugging depending on the state of the pods.
#### My pod stays pending
If a Pod is stuck in `Pending`, it means that it cannot be scheduled onto a node. Generally this is because
there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
`kubectl describe ...` command above. There should be messages from the scheduler explaining why it cannot schedule
your pod. Reasons include:
* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](compute-resources.md#my-pods-are-pending-with-event-message-failedscheduling) for more information.
* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require
`hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
#### My pod stays waiting
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct.
* Have you pushed the image to the repository?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled, as in the sketch below.
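For example, you can read the image name back out of the pod spec with a go-template and then try the pull by hand (the pod and image names here are placeholders):

```console
$ kubectl get pod ${POD_NAME} -o go-template='{{range .spec.containers}}{{.image}} {{end}}'
gcr.io/my-project/myimage:v1
$ docker pull gcr.io/my-project/myimage:v1
```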
#### My pod is crashing or otherwise unhealthy
First, take a look at the logs of
the current container:
```console
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```
If your container has previously crashed, you can access the previous container's crash log with:
```console
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
Alternatively, you can run commands inside that container with `exec`:
```console
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run
```console
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a
feature request on GitHub describing your use case and why these tools are insufficient.
#### My pod is running but not doing what I told it to do
If your pod is not behaving as you expected, it may be that there was an error in your
pod description (e.g. `mypod.yaml` file on your local machine), and that the error
was silently ignored when you created the pod. Often a section of the pod description
is nested incorrectly, or a key name is typed incorrectly, and so the key is ignored.
For example, if you misspelled `command` as `commnd` then the pod will be created but
will not use the command line you intended it to use.
The first thing to do is to delete your pod and try creating it again with the `--validate` option.
For example, run `kubectl create --validate -f mypod.yaml`.
If you misspelled `command` as `commnd`, then `kubectl` will give an error like this:
```
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
```
<!-- TODO: Now that #11914 is merged, this advice may need to be updated -->
The next thing to check is whether the pod on the apiserver
matches the pod you meant to create (e.g. in a yaml file on your local machine).
For example, run `kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` and then
manually compare the original pod description, `mypod.yaml` with the one you got
back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
lines on the "apiserver" version that are not on the original version. This is
expected. However, if there are lines on the original that are not on the apiserver
version, then this may indicate a problem with your pod spec.
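A quick way to do that comparison (a hypothetical session, with `mypod.yaml` as your local file):

```console
$ kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml
$ diff mypod.yaml mypod-on-apiserver.yaml
```

Lines that appear only in the apiserver copy are defaulted fields and are expected; lines that appear only in your local copy deserve a closer look.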
### Debugging Replication Controllers
Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.
You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication
controller.
### Debugging Services
Services provide load balancing across a set of pods. There are several common problems that can make Services
not work properly. The following instructions should help debug Service problems.
First, verify that there are endpoints for the service. For every Service object, the apiserver makes an `endpoints` resource available.
You can view this resource with:
```console
$ kubectl get endpoints ${SERVICE_NAME}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
IP addresses in the Service's endpoints.
#### My service is missing endpoints
If you are missing endpoints, try listing pods using the labels that the Service uses. Imagine that you have
a Service where the labels are:
```yaml
...
spec:
  selector:
    name: nginx
    type: frontend
```
You can use:
```console
$ kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
have the right ports exposed. If your service specifies a `targetPort`, but the Pods that are
selected don't list a matching port, then they won't be added to the endpoints list.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`.
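For example, here is a minimal sketch of a matching Service/Pod pair (the names, ports, and image are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    name: nginx
    type: frontend
  ports:
  - port: 80
    targetPort: 8080   # must match a containerPort on the selected pods
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: nginx
    type: frontend
spec:
  containers:
  - name: myapp
    image: myimage     # placeholder for an image that listens on 8080
    ports:
    - containerPort: 8080
```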
#### Network traffic is not forwarded
If you can connect to the service, but the connection is immediately dropped, and there are endpoints
in the endpoints list, it's likely that the proxy can't contact your pods.
There are three things to
check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP, as shown in the sketch below
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
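To test direct connectivity, you can look up a pod's IP with `kubectl describe pod` and then connect to it from another pod or from a node (the IP and port here are illustrative):

```console
$ kubectl describe pod ${POD_NAME} | grep IP
IP:             10.245.0.15
$ curl http://10.245.0.15:8080/
```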
#### More information
If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services.md) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
You may also visit [troubleshooting document](../troubleshooting.md) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,265 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Compute Resources This file has moved to: http://kubernetes.github.io/docs/user-guide/compute-resources/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Compute Resources](#compute-resources)
- [Resource Requests and Limits of Pod and Container](#resource-requests-and-limits-of-pod-and-container)
- [How Pods with Resource Requests are Scheduled](#how-pods-with-resource-requests-are-scheduled)
- [How Pods with Resource Limits are Run](#how-pods-with-resource-limits-are-run)
- [Monitoring Compute Resource Usage](#monitoring-compute-resource-usage)
- [Troubleshooting](#troubleshooting)
- [My pods are pending with event message failedScheduling](#my-pods-are-pending-with-event-message-failedscheduling)
- [My container is terminated](#my-container-is-terminated)
- [Planned Improvements](#planned-improvements)
<!-- END MUNGE: GENERATED_TOC -->
When specifying a [pod](pods.md), you can optionally specify how much CPU and memory (RAM) each
container needs. When containers have their resource requests specified, the scheduler is
able to make better decisions about which nodes to place pods on; and when containers have their
limits specified, contention for resources on a node can be handled in a specified manner. For
more details about the difference between requests and limits, please refer to
[Resource QoS](../proposals/resource-qos.md).
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
in units of cores. Memory is specified in units of bytes.
CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
resources are measurable quantities which can be requested, allocated, and consumed. They are
distinct from [API resources](working-with-resources.md). API resources, such as pods and
[services](services.md), are objects that can be written to and retrieved from the Kubernetes API
server.
## Resource Requests and Limits of Pod and Container
Each container of a Pod can optionally specify `spec.container[].resources.limits.cpu` and/or
`spec.container[].resources.limits.memory` and/or `spec.container[].resources.requests.cpu`
and/or `spec.container[].resources.requests.memory`.
Specifying resource requests and/or limits is optional. In some clusters, unset limits or requests
may be replaced with default values when a pod is created or updated. The default value depends on
how the cluster is configured. If requests are not specified, they are set equal
to the limits by default. Please note that resource limits must be greater than or equal to resource
requests.
Although requests/limits can only be specified on individual containers, it is convenient to talk
about pod resource requests/limits. A *pod resource request/limit* for a particular resource
type is the sum of the resource requests/limits of that type for each container in the pod, with
unset values treated as zero (or equal to default values in some cluster configurations).
The following pod has two containers. Each has a request of 0.25 core of cpu and 64MiB
(2<sup>26</sup> bytes) of memory and a limit of 0.5 core of cpu and 128MiB of memory. The pod can
be said to have a request of 0.5 core and 128MiB of memory and a limit of 1 core and 256MiB of
memory.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: db
image: mysql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
- name: wp
image: wordpress
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
```
## How Pods with Resource Requests are Scheduled
When a pod is created, the Kubernetes scheduler selects a node for the pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for pods. The scheduler ensures that,
for each resource type (CPU and memory), the sum of the resource requests of the
containers scheduled to the node is less than the capacity of the node. Note
that even if actual memory or CPU usage on a node is very low, the scheduler
will still refuse to place pods onto it if the capacity check fails. This
protects against a resource shortage on a node when resource usage
later increases, such as due to a daily peak in request rate.
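For example, a node with a capacity of `cpu: 1` (1000 millicpu) whose scheduled pods already request a total of 800 millicpu can accept a new pod requesting up to 200 millicpu, but not one requesting 250 millicpu, no matter how idle the node currently is.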
## How Pods with Resource Limits are Run
When kubelet starts a container of a pod, it passes the CPU and memory limits to the container
runner (Docker or rkt).
When using Docker (a worked example follows this list):
- The `spec.container[].resources.limits.cpu` is multiplied by 1024, converted to an integer, and
used as the value of the [`--cpu-shares`](
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
command.
- The `spec.container[].resources.limits.memory` is converted to an integer, and used as the value
of the [`--memory`](https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag
to the `docker run` command.
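As a worked example (a sketch of the resulting flags, not literal kubelet output), a container with `limits.cpu: 500m` and `limits.memory: 128Mi` would be run roughly as:

```console
# 0.5 cores * 1024 = 512 CPU shares; 128Mi = 134217728 bytes
$ docker run --cpu-shares=512 --memory=134217728 ...
```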
**TODO: document behavior for rkt**
If a container exceeds its memory limit, it may be terminated. If it is restartable, the kubelet will
restart it, as with any other type of runtime failure.
A container may or may not be allowed to exceed its CPU limit for extended periods of time.
However, it will not be killed for excessive CPU usage.
To determine if a container cannot be scheduled or is being killed due to resource limits, see the
"Troubleshooting" section below.
## Monitoring Compute Resource Usage
The resource usage of a pod is reported as part of the Pod status.
If [optional monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
then pod resource usage can be retrieved from the monitoring system.
## Troubleshooting
### My pods are pending with event message failedScheduling
If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled
until a place can be found. An event will be produced each time the scheduler fails to find a
place for the pod, like this:
```console
$ kubectl describe pod frontend | grep -A 3 Events
Events:
FirstSeen   LastSeen   Count   From            SubobjectPath   Reason            Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
memory (PodExceedsFreeMemory). In general, if a pod or pods are pending with messages of
this type, then there are several things to try:
- Add more nodes to the cluster.
- Terminate unneeded pods to make room for pending pods.
- Check that the pod is not larger than all the nodes. For example, if all the nodes
have a capacity of `cpu: 1`, then a pod with a limit of `cpu: 1.1` will never be scheduled.
You can check node capacities and amounts allocated with the `kubectl describe nodes` command.
For example:
```console
$ kubectl describe nodes gke-cluster-4-386701dd-node-ww4p
Name: gke-cluster-4-386701dd-node-ww4p
[ ... lines removed for clarity ...]
Capacity:
cpu: 1
memory: 464Mi
pods: 40
Allocated resources (total requests):
cpu: 910m
memory: 2370Mi
pods: 4
[ ... lines removed for clarity ...]
Pods: (4 in total)
Namespace Name CPU(milliCPU) Memory(bytes)
frontend webserver-ffj8j 500 (50% of total) 2097152000 (50% of total)
kube-system fluentd-cloud-logging-gke-cluster-4-386701dd-node-ww4p 100 (10% of total) 209715200 (5% of total)
kube-system kube-dns-v8-qopgw 310 (31% of total) 178257920 (4% of total)
TotalResourceLimits:
CPU(milliCPU): 910 (91% of total)
Memory(bytes): 2485125120 (59% of total)
[ ... lines removed for clarity ...]
```
Here you can see from the `Allocated resources` section that a pod which asks for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
Looking at the `Pods` section, you can see which pods are taking up space on the node.
The [resource quota](../admin/resource-quota.md) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
### My container is terminated
Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
on the pod you are interested in:
```console
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
Name: simmemleak-hra99
Namespace: default
Image(s): saadali/simmemleak
Node: kubernetes-node-tf0f/10.240.216.66
Labels: name=simmemleak
Status: Running
Reason:
Message:
IP: 10.244.2.75
Replication Controllers: simmemleak (1/1 replicas created)
Containers:
simmemleak:
Image: saadali/simmemleak
Limits:
cpu: 100m
memory: 50Mi
State: Running
Started: Tue, 07 Jul 2015 12:54:41 -0700
Last Termination State: Terminated
Exit Code: 1
Started: Fri, 07 Jul 2015 12:54:30 -0700
Finished: Fri, 07 Jul 2015 12:54:33 -0700
Ready: False
Restart Count: 5
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
```console
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
## Planned Improvements
The current system only allows resource quantities to be specified on a container.
It is planned to improve accounting for resources which are shared by all containers in a pod,
such as [EmptyDir volumes](volumes.md#emptydir).
The current system only supports container requests and limits for CPU and Memory.
It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom [resource types](../design/resources.md#resource-types).
Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168).
Currently, one unit of CPU means different things on different cloud providers, and on different
machine types within the same cloud providers. For example, on AWS, the capacity of a node
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more consistency
across providers and platforms.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,131 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Configuration Best Practices and Tips This file has moved to: http://kubernetes.github.io/docs/user-guide/config-best-practices/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Configuration Best Practices and Tips](#configuration-best-practices-and-tips)
- [General Config Tips](#general-config-tips)
- ["Naked" Pods vs Replication Controllers and Jobs](#naked-pods-vs-replication-controllers-and-jobs)
- [Services](#services)
- [Using Labels](#using-labels)
- [Container Images](#container-images)
- [Using kubectl](#using-kubectl)
<!-- END MUNGE: GENERATED_TOC -->
This document is meant to highlight and consolidate in one place configuration best practices that are introduced throughout the user-guide and getting-started documentation and examples. This is a living document, so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
## General Config Tips
- When defining configurations, specify the latest stable API version (currently v1).
- Configuration files should be stored in version control before being pushed to the cluster. This allows a configuration to be quickly rolled back if needed, and will aid with cluster re-creation and restoration if necessary.
- Write your configuration files using YAML rather than JSON. They can be used interchangeably in almost all scenarios, but YAML tends to be more user-friendly for config.
- Group related objects together in a single file where this makes sense. This format is often easier to manage than separate files. See the [guestbook-all-in-one.yaml](../../examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
(Note also that many `kubectl` commands can be called on a directory, and so you can also call
`kubectl create` on a directory of config files— see below for more detail).
- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to
reduce error. For example, omit the selector and labels in a `ReplicationController` if you want
them to be the same as the labels in its `podTemplate`, since those fields are populated from the
`podTemplate` labels by default. See the [guestbook app's](../../examples/guestbook/) .yaml files for some [examples](../../examples/guestbook/frontend-controller.yaml) of this.
- Put an object description in an annotation to allow better introspection.
## "Naked" Pods vs Replication Controllers and Jobs
- If there is a viable alternative to naked pods (i.e., pods not bound to a [replication controller
](replication-controller.md)), go with the alternative. Naked pods will not be rescheduled in the
event of node failure.
Replication controllers are almost always preferable to creating pods, except for some explicit
[`restartPolicy: Never`](pod-states.md#restartpolicy) scenarios. A
[Job](jobs.md) object (currently in Beta) may also be appropriate.
## Services
- It's typically best to create a [service](services.md) before corresponding [replication
controllers](replication-controller.md), so that the scheduler can spread the pods comprising the
service. You can also create a replication controller without specifying replicas (this will set
replicas=1), create a service, then scale up the replication controller. This can be useful in
ensuring that one replica works before creating lots of them.
- Don't use `hostPort` (which specifies the port number to expose on the host) unless absolutely
necessary, e.g., for a node daemon. When you bind a Pod to a `hostPort`, there are a limited
number of places that pod can be scheduled, due to port conflicts— you can only schedule as many
such Pods as there are nodes in your Kubernetes cluster.
If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](connecting-to-applications-proxy.md) or [kubectl port-forward](connecting-to-applications-port-forward.md).
You can use a [Service](services.md) object for external service access.
If you do need to expose a pod's port on the host machine, consider using a [NodePort](services.md#type-nodeport) service before resorting to `hostPort`.
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing.
See [headless services](services.md#headless-services).
## Using Labels
- Define and use [labels](labels.md) that identify __semantic attributes__ of your application or
deployment. For example, instead of attaching a label to a set of pods to explicitly represent
some service (e.g., `service: myservice`), or explicitly representing the replication
controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify
semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This
will let you select the object groups appropriate to the context— e.g., a service for all "tier:
frontend" pods, or all "test" phase components of app "myapp". See the
[guestbook](../../examples/guestbook/) app for an example of this approach.
A service can be made to span multiple deployments, such as is done across [rolling updates](kubectl/kubectl_rolling-update.md), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.
- To facilitate rolling updates, include version info in replication controller names, e.g. as a
suffix to the name. It is useful to set a 'version' label as well. Because the rolling update
creates a new controller rather than modifying the existing one, version-agnostic controller
names will cause issues. See the [documentation](kubectl/kubectl_rolling-update.md) on
the rolling-update command for more detail.
Note that the [Deployment](deployments.md) object obviates the need to manage replication
controller 'version names'. A desired state of an object is described by a Deployment, and if
changes to that spec are _applied_, the deployment controller changes the actual state to the
desired state at a controlled rate. (Deployment objects are currently part of the [`extensions`
API Group](../api.md#api-groups), and are not enabled by default.)
- You can manipulate labels for debugging. Because Kubernetes replication controllers and services
match to pods using labels, this allows you to remove a pod from being considered by a
controller, or served traffic by a service, by removing the relevant selector labels. If you
remove the labels of an existing pod, its controller will create a new pod to take its place.
This is a useful way to debug a previously "live" pod in a quarantine environment. See the
[`kubectl label`](kubectl/kubectl_label.md) command and the sketch below.
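For instance, to quarantine a misbehaving pod (hypothetical pod and label names; a trailing `-` after the key removes that label):

```console
# The service stops routing traffic to this pod, and its replication
# controller creates a replacement.
$ kubectl label pods my-nginx-6isf4 app-
```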
## Container Images
- The [default container image pull policy](images.md) is `IfNotPresent`, which causes the
[Kubelet](../admin/kubelet.md) to not pull an image if it already exists. If you would like to
always force a pull, you must specify a pull image policy of `Always` in your .yaml file
(`imagePullPolicy: Always`) or specify a `:latest` tag on your image.
That is, if you specify an image with a tag other than `:latest`, e.g. `myimage:v1`, and
there is an image update to that same tag, the Kubelet won't pull the updated image. You can
address this by ensuring that any update to an image bumps the image tag as well (e.g.
`myimage:v2`), and ensuring that your configs point to the correct version.
## Using kubectl
- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.
- Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated.
- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](labels.md#label-selectors) and [using labels effectively](managing-deployments.md#using-labels-effectively).
- Use `kubectl run` and `expose` to quickly create and expose single-container replication controllers. See the [quick start guide](quick-start.md) for an example, and the sketch below.
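A compact illustration of these tips (the names are made up; `kubectl run` is assumed to create a replication controller labeled `run=my-nginx` in your cluster version):

```console
# Quickly create and expose a single-container replication controller
$ kubectl run my-nginx --image=nginx --replicas=2
$ kubectl expose rc my-nginx --port=80

# Bulk operations via labels
$ kubectl get pods,services -l run=my-nginx
$ kubectl delete pods,services -l run=my-nginx
```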
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -27,566 +27,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# ConfigMap This file has moved to: http://kubernetes.github.io/docs/user-guide/configmap/
Many applications require configuration via some combination of config files, command line
arguments, and environment variables. These configuration artifacts should be decoupled from image
content in order to keep containerized applications portable. The ConfigMap API resource provides
mechanisms to inject containers with configuration data while keeping containers agnostic of
Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or
coarse-grained information like entire config files or JSON blobs.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [ConfigMap](#configmap)
- [Overview of ConfigMap](#overview-of-configmap)
- [Creating ConfigMaps](#creating-configmaps)
- [Creating from directories](#creating-from-directories)
- [Creating from files](#creating-from-files)
- [Creating from literal values](#creating-from-literal-values)
- [Consuming ConfigMap in pods](#consuming-configmap-in-pods)
- [Use-Case: Consume ConfigMap in environment variables](#use-case-consume-configmap-in-environment-variables)
- [Use-Case: Set command-line arguments with ConfigMap](#use-case-set-command-line-arguments-with-configmap)
- [Use-Case: Consume ConfigMap via volume plugin](#use-case-consume-configmap-via-volume-plugin)
- [Real World Example: Configuring Redis](#real-world-example-configuring-redis)
- [Restrictions](#restrictions)
<!-- END MUNGE: GENERATED_TOC -->
## Overview of ConfigMap
The ConfigMap API resource holds key-value pairs of configuration data that can be consumed in pods
or used to store configuration data for system components such as controllers. ConfigMap is similar
to [Secrets](secrets.md), but designed to more conveniently support working with strings that do not
contain sensitive information.
Let's look at a made-up example:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: example-config
namespace: default
data:
example.property.1: hello
example.property.2: world
example.property.file: |-
property.1=value-1
property.2=value-2
property.3=value-3
```
The `data` field contains the configuration data. As you can see, ConfigMaps can be used to hold
fine-grained information like individual properties or coarse-grained information like the contents
of configuration files.
Configuration data can be consumed in pods in a variety of ways. ConfigMaps can be used to:
1. Populate the value of environment variables
2. Set command-line arguments in a container
3. Populate config files in a volume
Both users and system components may store configuration data in ConfigMap.
## Creating ConfigMaps
You can use the `kubectl create configmap` command to create configmaps easily from literal values,
files, or directories.
Let's take a look at some different ways to create a ConfigMap:
### Creating from directories
Say that we have a directory with some files that already contain the data we want to populate a ConfigMap with:
```console
$ ls docs/user-guide/configmap/kubectl/
game.properties
ui.properties
$ cat docs/user-guide/configmap/kubectl/game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
$ cat docs/user-guide/configmap/kubectl/ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
```
The `kubectl create configmap` command can be used to create a ConfigMap holding the content of each
file in this directory:
```console
$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
```
When `--from-file` points to a directory, each file directly in that directory is used to populate a
key in the ConfigMap, where the name of the key is the filename, and the value of the key is the
content of the file.
Let's take a look at the ConfigMap that this command created:
```console
$ cluster/kubectl.sh describe configmaps game-config
Name: game-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties: 121 bytes
ui.properties: 83 bytes
```
You can see the two keys in the map are created from the filenames in the directory we pointed
kubectl to. Since the content of those keys may be large, in the output of `kubectl describe`,
you'll see only the names of the keys and their sizes.
If we want to see the values of the keys, we can simply `kubectl get` the resource:
```console
$ kubectl get configmaps game-config -o yaml
apiVersion: v1
data:
game.properties: |-
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:34:05Z
name: game-config
namespace: default
  resourceVersion: "407"
selfLink: /api/v1/namespaces/default/configmaps/game-config
uid: 30944725-d66e-11e5-8cd0-68f728db1985
```
### Creating from files
We can also pass `--from-file` a specific file, and pass it multiple times to kubectl. The
following command yields equivalent results to the above example:
```console
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
$ cluster/kubectl.sh get configmaps game-config-2 -o yaml
apiVersion: v1
data:
game.properties: |-
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:52:05Z
name: game-config-2
namespace: default
resourceVersion: "516"
selfLink: /api/v1/namespaces/default/configmaps/game-config-2
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
```
We can also set the key to use for an individual file with `--from-file` by passing an expression
of `key=value`: `--from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties`:
```console
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
$ kubectl get configmaps game-config-3 -o yaml
apiVersion: v1
data:
game-special-key: |-
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:54:22Z
name: game-config-3
namespace: default
resourceVersion: "530"
selfLink: /api/v1/namespaces/default/configmaps/game-config-3
uid: 05f8da22-d671-11e5-8cd0-68f728db1985
```
### Creating from literal values
It is also possible to supply literal values for ConfigMaps using `kubectl create configmap`. The
`--from-literal` option takes a `key=value` syntax that allows literal values to be supplied
directly on the command line:
```console
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
$ kubectl get configmaps special-config -o yaml
apiVersion: v1
data:
special.how: very
special.type: charm
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: special-config
namespace: default
resourceVersion: "651"
selfLink: /api/v1/namespaces/default/configmaps/special-config
uid: dadce046-d673-11e5-8cd0-68f728db1985
```
## Consuming ConfigMap in pods
### Use-Case: Consume ConfigMap in environment variables
ConfigMaps can be used to populate the value of environment variables. As an example, consider
the following ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
We can consume the keys of this ConfigMap in a pod like so:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
              name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
              key: special.type
restartPolicy: Never
```
When this pod is run, its output will include the lines:
```console
SPECIAL_LEVEL_KEY=very
SPECIAL_TYPE_KEY=charm
```
### Use-Case: Set command-line arguments with ConfigMap
ConfigMaps can also be used to set the value of the command or arguments in a container. This is
accomplished using the Kubernetes substitution syntax `$(VAR_NAME)`. Consider the ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
In order to inject values into the command line, we must consume the keys we want to use as
environment variables, as in the last example. Then we can refer to them in a container's command
using the `$(VAR_NAME)` syntax.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
              name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
              key: special.type
restartPolicy: Never
```
When this pod is run, the output from the `test-container` container will be:
```console
very charm
```
### Use-Case: Consume ConfigMap via volume plugin
ConfigMaps can also be consumed in volumes. Returning again to our example ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
```
We have a couple different options for consuming this ConfigMap in a volume. The most basic
way is to populate the volume with files where the key is the filename and the content of the file
is the value of the key:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "cat", "/etc/config/special.how" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
restartPolicy: Never
```
When this pod is run, the output will be:
```console
very
```
We can also control the paths within the volume where ConfigMap keys are projected:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "cat", "/etc/config/path/to/special-key" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: path/to/special-key
restartPolicy: Never
```
When this pod is run, the output will be:
```console
very
```
## Real World Example: Configuring Redis
Let's take a look at a real-world example: configuring redis using ConfigMap. Say we want to inject
redis with the recommended configuration for using redis as a cache. The redis config file
should contain:
```
maxmemory 2mb
maxmemory-policy allkeys-lru
```
Such a file is in `docs/user-guide/configmap/redis`; we can use the following command to create a
ConfigMap instance with it:
```console
$ kubectl create configmap example-redis-config --from-file=docs/user-guide/configmap/redis/redis-config
$ kubectl get configmap example-redis-config -o json
{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "example-redis-config",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/configmaps/example-redis-config",
"uid": "07fd0419-d97b-11e5-b443-68f728db1985",
"resourceVersion": "15",
"creationTimestamp": "2016-02-22T15:43:34Z"
},
"data": {
"redis-config": "maxmemory 2mb\nmaxmemory-policy allkeys-lru\n"
}
}
```
Now, let's create a pod that uses this config:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
```
Notice that this pod has a ConfigMap volume that places the `redis-config` key of the
`example-redis-config` ConfigMap into a file called `redis.conf`. This volume is mounted into the
`/redis-master` directory in the redis container, placing our config file at
`/redis-master/redis.conf`, which is where the image looks for the redis config file for the master.
```console
$ kubectl create -f docs/user-guide/configmap/redis/redis-pod.yaml
```
If we `kubectl exec` into this pod and run the `redis-cli` tool, we can check that our config was
applied correctly:
```console
$ kubectl exec -it redis redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
```
## Restrictions
ConfigMaps must be created before they are consumed in pods. Controllers may be written to tolerate
missing configuration data; consult individual components configured via ConfigMap on a case-by-case
basis.
ConfigMaps reside in a namespace. They can only be referenced by pods in the same namespace.
Quota for ConfigMap size is a planned feature.
Kubelet only supports use of ConfigMap for pods it gets from the API server. This includes any pods
created using kubectl, or indirectly via a replication controller. It does not include pods created
via the Kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common
ways to create pods).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/configmap.md?pixel)]()
View File
@ -27,108 +27,9 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# ConfigMap example This file has moved to: http://kubernetes.github.io/docs/user-guide/configmap/README/
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started guides](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the ConfigMap
A ConfigMap contains a set of named strings.
Use the [`docs/user-guide/configmap/configmap.yaml`](configmap.yaml) file to create a ConfigMap:
```console
$ kubectl create -f docs/user-guide/configmap/configmap.yaml
```
You can use `kubectl` to see information about the ConfigMap:
```console
$ kubectl get configmap
NAME DATA
test-configmap   2
$ kubectl describe configmap test-configmap
Name: test-configmap
Labels: <none>
Annotations: <none>
Data
====
data-1: 7 bytes
data-2: 7 bytes
```
View the values of the keys with `kubectl get`:
```console
$ cluster/kubectl.sh get configmaps test-configmap -o yaml
apiVersion: v1
data:
data-1: value-1
data-2: value-2
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T20:28:50Z
name: test-configmap
namespace: default
resourceVersion: "1090"
selfLink: /api/v1/namespaces/default/configmaps/test-configmap
uid: 384bd365-d67e-11e5-8cd0-68f728db1985
```
## Step Two: Create a pod that consumes a configMap in environment variables
Use the [`examples/configmap/env-pod.yaml`](env-pod.yaml) file to create a Pod that consumes the
ConfigMap in environment variables.
```console
$ kubectl create -f docs/user-guide/configmap/env-pod.yaml
```
This pod runs the `env` command to display the environment of the container:
```console
$ kubectl logs secret-test-pod
KUBE_CONFIG_1=value-1
KUBE_CONFIG_2=value-2
```
## Step Three: Create a pod that sets the command line using ConfigMap
Use the [`docs/user-guide/configmap/command-pod.yaml`](command-pod.yaml) file to create a Pod with a container
whose command is injected with the keys of a ConfigMap:
```console
$ kubectl create -f docs/user-guide/configmap/command-pod.yaml
```
This pod runs an `echo` command to display the keys:
```console
value-1 value-2
```
## Step Four: Create a pod that consumes a configMap in a volume
Pods can also consume ConfigMaps in volumes. Use the [`docs/user-guide/configmap/volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consumes the ConfigMap in a volume.
```console
$ kubectl create -f docs/user-guide/configmap/volume-pod.yaml
```
This pod runs a `cat` command to print the value of one of the keys in the volume:
```console
value-1
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/configmap/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,175 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Configuring and launching containers This file has moved to: http://kubernetes.github.io/docs/user-guide/configuring-containers/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Configuring and launching containers](#kubernetes-user-guide-managing-applications-configuring-and-launching-containers)
- [Configuration in Kubernetes](#configuration-in-kubernetes)
- [Launching a container using a configuration file](#launching-a-container-using-a-configuration-file)
- [Validating configuration](#validating-configuration)
- [Environment variables and variable expansion](#environment-variables-and-variable-expansion)
- [Viewing pod status](#viewing-pod-status)
- [Viewing pod output](#viewing-pod-output)
- [Deleting pods](#deleting-pods)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Configuration in Kubernetes
In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](quick-start.md), Kubernetes supports declarative configuration. Oftentimes, configuration files are preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed. This is especially important for more complex configurations, and produces a more robust, reliable system with an archival record of changes.
In the declarative style, all configuration is stored in YAML or JSON configuration files using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently “v1”), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
## Launching a container using a configuration file
Kubernetes executes containers in [*Pods*](pods.md). A pod containing a simple Hello World container can be specified in YAML as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
command: ["/bin/echo","hello”,”world"]
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
The [`command`](containers.md#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
```
This pod can be created using the `create` command:
```console
$ kubectl create -f ./hello-world.yaml
pods/hello-world
```
`kubectl` prints the resource type and name of the resource created when successful.
## Validating configuration
We enable validation by default in `kubectl` since v1.1.
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
```console
error validating "./hello-world.yaml": error validating data: found invalid field Entrypoint for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
```
If you use `kubectl create --validate=false` to turn validation off, the resource is created anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
View the [Pod API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_pod)
to see the list of valid fields.
## Environment variables and variable expansion
Kubernetes [does not automatically run commands in a shell](https://github.com/kubernetes/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
env:
- name: MESSAGE
value: "hello world"
command: ["/bin/sh","-c"]
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](../../docs/design/expansion.md):
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
## Viewing pod status
You can see the pod you created (actually all of your cluster's pods) using the `get` command.
If you're quick, it will look as follows:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 0s
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 5s
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ExitCode:0 0 15s
```
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/), `kubectl logs` will show you the output:
```console
$ kubectl logs hello-world
hello world
```
## Deleting pods
When you're done looking at the output, you should delete the pod:
```console
$ kubectl delete pod hello-world
pods/hello-world
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
```console
$ kubectl delete pods/hello-world
pods/hello-world
```
Terminated pods aren't currently deleted automatically, so that you can observe their final status; be sure to clean up your dead pods.
On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
## What's next?
[Learn about deploying continuously running applications.](deploying-applications.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,414 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Connecting applications This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-applications/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Connecting applications](#kubernetes-user-guide-managing-applications-connecting-applications)
- [The Kubernetes model for connecting containers](#the-kubernetes-model-for-connecting-containers)
- [Exposing pods to the cluster](#exposing-pods-to-the-cluster)
- [Creating a Service](#creating-a-service)
- [Accessing the Service](#accessing-the-service)
- [Environment Variables](#environment-variables)
- [DNS](#dns)
- [Securing the Service](#securing-the-service)
- [Exposing the Service](#exposing-the-service)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
# The Kubernetes model for connecting containers
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, they must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or else be allocated ports dynamically.
Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html).
## Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
```yaml
$ cat nginxrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
```console
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-node-93ly
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-node-93ly
```
Check your pods' IPs:
```console
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you're curious.
## Creating a Service
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the replication controller will create new ones, with different IPs. This is the problem a Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
You can create a Service for your 2 nginx replicas with the following yaml:
```yaml
$ cat nginxsvc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label and exposes it on an abstracted Service port (`targetPort` is the port the container accepts traffic on; `port` is the abstracted Service port, which can be any port other pods use to access the Service). See the [service API object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_service) for the list of supported fields in the service definition.
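For example, if the nginx container listened on a port other than 80, you would set `targetPort` explicitly. A minimal sketch (the port number 8080 below is purely illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
spec:
  ports:
  - port: 80          # the abstracted Service port other pods connect to
    targetPort: 8080  # the port the container actually accepts traffic on
    protocol: TCP
  selector:
    app: nginx
```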
Check your Service:
```console
$ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.179.240.1 <none> 443/TCP <none> 8d
nginxsvc 10.179.252.126 <none> 80/TCP app=nginx 11m
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
```console
$ kubectl describe svc nginxsvc
Name: nginxsvc
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.0.116.146
Port: <unnamed> 80/TCP
Endpoints: 10.245.0.14:80,10.245.0.15:80
Session Affinity: None
No events.
$ kubectl get ep
NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
## Accessing the Service
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
### Environment Variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
```console
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```console
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
NAME READY STATUS RESTARTS AGE NODE
my-nginx-5j8ok 1/1 Running 0 2m node1
my-nginx-90vaf 1/1 Running 0 2m node2
$ kubectl exec my-nginx-5j8ok -- printenv | grep SERVICE
KUBERNETES_SERVICE_PORT=443
NGINXSVC_SERVICE_HOST=10.0.116.146
KUBERNETES_SERVICE_HOST=10.0.0.1
NGINXSVC_SERVICE_PORT=80
```
### DNS
Kubernetes offers a DNS cluster addon Service that uses SkyDNS to automatically assign DNS names to other Services. You can check if it's running on your cluster:
```console
$ kubectl get services kube-dns --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
```
If it isn't running, you can [enable it](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
```
And perform a lookup of the nginx Service:
```console
$ kubectl create -f ./curlpod.yaml
default/curlpod
$ kubectl get pods curlpod
NAME READY STATUS RESTARTS AGE
curlpod 1/1 Running 0 18s
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: nginxsvc
Address 1: 10.0.116.146
```
## Securing the Service
Until now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
* Self-signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](secrets.md) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
```console
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
secrets/nginxsecret
$ kubectl get secrets
NAME TYPE DATA
default-token-il9rc kubernetes.io/service-account-token 1
nginxsecret Opaque 2
```
Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service to expose both ports (80 and 443):
```yaml
$ cat nginx-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
```
Noteworthy points about the nginx-app manifest:
- It contains both the rc and the service specification in the same file.
- The [nginx server](../../examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and the nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
```console
$ kubectl delete rc,svc -l app=nginx; kubectl create -f ./nginx-app.yaml
replicationcontrollers/my-nginx
services/nginxsvc
services/nginxsvc
replicationcontrollers/my-nginx
```
At this point you can reach the nginx server from any node.
```console
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
node $ curl -k https://10.1.0.80
...
<h1>Welcome to nginx!</h1>
```
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the common name (CN) mismatch. By creating a Service we linked the CN used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
```console
$ cat curlpod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: curlrc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: curlpod
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: curlpod
        command:
        - sh
        - -c
        - while true; do sleep 1; done
        image: radial/busyboxplus:curl
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
$ kubectl create -f ./curlpod.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
curlpod 1/1 Running 0 2m
my-nginx-7006w 1/1 Running 0 24m
$ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.crt
...
<title>Welcome to nginx!</title>
...
```
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
```console
$ kubectl get svc nginxsvc -o json | grep -i nodeport -C 5
{
    "name": "http",
    "protocol": "TCP",
    "port": 80,
    "targetPort": 80,
    "nodePort": 32188
},
{
    "name": "https",
    "protocol": "TCP",
    "port": 443,
    "targetPort": 443,
    "nodePort": 30645
}
$ kubectl get nodes -o json | grep ExternalIP -C 2
{
    "type": "ExternalIP",
    "address": "104.197.63.17"
}
--
},
{
    "type": "ExternalIP",
    "address": "104.154.89.170"
}
$ curl https://104.197.63.17:30645 -k
...
<h1>Welcome to nginx!</h1>
```
Let's now recreate the Service to use a cloud load balancer. Just change the `type` of the Service in nginx-app.yaml from `NodePort` to `LoadBalancer`; as a sketch, only the `type` line changes:
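```yaml
spec:
  type: LoadBalancer   # was: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
```

Then delete and recreate the resources: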
```console
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml
$ kubectl get svc nginxsvc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
nginxsvc 10.179.252.126 162.222.184.144 8080/TCP,443/TCP app=nginx 13m
$ curl https://162.222.184.144 -k
...
<title>Welcome to nginx!</title>
```
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
cluster/private cloud network.
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
output, in fact, so you'll need to do `kubectl describe service nginxsvc` to
see it. You'll see something like this:
```
> kubectl describe service nginxsvc
...
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...
```
## What's next?
[Learn about more Kubernetes features that will help you run containers reliably in production.](production-pods.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,53 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl port-forward This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-to-applications-port-forward/
`kubectl port-forward` forwards connections from a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
## Creating a Redis master
```console
$ kubectl create -f examples/redis/redis-master.yaml
pods/redis-master
```
Wait until the Redis master pod is Running and Ready:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s
```
## Connecting to the Redis master
The Redis master is listening on port 6379. To verify this,
```console
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
```
Then we forward port 6379 on the local workstation to port 6379 of the redis-master pod:
```console
$ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
```
To verify that the connection is successful, run `redis-cli` on the local workstation:
```console
$ redis-cli
127.0.0.1:6379> ping
PONG
```
Now one can debug the database from the local workstation.
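If local port 6379 is already taken on your workstation, you can forward any free local port to the pod's 6379 instead; the local port 7000 below is an arbitrary choice:

```console
$ kubectl port-forward redis-master 7000:6379
$ redis-cli -p 7000
127.0.0.1:7000> ping
PONG
```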
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,41 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl proxy and apiserver proxy This file has moved to: http://kubernetes.github.io/docs/user-guide/connecting-to-applications-proxy/
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Connecting to applications: kubectl proxy and apiserver proxy](#connecting-to-applications-kubectl-proxy-and-apiserver-proxy)
- [Getting the apiserver proxy URL of kube-ui](#getting-the-apiserver-proxy-url-of-kube-ui)
- [Connecting to the kube-ui service from your local workstation](#connecting-to-the-kube-ui-service-from-your-local-workstation)
<!-- END MUNGE: GENERATED_TOC -->
You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation.
## Getting the apiserver proxy URL of kube-ui
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```console
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
If this command does not find the URL, try the steps [here](ui.md#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation
The proxy URL above provides access to the kube-ui service through the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication.
```console
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)
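As a quick sanity check, you can also fetch the page through the proxy from the command line; the path is the same proxy URL, relative to localhost:

```console
$ curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui/
```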
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,102 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Container Environment This file has moved to: http://kubernetes.github.io/docs/user-guide/container-environment/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes Container Environment](#kubernetes-container-environment)
- [Overview](#overview)
- [Cluster Information](#cluster-information)
- [Container Information](#container-information)
- [Cluster Information](#cluster-information)
- [Container Hooks](#container-hooks)
- [Hook Details](#hook-details)
- [Hook Handler Execution](#hook-handler-execution)
- [Hook delivery guarantees](#hook-delivery-guarantees)
- [Hook Handler Implementations](#hook-handler-implementations)
<!-- END MUNGE: GENERATED_TOC -->
## Overview
This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode).  In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers.  Container hooks are somewhat analogous to operating system signals in a traditional process model.   However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster.  Containers that participate in this cluster lifecycle become *cluster native*.
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images.md) and one or more [volumes](volumes.md).
The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
## Cluster Information
There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system.
### Container Information
Currently, the Pod name for the pod in which the container is running is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc), but this is planned to change in the future and should not be used.
The Pod name and namespace are also available as environment variables via the [downward API](downward-api.md). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
In the future, we anticipate expanding this information with richer information about the container.  Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.
### Cluster Information
Currently, the list of all services that were running when the container was created (via the Kubernetes Cluster API) is available to the container as environment variables.  The set of environment variables matches the syntax of Docker links.
For a service named **foo** that maps to a container port named **bar**, the following variables are defined:
```sh
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/HEAD/cluster/addons/dns/) is enabled).  Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
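For example, if a service named `redis-master` exposes TCP port 6379 and has been allocated cluster IP 10.0.0.11 (both values illustrative), a container would see:

```sh
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
```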
## Container Hooks
Container hooks provide information to the container about events in its management lifecycle.  For example, immediately after a container is started, it receives a *PostStart* hook.  These hooks are broadcast *into* the container with information about the life-cycle of the container.  They are different from the events provided by Docker and other systems which are *output* from the container.  Output events provide a log of what has already happened.  Input hooks provide real-time notification about things that are happening, but no historical log.
### Hook Details
There are currently two container hooks that are surfaced to containers:
*PostStart*
This hook is sent immediately after a container is created.  It notifies the container that it has been created.  No parameters are passed to the handler.
*PreStop*
This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](pods.md#termination-of-pods).
### Hook Handler Execution
When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).
### Hook delivery guarantees
Hook delivery is intended to be "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this
correctly.
We expect double delivery to be rare, but in some cases if the Kubelet restarts in the middle of sending a hook, the hook may be resent after the Kubelet comes back up.
Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down and unable to take traffic, we do not make any attempts to resend.
Currently, there are (hopefully rare) scenarios where PostStart hooks may not be delivered.
### Hook Handler Implementations
Hook handlers are the way that hooks are surfaced to containers.  Containers can select the type of hook handler they would like to implement.  Kubernetes currently supports two different hook handler types:
* Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroups and namespaces of the container.  Resources consumed by the command are counted against the container.
* HTTP - Executes an HTTP request against a specific endpoint on the container.
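As a sketch, both handler types are declared under a container's `lifecycle` field in the pod spec; the command and the endpoint below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: demo
    image: nginx
    lifecycle:
      postStart:
        exec:                # Exec handler: runs inside the container
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:             # HTTP handler: GET against the container
          path: /shutdown
          port: 80
```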
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,95 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Containers with Kubernetes This file has moved to: http://kubernetes.github.io/docs/user-guide/containers/
## Containers and commands
So far the Pods we've seen have all used the `image` field to indicate what process Kubernetes
should run in a container. In this case, Kubernetes runs the image's default command. If we want
to run a particular command or override the image's defaults, there are two additional fields that
we can use:
1. `Command`: Controls the actual command run by the image
2. `Args`: Controls the arguments passed to the command
### How docker handles command and arguments
Docker images have metadata associated with them that is used to store information about the image.
The image author may use this to define defaults for the command and arguments to run a container
when the user does not supply values. Docker calls the fields for commands and arguments
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
describe here, mostly due to the fact that the docker API allows users to specify both of these
fields as either a string array or a string and there are subtle differences in how those cases are
handled. We encourage the curious to check out [docker's documentation](https://docs.docker.com/engine/reference/builder/) for this feature.
Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args
(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are:
1. If you do not supply a `Command` or `Args` for a container, the defaults defined by the image
will be used
2. If you supply a `Command` but no `Args` for a container, only the supplied `Command` will be
used; the image's default arguments are ignored
3. If you supply only `Args`, the image's default command will be used with the arguments you
supply
4. If you supply a `Command` **and** `Args`, the image's defaults will be ignored and the values
you supply will be used
Here are examples for these rules in table format:
| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run |
|--------------------|------------------|---------------------|--------------------|------------------|
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | &lt;not set&gt; | `[ep-1 foo bar]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | &lt;not set&gt; | `[ep-2]` |
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | `[zoo boo]` | `[ep-1 zoo boo]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
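In a v1 pod spec these are the lower-case `command` and `args` fields on a container. A minimal sketch corresponding to the last row of the table (the image name and the `/ep-2` binary are schematic, not real):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: demo
    image: example/image     # hypothetical image whose Entrypoint is [/ep-1]
    command: ["/ep-2"]       # overrides the image's Entrypoint
    args: ["zoo", "boo"]     # overrides the image's Cmd
```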
## Capabilities
By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine-grained control over the capabilities using cap-add and cap-drop. More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration).
The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html) is shown below:
| Docker's capabilities | Linux capabilities |
| ---- | ---- |
| SETPCAP | CAP_SETPCAP |
| SYS_MODULE | CAP_SYS_MODULE |
| SYS_RAWIO | CAP_SYS_RAWIO |
| SYS_PACCT | CAP_SYS_PACCT |
| SYS_ADMIN | CAP_SYS_ADMIN |
| SYS_NICE | CAP_SYS_NICE |
| SYS_RESOURCE | CAP_SYS_RESOURCE |
| SYS_TIME | CAP_SYS_TIME |
| SYS_TTY_CONFIG | CAP_SYS_TTY_CONFIG |
| MKNOD | CAP_MKNOD |
| AUDIT_WRITE | CAP_AUDIT_WRITE |
| AUDIT_CONTROL | CAP_AUDIT_CONTROL |
| MAC_OVERRIDE | CAP_MAC_OVERRIDE |
| MAC_ADMIN | CAP_MAC_ADMIN |
| NET_ADMIN | CAP_NET_ADMIN |
| SYSLOG | CAP_SYSLOG |
| CHOWN | CAP_CHOWN |
| NET_RAW | CAP_NET_RAW |
| DAC_OVERRIDE | CAP_DAC_OVERRIDE |
| FOWNER | CAP_FOWNER |
| DAC_READ_SEARCH | CAP_DAC_READ_SEARCH |
| FSETID | CAP_FSETID |
| KILL | CAP_KILL |
| SETGID | CAP_SETGID |
| SETUID | CAP_SETUID |
| LINUX_IMMUTABLE | CAP_LINUX_IMMUTABLE |
| NET_BIND_SERVICE | CAP_NET_BIND_SERVICE |
| NET_BROADCAST | CAP_NET_BROADCAST |
| IPC_LOCK | CAP_IPC_LOCK |
| IPC_OWNER | CAP_IPC_OWNER |
| SYS_CHROOT | CAP_SYS_CHROOT |
| SYS_PTRACE | CAP_SYS_PTRACE |
| SYS_BOOT | CAP_SYS_BOOT |
| LEASE | CAP_LEASE |
| SETFCAP | CAP_SETFCAP |
| WAKE_ALARM | CAP_WAKE_ALARM |
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |
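In a pod spec, capabilities are adjusted per container under `securityContext.capabilities`; a minimal sketch (the specific capabilities chosen here are only examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cap-demo
spec:
  containers:
  - name: demo
    image: nginx
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]   # grants CAP_NET_ADMIN
        drop: ["CHOWN"]      # removes CAP_CHOWN
```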
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,550 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# My Service is not working - how to debug This file has moved to: http://kubernetes.github.io/docs/user-guide/debugging-services/
An issue that comes up rather frequently for new installations of Kubernetes is
that `Services` are not working properly. You've run all your `Pod`s and
`ReplicationController`s, but you get no response when you try to access them.
This document will hopefully help you to figure out what's going wrong.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [My Service is not working - how to debug](#my-service-is-not-working---how-to-debug)
- [Conventions](#conventions)
- [Running commands in a Pod](#running-commands-in-a-pod)
- [Setup](#setup)
- [Does the Service exist?](#does-the-service-exist)
- [Does the Service work by DNS?](#does-the-service-work-by-dns)
- [Does any Service exist in DNS?](#does-any-service-exist-in-dns)
- [Does the Service work by IP?](#does-the-service-work-by-ip)
- [Is the Service correct?](#is-the-service-correct)
- [Does the Service have any Endpoints?](#does-the-service-have-any-endpoints)
- [Are the Pods working?](#are-the-pods-working)
- [Is the kube-proxy working?](#is-the-kube-proxy-working)
- [Is kube-proxy running?](#is-kube-proxy-running)
- [Is kube-proxy writing iptables rules?](#is-kube-proxy-writing-iptables-rules)
- [Userspace](#userspace)
- [Iptables](#iptables)
- [Is kube-proxy proxying?](#is-kube-proxy-proxying)
- [Seek help](#seek-help)
- [More information](#more-information)
<!-- END MUNGE: GENERATED_TOC -->
## Conventions
Throughout this doc you will see various commands that you can run. Some
commands need to be run within a `Pod`, others on a Kubernetes `Node`, and others
can run anywhere you have `kubectl` and credentials for the cluster. To make it
clear what is expected, this document will use the following conventions.
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
```console
u@pod$ COMMAND
OUTPUT
```
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
```console
u@node$ COMMAND
OUTPUT
```
If the command is "kubectl ARGS":
```console
$ kubectl ARGS
OUTPUT
```
## Running commands in a Pod
For many steps here you will want to see what a `Pod` running in the cluster
sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can
approximate it:
```console
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
EOF
pods/busybox-sleep
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
```console
$ kubectl exec busybox-sleep -- <COMMAND>
```
or
```console
$ kubectl exec -ti busybox-sleep sh
/ #
```
## Setup
For the purposes of this walk-through, let's run some `Pod`s. Since you're
probably debugging your own `Service` you can substitute your own details, or you
can follow along and get a second data point.
```console
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
```
Note that this is the same as if you had started the `ReplicationController` with
the following YAML:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: gcr.io/google_containers/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
```
Confirm your `Pod`s are running:
```console
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 12s
hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
```
## Does the Service exist?
The astute reader will have noticed that we did not actually create a `Service`
yet - that is intentional. This is a step that sometimes gets forgotten, and
is the first thing to check.
So what would happen if I tried to access a non-existent `Service`? Assuming you
have another `Pod` that consumes this `Service` by name you would get something
like:
```console
u@pod$ wget -qO- hostnames
wget: bad address 'hostnames'
```
or:
```console
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```
So the first thing to check is whether that `Service` actually exists:
```console
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
```
So we have a culprit. Let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
```console
$ kubectl expose rc hostnames --port=80 --target-port=9376
service "hostnames" exposed
```
And read it back, just to be sure:
```console
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames 10.0.1.175 <none> 80/TCP app=hostnames 1h
```
As before, this is the same as if you had started the `Service` with YAML:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376
```
Now you can confirm that the `Service` exists.
## Does the Service work by DNS?
From a `Pod` in the same `Namespace`:
```console
u@pod$ nslookup hostnames
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames
Address: 10.0.1.175
```
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s; try a namespace-qualified name:
```console
u@pod$ nslookup hostnames.default
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default
Address: 10.0.1.175
```
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:
```console
u@pod$ nslookup hostnames.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
The "cluster.local" is your cluster domain.
You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):
```console
u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `kubelet` is running with the right flags.
The `--cluster-dns` flag needs to point to your DNS `Service`'s IP and the
`--cluster-domain` flag needs to be your cluster's domain - we assumed
"cluster.local" in this document, but yours might be different, in which case
you should change that in all of the commands above.
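For example, you can inspect the flags your kubelet was actually started with (the binary path and the values here are illustrative):

```console
u@node$ ps auxw | grep kubelet
root  2122  ... /usr/local/bin/kubelet --cluster-dns=10.0.0.10 --cluster-domain=cluster.local ...
```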
### Does any Service exist in DNS?
If the above still fails - DNS lookups are not working for your `Service` - we
can take a step back and see what else is not working. The Kubernetes master
`Service` should always work:
```console
u@pod$ nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
```
If this fails, you might need to go to the kube-proxy section of this doc, or
even go back to the top of this document and start over, but instead of
debugging your own `Service`, debug DNS.
## Does the Service work by IP?
The next thing to test is whether your `Service` works at all. From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).
```console
u@node$ curl 10.0.1.175:80
hostnames-0uton
u@node$ curl 10.0.1.175:80
hostnames-yp2kp
u@node$ curl 10.0.1.175:80
hostnames-bvc05
```
If your `Service` is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
## Is the Service correct?
It might sound silly, but you should really double and triple check that your
`Service` is correct and matches your `Pods`. Read back your `Service` and
verify it:
```console
$ kubectl get service hostnames -o json
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "hostnames",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/services/hostnames",
        "uid": "428c8b6c-24bc-11e5-936d-42010af0a9bc",
        "resourceVersion": "347189",
        "creationTimestamp": "2015-07-07T15:24:29Z",
        "labels": {
            "app": "hostnames"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "default",
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376,
                "nodePort": 0
            }
        ],
        "selector": {
            "app": "hostnames"
        },
        "clusterIP": "10.0.1.175",
        "type": "ClusterIP",
        "sessionAffinity": "None"
    },
    "status": {
        "loadBalancer": {}
    }
}
```
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
expose a port with the same name? Is the port's `protocol` the same as the
`Pod`'s?
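As a sketch of the named-port case, the `targetPort` string in the `Service` must match a port `name` in the pod spec:

```yaml
# In the Service spec:
  ports:
  - port: 80
    targetPort: http      # refers to the named container port below
# In the pod template:
  containers:
  - name: hostnames
    image: gcr.io/google_containers/serve_hostname
    ports:
    - name: http
      containerPort: 9376
      protocol: TCP
```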
## Does the Service have any Endpoints?
If you got this far, we assume that you have confirmed that your `Service`
exists and resolves by DNS. Now let's check that the `Pod`s you ran are
actually being selected by the `Service`.
Earlier we saw that the `Pod`s were running. We can re-check that:
```console
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```
The "AGE" column says that these `Pod`s are about an hour old, which implies that
they are running fine and not crashing.
The `-l app=hostnames` argument is a label selector - just like our `Service`
has. Inside the Kubernetes system is a control loop which evaluates the
selector of every `Service` and saves the results into an `Endpoints` object.
```console
$ kubectl get endpoints hostnames
NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```
This confirms that the control loop has found the correct `Pod`s for your
`Service`. If the `hostnames` row is blank, you should check that the
`spec.selector` field of your `Service` actually selects for `metadata.labels`
values on your `Pod`s.
## Are the Pods working?
At this point, we know that your `Service` exists and has selected your `Pod`s.
Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.
```console
u@pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton
u@pod$ wget -qO- 10.244.0.6:9376
hostnames-bvc05
u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
```
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
`Pod`s), you should investigate what's happening there. You might find
`kubectl logs` useful, or you can `kubectl exec` directly into your `Pod`s and
check the service from there.
## Is the kube-proxy working?
If you get here, your `Service` is running, has `Endpoints`, and your `Pod`s
are actually serving. At this point, the whole `Service` proxy mechanism is
suspect. Let's confirm it, piece by piece.
### Is kube-proxy running?
Confirm that `kube-proxy` is running on your `Node`s. You should get something
like the below:
```console
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
Next, confirm that it is not failing something obvious, like contacting the
master. To do this, you'll have to look at the logs. Accessing the logs
depends on your `Node` OS. On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
should see something like:
```console
I1027 22:14:53.995134 5063 server.go:200] Running in resource-only container "/kube-proxy"
I1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.
I1027 22:14:53.999055 5063 server.go:255] Tearing down userspace rules. Errors here are acceptable.
I1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.1.3:53]
I1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.1.3:53]
I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for "default/kubernetes:https" to [10.240.0.2:443]
I1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master
I1027 22:14:54.040048 5063 proxier.go:294] Adding new service "default/kubernetes:https" at 10.0.0.1:443/TCP
I1027 22:14:54.040154 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns" at 10.0.0.10:53/UDP
I1027 22:14:54.040223 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
```
If you see error messages about not being able to contact the master, you
should double-check your `Node` configuration and installation steps.
### Is kube-proxy writing iptables rules?
One of the main responsibilities of `kube-proxy` is to write the `iptables`
rules which implement `Service`s. Let's check that those rules are getting
written.
The kube-proxy can run in either "userspace" mode or "iptables" mode.
Hopefully you are using the newer, faster, more stable "iptables" mode. You
should see one of the following cases.
#### Userspace
```console
u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.
#### Iptables
```console
u@node$ iptables-save | grep hostnames
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
```
There should be 1 rule in `KUBE-SERVICES`, 1 or 2 rules per endpoint in
`KUBE-SVC-(hash)` (depending on `SessionAffinity`), one `KUBE-SEP-(hash)` chain
per endpoint, and a few rules in each `KUBE-SEP-(hash)` chain. The exact rules
will vary based on your exact config (including node-ports and load-balancers).
### Is kube-proxy proxying?
Assuming you do see the above rules, try again to access your `Service` by IP:
```console
u@node$ curl 10.0.1.175:80
hostnames-0uton
```
If this fails and you are using the userspace proxy, you can try accessing the
proxy directly. If you are using the iptables proxy, skip this section.
Look back at the `iptables-save` output above, and extract the
port number that `kube-proxy` is using for your `Service`. In the above
examples it is "48577". Now connect to that:
```console
u@node$ curl localhost:48577
hostnames-yp2kp
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
```console
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.
## Seek help
If you get this far, something very strange is happening. Your `Service` is
running, has `Endpoints`, and your `Pod`s are actually serving. You have DNS
working, `iptables` rules installed, and `kube-proxy` does not seem to be
misbehaving. And yet your `Service` is not working. You should probably let
us know, so we can help investigate!
Contact us on
[Slack](../troubleshooting.md#slack) or
[email](https://groups.google.com/forum/#!forum/google-containers) or
[GitHub](https://github.com/kubernetes/kubernetes).
## More information
Visit [troubleshooting document](../troubleshooting.md) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,128 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Deploying continuously running applications This file has moved to: http://kubernetes.github.io/docs/user-guide/deploying-applications/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Deploying continuously running applications](#kubernetes-user-guide-managing-applications-deploying-continuously-running-applications)
- [Launching a set of replicas using a configuration file](#launching-a-set-of-replicas-using-a-configuration-file)
- [Viewing replication controller status](#viewing-replication-controller-status)
- [Deleting replication controllers](#deleting-replication-controllers)
- [Labels](#labels)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start.md) and how to configure and launch single-run containers using pods ([Configuring containers](configuring-containers.md)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods.md)) using [*Replication Controllers*](replication-controller.md).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
View the [replication controller API
object](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html#_v1_replicationcontroller)
to view the list of supported fields.
This replication controller can be created using `create`, just as with pods:
```console
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
## Viewing replication controller status
You can view the replication controller you created using `get`:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
```
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start.md):
```console
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```console
$ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
```console
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
```
More importantly, the pod template's labels are used to create a [`selector`](labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get.md):
```console
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector, as in the sketch below.
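A sketch of such an explicit selector; the extra `deployment: v1` label value is purely illustrative, the point is that it is unique to this controller:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    app: nginx
    deployment: v1        # unique label value, also present in the template
  template:
    metadata:
      labels:
        app: nginx
        deployment: v1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```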
## What's next?
[Learn about exposing applications to users and clients, and connecting tiers of your application together.](connecting-applications.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -32,357 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Deployments This file has moved to: http://kubernetes.github.io/docs/user-guide/deployments/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Deployments](#deployments)
- [What is a _Deployment_?](#what-is-a-deployment)
- [Creating a Deployment](#creating-a-deployment)
- [Updating a Deployment](#updating-a-deployment)
- [Multiple Updates](#multiple-updates)
- [Writing a Deployment Spec](#writing-a-deployment-spec)
- [Pod Template](#pod-template)
- [Replicas](#replicas)
- [Selector](#selector)
- [Strategy](#strategy)
- [Recreate Deployment](#recreate-deployment)
- [Rolling Update Deployment](#rolling-update-deployment)
- [Max Unavailable](#max-unavailable)
- [Max Surge](#max-surge)
- [Min Ready Seconds](#min-ready-seconds)
- [Alternative to Deployments](#alternative-to-deployments)
- [kubectl rolling update](#kubectl-rolling-update)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _Deployment_?
A _Deployment_ provides declarative updates for Pods and ReplicationControllers.
Users describe the desired state in a Deployment object, and the deployment
controller changes the actual state to the desired state at a controlled rate.
Users can define Deployments to create new resources, or replace existing ones
by new ones.
A typical use case is:
* Create a Deployment to bring up a replication controller and pods.
* Later, update that Deployment to recreate the pods (for example, to use a new image).
## Creating a Deployment
Here is an example Deployment. It creates a replication controller to
bring up 3 nginx pods.
<!-- BEGIN MUNGE: EXAMPLE nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
[Download example](nginx-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
Run the example by downloading the example file and then running this command:
```console
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
deployment "nginx-deployment" created
```
Running
```console
$ kubectl get deployments
```
immediately will give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
```
This indicates that the Deployment is trying to update 3 replicas, and has not updated any of them yet.
Running the `get` again after a minute should give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
```
This indicates that the Deployment has created all three replicas.
Running ```kubectl get rc``` and ```kubectl get pods``` will show the replication controller (RC) and pods created.
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 3 2m
```
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
```
The created RC will ensure that there are three nginx pods at all times.
## Updating a Deployment
Suppose that we now want to update the nginx pods to start using the `nginx:1.9.1` image
instead of the `nginx:1.7.9` image.
For this, we update our deployment file as follows:
<!-- BEGIN MUNGE: EXAMPLE new-nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
```
[Download example](new-nginx-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
We can then `apply` the Deployment:
```console
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
Running a `get` immediately will still give:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
```
This indicates that deployment status has not been updated yet (it is still
showing old status).
Running a `get` again after a minute should show:
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
```
This indicates that the Deployment has updated one of the three pods that it needs
to update.
Eventually, it will update all the pods.
```console
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
```
We can run ```kubectl get rc``` to see that the Deployment updated the pods by creating a new RC,
which it scaled up to 3 replicas, and has scaled down the old RC to 0 replicas.
```console
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 pod-template-hash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 0 7m
```
Running `get pods` should now show only the new pods:
```console
kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
```
Next time we want to update these pods, we can just update and re-apply the Deployment again.
Deployment ensures that not all pods are down while they are being updated. By
default, it ensures that at least the desired number of pods minus one are up.
For example, if you look at the above deployment closely, you will see that
it first created a new pod, then deleted some old pods and created new ones. It
does not kill old pods until a sufficient number of new pods have come up.
```console
$ kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 22 Oct 2015 17:58:49 -0700
Labels: app=nginx-deployment
Selector: app=nginx
Replicas: 3 updated / 3 total
StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge, 0 min ready seconds
OldReplicationControllers: deploymentrc-1562004724 (3/3 replicas created)
NewReplicationController: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
10m 10m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1975012602 to 3
2m 2m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 1
2m 2m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 1
1m 1m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 3
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
```
Here we see that when we first created the Deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the Deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.
### Multiple Updates
Each time a new deployment object is observed, a replication controller is
created to bring up the desired pods if there is no existing RC doing so.
Existing RCs controlling pods whose labels match `.spec.selector` but whose
template does not match `.spec.template` are scaled down.
Eventually, the new RC will be scaled to `.spec.replicas` and all old RCs will
be scaled to 0.
If the user updates a Deployment while an existing rollout is in progress,
the Deployment will create a new RC as per the update and start scaling that up, and
will treat the RC that it was scaling up previously as an old RC: it adds it to its list of old RCs and
starts scaling it down.
For example, suppose the user creates a deployment to create 5 replicas of `nginx:1.7.9`,
but then updates the deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, deployment will immediately start
killing the 3 `nginx:1.7.9` pods that it had created, and will start creating
`nginx:1.9.1` pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created
before changing course.
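As a rough sketch of what this looks like from the command line (assuming the example files from the earlier steps; the generated RC names will differ):
```console
# apply a second update while the first rollout is still in progress
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
# the newest RC is scaled up to .spec.replicas; all older RCs are scaled down to 0
$ kubectl get rc
```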
## Writing a Deployment Spec
As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and
`metadata` fields. For general information about working with config files,
see [here](deploying-applications.md), [here](configuring-containers.md), and [here](working-with-resources.md).
A Deployment also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except that it is nested and does not have an
`apiVersion` or `kind`.
### Replicas
`.spec.replicas` is an optional field that specifies the number of desired pods. It defaults
to 1.
### Selector
`.spec.selector` is an optional field that specifies label selectors for pods
targeted by this deployment. The Deployment kills some of these pods if their
template differs from `.spec.template` or if the total number of such pods
exceeds `.spec.replicas`, and it brings up new pods with `.spec.template` if the
number of pods is less than the desired number.
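For illustration, a minimal spec fragment with an explicit selector; this sketch assumes the map-style selector used by this API version (the same shape as a replication controller's selector), and the selector must match the pod template's labels:
```yaml
spec:
  replicas: 3
  selector:
    app: nginx          # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx
```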
### Strategy
`.spec.strategy` specifies the strategy used to replace old pods by new ones.
`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.
#### Recreate Deployment
All existing pods are killed before new ones are created when
`.spec.strategy.type==Recreate`.
__Note: This is not implemented yet__.
#### Rolling Update Deployment
The Deployment updates pods in a [rolling update](update-demo/) fashion
when `.spec.strategy.type==RollingUpdate`.
Users can specify `maxUnavailable`, `maxSurge` and `minReadySeconds` to control
the rolling update process.
##### Max Unavailable
`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the
maximum number of pods that can be unavailable during the update process.
The value can be an absolute number (e.g. 5) or a percentage of desired pods
(e.g. 10%).
The absolute number is calculated from the percentage by rounding up.
This cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0.
By default, a fixed value of 1 is used.
For example, when this value is set to 30%, the old RC can be scaled down to
70% of desired pods immediately when the rolling update starts. Once new pods are
ready, old RC can be scaled down further, followed by scaling up the new RC,
ensuring that the total number of pods available at all times during the
update is at least 70% of the desired pods.
##### Max Surge
`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the
maximum number of pods that can be created above the desired number of pods.
The value can be an absolute number (e.g. 5) or a percentage of desired pods
(e.g. 10%).
This cannot be 0 if `.spec.strategy.rollingUpdate.maxUnavailable` is 0.
The absolute number is calculated from the percentage by rounding up.
By default, a value of 1 is used.
For example, when this value is set to 30%, the new RC can be scaled up immediately when
the rolling update starts, such that the total number of old and new pods do not exceed
130% of desired pods. Once old pods have been killed,
the new RC can be scaled up further, ensuring that the total number of pods running
at any time during the update is at most 130% of desired pods.
##### Min Ready Seconds
`.spec.minReadySeconds` is an optional field that specifies the
minimum number of seconds for which a newly created pod should be ready
without any of its containers crashing, for it to be considered available.
This defaults to 0 (the pod will be considered available as soon as it is ready).
To learn more about when a pod is considered ready, see [Container Probes](pod-states.md#container-probes).
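Putting these three fields together, a hedged sketch of the relevant part of a Deployment spec (the values are illustrative, not recommendations):
```yaml
spec:
  minReadySeconds: 10       # a new pod must be ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # at most 1 pod below the desired count during the update
      maxSurge: 30%         # up to 30% extra pods above the desired count
```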
## Alternative to Deployments
### kubectl rolling update
[Kubectl rolling update](kubectl/kubectl_rolling-update.md) also updates pods and replication controllers in a similar fashion.
But Deployments are declarative and operate server-side.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,313 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# kubectl for docker users

This file has moved to: http://kubernetes.github.io/docs/user-guide/docker-cli-to-kubectl/
In this doc, we introduce the Kubernetes command line for interacting with the API, aimed at docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users, but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains the kubectl equivalent.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [kubectl for docker users](#kubectl-for-docker-users)
- [docker run](#docker-run)
- [docker ps](#docker-ps)
- [docker attach](#docker-attach)
- [docker exec](#docker-exec)
- [docker logs](#docker-logs)
- [docker stop and docker rm](#docker-stop-and-docker-rm)
- [docker login](#docker-login)
- [docker version](#docker-version)
- [docker info](#docker-info)
<!-- END MUNGE: GENERATED_TOC -->
#### docker run
How do I run an nginx container and expose it to the world? Check out [kubectl run](kubectl/kubectl_run.md).
With docker:
```console
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```console
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
replicationcontroller "nginx-app" created
# expose a port through a service
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```
With kubectl, we create a [replication controller](replication-controller.md) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services.md) with a selector that matches the replication controller's selector. See the [Quick start](quick-start.md) for more information.
By default images are run in the background, similar to `docker run -d ...`. If you want to run things in the foreground, use:
```console
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`; there is no way to control which streams are attached (as with `docker -a ...`).

Because we start a replication controller for your container, it will be restarted if you terminate the attached process (e.g. via `ctrl-c`); this is different from `docker run -it`.

To destroy the replication controller (and its pods) you need to run `kubectl delete rc <name>`.
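For example, with the `nginx-app` replication controller created above (the output shown is approximate):
```console
$ kubectl delete rc nginx-app
replicationcontroller "nginx-app" deleted
```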
#### docker ps
How do I list what is currently running? Check out [kubectl get](kubectl/kubectl_get.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```console
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
```
#### docker attach
How do I attach to a process that is already running in a container? Check out [kubectl attach](kubectl/kubectl_attach.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker attach a9ec34d98787
...
```
With kubectl:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl attach -it nginx-app-5jyvm
...
```
#### docker exec
How do I execute a command in a container? Check out [kubectl exec](kubectl/kubectl_exec.md).
With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
```
With kubectl:
```console
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
What about interactive commands?
With docker:
```console
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
With kubectl:
```console
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
For more information see [Getting into containers](getting-into-containers.md).
#### docker logs
How do I follow stdout/stderr of a running process? Check out [kubectl logs](kubectl/kubectl_logs.md).
With docker:
```console
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
With kubectl:
```console
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
Now's a good time to mention a slight difference between pods and containers: by default, pods do not terminate if their processes exit. Instead, the pod restarts the process. This is similar to the docker run option `--restart=always`, with one major difference. In docker, the output for each invocation of the process is concatenated, but in Kubernetes each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```console
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
See [Logging](logging.md) for more information.
#### docker stop and docker rm
How do I stop and delete a running process? Check out [kubectl delete](kubectl/kubectl_delete.md).

With docker:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker stop a9ec34d98787
a9ec34d98787
$ docker rm a9ec34d98787
a9ec34d98787
```
With kubectl:
```console
$ kubectl get rc nginx-app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx-app nginx-app nginx run=nginx-app 1
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-aualv 1/1 Running 0 16s
$ kubectl delete rc nginx-app
replicationcontroller "nginx-app" deleted
$ kubectl get po
NAME READY STATUS RESTARTS AGE
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
#### docker login
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images.md#using-a-private-registry).
#### docker version
How do I get the version of my client and server? Check out [kubectl version](kubectl/kubectl_version.md).
With docker:
```console
$ docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
With kubectl:
```console
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32699e873ae1ca-dirty", GitCommit:"32699e873ae1caa01812e41de7eab28df4358ee4", GitTreeState:"dirty"}
```
#### docker info
How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](kubectl/kubectl_cluster-info.md).
With docker:
```console
$ docker info
Containers: 40
Images: 168
Storage Driver: aufs
Root Dir: /usr/local/google/docker/aufs
Backing Filesystem: extfs
Dirs: 248
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 12
Total Memory: 31.32 GiB
Name: k8s-is-fun.mtv.corp.google.com
ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
With kubectl:
```console
$ kubectl cluster-info
Kubernetes master is running at https://108.59.85.141
KubeDNS is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,159 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API

This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/
It is sometimes useful for a container to have information about itself, but we
want to be careful not to over-couple containers to Kubernetes. The downward
API allows containers to consume information about themselves or the system and
expose that information however they want, without necessarily coupling to the
Kubernetes client or REST API.
An example of this is a "legacy" app that is already written assuming
that a particular environment variable will hold a unique identifier. While it
is often possible to "wrap" such applications, this is tedious and error-prone,
and violates the goal of low coupling. Instead, the user should be able to use
the Pod's name, for example, and inject it into this well-known variable.
## Capabilities
The following information is available to a `Pod` through the downward API:
* The pod's name
* The pod's namespace
* The pod's IP
More information will be exposed through this same API over time.
## Exposing pod information into a container
Containers consume information from the downward API using environment
variables or using a volume plugin.
### Environment variables
Most environment variables in the Kubernetes API use the `value` field to carry
simple values. However, the alternate `valueFrom` field allows you to specify
a `fieldRef` to select fields from the pod's definition. The `fieldRef` field
is a structure that has an `apiVersion` field and a `fieldPath` field. The
`fieldPath` field is an expression designating a field of the pod. The
`apiVersion` field is the version of the API schema that the `fieldPath` is
written in terms of. If the `apiVersion` field is not specified, it defaults
to the API version of the enclosing object.
The `fieldRef` is evaluated and the resulting value is used as the value for
the environment variable. This allows users to publish their pod's name in any
environment variable they want.
## Example
This is an example of a pod that consumes its name and namespace via the
downward API:
<!-- BEGIN MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never
```
[Download example](downward-api/dapi-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
### Downward API volume
Using a similar syntax it's possible to expose pod information to containers using plain text files.
Downward API data is dumped to a mounted volume. This is achieved using a `downwardAPI`
volume type, and the different items represent the files to be created; `fieldPath` references the field to be exposed.
The downward API volume permits storing more complex data like [`metadata.labels`](labels.md) and [`metadata.annotations`](annotations.md). Currently key/value pair set fields are saved using the `key="value"` format:
```
key1="value1"
key2="value2"
```
In the future, it will be possible to specify an output format option.
Downward API volumes can expose:
* The pod's name
* The pod's namespace
* The pod's labels
* The pod's annotations
The downward API volume refreshes its data in step with the kubelet refresh loop. Once labels become modifiable on the fly without respawning the pod, containers will be able to detect changes through mechanisms such as [inotify](https://en.wikipedia.org/wiki/Inotify).
In the future, it will be possible to specify a specific annotation or label.
## Example
This is an example of a pod that consumes its labels and annotations via the downward API volume; labels and annotations are dumped in `/etc/labels` and `/etc/annotations`, respectively:
<!-- BEGIN MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
  - name: client-container
    image: gcr.io/google_containers/busybox
    command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc
      readOnly: false
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
```
[Download example](downward-api/volume/dapi-volume.yaml?raw=true)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
Some more thorough examples:
* [environment variables](environment-guide/)
* [downward API](downward-api/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,40 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API example

This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/README/
Following this example, you will create a pod with a container that consumes the pod's name and
namespace using the [downward API](../downward-api.md).
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the pod
Containers consume the downward API using environment variables. The downward API allows
containers to be injected with the name and namespace of the pod the container is in.
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
```console
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
```console
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,73 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Downward API volume plugin

This file has moved to: http://kubernetes.github.io/docs/user-guide/downward-api/volume/README/
Following this example, you will create a pod with a downward API volume.
A downward API volume is a k8s volume plugin with the ability to save some pod information in a plain text file. The pod information can be for example some [metadata](../../../../docs/devel/api-conventions.md#metadata).
Supported metadata fields:
1. `metadata.annotations`
2. `metadata.namespace`
3. `metadata.name`
4. `metadata.labels`
### Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and the `kubectl` command line tool somewhere in your path. Please see the [getting started guides](../../../../docs/getting-started-guides/) for installation instructions for your platform.
### Step One: Create the pod
Use the `docs/user-guide/downward-api/volume/dapi-volume.yaml` file to create a Pod with a downward API volume which stores pod labels and pod annotations to `/etc/labels` and `/etc/annotations` respectively.
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```
### Step Two: Examine pod/container output
The pod displays (every 5 seconds) the content of the dump files, which can be viewed via the usual `kubectl logs` command
```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
build="two"
builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```
### Internals
In the pod's `/etc` directory one may find the files created by the plugin (system files elided):
```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
/etc:
total 32
drwxrwxrwt 3 0 0 180 Aug 24 13:03 .
drwxr-xr-x 1 0 0 4096 Aug 24 13:05 ..
drwx------ 2 0 0 80 Aug 24 13:03 ..2015_08_24_13_03_44259413923
lrwxrwxrwx 1 0 0 30 Aug 24 13:03 ..downwardapi -> ..2015_08_24_13_03_44259413923
lrwxrwxrwx 1 0 0 25 Aug 24 13:03 annotations -> ..downwardapi/annotations
lrwxrwxrwx 1 0 0 20 Aug 24 13:03 labels -> ..downwardapi/labels
/etc/..2015_08_24_13_03_44259413923:
total 8
drwx------ 2 0 0 80 Aug 24 13:03 .
drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
-rw-r--r-- 1 0 0 115 Aug 24 13:03 annotations
-rw-r--r-- 1 0 0 53 Aug 24 13:03 labels
/ #
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection. This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -31,95 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Environment Guide Example
=========================
This example demonstrates running pods, replication controllers, and
services. It shows two types of pods: frontend and backend, with
services on top of both. Accessing the frontend pod will return
environment information about itself, and a backend pod that it has
accessed through the service. The goal is to illuminate the
environment metadata available to running containers inside the
Kubernetes cluster. The documentation for the Kubernetes environment
is [here](../../../docs/user-guide/container-environment.md).
![Diagram](diagram.png)

This file has moved to: http://kubernetes.github.io/docs/user-guide/environment-guide/README/
Prerequisites
-------------
This example assumes that you have a Kubernetes cluster installed and
running, and that you have installed the `kubectl` command line tool
somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions
for your platform.
Optional: Build your own containers
-----------------------------------
The code for the containers is under
[containers/](containers/)
Get everything running
----------------------
kubectl create -f ./backend-rc.yaml
kubectl create -f ./backend-srv.yaml
kubectl create -f ./show-rc.yaml
kubectl create -f ./show-srv.yaml
Query the service
-----------------
Use `kubectl describe service show-srv` to determine the public IP of
your service.
> Note: If your platform does not support external load balancers,
you'll need to open the proper port and direct traffic to the
internal IP shown for the frontend service with the above command
Run `curl <public ip>:80` to query the service. You should get
something like this back:
```
Pod Name: show-rc-xxu6i
Pod Namespace: default
USER_VAR: important information
Kubernetes environment variables
BACKEND_SRV_SERVICE_HOST = 10.147.252.185
BACKEND_SRV_SERVICE_PORT = 5000
KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
KUBERNETES_RO_SERVICE_PORT = 80
KUBERNETES_SERVICE_HOST = 10.147.240.2
KUBERNETES_SERVICE_PORT = 443
KUBE_DNS_SERVICE_HOST = 10.147.240.10
KUBE_DNS_SERVICE_PORT = 53
Found backend ip: 10.147.252.185 port: 5000
Response from backend
Backend Container
Backend Pod Name: backend-rc-6qiya
Backend Namespace: default
```
First the frontend pod's information is printed. The pod name and
[namespace](../../../docs/design/namespaces.md) are retrieved from the
[Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is the name of
an environment variable set in the [pod
definition](show-rc.yaml). Then, the dynamic Kubernetes environment
variables are scanned and printed. These are used to find the backend
service, named `backend-srv`. Finally, the frontend pod queries the
backend service and prints the information returned. Again the backend
pod returns its own pod name and namespace.
Try running the `curl` command a few times and notice what
changes, e.g. `watch -n 1 curl -s <ip>`. First, the frontend service
directs your request to different frontend pods each time. The
frontend pods always contact the backend through the backend
service, so a different backend pod services each
request as well.
Cleanup
-------
kubectl delete rc,service -l type=show-type
kubectl delete rc,service -l type=backend-type
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -31,26 +31,8 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Building
--------
For each container, the build steps are the same. The examples below
are for the `show` container. Replace `show` with `backend` for the
backend container.
This file has moved to: http://kubernetes.github.io/docs/user-guide/environment-guide/containers/README/

Google Container Registry ([GCR](https://cloud.google.com/tools/container-registry/))
---
docker build -t gcr.io/<project-name>/show .
gcloud docker push gcr.io/<project-name>/show
Docker Hub
----------
docker build -t <username>/show .
docker push <username>/show
Change Pod Definitions
----------------------
Edit both `show-rc.yaml` and `backend-rc.yaml` and replace the
specified `image:` with the one that you built.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,77 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Getting into containers: kubectl exec

This file has moved to: http://kubernetes.github.io/docs/user-guide/getting-into-containers/
Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases.
## Using kubectl exec to check the environment variables of a container
Kubernetes exposes [services](services.md#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service,
```console
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```
wait until the pod is Running and Ready,
```console
$ kubectl get pod
NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
```
then we can check the environment variables of the pod,
```console
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```
We can use these environment variables in applications to find the service.
## Using kubectl exec to check the mounted volumes
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis,
```console
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```
wait until the pod is Running and Ready,
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m
```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
```console
$ kubectl exec storage ls /data
redis
```
## Using kubectl exec to open a bash terminal in a pod
Finally, opening a terminal in a pod is the most direct way to introspect it. Assuming pod/storage is still running, run
```console
$ kubectl exec -ti storage -- bash
root@storage:/data#
```
This gets you a terminal.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,98 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Horizontal Pod Autoscaler

This file has moved to: http://kubernetes.github.io/docs/user-guide/horizontal-pod-autoscaler/
This document describes the current state of Horizontal Pod Autoscaler in Kubernetes.
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Horizontal Pod Autoscaler](#horizontal-pod-autoscaler)
- [What is Horizontal Pod Autoscaler?](#what-is-horizontal-pod-autoscaler)
- [How does Horizontal Pod Autoscaler work?](#how-does-horizontal-pod-autoscaler-work)
- [API Object](#api-object)
- [Support for horizontal pod autoscaler in kubectl](#support-for-horizontal-pod-autoscaler-in-kubectl)
- [Autoscaling during rolling update](#autoscaling-during-rolling-update)
- [Further reading](#further-reading)
<!-- END MUNGE: GENERATED_TOC -->
## What is Horizontal Pod Autoscaler?
Horizontal pod autoscaling allows the number of pods in a replication controller or deployment
to scale automatically based on observed CPU utilization.
It is a [beta](../api.md#api-versioning) feature in Kubernetes 1.1.
The autoscaler is implemented as a Kubernetes API resource and a controller.
The resource describes behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by the user.
## How does Horizontal Pod Autoscaler work?
![Horizontal Pod Autoscaler diagram](horizontal-pod-autoscaler.png)
The autoscaler is implemented as a control loop.
It periodically queries CPU utilization for the pods it targets.
(The period of the autoscaler is controlled by the `--horizontal-pod-autoscaler-sync-period` flag of the controller manager.
The default value is 30 seconds.)
Then, it compares the arithmetic mean of the pods' CPU utilization with the target and adjusts the number of replicas if needed.
CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers.
Please note that if some of the pod's containers do not have a CPU request set,
CPU utilization for the pod will not be defined, and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](../design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
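As a hedged sketch of the arithmetic (the exact algorithm is described in the design doc linked above):
```
# desiredReplicas = ceil( sum(per-pod CPU utilization) / target utilization )
# e.g. 3 pods at 90%, 110%, and 100% utilization with a 50% target:
#   desiredReplicas = ceil( (0.90 + 1.10 + 1.00) / 0.50 ) = ceil(6.0) = 6
```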
The autoscaler uses Heapster to collect CPU utilization; therefore, Heapster monitoring must be deployed in the cluster for autoscaling to work.
The autoscaler accesses the corresponding replication controller or deployment via the scale sub-resource.
Scale is an interface that allows you to dynamically set the number of replicas and to learn their current state.
More details on the scale sub-resource can be found [here](../design/horizontal-pod-autoscaler.md#scale-subresource).
## API Object
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API (currently in [beta](../api.md#api-versioning)).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](../design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for horizontal pod autoscaler in kubectl
The horizontal pod autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers with `kubectl get hpa` and get a detailed description with `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command that allows for easy creation of a horizontal pod autoscaler.
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](kubectl/kubectl_autoscale.md).
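As a concrete sketch of the command above (`foo` is a placeholder name; the output format matches the walkthrough example later in these docs):
```console
$ kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80
replicationcontroller "foo" autoscaled
```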
## Autoscaling during rolling update
Currently in Kubernetes, it is possible to perform a rolling update by managing replication controllers directly,
or by using the deployment object, which manages the underlying replication controllers for you.
The horizontal pod autoscaler only supports the latter approach: it is bound to the deployment object,
it sets the size of the deployment object, and the deployment is responsible for setting the sizes of the underlying replication controllers.
Horizontal pod autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a horizontal pod autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when rolling update creates a new replication controller,
the horizontal pod autoscaler will not be bound to the new replication controller.
## Further reading
* Design documentation: [Horizontal Pod Autoscaling](../design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](kubectl/kubectl_autoscale.md).
* Usage example of [Horizontal Pod Autoscaler](horizontal-pod-autoscaling/README.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,190 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Horizontal Pod Autoscaler

This file has moved to: http://kubernetes.github.io/docs/user-guide/horizontal-pod-autoscaling/README/
Horizontal pod autoscaling is a [beta](../../../docs/api.md#api-versioning) feature in Kubernetes 1.1.
It allows the number of pods in a replication controller or deployment to scale automatically based on observed CPU usage.
In the future, other metrics will also be supported.
In this document we explain how this feature works by walking you through an example of enabling horizontal pod autoscaling with the php-apache server.
## Prerequisites
This example requires a running Kubernetes cluster and kubectl at version 1.1 or later.
[Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster
as the horizontal pod autoscaler uses it to collect metrics
(if you followed the [getting started on GCE guide](../../../docs/getting-started-guides/gce.md),
Heapster monitoring will be turned on by default).
## Step One: Run & expose php-apache server
To demonstrate the horizontal pod autoscaler, we will use a custom docker image based on the php-apache server.
The image can be found [here](image/).
It defines [index.php](image/index.php) page which performs some CPU intensive computations.
First, we will start a replication controller running the image and expose it as an external service:
<a name="kubectl-run"></a>
```console
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
php-apache-wa3t1 1/1 Running 0 12m
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
```
We may now check that the php-apache server works correctly by calling ``curl`` with the service's IP:
```console
$ curl http://146.148.24.244
OK!
```
Please notice that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g.: on GCE).
If load balancers are not supported (e.g.: on Vagrant), we can expose php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
```console
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
$ kubectl cluster-info | grep master
Kubernetes master is running at https://146.148.6.215
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
```
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleRef:
    kind: ReplicationController
    name: php-apache
    namespace: default
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 50
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
(via the replication controller) so as to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU utilization of 100 milli-cores).
See [here](../../../docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
We will create the autoscaler by executing the following command:
```console
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale.md).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
```console
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
replicationcontroller "php-apache" autoscaled
```
We may check the current status of autoscaler by running:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
## Step Three: Increase load
Now, we will see how the autoscaler reacts to the increased load on the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
```console
$ while true; do curl http://146.148.24.244; done
```
We may examine how the CPU load increased (the results should be visible after about 3-4 minutes) by executing:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
```console
$ while true; do curl http://146.148.24.244; done
```
In the case presented here, it increased the number of serving pods to 10:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 65% 1 10 14m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
```console
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 21m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,19 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Identifiers

This file has moved to: http://kubernetes.github.io/docs/user-guide/identifiers/
All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
## Names
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be at most 253 characters long and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](../design/identifiers.md) for the precise syntax rules for names.
## UIDs
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
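As a quick sketch, both identifiers can be read from any object; `some-name` here is the hypothetical pod from the resource URL above:
```console
# print the client-provided name and the server-generated UID of a pod
$ kubectl get pod some-name -o jsonpath='{.metadata.name} {.metadata.uid}'
```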
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File
@ -32,284 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Images

This file has moved to: http://kubernetes.github.io/docs/user-guide/images/
Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Images](#images)
- [Updating Images](#updating-images)
- [Using a Private Registry](#using-a-private-registry)
- [Using Google Container Registry](#using-google-container-registry)
- [Using AWS EC2 Container Registry](#using-aws-ec2-container-registry)
- [Configuring Nodes to Authenticate to a Private Repository](#configuring-nodes-to-authenticate-to-a-private-repository)
- [Pre-pulling Images](#pre-pulling-images)
- [Specifying ImagePullSecrets on a Pod](#specifying-imagepullsecrets-on-a-pod)
- [Creating a Secret with a Docker Config](#creating-a-secret-with-a-docker-config)
- [Bypassing kubectl create secrets](#bypassing-kubectl-create-secrets)
- [Referring to an imagePullSecrets on a Pod](#referring-to-an-imagepullsecrets-on-a-pod)
- [Use Cases](#use-cases)
<!-- END MUNGE: GENERATED_TOC -->
## Updating Images
The default pull policy is `IfNotPresent` which causes the Kubelet to not
pull an image if it already exists. If you would like to always force a pull
you must set a pull image policy of `Always` or specify a `:latest` tag on
your image.
If you do not specify a tag for your image, it is assumed to be `:latest`, with
a pull image policy of `Always` correspondingly.
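For example, a hedged container fragment that forces a pull on every start (the image name is hypothetical):
```yaml
spec:
  containers:
  - name: app
    image: registry.example.com/app:v2   # hypothetical private image
    imagePullPolicy: Always              # always pull, even if a local copy exists
```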
## Using a Private Registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
- Using Google Container Registry
  - Per-cluster
  - automatically configured on Google Compute Engine or Google Container Engine
  - all pods can read the project's private registry
- Configuring Nodes to Authenticate to a Private Registry
  - all pods can read any configured private registries
  - requires node configuration by cluster administrator
- Pre-pulling Images
  - all pods can use any images cached on a node
  - requires root access to all nodes to set up
- Specifying ImagePullSecrets on a Pod
  - only pods which provide their own keys can access the private registry
Each option is described in more detail below.
### Using Google Container Registry
Kubernetes has native support for the [Google Container
Registry (GCR)](https://cloud.google.com/tools/container-registry/), when running on Google Compute
Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
use the full image name (e.g. gcr.io/my_project/image:tag).
All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance's
Google service account. The service account on the instance
will have the `https://www.googleapis.com/auth/devstorage.read_only` scope,
so it can pull from the project's GCR, but not push.
### Using AWS EC2 Container Registry
Kubernetes has native support for the [AWS EC2 Container
Registry](https://aws.amazon.com/ecr/), when nodes are AWS instances.
Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
in the Pod definition.
All users of the cluster who can create pods will be able to run pods that use any of the
images in the ECR registry.
The kubelet will fetch and periodically refresh ECR credentials. It needs the
`ecr:GetAuthorizationToken` permission to do this.
### Configuring Nodes to Authenticate to a Private Repository
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.
**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.
Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put this
in the `$HOME` of user `root` on a kubelet, then docker will use it.
Here are the recommended steps to configure your nodes to use a private registry. In this
example, run these on your desktop/laptop:
1. run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
1. view `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
1. get a list of your nodes, for example:
- if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
- if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
1. copy your local `.docker/config.json` to the home directory of root on each node.
- for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`
Verify by creating a pod that uses a private image, e.g.:
```console
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```
If everything is working, then, after a few moments, you should see:
```console
$ kubectl logs private-image-test-1
SUCCESS
```
If it failed, then you will see:
```console
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.docker/config.json` or mount a drive that contains it.
All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.
**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
It should also work for a private registry such as quay.io, but that has not been tested.**
### Pre-pulling Images
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.
**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.
By default, the kubelet will try to pull each image from the specified registry.
However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication,
you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
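A hedged fragment that relies exclusively on a pre-pulled image (the names are hypothetical):
```yaml
spec:
  containers:
  - name: app
    image: registry.example.com/app:v2   # must already be present on the node
    imagePullPolicy: Never               # never contact a registry
```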
### Specifying ImagePullSecrets on a Pod
**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud-providers
where node creation is automated.
Kubernetes supports specifying registry keys on a pod.
#### Creating a Secret with a Docker Config
Run the following command, substituting the appropriate uppercase values:
```console
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
```
If you need access to multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
when pulling images for your Pods.
Pods can only reference image pull secrets in their own namespace,
so this process needs to be done one time per namespace.
##### Bypassing kubectl create secrets
If for some reason you need multiple items in a single `.docker/config.json` or need
control not given by the above command, then you can [create a secret using
json or yaml](secrets.md#creating-a-secret-manually).
Be sure to:
- set the name of the data item to `.dockerconfigjson`
- base64 encode the docker config file and paste that string, unbroken, as the value for the field `data[".dockerconfigjson"]`
- set `type` to `kubernetes.io/dockerconfigjson`
Example:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
```
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...` it means
the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file.
#### Referring to an imagePullSecrets on a Pod
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
```
This needs to be done for each pod that is using a private registry.
However, setting this field can be automated by setting `imagePullSecrets`
in a [serviceAccount](service-accounts.md) resource.
You can use this in conjunction with a per-node `.docker/config.json`. The credentials
will be merged. This approach will work on Google Container Engine (GKE).
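A hedged sketch of such a service account; pods that use it (here, the `default` service account in `awesomeapps`) get the `imagePullSecrets` automatically:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: awesomeapps
imagePullSecrets:
- name: myregistrykey
```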
### Use Cases
There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g open-source) images. No need to hide images.
- Use public images on the Docker hub.
- no configuration required
- on GCE/GKE, a local mirror is automatically used for improved speed and availability
1. Cluster running some proprietary images which should be hidden from those outside the company, but
   visible to all cluster users.
- Use a hosted private [Docker registry](https://docs.docker.com/registry/)
- may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
- manually configure .docker/config.json on each node as described above
- Or, run an internal private registry behind your firewall with open read access.
- no Kubernetes configuration required
- Or, when on GCE/GKE, use the project's Google Container Registry.
- will work better with cluster autoscaling than manual node configuration
- Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
1. Cluster with proprietary images, a few of which require stricter access control
- ensure [AlwaysPullImages admission controller](../../docs/admin/admission-controllers.md#alwayspullimages) is active, otherwise, all Pods potentially have access to all images
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
1. A multi-tenant cluster where each tenant needs its own private registry
- ensure [AlwaysPullImages admission controller](../../docs/admin/admission-controllers.md#alwayspullimages) is active, otherwise, all Pods of all tenants potentially have access to all images
- run a private registry with authorization required.
- generate registry credentials for each tenant, put them into a secret, and populate the secret into each tenant's namespace.
- the tenant adds that secret to the `imagePullSecrets` of each pod in their namespace.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,287 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Ingress This file has moved to: http://kubernetes.github.io/docs/user-guide/ingress/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Ingress](#ingress)
- [What is Ingress?](#what-is-ingress)
- [Prerequisites](#prerequisites)
- [The Ingress Resource](#the-ingress-resource)
- [Ingress controllers](#ingress-controllers)
- [Types of Ingress](#types-of-ingress)
- [Single Service Ingress](#single-service-ingress)
- [Simple fanout](#simple-fanout)
- [Name based virtual hosting](#name-based-virtual-hosting)
- [Loadbalancing](#loadbalancing)
- [Updating an Ingress](#updating-an-ingress)
- [Future Work](#future-work)
- [Alternatives](#alternatives)
<!-- END MUNGE: GENERATED_TOC -->
__Terminology__
Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere and that might cause confusion. This section attempts to clarify them.
* Node: A single virtual or physical machine in a Kubernetes cluster.
* Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloudprovider or a physical piece of hardware.
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/admin/networking.md). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/admin/ovs-networking.md).
* Service: A Kubernetes [Service](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
## What is Ingress?
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
```
internet
|
------------
[ Services ]
```
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
internet
|
[ Ingress ]
--|-----|--
[ Services ]
```
It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Users request Ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
## Prerequisites
Before you start using the Ingress resource, there are a few things you should understand:
* The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1.
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
* On GCE/GKE there should be an [L7 cluster addon](../../cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites); on other platforms you either need to write your own or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/Ingress) as a pod.
* The resource currently does not support HTTPS, but will do so before it leaves beta.
## The Ingress Resource
A minimal Ingress might look like:
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
03. metadata:
04. name: test-ingress
05. spec:
06. rules:
07. - http:
08. paths:
09. - path: /testpath
10. backend:
11. serviceName: test
12. servicePort: 80
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](deploying-applications.md), [configuring containers](configuring-containers.md), and [working with resources](working-with-resources.md) documents.
__Lines 5-7__: Ingress [spec](../devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
__Lines 8-9__: Each http rule contains the following information: a host (e.g. foo.bar.com; defaults to * in this example) and a list of paths (e.g. /testpath), each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
__Lines 10-12__: A backend is a service:port combination as described in the [services doc](services.md). Ingress traffic is typically sent directly to the endpoints matching a backend.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters, see the [api-reference](../../pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller. Though the Ingress resource doesn't support HTTPS yet, security configs would also be global.
## Ingress controllers
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/contrib/tree/master/Ingress).
## Types of Ingress
### Single Service Ingress
There are existing Kubernetes concepts that allow you to expose a single service (see [alternatives](#alternatives)), however you can do so through an Ingress as well, by specifying a *default backend* with no rules.
<!-- BEGIN MUNGE: EXAMPLE ingress.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
```
[Download example](ingress.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
If you create it using `kubectl create -f` you should see:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
```
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
### Simple fanout
As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually one or more highly available loadbalancers. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:
```
foo.bar.com -> 178.91.123.132 -> / foo s1:80
/ bar s2:80
```
would require an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: s1
servicePort: 80
- path: /bar
backend:
serviceName: s2
servicePort: 80
```
When you create the Ingress with `kubectl create -f`:
```
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test -
foo.bar.com
/foo s1:80
/bar s2:80
```
The Ingress controller will provision an implementation specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.
### Name based virtual hosting
Name-based virtual hosts use multiple host names for the same IP address.
```
foo.bar.com --| |-> foo.bar.com s1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
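For example, the following sketch combines a host rule with a default backend; the `default-404` service name is hypothetical:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  backend:
    serviceName: default-404
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
```

Requests for any other host or path are sent to `default-404`.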
### Loadbalancing
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
## Updating an Ingress
Say you'd like to add a new Host to an existing Ingress; you can update it by editing the resource:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test - 178.91.123.132
foo.bar.com
/foo s1:80
$ kubectl edit ing test
```
This should pop up an editor with the existing yaml; modify it to include the new Host:
```yaml
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
path: /foo
- host: bar.baz.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80
path: /foo
..
```
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test - 178.91.123.132
foo.bar.com
/foo s1:80
bar.baz.com
/foo s2:80
```
You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
## Future Work
* Various modes of HTTPS/TLS support (edge termination, SNI, etc.)
* Requesting an IP or Hostname via claims
* Combining L4 and L7 Ingress
* More Ingress controllers
Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/Ingress) for more details on the evolution of various Ingress controllers.
## Alternatives
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
* Use [Service.Type=LoadBalancer](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-loadbalancer)
* Use [Service.Type=NodePort](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-nodeport)
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,327 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes User Guide: Managing Applications: Application Introspection and Debugging This file has moved to: http://kubernetes.github.io/docs/user-guide/introspection-and-debugging/
Once your application is running, youll inevitably need to debug problems with it.
Earlier we described how you can use `kubectl get pods` to retrieve simple status information about
your pods. But there are a number of ways to get even more information about your application.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes User Guide: Managing Applications: Application Introspection and Debugging](#kubernetes-user-guide-managing-applications-application-introspection-and-debugging)
- [Using `kubectl describe pod` to fetch details about pods](#using-kubectl-describe-pod-to-fetch-details-about-pods)
- [Example: debugging Pending Pods](#example-debugging-pending-pods)
- [Example: debugging a down/unreachable node](#example-debugging-a-downunreachable-node)
- [What's next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Using `kubectl describe pod` to fetch details about pods
For this example well use a ReplicationController to create two pods, similar to the earlier example.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
```
```console
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
```
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
```console
$ kubectl describe pod my-nginx-gy1ij
Name: my-nginx-gy1ij
Image(s): nginx
Node: kubernetes-node-y3vk/10.240.154.168
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: my-nginx (2/2 replicas created)
Containers:
nginx:
Image: nginx
Limits:
cpu: 500m
memory: 128Mi
State: Running
Started: Thu, 09 Jul 2015 15:33:07 -0700
Ready: True
Restart Count: 0
Conditions:
Type Status
Ready True
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {scheduler } scheduled Successfully assigned my-nginx-gy1ij to kubernetes-node-y3vk
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD created Created with docker id cd1644065066
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD started Started with docker id cd1644065066
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
The container state is one of Waiting, Running, or Terminated. Depending on the state, additional information will be provided -- here you can see that for a container in Running state, the system tells you when the container started.
Ready tells you whether the container passed its last readiness probe. (In this case, the container does not have a readiness probe configured; the container is assumed to be ready if no readiness probe is configured.)
Restart Count tells you how many times the container has restarted; this information can be useful for detecting crash loops in containers that are configured with a restart policy of “always.”
Currently the only Condition associated with a Pod is the binary Ready condition, which indicates that the pod is able to service requests and should be added to the load balancing pools of all matching services.
Lastly, you see a log of recent events related to your Pod. The system compresses multiple identical events by indicating the first and last time it was seen and the number of times it was seen. "From" indicates the component that is logging the event, "SubobjectPath" tells you which object (e.g. container within the pod) is being referred to, and "Reason" and "Message" tell you what happened.
## Example: debugging Pending Pods
A common scenario that you can detect using events is when youve created a Pod that wont fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesnt match any nodes. Lets say we created the previous Replication Controller with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
```console
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-9unp9 0/1 Pending 0 8s
my-nginx-b7zs9 0/1 Running 0 8s
my-nginx-i595c 0/1 Running 0 8s
my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
```console
$ kubectl describe pod my-nginx-9unp9
Name: my-nginx-9unp9
Image(s): nginx
Node: /
Labels: app=nginx
Status: Pending
Reason:
Message:
IP:
Replication Controllers: my-nginx (5/5 replicas created)
Containers:
nginx:
Image: nginx
Limits:
cpu: 600m
memory: 128Mi
State: Waiting
Ready: False
Restart Count: 0
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
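For example, continuing the scenario above (a sketch; the exact output format varies by release):

```console
$ kubectl scale rc my-nginx --replicas=4
replicationcontroller "my-nginx" scaled
```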
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
```
kubectl get events
```
but you have to remember that events are namespaced. This means that if you're interested in events for some namespaced object (e.g. what happened with Pods in namespace `my-namespace`) you need to explicitly provide a namespace to the command:
```
kubectl get events --namespace=my-namespace
```
To see events from all namespaces, you can use the `--all-namespaces` argument.
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata without the label restrictions, used internally by Kubernetes system components), restart policy, ports, and volumes.
```yaml
$ kubectl get pod my-nginx-i595c -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: '{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c555c14f-26d0-11e5-99cb-42010af00e4b","apiVersion":"v1","resourceVersion":"26174"}}'
creationTimestamp: 2015-07-10T06:56:21Z
generateName: my-nginx-
labels:
app: nginx
name: my-nginx-i595c
namespace: default
resourceVersion: "26243"
selfLink: /api/v1/namespaces/default/pods/my-nginx-i595c
uid: c558e44b-26d0-11e5-99cb-42010af00e4b
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 600m
memory: 128Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-zkhkk
readOnly: true
dnsPolicy: ClusterFirst
nodeName: kubernetes-node-u619
restartPolicy: Always
serviceAccountName: default
volumes:
- name: default-token-zkhkk
secret:
secretName: default-token-zkhkk
status:
conditions:
- status: "True"
type: Ready
containerStatuses:
- containerID: docker://9506ace0eb91fbc31aef1d249e0d1d6d6ef5ebafc60424319aad5b12e3a4e6a9
image: nginx
imageID: docker://319d2015d149943ff4d2a20ddea7d7e5ce06a64bbab1792334c0d3273bbbff1e
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2015-07-10T06:56:28Z
hostIP: 10.240.112.234
phase: Running
podIP: 10.244.3.4
startTime: 2015-07-10T06:56:21Z
```
## Example: debugging a down/unreachable node
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod thats running on the node, or to find out why a Pod wont schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
```console
$ kubectl get nodes
NAME LABELS STATUS
kubernetes-node-861h kubernetes.io/hostname=kubernetes-node-861h NotReady
kubernetes-node-bols kubernetes.io/hostname=kubernetes-node-bols Ready
kubernetes-node-st6x kubernetes.io/hostname=kubernetes-node-st6x Ready
kubernetes-node-unaj kubernetes.io/hostname=kubernetes-node-unaj Ready
$ kubectl describe node kubernetes-node-861h
Name: kubernetes-node-861h
Labels: kubernetes.io/hostname=kubernetes-node-861h
CreationTimestamp: Fri, 10 Jul 2015 14:32:29 -0700
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
Ready Unknown Fri, 10 Jul 2015 14:34:32 -0700 Fri, 10 Jul 2015 14:35:15 -0700 Kubelet stopped posting node status.
Addresses: 10.240.115.55,104.197.0.26
Capacity:
cpu: 1
memory: 3800808Ki
pods: 100
Version:
Kernel Version: 3.16.0-0.bpo.4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://Unknown
Kubelet Version: v0.21.1-185-gffc5a86098dc01
Kube-Proxy Version: v0.21.1-185-gffc5a86098dc01
PodCIDR: 10.244.0.0/24
ExternalID: 15233045891481496305
Pods: (0 in total)
Namespace Name
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Fri, 10 Jul 2015 14:32:28 -0700 Fri, 10 Jul 2015 14:32:28 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:32:30 -0700 Fri, 10 Jul 2015 14:32:30 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:33:00 -0700 Fri, 10 Jul 2015 14:33:00 -0700 1 {kubelet kubernetes-node-861h} starting Starting kubelet.
Fri, 10 Jul 2015 14:33:02 -0700 Fri, 10 Jul 2015 14:33:02 -0700 1 {kubelet kubernetes-node-861h} NodeReady Node kubernetes-node-861h status is now: NodeReady
Fri, 10 Jul 2015 14:35:15 -0700 Fri, 10 Jul 2015 14:35:15 -0700 1 {controllermanager } NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
$ kubectl get node kubernetes-node-861h -o yaml
apiVersion: v1
kind: Node
metadata:
creationTimestamp: 2015-07-10T21:32:29Z
labels:
kubernetes.io/hostname: kubernetes-node-861h
name: kubernetes-node-861h
resourceVersion: "757"
selfLink: /api/v1/nodes/kubernetes-node-861h
uid: 2a69374e-274b-11e5-a234-42010af0d969
spec:
externalID: "15233045891481496305"
podCIDR: 10.244.0.0/24
providerID: gce://striped-torus-760/us-central1-b/kubernetes-node-861h
status:
addresses:
- address: 10.240.115.55
type: InternalIP
- address: 104.197.0.26
type: ExternalIP
capacity:
cpu: "1"
memory: 3800808Ki
pods: "100"
conditions:
- lastHeartbeatTime: 2015-07-10T21:34:32Z
lastTransitionTime: 2015-07-10T21:35:15Z
reason: Kubelet stopped posting node status.
status: Unknown
type: Ready
nodeInfo:
bootID: 4e316776-b40d-4f78-a4ea-ab0d73390897
containerRuntimeVersion: docker://Unknown
kernelVersion: 3.16.0-0.bpo.4-amd64
kubeProxyVersion: v0.21.1-185-gffc5a86098dc01
kubeletVersion: v0.21.1-185-gffc5a86098dc01
machineID: ""
osImage: Debian GNU/Linux 7 (wheezy)
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
```
## What's next?
Learn about additional debugging tools, including:
* [Logging](logging.md)
* [Monitoring](monitoring.md)
* [Getting into containers via `exec`](getting-into-containers.md)
* [Connecting to containers via proxies](connecting-to-applications-proxy.md)
* [Connecting to containers via port forwarding](connecting-to-applications-port-forward.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,391 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# Jobs This file has moved to: http://kubernetes.github.io/docs/user-guide/jobs/
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Jobs](#jobs)
- [What is a _job_?](#what-is-a-job)
- [Running an example Job](#running-an-example-job)
- [Writing a Job Spec](#writing-a-job-spec)
- [Pod Template](#pod-template)
- [Pod Selector](#pod-selector)
- [Parallel Jobs](#parallel-jobs)
- [Controlling Parallelism](#controlling-parallelism)
- [Handling Pod and Container Failures](#handling-pod-and-container-failures)
- [Job Patterns](#job-patterns)
- [Advanced Usage](#advanced-usage)
- [Specifying your own pod selector](#specifying-your-own-pod-selector)
- [Alternatives](#alternatives)
- [Bare Pods](#bare-pods)
- [Replication Controller](#replication-controller)
- [Single Job starts Controller Pod](#single-job-starts-controller-pod)
- [Caveats](#caveats)
- [Future work](#future-work)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _job_?
A _job_ creates one or more pods and ensures that a specified number of them successfully terminate.
As pods successfully complete, the _job_ tracks the successful completions. When a specified number
of successful completions is reached, the job itself is complete. Deleting a Job will clean up the
pods it created.
A simple case is to create 1 Job object in order to reliably run one Pod to completion.
The Job object will start a new Pod if the first pod fails or is deleted (for example
due to a node hardware failure or a node reboot).
A Job can also be used to run multiple pods in parallel.
## Running an example Job
Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
<!-- BEGIN MUNGE: EXAMPLE job.yaml -->
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
[Download example](job.yaml?raw=true)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
```console
$ kubectl create -f ./job.yaml
job "pi" created
```
Check on the status of the job using this command:
```console
$ kubectl describe jobs/pi
Name: pi
Namespace: default
Image(s): perl
Selector: app in (pi)
Parallelism: 1
Completions: 1
Start Time: Mon, 11 Jan 2016 15:35:52 -0800
Labels: app=pi
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q
```
To view completed pods of a job, use `kubectl get pods --show-all`; the `--show-all` flag includes completed pods in the output.
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
```console
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
View the standard output of one of the pods:
```console
$ kubectl logs $pods
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
```
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [deploying applications](deploying-applications.md),
[configuring containers](configuring-containers.md), and [working with resources](working-with-resources.md) documents.
A Job also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` is allowed.
### Pod Selector
The `.spec.selector` field is optional. In almost all cases you should not specify it.
See section [specifying your own pod selector](#specifying-your-own-pod-selector).
### Parallel Jobs
There are three main types of jobs:
1. Non-parallel Jobs
- normally only one pod is started, unless the pod fails.
- job is complete as soon as Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
- specify a non-zero positive value for `.spec.completions`
- the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`.
- **not implemented yet:** each pod passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue*:
- do not specify `.spec.completions`
- the pods must coordinate among themselves or with an external service to determine what each should work on
- each pod is independently capable of determining whether or not all its peers are done, and thus the entire Job is done.
- when _any_ pod terminates with success, no new pods are created.
- once at least one pod has terminated with success and all pods are terminated, then the job is completed with success.
- once any pod has exited with success, no other pod should still be doing any work or writing any output. They should all be
in the process of exiting.
For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
unset, both are defaulted to 1.
For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed.
You can set `.spec.parallelism`, or leave it unset and it will default to 1.
For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
a non-negative integer.
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
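As an illustration, a work-queue style Job might look like the following sketch; the image, command, and queue name are hypothetical, and `.spec.completions` is deliberately left unset:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-queue
spec:
  parallelism: 5          # run up to 5 worker pods at once
  template:
    metadata:
      name: process-queue
    spec:
      containers:
      - name: worker
        image: example/queue-worker:v1
        command: ["./worker", "--queue=tasks"]
      restartPolicy: OnFailure
```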
#### Controlling Parallelism
The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
If it is unspecified, it defaults to 1.
If it is specified as 0, then the Job is effectively paused until it is increased.
A job can be scaled up using the `kubectl scale` command. For example, the following
command sets `.spec.parallelism` of a job called `myjob` to 10:
```console
$ kubectl scale --replicas=10 jobs/myjob
job "myjob" scaled
```
You can also use the `scale` subresource of the Job resource.
Actual parallelism (the number of pods running at any instant) may be more or less than requested
parallelism, for a variety of reasons (a sketch for inspecting both values follows this list):
- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of
remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however.
- If the controller has not had time to react.
- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc),
then there may be fewer pods than requested.
- The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
- When a pod is gracefully shut down, it may take time to stop.
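To compare requested and actual parallelism for a job, you can inspect `.spec.parallelism` and `.status.active`; for example, this sketch prints both for `myjob`:

```console
$ kubectl get job myjob --output=jsonpath='{.spec.parallelism}{"\t"}{.status.active}{"\n"}'
```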
## Handling Pod and Container Failures
A Container in a Pod may fail for a number of reasons, such as because the process in it exited with
a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.containers[].restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is
restarted locally, or else specify `.spec.template.containers[].restartPolicy = "Never"`.
See [pods-states](pod-states.md) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.containers[].restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.
Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
`.spec.template.containers[].restartPolicy = "Never"`, the same program may
sometimes be started twice.
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
## Job Patterns
The Job object can be used to support reliable parallel execution of Pods. The Job object is not
designed to support closely-communicating parallel processes, as commonly found in scientific
computing. It does support parallel processing of a set of independent but related *work items*.
These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
NoSQL database to scan, and so on.
In a complex system, there may be multiple different sets of work items. Here we are just
considering one set of work items that the user wants to manage together &mdash; a *batch job*.
There are several different patterns for parallel computation, each with strengths and weaknesses.
The tradeoffs are:
- One Job object for each work item, vs a single Job object for all work items. The latter is
better for large numbers of work items. The former creates some overhead for the user and for the
system to manage large numbers of Job objects. Also, with the latter, the resource usage of the job
(number of concurrently running pods) can be easily adjusted using the `kubectl scale` command.
- Number of pods created equals number of work items, vs each pod can process multiple work items.
The former typically requires less modification to existing code and containers. The latter
is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service,
and modifications to the existing program or container to make it use the work queue.
Other approaches are easier to adapt to an existing containerised application.
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](../../examples/job/expansions/README.md) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](../../examples/job/work-queue-1/README.md) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](../../examples/job/work-queue-2/README.md) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](../devel/api-conventions.md#spec-and-status). This means that
all pods will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.
| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](../../examples/job/expansions/README.md) | 1 | should be 1 |
| [Queue with Pod Per Work Item](../../examples/job/work-queue-1/README.md) | W | any |
| [Queue with Variable Pod Count](../../examples/job/work-queue-2/README.md) | 1 | any |
| Single Job with Static Work Assignment | W | any |
## Advanced Usage
### Specifying your own pod selector
Normally, when you create a job object, you do not specify `spec.selector`.
The system defaulting logic adds this field when the job is created.
It picks a selector value that will not overlap with any other jobs.
However, in some cases, you might need to override this automatically set selector.
To do this, you can specify the `spec.selector` of the job.
Be very careful when doing this. If you specify a label selector which is not
unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated
job may be deleted, or this job may count other pods as completing it, or one or both
of the jobs may refuse to create pods or run to completion. If a non-unique selector is
chosen, then other controllers (e.g. ReplicationController) and their pods may behave
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
specifying `spec.selector`.
Here is an example of a case when you might want to use this feature.
Say job `old` is already running. You want existing pods
to keep running, but you want the rest of the pods it creates
to use a different pod template and for the job to have a new name.
You cannot update the job because these fields are not updatable.
Therefore, you delete job `old` but leave its pods
running, using `kubectl delete jobs/old-one --cascade=false`.
Before deleting it, you make a note of what selector it uses:
```
kind: Job
metadata:
name: old
...
spec:
selector:
matchLabels:
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
...
```
Then you create a new job with name `new` and you explicitly specify the same selector.
Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
they are controlled by job `new` as well.
You need to specify `manualSelector: true` in the new job since you are not using
the selector that the system normally generates for you automatically.
```
kind: Job
metadata:
name: new
...
spec:
manualSelector: true
selector:
matchLabels:
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
...
```
The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
`manualSelector: true` tells the system that you know what you are doing and to allow this
mismatch.
## Alternatives
### Bare Pods
When the node that a pod is running on reboots or fails, the pod is terminated
and will not be restarted. However, a Job will create new pods to replace terminated ones.
For this reason, we recommend that you use a job rather than a bare pod, even if your application
requires only a single pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](replication-controller.md).
A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job
manages pods that are expected to terminate (e.g. batch jobs).
As discussed in [life of a pod](pod-states.md), `Job` is *only* appropriate for pods with
`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
value is `Always`.)
### Single Job starts Controller Pod
Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort
of custom controller for those pods. This allows the most flexibility, but may be somewhat
complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
starts a Spark master controller (see [spark example](../../examples/spark/README.md)), runs a spark
driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job
object, but complete control over what pods are created and how work is assigned to them.
## Caveats
Job objects are in the [`batch` API Group](../api.md#api-groups); the example above uses
[API version `batch/v1`](../api.md#api-versioning). (Jobs first appeared in the `extensions`
group with version `v1beta1`.) Beta objects may undergo changes to their schema and/or
semantics in future software releases, but similar functionality will be supported.
## Future work
Support for creating Jobs at specified times/dates (i.e. cron) is expected in the next minor
release.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/jobs.md?pixel)]() [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/jobs.md?pixel)]()
View File
@ -32,70 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# JSONPath template syntax This file has moved to: http://kubernetes.github.io/docs/user-guide/jsonpath/
A JSONPath template is composed of JSONPath expressions enclosed by `{}`.
We add three functions in addition to the original JSONPath syntax:
1. The `$` operator is optional, since the expression always starts from the root object by default.
2. We can use `""` to quote text inside a JSONPath expression.
3. We can use the `range` operator to iterate over lists.
The result object is printed via its String() function.
Given the input:
```json
{
"kind": "List",
"items":[
{
"kind":"None",
"metadata":{"name":"127.0.0.1"},
"status":{
"capacity":{"cpu":"4"},
"addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
}
},
{
"kind":"None",
"metadata":{"name":"127.0.0.2"},
"status":{
"capacity":{"cpu":"8"},
"addresses":[
{"type": "LegacyHostIP", "address":"127.0.0.2"},
{"type": "another", "address":"127.0.0.3"}
]
}
}
],
"users":[
{
"name": "myself",
"user": {}
},
{
"name": "e2e",
"user": {"username": "admin", "password": "secret"}
}
]
}
```
Function | Description | Example | Result
---------|--------------------|--------------------|------------------
text | the plain text | kind is {.kind} | kind is List
@ | the current object | {@} | the same as input
. or [] | child operator | {.kind} or {['kind']}| List
.. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e
* | wildcard. Get all objects| {.items[*].metadata.name} | [127.0.0.1 127.0.0.2]
[start:end:step] | subscript operator | {.users[0].name} | myself
[,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]
?() | filter | {.users[?(@.name=="e2e")].user.password} | secret
range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]
"" | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,27 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
## Known Issues This file has moved to: http://kubernetes.github.io/docs/user-guide/known-issues/
This document summarizes known issues with existing Kubernetes releases.
Please consult this document before filing new bugs.
### Release 1.0.1
* `exec` liveness/readiness probes leak resources due to Docker exec leaking resources (#10659)
* `docker load` sometimes hangs which causes the `kube-apiserver` not to start. Restarting the Docker daemon should fix the issue (#10868)
* The kubelet on the master node doesn't register with the `kube-apiserver` so statistics aren't collected for master daemons (#10891)
* Heapster and InfluxDB both leak memory (#10653)
* Wrong node cpu/memory limit metrics from Heapster (https://github.com/GoogleCloudPlatform/heapster/issues/399)
* Services that set `type=LoadBalancer` cannot use port `10250` because of Google Compute Engine firewall limitations
* Add-on services cannot be created or deleted via `kubectl` or the Kubernetes API (#11435)
* If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly leaving PD data inaccessible (or corrupted in the worst case). (http://issue.k8s.io/11231#issuecomment-122049113)
* Suggested temporary work around: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node.
* Explicit errors while detaching GCE PD could prevent PD from ever being detached (#11321)
* GCE PDs may sometimes fail to attach (#11302)
* If multiple Pods use the same RBD volume in read-write mode, it is possible data on the RBD volume could get corrupted. This problem has been found in environments where both apiserver and etcd rebooted and Pods were redistributed.
* A workaround is to ensure there is no other Ceph client using the RBD volume before mapping RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` can list RBD clients that are mapping the image.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS --> <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
View File
@ -32,314 +32,8 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING --> <!-- END MUNGE: UNVERSIONED_WARNING -->
# kubeconfig files This file has moved to: http://kubernetes.github.io/docs/user-guide/kubeconfig-file/
Authentication in Kubernetes can differ for different individuals.
- A running kubelet might have one way of authenticating (e.g. certificates).
- Users might have a different way of authenticating (e.g. tokens).
- Administrators might have a list of certificates which they provide to individual users.
- There may be multiple clusters, and we may want to define them all in one place, giving users the ability to use their own certificates and reuse the same global configuration.
This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
Multiple kubeconfig files are allowed if specified explicitly. At runtime they are loaded and merged together along with override options specified from the command line (see [rules](#loading-and-merging) below).
## Related discussion
http://issue.k8s.io/1755
## Components of a kubeconfig file
### Example kubeconfig file
```yaml
current-context: federal-context
apiVersion: v1
clusters:
- cluster:
api-version: v1
server: http://cow.org:8080
name: cow-cluster
- cluster:
certificate-authority: path/to/my/cafile
server: https://horse.org:4443
name: horse-cluster
- cluster:
insecure-skip-tls-verify: true
server: https://pig.org:443
name: pig-cluster
contexts:
- context:
cluster: horse-cluster
namespace: chisel-ns
user: green-user
name: federal-context
- context:
cluster: pig-cluster
namespace: saw-ns
user: black-user
name: queen-anne-context
kind: Config
preferences:
colors: true
users:
- name: blue-user
user:
token: blue-token
- name: green-user
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
```
### Breakdown/explanation of components
#### cluster
```
clusters:
- cluster:
certificate-authority: path/to/my/cafile
server: https://horse.org:4443
name: horse-cluster
- cluster:
insecure-skip-tls-verify: true
server: https://pig.org:443
name: pig-cluster
```
A `cluster` contains endpoint data for a kubernetes cluster. This includes the fully
qualified URL for the kubernetes apiserver, as well as the cluster's certificate
authority or `insecure-skip-tls-verify: true`, if the cluster's serving
certificate is not signed by a system-trusted certificate authority.
A `cluster` has a name (nickname) which acts as a dictionary key for the cluster
within this kubeconfig file. You can add or modify `cluster` entries using
[`kubectl config set-cluster`](kubectl/kubectl_config_set-cluster.md).
#### user
```
users:
- name: blue-user
user:
token: blue-token
- name: green-user
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
```
A `user` defines client credentials for authenticating to a kubernetes cluster. A
`user` has a name (nickname) which acts as its key within the list of user entries
after kubeconfig is loaded/merged. Available credentials are `client-certificate`,
`client-key`, `token`, and `username/password`. `username/password` and `token`
are mutually exclusive, but client certs and keys can be combined with them.
You can add or modify `user` entries using
[`kubectl config set-credentials`](kubectl/kubectl_config_set-credentials.md).
#### context
```
contexts:
- context:
cluster: horse-cluster
namespace: chisel-ns
user: green-user
name: federal-context
```
A `context` defines a named [`cluster`](#cluster),[`user`](#user),[`namespace`](namespaces.md) tuple
which is used to send requests to the specified cluster using the provided authentication info and
namespace. Each of the three is optional; it is valid to specify a context with only one of `cluster`,
`user`,`namespace`, or to specify none. Unspecified values, or named values that don't have corresponding
entries in the loaded kubeconfig (e.g. if the context specified a `pink-user` for the above kubeconfig file)
will be replaced with the default. See [Loading and merging rules](#loading-and-merging) below for override/merge behavior.
You can add or modify `context` entries with [`kubectl config set-context`](kubectl/kubectl_config_set-context.md).
#### current-context
```yaml
current-context: federal-context
```
`current-context` is the nickname or 'key' for the cluster,user,namespace tuple that kubectl
will use by default when loading config from this file. You can override any of the values in kubectl
from the commandline, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
You can change the `current-context` with [`kubectl config use-context`](kubectl/kubectl_config_use-context.md).
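For example, using the names from the sample file above, you can run a single command against another context, or switch contexts persistently (a sketch):

```console
$ kubectl get pods --context=queen-anne-context
$ kubectl config use-context federal-context
```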
#### miscellaneous
```
apiVersion: v1
kind: Config
preferences:
colors: true
```
`apiVersion` and `kind` identify the version and schema for the client parser and should not
be edited manually.
The `preferences` field specifies optional (and currently unused) kubectl preferences.
## Viewing kubeconfig files
`kubectl config view` will display the current kubeconfig settings. By default
it will show you all loaded kubeconfig settings; you can filter the view to just
the settings relevant to the `current-context` by passing `--minify`. See
[`kubectl config view`](kubectl/kubectl_config_view.md) for other options.
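For example, with the kubeconfig shown earlier loaded:

```console
# Show all loaded kubeconfig settings
$ kubectl config view

# Show only the cluster, user, and namespace referenced by current-context
$ kubectl config view --minify
```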
## Building your own kubeconfig file
NOTE: If you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
In any case, you can easily use this file as a template to create your own kubeconfig files.
So, let's do a quick walk-through of the basics of the above file so you can easily modify it as needed...
The above file would likely correspond to an api-server launched with the `--token-auth-file=tokens.csv` option, where each line of tokens.csv has the format `token,user,uid`:
```
blue-token,blue-user,1
mister-red,mister-red,2
```
Also, since we have other users who authenticate using **other** mechanisms, the api-server would probably have been launched with additional authentication options (there are many such options; make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes). A rough sketch of such a launch appears after the notes below.
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All of this information would be handled for us by kubectl, based on the contents of the kubeconfig file.
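As a rough sketch only (real deployments pass many more flags; check the api-server
documentation for your version), an api-server supporting both of these users might be
launched with token and client-certificate authentication enabled:

```console
# --token-auth-file covers blue-user's token;
# --client-ca-file validates green-user's client certificate.
$ kube-apiserver \
    --token-auth-file=tokens.csv \
    --client-ca-file=path/to/ca.crt \
    ...
```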
## Loading and merging rules
The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order (a short example follows the list):
1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules:
If the CommandLineLocation (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed.
Else, if EnvVarLocation (the value of $KUBECONFIG) is available, use it as a list of files that should be merged.
Merge files together based on the following rules.
    Empty filenames are ignored. Files with non-deserializable content produce errors.
The first file to set a particular value or map key wins and the value or map key is never changed.
This means that the first file to set CurrentContext will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded.
Otherwise, use HomeDirectoryLocation (~/.kube/config) with no merging.
1. Determine the context to use based on the first hit in this chain
1. command line argument - the value of the `context` command line option
1. current-context from the merged kubeconfig file
1. Empty is allowed at this stage
1. Determine the cluster info and user to use. At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster)
1. command line argument - `user` for user name and `cluster` for cluster name
1. If context is present, then use the context's value
1. Empty is allowed
1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins):
1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify`
1. If cluster info is present and a value for the attribute is present, use it.
1. If you don't have a server location, error.
1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user.
1. Load precedence is 1) command line flag, 2) user fields from kubeconfig
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
1. If there are two conflicting techniques, fail.
1. For any information still missing, use default values and potentially prompt for authentication information
1. All file references inside of a kubeconfig file are resolved relative to the location of the kubeconfig file itself. When file references are presented on the command line,
   they are resolved relative to the current working directory. When paths are saved in ~/.kube/config, relative paths are stored relatively while absolute paths are stored absolutely.
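As an illustrative sketch of the merge rules (the file names below are made up),
$KUBECONFIG can point at a colon-separated list of files, and the merged result can be
inspected directly:

```console
# file1 is merged first, so its current-context wins, and if both
# files define a "red-user", only file1's red-user entries are kept.
$ KUBECONFIG=~/.kube/file1:~/.kube/file2 kubectl config view
```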
## Manipulation of kubeconfig via `kubectl config <subcommand>`
To make it easier to manipulate kubeconfig files, `kubectl config` provides a series of subcommands.
See [kubectl/kubectl_config.md](kubectl/kubectl_config.md) for help.
### Example
```console
$ kubectl config set-credentials myself --username=admin --password=secret
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myself
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
```
produces this output:
```yaml
apiVersion: v1
clusters:
- cluster:
server: http://localhost:8080
name: local-server
contexts:
- context:
cluster: local-server
namespace: the-right-prefix
user: myself
name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: myself
user:
password: secret
username: admin
```
and results in a kubeconfig file that looks like this:
```yaml
apiVersion: v1
clusters:
- cluster:
server: http://localhost:8080
name: local-server
contexts:
- context:
cluster: local-server
namespace: the-right-prefix
user: myself
name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: myself
user:
password: secret
username: admin
```
#### Commands for the example file
```console
$ kubectl config set preferences.colors true
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
$ kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials blue-user --token=blue-token
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
```
### Final notes for tying it all together
So, tying this all together, a quick start to creating your own kubeconfig file:
- Take a good look at and understand how your api-server is being launched: you need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
- Replace the snippet above with information for your cluster's api-server endpoint.
- Make sure your api-server is launched in such a way that at least one user's (e.g. green-user's) credentials are provided to it. You will of course have to look at the api-server documentation to determine the current state of the art in terms of providing authentication details.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubeconfig-file.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->