Split namespace docs user vs admin.

Move namespace.md and examples dir from docs/user-guide
to docs/admin.

Assumption is that creating and deleting namespaces is an "admin"
task.

Add a mostly new user-guide to namespaces that gives more advice
on when to use namespaces, and how to work within them,
but not how to create/delete them.  It is more succinct than
before.
Author: Eric Tune
Date:   2015-07-20 12:54:28 -07:00
Parent: 2d88675f22
Commit: 6fa6beaccc

5 changed files with 214 additions and 138 deletions

docs/admin/namespaces.md Normal file

@@ -0,0 +1,184 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.
<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/admin/namespaces.md).
Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Namespaces
## Abstract
A Namespace is a mechanism to partition resources created by users into
a logically named group.
## Motivation
A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community').
Each user community wants to be able to work in isolation from other communities.
Each user community has its own:
1. resources (pods, services, replication controllers, etc.)
2. policies (who can or cannot perform actions in their community)
3. constraints (this community is allowed this much quota, etc.)
A cluster operator may create a Namespace for each unique user community.
The Namespace provides a unique scope for:
1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
3. ability to limit community resource consumption
## Use cases
1. As a cluster operator, I want to support multiple user communities on a single cluster.
2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
in those communities.
3. As a cluster operator, I want to limit the amount of resources each community can consume in order
to limit the impact to other communities using the cluster.
4. As a cluster user, I want to interact with resources that are pertinent to my user community in
isolation of what other user communities are doing on the cluster.
## Usage
Look [here](namespaces/) for an in-depth example of namespaces.
### Viewing namespaces
You can list the current namespaces in a cluster using:
```console
$ kubectl get namespaces
NAME          LABELS    STATUS
default       <none>    Active
kube-system   <none>    Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
You can also get the summary of a specific namespace using:
```console
$ kubectl get namespaces <name>
```
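For example, for the `default` namespace the output looks roughly like this (a sketch; the exact columns may vary by release):
```console
$ kubectl get namespaces default
NAME      LABELS    STATUS
default   <none>    Active
```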
Or you can get detailed information with:
```console
$ kubectl describe namespaces <name>
Name:          default
Labels:        <none>
Status:        Active

No resource quota.

Resource Limits
 Type        Resource    Min    Max    Default
 ----        --------    ---    ---    ---
 Container   cpu         -      -      100m
```
Note that these details show both resource quota (if present) as well as resource limit ranges.
Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators
to define *Hard* resource usage limits that a *Namespace* may consume.
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.
See [Admission control: Limit Range](../design/admission_control_limit_range.md).
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects
See the [design doc](../design/namespaces.md#phases) for more details.
### Creating a new namespace
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```
Note that the name of your namespace must be a valid DNS label: lowercase alphanumeric characters or `-`, at most 63 characters, starting and ending with an alphanumeric character.
More information on the `finalizers` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).
Then run:
```console
$ kubectl create -f ./my-namespace.yaml
```
### Working in namespaces
See [Setting the namespace for a request](../../docs/user-guide/namespaces.md#setting-the-namespace-for-a-request)
and [Setting the namespace preference](../../docs/user-guide/namespaces.md#setting-the-namespace-preference).
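In short, and as a rough sketch of what those pages cover (the `<context-name>`, `<cluster-name>`, and `<user-name>` placeholders are illustrative), you can either pass the namespace explicitly on each request or record it in a kubectl context:
```console
# Set the namespace for a single request:
$ kubectl get pods --namespace=<insert-namespace-name-here>

# Or record it in a context and switch to that context:
$ kubectl config set-context <context-name> --namespace=<insert-namespace-name-here> --cluster=<cluster-name> --user=<user-name>
$ kubectl config use-context <context-name>
```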
### Deleting a namespace
You can delete a namespace with
```console
$ kubectl delete namespaces <insert-some-namespace-name>
```
**WARNING, this deletes _everything_ under the namespace!**
This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.
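For instance, shortly after issuing the delete you might see something like this (a sketch, assuming a namespace named `my-namespace`):
```console
$ kubectl get namespaces
NAME           LABELS    STATUS
default        <none>    Active
kube-system    <none>    Active
my-namespace   <none>    Terminating
```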
## Namespaces and DNS
When you create a [Service](../../docs/user-guide/services.md), it creates a corresponding [DNS entry](dns.md).
This entry is of the form `<service-name>.<namespace-name>.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
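As a sketch of the difference (the service name `my-service` and the namespaces here are hypothetical), from inside a pod in the `development` namespace:
```console
# The short name resolves to the service in the pod's own namespace:
$ nslookup my-service
# Reaching the same-named service in another namespace requires the FQDN:
$ nslookup my-service.production.cluster.local
```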
## Design
Details of the design of namespaces in Kubernetes, including a [detailed example](../design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](../design/namespaces.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/namespaces.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

docs/admin/namespaces/README.md Normal file

@@ -0,0 +1,283 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.
<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/admin/namespaces/README.md).
Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Kubernetes Namespaces
Kubernetes _[namespaces](../../../docs/admin/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster.
They do this by providing the following:
1. A scope for [Names](../../user-guide/identifiers.md).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
### Step Zero: Prerequisites
This example assumes the following:
1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods.md)_, _[services](../../user-guide/services.md)_, and _[replication controllers](../../user-guide/replication-controller.md)_.
### Step One: Understand the default namespace
By default, a Kubernetes cluster instantiates a default namespace when it is provisioned, to hold the default set of pods,
services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
```console
$ kubectl get namespaces
NAME      LABELS
default   <none>
```
### Step Two: Create new namespaces
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
The development team would like to maintain a space in the cluster where they can view the list of pods, services, and replication controllers
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
pods, services, and replication controllers that run the production site.
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
```json
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "development",
"labels": {
"name": "development"
}
}
}
```
Create the development namespace using kubectl.
```console
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
```
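The production namespace is described by a similar file, [`namespace-prod.json`](namespace-prod.json):
```json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "production",
    "labels": {
      "name": "production"
    }
  }
}
```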
And then let's create the production namespace using kubectl.
```console
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```console
$ kubectl get namespaces
NAME          LABELS             STATUS
default       <none>             Active
development   name=development   Active
production    name=production    Active
```
### Step Three: Create pods in each namespace
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
Users interacting with one namespace do not see the content in another namespace.
To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
We first check what the current context is with `kubectl config view`:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
```console
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands created two request contexts you can switch between, depending on which namespace you wish to work in.
Let's switch to operate in the development namespace.
```console
$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```console
$ kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: development
    user: lithe-cocoa-92103_kubernetes
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: production
    user: lithe-cocoa-92103_kubernetes
  name: prod
current-context: dev
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
```console
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica count of 2, running a pod called `snowflake` with a basic container that simply serves the hostname.
```console
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR        REPLICAS
snowflake    snowflake      kubernetes/serve_hostname   run=snowflake   2
$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
snowflake-8w0qn   1/1       Running   0          22s
snowflake-jrpzb   1/1       Running   0          22s
```
This is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
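If you prefer not to switch contexts, note that most kubectl commands also accept an explicit `--namespace` flag; for example, a user with access to the production namespace could peek at it directly with:
```console
$ kubectl get pods --namespace=production
```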
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```console
$ kubectl config use-context prod
```
The production namespace should be empty.
```console
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
```
Production likes to run cattle, so let's create some cattle pods.
```console
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR     REPLICAS
cattle       cattle         kubernetes/serve_hostname   run=cattle   5
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
cattle-97rva   1/1       Running   0          12s
cattle-i9ojn   1/1       Running   0          12s
cattle-qj3yv   1/1       Running   0          12s
cattle-yc7vn   1/1       Running   0          12s
cattle-zz7ea   1/1       Running   0          12s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/namespaces/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

docs/admin/namespaces/namespace-dev.json Normal file

@@ -0,0 +1,10 @@
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "development",
"labels": {
"name": "development"
}
}
}

docs/admin/namespaces/namespace-prod.json Normal file

@@ -0,0 +1,10 @@
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "production",
"labels": {
"name": "production"
}
}
}