Ensure all docs and examples in user guide are reachable

Author: Janet Kuo
Date: 2015-07-15 17:28:59 -07:00
parent 55e9356bf3
commit b0c68c4b81
30 changed files with 77 additions and 47 deletions


@@ -64,6 +64,12 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
[**Overview**](overview.md)
: A brief overview of Kubernetes concepts.
+[**Cluster**](../admin/README.md)
+: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.
+[**Node**](../admin/node.md)
+: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
[**Pod**](pods.md)
: A pod is a co-located group of containers and volumes.
@@ -107,6 +113,8 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
* [Downward API: accessing system configuration from a pod](downward-api.md)
* [Images and registries](images.md)
* [Migrating from docker-cli to kubectl](docker-cli-to-kubectl.md)
+* [Assign pods to selected nodes](node-selection/)
+* [Perform a rolling update on a running group of pods](update-demo/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -104,7 +104,7 @@ Eventually, user specified reasons may be [added to the API](https://github.com/
### Hook Handler Execution
-When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes.  Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop)
+When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including [health checks](production-pods.md#liveness-and-readiness-probes-aka-health-checks)) will occur until the hook handler completes.  Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop)
For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs.  The details of this parameter passing is handler implementation dependent (see below).


@@ -26,7 +26,7 @@ For each container, the build steps are the same. The examples below
are for the `show` container. Replace `show` with `backend` for the
backend container.
-GCR
+Google Container Registry ([GCR](https://cloud.google.com/tools/container-registry/))
---
docker build -t gcr.io/<project-name>/show .
gcloud docker push gcr.io/<project-name>/show


@@ -47,7 +47,7 @@ This example demonstrates how limits can be applied to a Kubernetes namespace to
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
-For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md)
+See [LimitRange design doc](../../design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md)
Step 0: Prerequisites
-----------------------------------------


@@ -21,7 +21,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Overview
-This example shows two types of pod health checks: HTTP checks and container execution checks.
+This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.
The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
```
@@ -33,9 +33,9 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
initialDelaySeconds: 15
timeoutSeconds: 1
```
-Kubelet executes the command cat /tmp/health in the container and reports failure if the command returns a non-zero exit code.
+Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
-Note that the container removes the /tmp/health file after 10 seconds,
+Note that the container removes the `/tmp/health` file after 10 seconds,
```
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```
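The snippet above covers only the container execution check; the HTTP variant follows the same probe pattern. A minimal sketch of an `httpGet` liveness probe (the pod name, image, path, and port below are assumed placeholders, not taken from the example files):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # hypothetical pod name
spec:
  containers:
  - name: liveness
    image: your-image          # placeholder: any image serving HTTP
    livenessProbe:
      httpGet:
        path: /healthz         # assumed health endpoint
        port: 8080             # assumed container port
      initialDelaySeconds: 15
      timeoutSeconds: 1
```

Kubelet issues a GET request against the given path and port; a response code between 200 and 399 counts as success, anything else as failure.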


@@ -27,7 +27,7 @@ describes a pod that just emits a log message once every 4 seconds. The pod spec
[synthetic_10lps.yaml](synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.
-To observe the ingested log lines when using Google Cloud Logging please see the getting
+See [logging document](../logging.md) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](../../../docs/getting-started-guides/logging.md).
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting


@@ -27,8 +27,8 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god
## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification which has a container which writes out some text to standard
output every second [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
this pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard
output every second. (You can find different pod specifications [here](logging-demo/).)
```
apiVersion: v1
kind: Pod


@@ -241,7 +241,7 @@ my-nginx-o0ef1 1/1 Running 0 1h
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
-To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time.
+To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.
Let's say you were running version 1.7.9 of nginx:
```yaml


@@ -88,13 +88,13 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
Create the development namespace using kubectl.
```shell
-$ kubectl create -f docs/user-guide/kubernetes-namespaces/namespace-dev.json
+$ kubectl create -f docs/user-guide/namespaces/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
-$ kubectl create -f docs/user-guide/kubernetes-namespaces/namespace-prod.json
+$ kubectl create -f docs/user-guide/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.


@@ -22,7 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Node selection example
-This example shows how to assign a pod to a specific node or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances like to ensure that your pod ends up on a machine with an SSD attached to it.
+This example shows how to assign a [pod](../pods.md) to a specific [node](../../admin/node.md) or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances like to ensure that your pod ends up on a machine with an SSD attached to it.
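A minimal sketch of such a pod specification (the `disktype: ssd` label is an assumed example; you would first attach it to a node, e.g. with `kubectl label nodes <node-name> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # assumed node label; the scheduler only places this pod on matching nodes
```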
### Step Zero: Prerequisites


@@ -22,11 +22,13 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# How To Use Persistent Volumes
-The purpose of this guide is to help you become familiar with Kubernetes Persistent Volumes. By the end of the guide, we'll have
+The purpose of this guide is to help you become familiar with [Kubernetes Persistent Volumes](../persistent-volumes.md). By the end of the guide, we'll have
nginx serving content from your persistent volume.
This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
+See [Persistent Storage design document](../../design/persistent-storage.md) for more information.
## Provisioning
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
@@ -114,7 +116,7 @@ I love Kubernetes storage!
```
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join
-```#google-containers``` on IRC and ask!
+[```#google-containers```](https://botbot.me/freenode/google-containers/) on IRC and ask!
Enjoy!


@@ -22,7 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
Resource Quota
========================================
-This example demonstrates how resource quota and limits can be applied to a Kubernetes namespace.
+This example demonstrates how [resource quota](../../admin/admission-controllers.md#resourcequota) and [limits](../../admin/admission-controllers.md#limitranger) can be applied to a Kubernetes namespace. See [ResourceQuota design doc](../../design/admission_control_resource_quota.md) for more information.
This example assumes you have a functional Kubernetes setup.


@@ -25,7 +25,7 @@ certainly want the docs that go with that version.</h1>
Objects of type `secret` are intended to hold sensitive information, such as
passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
is safer and more flexible than putting it verbatim in a `pod` definition or in
-a docker image.
+a docker image. See [Secrets design document](../design/secrets.md) for more information.
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
@@ -56,7 +56,7 @@ a docker image.
Creation of secrets can be manual (done by the user) or automatic (done by
automation built into the cluster).
-A secret can be used with a pod in two ways: either as files in a volume mounted on one or more of
+A secret can be used with a pod in two ways: either as files in a [volume](volumes.md) mounted on one or more of
its containers, or used by kubelet when pulling images for the pod.
To use a secret, a pod needs to reference the secret. This reference
@@ -142,6 +142,8 @@ own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
You can package many files into one secret, or use many secrets,
whichever is convenient.
+See another example of creating a secret and a pod that consumes that secret in a volume [here](secrets/).
### Manually specifying an imagePullSecret
Use of imagePullSecrets is described in the [images documentation](images.md#specifying-imagepullsecrets-on-a-pod)
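As a sketch, a pod referencing an imagePullSecret might look like this (`myregistrykey` and the image path are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod      # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1   # assumed private image
  imagePullSecrets:
  - name: myregistrykey        # hypothetical secret holding registry credentials
```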
### Automatic use of Manually Created Secrets


@@ -22,8 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Secrets example
-Following this example, you will create a secret and a pod that consumes that secret in a volume.
-You can learn more about secrets [Here](../secrets.md).
+Following this example, you will create a [secret](../secrets.md) and a [pod](../pods.md) that consumes that secret in a [volume](../volumes.md). See [Secrets design document](../../design/secrets.md) for more information.
## Step Zero: Prerequisites


@@ -52,7 +52,7 @@ certainly want the docs that go with that version.</h1>
Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they
are not resurrected. [`ReplicationControllers`](replication-controller.md) in
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
-or when doing rolling updates). While each `Pod` gets its own IP address, even
+or when doing [rolling updates](kubectl/kubectl_rolling-update.md)). While each `Pod` gets its own IP address, even
those IP addresses cannot be relied upon to be stable over time. This leads to
a problem: if some set of `Pods` (let's call them backends) provides
functionality to other `Pods` (let's call them frontends) inside the Kubernetes


@@ -36,8 +36,8 @@ See the License for the specific language governing permissions and
limitations under the License.
-->
-# Live update example
-This example demonstrates the usage of Kubernetes to perform a live update on a running group of [pods](../../../docs/user-guide/pods.md).
+# Rolling update example
+This example demonstrates the usage of Kubernetes to perform a [rolling update](../kubectl/kubectl_rolling-update.md) on a running group of [pods](../../../docs/user-guide/pods.md). See [here](../managing-deployments.md#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](../../design/simple-rolling-update.md) for more information.
### Step Zero: Prerequisites
@@ -64,7 +64,7 @@ I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
Now visit the [demo website](http://localhost:8001/static). You won't see anything much quite yet.
### Step Two: Run the replication controller
-Now we will turn up two replicas of an image. They all serve on internal port 80.
+Now we will turn up two replicas of an [image](../images.md). They all serve on internal port 80.
```bash
$ kubectl create -f docs/user-guide/update-demo/nautilus-rc.yaml


@@ -249,8 +249,8 @@ Kubelet to ensure that your application is operating correctly for a definition
Currently, there are three types of application health checks that you can choose from:
-* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
-* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success.
+* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. See health check examples [here](../liveness/).
+* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. See health check examples [here](../liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.
In all cases, if the Kubelet discovers a failure, the container is restarted.
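The TCP variant can be sketched in the same shape as the other probes; a minimal `tcpSocket` liveness probe fragment (the port is an assumed value, not from this doc):

```yaml
livenessProbe:
  tcpSocket:
    port: 8080         # assumed container port; success = connection established
  initialDelaySeconds: 15
  timeoutSeconds: 1
```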