Use example syncer tags instead of hard-coded examples in doc
@@ -83,6 +83,8 @@ Let's create two new namespaces to hold our work.

Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:

<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->

```json
{
"kind": "Namespace",
@@ -96,6 +98,9 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
```

[Download example](namespace-dev.json)
<!-- END MUNGE: EXAMPLE -->

Create the development namespace using kubectl.

```console
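A rough sketch of this step, assuming `namespace-dev.json` sits in the working directory (commands only, output omitted):

```console
$ kubectl create -f namespace-dev.json
$ kubectl get namespaces
```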
@@ -58,20 +58,24 @@ This diagram shows four nodes created on a Google Compute Engine cluster with th

To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):

<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
  namespace: default
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
          'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
          'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```

[Download example](../../examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE -->

This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let’s create the pod in the default
namespace.
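A minimal sketch of creating this pod in the default namespace, assuming the spec above is saved locally as `counter-pod.yaml` (commands only, output omitted):

```console
$ kubectl create -f counter-pod.yaml
$ kubectl get pods
```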
@@ -152,7 +156,9 @@ We’ve lost the log lines from the first invocation of the container in this po

When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.

This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
This log collection pod has a specification which looks something like this:

<!-- BEGIN MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->

```yaml
apiVersion: v1
@@ -163,19 +169,31 @@ metadata:
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.6
    image: gcr.io/google_containers/fluentd-gcp:1.9
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
    env:
    - name: FLUENTD_ARGS
      value: -qq
    volumeMounts:
    - name: varlog
      mountPath: /varlog
    - name: containers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```

[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
<!-- END MUNGE: EXAMPLE -->

This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.

We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
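A rough sketch of checking that one `fluentd-cloud-logging` collection pod is running per node, as described above (command only, output omitted):

```console
$ kubectl get pods | grep fluentd-cloud-logging
```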
@@ -80,6 +80,8 @@ environment variable they want.
This is an example of a pod that consumes its name and namespace via the
downward API:

<!-- BEGIN MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->

```yaml
apiVersion: v1
kind: Pod
@@ -102,6 +104,9 @@ spec:
restartPolicy: Never
```

[Download example](downward-api/dapi-pod.yaml)
<!-- END MUNGE: EXAMPLE -->

Some more thorough examples:
* [environment variables](environment-guide/)
* [downward API](downward-api/)
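A rough sketch of exercising the downward API example, assuming the pod in `downward-api/dapi-pod.yaml` is named `dapi-test-pod` (a guess; check the file) and prints its environment on startup (commands only, output omitted):

```console
$ kubectl create -f downward-api/dapi-pod.yaml
$ kubectl logs dapi-test-pod
```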
@@ -8,11 +8,11 @@ spec:
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: POD_NAME
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
@@ -43,19 +43,24 @@ The logs of a running container may be fetched using the command `kubectl logs`.
this pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard
output every second. (You can find different pod specifications [here](logging-demo/).)

<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
          'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c,
          'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```

[Download example](../../examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE -->

we can run the pod:

```console
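A minimal sketch of fetching the container's output once the pod above is running, assuming it keeps the name `counter` (command only, output omitted):

```console
$ kubectl logs counter
```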
@@ -3,7 +3,7 @@ kind: Pod
metadata:
name: nginx
labels:
name: nginx
app: nginx
spec:
containers:
- name: nginx
@@ -47,6 +47,8 @@ $ kubectl create -f ./pod.yaml

Where pod.yaml contains something like:

<!-- BEGIN MUNGE: EXAMPLE pod.yaml -->

```yaml
apiVersion: v1
kind: Pod
@@ -62,6 +64,9 @@ spec:
- containerPort: 80
```

[Download example](pod.yaml)
<!-- END MUNGE: EXAMPLE -->

You can see your cluster's pods:

```console
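A quick sketch of listing the cluster's pods after creating one from `pod.yaml` (command only, output omitted):

```console
$ kubectl get pods
```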
@@ -87,6 +92,8 @@ $ kubectl create -f ./replication.yaml

Where `replication.yaml` contains:

<!-- BEGIN MUNGE: EXAMPLE replication.yaml -->

```yaml
apiVersion: v1
kind: ReplicationController
@@ -109,6 +116,9 @@ spec:
- containerPort: 80
```

[Download example](replication.yaml)
<!-- END MUNGE: EXAMPLE -->

To delete the replication controller (and the pods it created):

```console
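A rough sketch of deleting the replication controller by the same file it was created from, assuming `replication.yaml` as above (command only, output omitted):

```console
$ kubectl delete -f ./replication.yaml
```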
@@ -146,6 +146,8 @@ For this example we'll be creating a Redis pod with a named volume and volume mo

Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](pod-redis.yaml)):

<!-- BEGIN MUNGE: EXAMPLE pod-redis.yaml -->

```yaml
apiVersion: v1
kind: Pod
@@ -163,6 +165,9 @@ spec:
emptyDir: {}
```

[Download example](pod-redis.yaml)
<!-- END MUNGE: EXAMPLE -->

Notes:
- The volume mount name is a reference to a specific empty dir volume.
- The volume mount path is the path to mount the empty dir volume within the container.
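A rough sketch of creating the Redis pod and inspecting its volume setup, assuming the pod in `pod-redis.yaml` is named `redis` (a guess; check the file) (commands only, output omitted):

```console
$ kubectl create -f pod-redis.yaml
$ kubectl describe pod redis
```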
@@ -69,6 +69,8 @@ To add a label, add a labels section under metadata in the pod definition:

For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):

<!-- BEGIN MUNGE: EXAMPLE pod-nginx-with-label.yaml -->

```yaml
apiVersion: v1
kind: Pod
@@ -84,6 +86,9 @@ spec:
- containerPort: 80
```

[Download example](pod-nginx-with-label.yaml)
<!-- END MUNGE: EXAMPLE -->

Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):

```console
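A rough sketch of creating the labeled pod and then selecting it by its label, assuming the label ends up as `app: nginx` per the updated example (commands only, output omitted):

```console
$ kubectl create -f pod-nginx-with-label.yaml
$ kubectl get pods -l app=nginx
```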
@@ -108,6 +113,8 @@ Replication controllers are the objects to answer these questions. A replicatio

For example, here is a replication controller that instantiates two nginx pods ([replication-controller.yaml](replication-controller.yaml)):

<!-- BEGIN MUNGE: EXAMPLE replication-controller.yaml -->

```yaml
apiVersion: v1
kind: ReplicationController
@@ -135,6 +142,9 @@ spec:
- containerPort: 80
```

[Download example](replication-controller.yaml)
<!-- END MUNGE: EXAMPLE -->

#### Replication Controller Management

Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):
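A minimal sketch of creating the replication controller and confirming its pod count, using the `nginx-controller` name from the example file (commands only, output omitted):

```console
$ kubectl create -f replication-controller.yaml
$ kubectl get rc nginx-controller
```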
@@ -164,6 +174,8 @@ Once you have a replicated set of pods, you need an abstraction that enables con

For example, here is a service that balances across the pods created in the previous nginx replication controller example ([service.yaml](service.yaml)):

<!-- BEGIN MUNGE: EXAMPLE service.yaml -->

```yaml
apiVersion: v1
kind: Service
@@ -183,6 +195,9 @@ spec:
app: nginx
```

[Download example](service.yaml)
<!-- END MUNGE: EXAMPLE -->

#### Service Management

Create an nginx service ([service.yaml](service.yaml)):
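A quick sketch of creating the service and looking up the cluster IP and port it was assigned, using the `nginx-service` name from the example file (commands only, output omitted):

```console
$ kubectl create -f service.yaml
$ kubectl get services nginx-service
```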
@@ -271,6 +286,8 @@ The container health checks are configured in the `livenessProbe` section of you

Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):

<!-- BEGIN MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->

```yaml
apiVersion: v1
kind: Pod
@@ -294,6 +311,9 @@ spec:
- containerPort: 80
```

[Download example](pod-with-http-healthcheck.yaml)
<!-- END MUNGE: EXAMPLE -->

For more information about health checking, see [Container Probes](../pod-states.md#container-probes).
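A rough sketch of creating the health-checked pod and watching for probe-driven restarts, using the `pod-with-healthcheck` name from the example file (commands only, output omitted):

```console
$ kubectl create -f pod-with-http-healthcheck.yaml
$ kubectl get pods
$ kubectl describe pod pod-with-healthcheck
```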
@@ -10,5 +10,5 @@ spec:
- name: redis-persistent-storage
mountPath: /data/redis
volumes:
- name: redis-persistent-storage
emptyDir: {}
- name: redis-persistent-storage
emptyDir: {}
@@ -4,17 +4,17 @@ metadata:
name: pod-with-healthcheck
spec:
containers:
- name: nginx
image: nginx
# defines the health checking
livenessProbe:
# an http probe
httpGet:
path: /_status/healthz
port: 80
# length of time to wait for a pod to initialize
# after pod startup, before applying health checking
initialDelaySeconds: 30
timeoutSeconds: 1
ports:
- containerPort: 80
- name: nginx
image: nginx
# defines the health checking
livenessProbe:
# an http probe
httpGet:
path: /_status/healthz
port: 80
# length of time to wait for a pod to initialize
# after pod startup, before applying health checking
initialDelaySeconds: 30
timeoutSeconds: 1
ports:
- containerPort: 80
@@ -4,21 +4,21 @@ metadata:
name: nginx-controller
spec:
replicas: 2
# selector identifies the set of pods that this
# selector identifies the set of Pods that this
# replication controller is responsible for managing
selector:
name: nginx
# template defines the 'cookie cutter' used for creating
app: nginx
# podTemplate defines the 'cookie cutter' used for creating
# new pods when necessary
template:
metadata:
labels:
# Important: these labels need to match the selector above
# The api server enforces this constraint.
name: nginx
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
- name: nginx
image: nginx
ports:
- containerPort: 80
@@ -4,13 +4,13 @@ metadata:
name: nginx-service
spec:
ports:
- port: 8000 # the port that this service should serve on
# the container on each pod to connect to, can be a name
# (e.g. 'www') or a number (e.g. 80)
targetPort: 80
protocol: TCP
- port: 8000 # the port that this service should serve on
# the container on each pod to connect to, can be a name
# (e.g. 'www') or a number (e.g. 80)
targetPort: 80
protocol: TCP
# just like the selector in the replication controller,
# but this time it identifies the set of pods to load balance
# traffic to.
selector:
name: nginx
app: nginx