Replace ``` with ` when emphasizing something inline in docs/
@@ -75,32 +75,32 @@ The first step in debugging a Pod is taking a look at it. Check the current sta
$ kubectl describe pods ${POD_NAME}
```

Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?

Continue debugging depending on the state of the pods.

#### My pod stays pending

If a Pod is stuck in `Pending` it means that it cannot be scheduled onto a node. Generally this is because there are insufficient resources of one type or another that prevent scheduling. Look at the output of the `kubectl describe ...` command above. There should be messages from the scheduler about why it cannot schedule your pod. Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster. In this case you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See the [Compute Resources document](compute-resources.md#my-pods-are-pending-with-event-message-failedscheduling) for more information.

* **You are using `hostPort`**: When you bind a Pod to a `hostPort`, there are a limited number of places that pod can be scheduled. In most cases, `hostPort` is unnecessary; try using a Service object to expose your Pod. If you do require
`hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
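
As a sketch of why `hostPort` constrains scheduling, consider a hypothetical pod spec like the following (the name, image, and port numbers are illustrative, not taken from this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example    # hypothetical name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080    # claims port 8080 on the node itself
```

Because each node has only one port 8080 to give out, at most one replica of this pod can run per node; exposing the container through a Service instead avoids that limit.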

#### My pod stays waiting

If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine. Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct
* Have you pushed the image to the repository?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.

#### My pod is crashing or otherwise unhealthy

@@ -117,13 +117,13 @@ If your container has previously crashed, you can access the previous container'

$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```

Alternatively, you can run commands inside that container with `exec`:

```console
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```

Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.

As an example, to look at the logs from a running Cassandra pod, you might run

@@ -141,7 +141,7 @@ feature request on GitHub describing your use case and why these tools are insuf

Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.

You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication controller.

### Debugging Services

@@ -183,10 +183,10 @@ $ kubectl get pods --selector=name=nginx,type=frontend

to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.

If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't have the right ports exposed. If your service has a `containerPort` specified, but the Pods that are selected don't have that port listed, then they won't be added to the endpoints list.

Verify that the pod's `containerPort` matches up with the Service's `containerPort`
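
As an illustrative sketch (names and port numbers are hypothetical), the port the pod exposes and the port the Service targets need to line up; in the v1 API the Service side of this is spelled `targetPort`:

```yaml
# Pod (fragment): the container must list the port...
ports:
  - containerPort: 9376
---
# Service (fragment): ...and the Service must target that same port.
spec:
  selector:
    name: nginx
    type: frontend
  ports:
    - port: 80
      targetPort: 9376
```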

#### Network traffic is not forwarded

@@ -197,7 +197,7 @@ There are three things to
check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.

#### More information

@@ -113,9 +113,9 @@ This hook is called immediately before a container is terminated. This event h

A single parameter named `reason` is passed to the handler which contains the reason for termination. Currently the valid values for reason are:

* `Delete` - indicating an API call to delete the pod containing this container.
* `Health` - indicating that a health check of the container failed.
* `Dependency` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include the pod infra container crashing, or a persistent disk failing for a container that mounts PD.

Eventually, user specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137).

@@ -131,7 +131,7 @@ For hooks which have parameters, these parameters are passed to the event handle

Hook delivery is "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this correctly.

We expect double delivery to be rare, but in some cases if the `kubelet` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up.

Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend.

@@ -39,7 +39,7 @@ namespace using the [downward API](../downward-api.md).

## Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started guides](../../../docs/getting-started-guides/) for installation instructions for your platform.

## Step One: Create the pod

@@ -34,21 +34,21 @@ Documentation for other releases can be found at

# Kubernetes User Guide: Managing Applications: Application Introspection and Debugging

Once your application is running, you'll inevitably need to debug problems with it. Earlier we described how you can use `kubectl get pods` to retrieve simple status information about your pods. But there are a number of ways to get even more information about your application.

**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->

- [Kubernetes User Guide: Managing Applications: Application Introspection and Debugging](#kubernetes-user-guide-managing-applications-application-introspection-and-debugging)
  - [Using `kubectl describe pod` to fetch details about pods](#using-kubectl-describe-pod-to-fetch-details-about-pods)
  - [Example: debugging Pending Pods](#example-debugging-pending-pods)
  - [Example: debugging a down/unreachable node](#example-debugging-a-downunreachable-node)
  - [What's next?](#whats-next)

<!-- END MUNGE: GENERATED_TOC -->

## Using `kubectl describe pod` to fetch details about pods

For this example we'll use a ReplicationController to create two pods, similar to the earlier example.

@@ -87,7 +87,7 @@ my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```

We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:

```console
$ kubectl describe pod my-nginx-gy1ij

@@ -150,7 +150,7 @@ my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```

To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:

```console
$ kubectl describe pod my-nginx-9unp9

@@ -177,11 +177,11 @@ Events:
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```

Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."

To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)

In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod` -- essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata, free of the label restrictions, that is used internally by Kubernetes system components), restart policy, ports, and volumes.

```yaml
$ kubectl get pod my-nginx-i595c -o yaml

@@ -247,7 +247,7 @@ status:

## Example: debugging a down/unreachable node

Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).

```console
$ kubectl get nodes

@@ -68,7 +68,7 @@ These are just examples; you are free to develop your own conventions.

## Syntax and character set

_Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for Kubernetes core components.

Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
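
For example, a hypothetical object's metadata carrying one prefixed (automation-owned) and one unprefixed (user-private) label key might look like:

```yaml
metadata:
  labels:
    # prefixed key: DNS-subdomain prefix + "/" + name segment
    example.com/deployed-by: ci-pipeline
    # unprefixed key: presumed private to the user
    environment: production
```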

@@ -85,8 +85,8 @@ kube-system <none> Active
```

Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system

You can also get the summary of a specific namespace using:

@@ -121,14 +121,14 @@ a *Namespace*.

See [Admission control: Limit Range](../design/admission_control_limit_range.md)

A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and can not be used for new objects

See the [design doc](../design/namespaces.md#phases) for more details.

### Creating a new namespace

To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:

```yaml
apiVersion: v1

@@ -139,7 +139,7 @@ metadata:

Note that the name of your namespace must be a DNS compatible label.

More information on the `finalizers` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).

Then run:

@@ -149,7 +149,7 @@ $ kubectl create -f ./my-namespace.yaml

### Setting the namespace for a request

To temporarily set the namespace for a request, use the `--namespace` flag.

For example:

@@ -185,13 +185,13 @@ $ kubectl delete namespaces <insert-some-namespace-name>

**WARNING, this deletes _everything_ under the namespace!**

This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.

## Namespaces and DNS

When you create a [Service](services.md), it creates a corresponding [DNS entry](../admin/dns.md). This entry is of the form `<service-name>.<namespace-name>.cluster.local`, which means that if a container just uses `<service-name>` it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).

@@ -45,11 +45,11 @@ See [Persistent Storage design document](../../design/persistent-storage.md) for

A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.

PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. `HostPath` was included for ease of development and testing. You'll create a local `HostPath` for this example.

> IMPORTANT! For `HostPath` to work, you will need to run a single node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.
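
A minimal `HostPath` PersistentVolume of the kind this example builds might be sketched as follows (the name, capacity, and path are illustrative assumptions, not the example's actual values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001              # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data01       # a directory on the (single) node
```

A claim requesting 10Gi or less with a matching access mode could then bind to this volume.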

@@ -124,7 +124,7 @@ I love Kubernetes storage!
```

Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join [`#google-containers`](https://botbot.me/freenode/google-containers/) on IRC and ask!

Enjoy!

@@ -38,7 +38,7 @@ Following this example, you will create a [secret](../secrets.md) and a [pod](..

## Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started guides](../../../docs/getting-started-guides/) for installation instructions for your platform.

## Step One: Create the secret

@@ -47,7 +47,7 @@ is *not* opened by default.

Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1).

You can add a firewall with the `gcloud` command line tool:

```console
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>

@@ -85,7 +85,7 @@ $ cd kubernetes
$ kubectl create -f ./replication.yaml
```

Where `replication.yaml` contains:

```yaml
apiVersion: v1

@@ -174,7 +174,7 @@ Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike

preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create a PD using `gcloud` or the GCE API or UI
before you can use it__
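
Once the PD exists, mounting it into a pod looks roughly like this (the disk name, container name, and mount path are hypothetical):

```yaml
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      gcePersistentDisk:
        pdName: my-data-disk   # must already exist in GCE
        fsType: ext4
```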

There are some restrictions when using a `gcePersistentDisk`:

@@ -230,7 +230,7 @@ volume are preserved and the volume is merely unmounted. This means that an

EBS volume can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create an EBS volume using `aws ec2 create-volume` or
the AWS API before you can use it__
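
Referencing a pre-created EBS volume from a pod might be sketched as (a `volumes` fragment; the volume name is hypothetical and `<volume-id>` stands in for the real ID returned by `aws ec2 create-volume`):

```yaml
volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: <volume-id>   # ID of a pre-created EBS volume
      fsType: ext4
```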

There are some restrictions when using an `awsElasticBlockStore` volume: