Merge pull request #11583 from thockin/docs-tick-tick-tick
Collected markdown fixes around syntax.
@@ -158,7 +158,7 @@ Yes.
For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):
```shell
```
--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```
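For context, this flag is passed on the kube-apiserver command line at startup. A minimal sketch of such an invocation, where everything other than the `--admission_control` value above is an illustrative placeholder:

```shell
# Illustrative invocation; only the --admission_control value is taken from the recommendation above.
kube-apiserver \
  --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  <other apiserver flags>
```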
@@ -109,7 +109,7 @@ These keys may be leveraged by the Salt sls files to branch behavior.
In addition, a cluster may be running a Debian-based or Red Hat-based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on the operating system, using if branches like the following.
```
```jinja
{% if grains['os_family'] == 'RedHat' %}
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
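{# Illustrative continuation, not part of this diff: the else branch typically handles Debian-family systems #}
// something specific to a Debian environment (Debian, Ubuntu) where you may use apt, upstart/systemd, etc.
{% endif %}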
@@ -100,7 +100,6 @@ type ResourceQuotaList struct {
// Items is a list of ResourceQuota objects
Items []ResourceQuota `json:"items"`
}
```

## AdmissionControl plugin: ResourceQuota
@@ -103,7 +103,6 @@ Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal
```
This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries.
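The listing above is the kind of output returned by the events query; the exact invocation is not shown in this diff, but it would presumably be something like:

```console
$ kubectl get events
```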
@@ -117,7 +117,7 @@ Gather the public and private IPs for the master node:
aws ec2 describe-instances --instance-id <instance-id>
```

```
```json
{
"Reservations": [
{
@@ -131,7 +131,6 @@ aws ec2 describe-instances --instance-id <instance-id>
},
"PublicIpAddress": "54.68.97.117",
"PrivateIpAddress": "172.31.9.9",
...
```
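If you only need the two addresses, they can also be extracted with a JMESPath query instead of reading the full JSON; a sketch, assuming a single reservation and instance:

```console
$ aws ec2 describe-instances --instance-id <instance-id> \
    --query 'Reservations[0].Instances[0].[PublicIpAddress,PrivateIpAddress]' \
    --output text
```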
#### Update the node.yaml cloud-config
@@ -222,7 +221,7 @@ Gather the public IP address for the worker node.
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
```

```
```json
{
"Reservations": [
{
@@ -235,7 +234,6 @@ aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
"Name": "running"
},
"PublicIpAddress": "54.68.97.117",
...
```
Visit the public IP address in your browser to view the running pod.
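Equivalently, from a terminal (assuming the pod serves plain HTTP on the default port 80; adjust if it listens elsewhere):

```console
$ curl http://<public-ip>/
```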
@@ -165,7 +165,6 @@ $ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
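For reference, the `node.json` used above might look roughly like this; the file itself is not part of this diff, so this is a sketch with the name and label inferred from the output shown:

```json
{
    "kind": "Node",
    "apiVersion": "v1",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    }
}
```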
Please note that in the above, it only creates a representation for the node
@@ -67,7 +67,6 @@ NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```

The node-level Fluentd collector pods and the Elasticsearch pods used to ingest cluster logs and the pod for the Kibana
@@ -86,7 +85,6 @@ kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
```
Here we see that for a four-node cluster there is a `fluent-elasticsearch` pod running which gathers
@@ -137,7 +135,6 @@ KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
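Output in this form is typically printed by `kubectl cluster-info`; the invocation itself is not shown in this diff, so treat the following as a sketch:

```console
$ kubectl cluster-info
```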
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
@@ -204,7 +201,6 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
},
"tagline" : "You Know, for Search"
}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
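A sketch of such a search, assuming the Elasticsearch service is exposed as `elasticsearch-logging` in the `kube-system` namespace (the service name and query term are assumptions, not taken from this diff; the token and master address are the ones shown above):

```console
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure \
    https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=kubernetes
```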
@@ -661,13 +661,11 @@ Next, verify that kubelet has started a container for the apiserver:
```console
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```
Then try to connect to the apiserver:
```console
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
@@ -677,7 +675,6 @@ $ curl -s http://localhost:8080/api
"v1"
]
}
```
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
@@ -689,7 +686,6 @@ Otherwise, you will need to manually create node objects.
Complete this template for the scheduler pod:

```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -719,7 +715,6 @@ Complete this template for the scheduler pod:
]
}
}
```
Optionally, you may want to mount `/var/log` as well and redirect output there.
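If you do, the container would gain a `volumeMounts` entry and the pod `spec` a matching `hostPath` volume. A minimal sketch of the two additions (the volume name is a placeholder, not part of the template above; `volumeMounts` goes under the container, `volumes` under the pod `spec`):

```json
{
    "volumeMounts": [
        { "name": "logdir", "mountPath": "/var/log" }
    ],
    "volumes": [
        { "name": "logdir", "hostPath": { "path": "/var/log" } }
    ]
}
```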
@@ -746,7 +741,6 @@ Flags to consider using with controller manager.
Template for controller manager pod:

```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -802,7 +796,6 @@ Template for controller manager pod:
]
}
}
```
@@ -97,8 +97,6 @@ export NUM_MINIONS=${NUM_MINIONS:-3}
export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24

export FLANNEL_NET=172.16.0.0/16
```
The first variable, `nodes`, defines all your cluster nodes; the MASTER node comes first, and entries are separated by a blank space, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.
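For illustration, a `nodes` setting for the three machines that appear later in this guide might look like the following (the user name is a placeholder, not taken from the document):

```sh
export nodes="<user>@10.10.103.162 <user>@10.10.103.223 <user>@10.10.103.250"
```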
@@ -124,13 +122,11 @@ After all the above variable being set correctly. We can use below command in cl
The script automatically copies (via scp) binaries and config files to all the machines and starts the k8s services on them. The only thing you need to do is type the sudo password when prompted. The current machine name is shown, so you will not type the wrong password.

```console
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
```
If all goes well, you will see the message below in the console.
@@ -143,7 +139,6 @@ You can also use `kubectl` command to see if the newly created k8s is working co
For example, use `$ kubectl get nodes` to see if all your nodes are in Ready status. It may take some time for the nodes to become ready, as shown below.
```console
NAME LABELS STATUS
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
@@ -151,8 +146,6 @@ NAME LABELS STATUS
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the Kubernetes [guestbook example](../../examples/guestbook/) to build a Redis-backed cluster on the k8s.
@@ -165,7 +158,6 @@ After the previous parts, you will have a working k8s cluster, this part will te
DNS is configured in cluster/ubuntu/config-default.sh.

```sh
ENABLE_CLUSTER_DNS=true
DNS_SERVER_IP="192.168.3.10"
@@ -173,7 +165,6 @@ DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must be within the `SERVICE_CLUSTER_IP_RANGE`.
@@ -183,11 +174,9 @@ The `DNS_REPLICAS` describes how many dns pod running in the cluster.
After all the above variables have been set, just type the commands below:

```console
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods` to see that the DNS pod is running in the cluster. Done!
@@ -195,7 +195,6 @@ or:
```console
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```
So the first thing to check is whether that `Service` actually exists:
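A sketch of that first check, assuming the `Service` is named `hostnames` (inferred from the `HOSTNAMES_SERVICE_HOST` variable above):

```console
$ kubectl get service hostnames
```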
@@ -151,7 +151,6 @@ users:
  myself:
    username: admin
    password: secret
```
and a kubeconfig file that looks like this
@@ -75,7 +75,6 @@ They just know they can rely on their claim to storage and can manage its lifecy
Claims must be created in the same namespace as the pods that use them.
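For orientation, a claim manifest like the `claim-01.yaml` referenced below might look roughly like this; the claim name and requested size are illustrative assumptions, not the actual file contents:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```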
```console
$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml
$ kubectl get pvc