Copy edits for typos

Ed Costello
2015-08-09 14:18:06 -04:00
parent 2bfa9a1f98
commit 35a5eda585
33 changed files with 42 additions and 42 deletions

View File

@@ -1,6 +1,6 @@
# Exec healthz server
-The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncracies of container runtime exec implemetations.
+The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncrasies of container runtime exec implementations.
## Examples:

View File

@@ -1,5 +1,5 @@
# Collecting log files from within containers with Fluentd and sending them to Elasticsearch.
-*Note that this only works for clusters with an Elastisearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*
+*Note that this only works for clusters with an ElasticSearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*
This directory contains the source files needed to make a Docker image that collects log files from arbitrary files within a container using [Fluentd](http://www.fluentd.org/) and sends them to the cluster's Elasticsearch service.
The image is designed to be used as a sidecar container as part of a pod.

View File

@@ -34,7 +34,7 @@ In this case, if there are problems launching a replacement scheduler process th
##### Command Line Arguments
- `--ha` is required to enable scheduler HA and multi-scheduler leader election.
-- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identicial across schedulers.
+- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identical across schedulers.
If you have HDFS installed on your slaves then you can specify HDFS URI locations for the binaries:
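The full example is not shown in this hunk; a sketch of what such flags might look like follows, where the namenode host and HDFS paths are hypothetical, not taken from this commit:

```
# Hypothetical HDFS locations -- substitute your own namenode and paths
--km_path=hdfs://namenode.example.com:8020/kubernetes/bin/km
# or, equivalently:
--executor_path=hdfs://namenode.example.com:8020/kubernetes/bin/executor
--proxy_path=hdfs://namenode.example.com:8020/kubernetes/bin/proxy
```

Because these URIs are fetched by every scheduler instance, they must resolve to the same binaries everywhere, which is why non-local, identical URIs are required.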

View File

@@ -25,7 +25,7 @@ Looks open enough :).
1. Now, you can start this pod, like so `kubectl create -f contrib/prometheus/prometheus-all.json`. This ReplicationController will maintain both prometheus, the server, as well as promdash, the visualization tool. You can then configure promdash, and next time you restart the pod - you're configuration will be remain (since the promdash directory was mounted as a local docker volume).
-1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (locahost:9090)to as a promdash server, and create a dashboard according to the promdash directions.
+1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (localhost:9090)to as a promdash server, and create a dashboard according to the promdash directions.
## Prometheus
@@ -52,14 +52,14 @@ This is a v1 api based, containerized prometheus ReplicationController, which sc
1. Use kubectl to handle auth & proxy the kubernetes API locally, emulating the old KUBERNETES_RO service.
-1. The list of services to be monitored is passed as a command line aguments in
+1. The list of services to be monitored is passed as a command line arguments in
the yaml file.
1. The startup scripts assumes that each service T will have
2 environment variables set ```T_SERVICE_HOST``` and ```T_SERVICE_PORT```
1. Each can be configured manually in yaml file if you want to monitor something
-that is not a regular Kubernetes service. For example, you can add comma delimted
+that is not a regular Kubernetes service. For example, you can add comma delimited
endpoints which can be scraped like so...
```
- -t
```
@@ -77,7 +77,7 @@ at port 9090.
# TODO
- We should publish this image into the kube/ namespace.
-- Possibly use postgre or mysql as a promdash database.
+- Possibly use Postgres or mysql as a promdash database.
- stop using kubectl to make a local proxy faking the old RO port and build in
real auth capabilities.
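The service discovery convention mentioned above (each monitored service T exposing `T_SERVICE_HOST` and `T_SERVICE_PORT`) can be sketched in shell; the service name and values here are made up for illustration:

```shell
# Kubernetes injects these two variables for every service;
# the values below are illustrative, not real cluster addresses.
MYSVC_SERVICE_HOST=10.0.0.12
MYSVC_SERVICE_PORT=9100
# A startup script can assemble a scrape target from them:
target="${MYSVC_SERVICE_HOST}:${MYSVC_SERVICE_PORT}"
echo "$target"
```

Anything that follows this naming pattern, whether or not it is a real Kubernetes service, can be picked up the same way, which is why manually configured endpoints also work.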

View File

@@ -191,7 +191,7 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;'
### Troubleshooting:
- If you can curl or netcat the endpoint from the pod (with kubectl exec) and not from the node, you have not specified hostport and containerport.
- If you can hit the ips from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network.
-- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not runing.
+- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not running.
1. Use ps in the pod
2. sudo restart haproxy in the pod
3. cat /etc/haproxy/haproxy.cfg in the pod
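The three checks above could be run like this from outside the pod; the pod name is a hypothetical placeholder:

```
# 1. Is haproxy (or the loadbalancer script) running inside the pod?
kubectl exec <pod-name> -- ps aux
# 2. Restart haproxy inside the pod
kubectl exec <pod-name> -- sudo restart haproxy
# 3. Inspect the haproxy config the script generated
kubectl exec <pod-name> -- cat /etc/haproxy/haproxy.cfg
```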