Absolutize links that leave the docs/ tree to go anywhere other than to examples/ or back to docs/
David Oppenheimer
2015-07-20 00:25:07 -07:00
parent d414e29643
commit 50e95a031b
27 changed files with 86 additions and 86 deletions

View File

@@ -113,7 +113,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`
-[Complete file example](../../pkg/auth/authorizer/abac/example_policy_file.jsonl)
+[Complete file example](http://releases.k8s.io/HEAD/pkg/auth/authorizer/abac/example_policy_file.jsonl)
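
For context, the policy file format is simply one JSON object per line (hence the `.jsonl` extension), with no enclosing list or commas between objects. A minimal sketch, using just the two policies above:

```
{"user":"kubelet", "resource": "events"}
{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}
```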
## Plugin Development

View File

@@ -97,17 +97,17 @@ selects a node for them to run on.
Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services do run on the master VM. See:
-[kube-master-addons](../../cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
+[kube-master-addons](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
Addon objects are created in the "kube-system" namespace.
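
As an illustrative sketch (the addon name here is hypothetical), this just means each addon's objects set that namespace in their metadata, typically alongside the `kubernetes.io/cluster-service` label:

```
apiVersion: v1
kind: Service
metadata:
  name: example-addon            # hypothetical addon name
  namespace: kube-system         # all addon objects are created here
  labels:
    kubernetes.io/cluster-service: "true"   # marks the object as a cluster addon
spec:
  selector:
    k8s-app: example-addon
  ports:
    - port: 80
```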
Example addons are:
-* [DNS](../../cluster/addons/dns/) provides cluster local DNS.
-* [kube-ui](../../cluster/addons/kube-ui/) provides a graphical UI for the
+* [DNS](http://releases.k8s.io/HEAD/cluster/addons/dns/) provides cluster local DNS.
+* [kube-ui](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) provides a graphical UI for the
cluster.
-* [fluentd-elasticsearch](../../cluster/addons/fluentd-elasticsearch/) provides
-log storage. Also see the [gcp version](../../cluster/addons/fluentd-gcp/).
-* [cluster-monitoring](../../cluster/addons/cluster-monitoring/) provides
+* [fluentd-elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) provides
+log storage. Also see the [gcp version](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/).
+* [cluster-monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) provides
monitoring for the cluster.
## Node components

View File

@@ -41,7 +41,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
-Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/HEAD/cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
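
For example, a sketch of overriding that value at cluster creation time, assuming the usual `kube-up.sh` flow where `config-default.sh` reads `NUM_MINIONS` from the environment:

```
# Hypothetical example: request a 10-node cluster on GCE.
export KUBERNETES_PROVIDER=gce
export NUM_MINIONS=10
cluster/kube-up.sh
```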
@@ -82,15 +82,15 @@ These limits, however, are based on data collected from addons running on 4-node
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, along with the size of the cluster (there is one replica of each handling the entire cluster, so memory and CPU usage tend to grow proportionally with cluster size/load):
-* Heapster ([GCM/GCL backed](../../cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](../../cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](../../cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](../../cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
-* [InfluxDB and Grafana](../../cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
-* [skydns, kube2sky, and dns etcd](../../cluster/addons/dns/skydns-rc.yaml.in)
-* [Kibana](../../cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
+* Heapster ([GCM/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
+* [InfluxDB and Grafana](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/HEAD/cluster/addons/dns/skydns-rc.yaml.in)
+* [Kibana](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Scale the number of replicas for the following addons, if used, along with the size of the cluster (there are multiple replicas of each, so increasing replicas should help handle increased load; but since load per replica also increases slightly, consider increasing CPU/memory limits as well):
-* [elasticsearch](../../cluster/addons/fluentd-elasticsearch/es-controller.yaml)
+* [elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of the cluster (there is one replica per node, but CPU/memory usage increases slightly along with cluster load/size as well):
-* [FluentD with ElasticSearch Plugin](../../cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
-* [FluentD with GCP Plugin](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
+* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
+* [FluentD with GCP Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.md#troubleshooting).
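
As a concrete sketch of the first kind of change above, scaling an addon's limits means editing the `resources` stanza of its controller manifest; the values here are placeholders, not recommendations:

```
# Hypothetical fragment of a heapster-controller.yaml-style manifest
containers:
  - name: heapster
    resources:
      limits:
        cpu: 200m        # raise along with cluster size/load
        memory: 600Mi    # raise along with cluster size/load
```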

View File

@@ -33,7 +33,7 @@ Documentation for other releases can be found at
# DNS Integration with Kubernetes
-As of Kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
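
Concretely (a sketch; the service IP and domain are illustrative placeholders), the kubelets pass cluster-DNS flags, and each container then sees that IP as its nameserver:

```
# kubelet flags (illustrative values)
kubelet --cluster_dns=10.0.0.10 --cluster_domain=cluster.local ...

# resulting /etc/resolv.conf inside a container (sketch)
nameserver 10.0.0.10    # the DNS Service's IP
search cluster.local    # derived from the cluster domain
```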
@@ -68,7 +68,7 @@ time.
## For more information
-See [the docs for the DNS cluster addon](../../cluster/addons/dns/README.md).
+See [the docs for the DNS cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -55,7 +55,7 @@ to reduce downtime in case of corruption.
## Default configuration
The default setup scripts use kubelet's file-based static pods feature to run etcd in a
-[pod](../../cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
+[pod](http://releases.k8s.io/HEAD/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
be run on master VMs. The default location that kubelet scans for manifests is
`/etc/kubernetes/manifests/`.
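
As a rough sketch (not the actual `etcd.manifest`; the image and flags are illustrative), a static pod is just an ordinary pod manifest dropped into that directory, which the kubelet runs directly without going through the API server:

```
# /etc/kubernetes/manifests/etcd.yaml -- hypothetical minimal static pod
apiVersion: v1
kind: Pod
metadata:
  name: etcd-server
spec:
  hostNetwork: true
  containers:
    - name: etcd
      image: gcr.io/google_containers/etcd:2.0.9    # illustrative image/tag
      command:
        - /usr/local/bin/etcd
        - --data-dir=/var/etcd/data
```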

View File

@@ -107,7 +107,7 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
-[kubelet init file](../../cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
+[kubelet init file](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.
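
A sketch of those install steps (the release URL and file names come from the text above; the destination paths are conventional assumptions):

```
# Check whether kubelet is already installed
which kubelet || {
  # Fetch the kubelet binary linked above and make it executable
  wget https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet
  chmod +x kubelet
  sudo mv kubelet /usr/local/bin/kubelet      # destination is an assumption
}
# Install the init script and defaults file (assumed conventional locations)
sudo cp initd /etc/init.d/kubelet
sudo cp default-kubelet /etc/default/kubelet
```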
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and

View File

@@ -129,7 +129,7 @@ We should define a grains.conf key that captures more specifically what network
## Further reading
-The [cluster/saltbase](../../cluster/saltbase/) tree has more details on the current SaltStack configuration.
+The [cluster/saltbase](http://releases.k8s.io/HEAD/cluster/saltbase/) tree has more details on the current SaltStack configuration.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->