(mostly) auto fixed links

Daniel Smith
2015-07-16 14:54:28 -07:00
parent 4d1bf3a0ca
commit 98eeadb66e
24 changed files with 41 additions and 44 deletions


@@ -89,8 +89,7 @@ variety of use cases:
- Scheduler and Controller-manager will use the Secure Port too. They
will then be able to run on different machines than the apiserver.
- A general mechanism will be provided for [giving credentials to
-pods](
-https://github.com/GoogleCloudPlatform/kubernetes/issues/1907).
+pods](https://github.com/GoogleCloudPlatform/kubernetes/issues/1907).
- Clients, like kubectl, will all support token-based auth, and the
Localhost will no longer be needed, and will not be the default.
However, the localhost port may continue to be an option for
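
The hunk above is from the discussion of API server ports and client authentication. As a rough sketch of the token-based auth it mentions, a kubeconfig entry for kubectl can carry a bearer token; every name, the server address, and the token below are placeholders, not values taken from this commit:

```YAML
apiVersion: v1
kind: Config
clusters:
- name: example                    # placeholder cluster name
  cluster:
    server: https://10.0.0.1:6443  # placeholder secure-port address
users:
- name: example-user               # placeholder user name
  user:
    token: REPLACE_WITH_BEARER_TOKEN
contexts:
- name: example
  context:
    cluster: example
    user: example-user
current-context: example
```

With an entry like this, kubectl authenticates against the secure port with a token instead of relying on the unauthenticated localhost port.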


@@ -23,11 +23,11 @@ certainly want the docs that go with that version.</h1>
# Kubernetes Large Cluster
## Support
-At v1.0, Kubernetes supports clusters up to 100 nodes with 30-50 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md#reliability-and-performance)).
+At v1.0, Kubernetes supports clusters up to 100 nodes with 30-50 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).
## Setup
-Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
@@ -49,7 +49,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers limit the number of VMs you can create during a given period.
### Addon Resources
-To prevent memory leaks or other resource issues in [cluster addons](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
+To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
For example:
```YAML
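# Illustrative sketch only: the names and values below are placeholders,
# not the actual addon settings (those live under cluster/addons/).
apiVersion: v1
kind: Pod
metadata:
  name: addon-example
  namespace: kube-system
spec:
  containers:
  - name: addon-example
    image: example.io/addon:1.0   # placeholder image
    resources:
      limits:
        cpu: 100m      # cap CPU so a misbehaving addon cannot starve the node
        memory: 200Mi  # cap memory to contain leaks
```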


@@ -35,7 +35,7 @@ certainly want the docs that go with that version.</h1>
## Introduction
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
-the simple [Docker based single node cluster instructions](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker.md),
+the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),
or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will