Make docs links be relative so we can version them
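A change like this is mechanical enough to script. As a rough sketch only, assuming GNU sed and a hard-coded prefix (a real conversion would derive `./`, `../`, or `../../` from each file's depth in the tree):

```sh
# Sketch: rewrite absolute docs.k8s.io links to relative ones across the docs tree.
# The "./" replacement prefix is an assumption; the right prefix depends on where
# each markdown file lives relative to its link target.
find docs -name '*.md' -print0 |
  xargs -0 sed -i 's|http://docs.k8s.io/|./|g'
```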
@@ -120,7 +120,7 @@ then you need `R + U` clusters. If it is not (e.g you want to ensure low latenc
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md)
+you may need even more clusters. Our [roadmap](./roadmap.md)
calls for a maximum of 100 node clusters at v1.0 and a maximum of 1000 node clusters in the middle of 2015.

## Working with multiple clusters
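To make the `R + U` versus `R * U` arithmetic in the hunk above concrete, a quick sketch with illustrative numbers:

```sh
# Illustrative numbers only: R regions, U extra clusters of failure headroom.
R=3; U=2
echo "failover across regions tolerated: $((R + U)) clusters"  # R + U = 5
echo "each region must survive alone:    $((R * U)) clusters"  # U in each of R regions = 6
```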
@@ -71,7 +71,7 @@ service would also consume the secrets associated with the MySQL service.

### Use-Case: Secrets associated with service accounts

-[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a
+[Service Accounts](./service_accounts.md) are proposed as a
mechanism to decouple capabilities and security contexts from individual human users. A
`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is
associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and
@@ -241,7 +241,7 @@ memory overcommit on the node.

#### Secret data on the node: isolation

-Every pod will have a [security context](http://docs.k8s.io/design/security_context.md).
+Every pod will have a [security context](./security_context.md).
Secret data on the node should be isolated according to the security context of the container. The
Kubelet volume plugin API will be changed so that a volume plugin receives the security context of
a volume along with the volume spec. This will allow volume plugins to implement setting the
@@ -253,7 +253,7 @@ Several proposals / upstream patches are notable as background for this proposal

1. [Docker vault proposal](https://github.com/docker/docker/issues/10310)
2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277)
-3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md)
+3. [Kubernetes service account proposal](./service_accounts.md)
4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075)
5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697)

@@ -63,14 +63,14 @@ Automated process users fall into the following categories:
A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to are limited by that *service account*.


-1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md)
+1. The API should authenticate and authorize user actions [authn and authz](./access.md)
2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API.
3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd)
-4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md)
+4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](./service_accounts.md)
   1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption
   2. If the users who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk
   3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action
-5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
+5. When container processes run on the cluster, they should run in a [security context](./security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
   1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID
   2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID
   3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions
@@ -79,7 +79,7 @@ A pod runs in a *security context* under a *service account* that is defined by
   6. Developers may need to ensure their images work within higher security requirements specified by administrators
   7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
   8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
-6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run
+6. Developers should be able to define [secrets](./secrets.md) that are automatically added to the containers when pods are run
   1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
      1. An SSH private key for git cloning remote data
      2. A client certificate for accessing a remote system
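Since the hunk above describes secrets as files injected into the container, here is a hedged sketch of what that looks like from inside a running container; the mount path and key name are assumptions for illustration:

```sh
# Assumed paths: a secret volume mounted at /etc/secret-volume with one key.
ls /etc/secret-volume/
# e.g. ssh-privatekey
head -n 1 /etc/secret-volume/ssh-privatekey
# e.g. -----BEGIN RSA PRIVATE KEY-----
```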
@@ -93,12 +93,12 @@ A pod runs in a *security context* under a *service account* that is defined by

### Related design discussion

-* Authorization and authentication http://docs.k8s.io/design/access.md
-* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030
-* Docker secrets https://github.com/docker/docker/pull/6697
-* Docker vault https://github.com/docker/docker/issues/10310
-* Service Accounts: http://docs.k8s.io/design/service_accounts.md
-* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/4126
+* [Authorization and authentication](./access.md)
+* [Secret distribution via files](https://github.com/GoogleCloudPlatform/kubernetes/pull/2030)
+* [Docker secrets](https://github.com/docker/docker/pull/6697)
+* [Docker vault](https://github.com/docker/docker/issues/10310)
+* [Service Accounts](./service_accounts.md)
+* [Secret volumes](https://github.com/GoogleCloudPlatform/kubernetes/pull/4126)

## Specific Design Points

@@ -11,7 +11,7 @@ You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the minion and run kubelet, proxy, cadvisor and docker.
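Because the hunk above notes these services are systemd-managed, a hedged sketch of bringing up the master-side units on centos-master (unit names follow the service list in the paragraph; verify against the full guide):

```sh
# On centos-master: restart, enable at boot, and check each master service.
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICE
  systemctl enable $SERVICE
  systemctl status $SERVICE
done
```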
@@ -15,7 +15,7 @@ CloudStack is a software to build public and private clouds based on hardware vi
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.

This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
-It is completely automated: a single playbook deploys Kubernetes based on the coreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md).
+It is completely automated: a single playbook deploys Kubernetes based on the coreOS [instructions](./coreos/coreos_multinode_cluster.md).

This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook creates an ssh key pair, creates a security group and associated rules, and finally starts coreOS instances configured via cloud-init.
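As a hedged usage sketch of the playbook workflow described above (the playbook and inventory file names are placeholders; the repository's README has the real entry point):

```sh
# Clone the playbook repo and run it against your CloudStack environment.
git clone https://github.com/runseb/ansible-kubernetes.git
cd ansible-kubernetes
ansible-playbook -i inventory k8s.yml   # placeholder names
```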
@@ -213,7 +213,7 @@ Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.

-These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml)
+These are based on the work found here: [master.yml](./cloud-configs/master.yaml), [node.yml](./cloud-configs/node.yaml)

To make the setup work, you need to replace a few placeholders:
@@ -33,7 +33,7 @@ docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```

-This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components.
+This actually runs the kubelet, which in turn runs a [pod](../pods.md) that contains the other master components.

### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
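A hedged sketch of the proxy invocation implied by the note above; the image tag mirrors the kubelet command in this hunk, but the exact flags are assumptions:

```sh
# Run the service proxy; --privileged is needed for iptables manipulation.
docker run -d --net=host --privileged \
  gcr.io/google_containers/hyperkube:v0.18.2 \
  /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```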
@@ -13,7 +13,7 @@ Getting started on [Fedora](http://fedoraproject.org)

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node, will be the node and run kubelet, proxy and docker.
@@ -85,8 +85,8 @@ cluster/kubectl.sh get replicationcontrollers

### Running a user defined pod

-Note the difference between a [container](http://docs.k8s.io/containers.md)
-and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+Note the difference between a [container](../containers.md)
+and a [pod](../pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
However, you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).

You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
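A hedged sketch of the `docker exec` check suggested in the hunk above (the container id is a placeholder, and this assumes `curl` exists in the image):

```sh
# The start page is not reachable on localhost, so curl from inside the container.
docker ps | grep nginx              # note the container id
docker exec <container-id> curl -s http://localhost
```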
@@ -21,7 +21,7 @@ done automatically based on statistical analysis and thresholds.

* This proposal is for horizontal scaling only. Vertical scaling will be handled in [issue 2072](https://github.com/GoogleCloudPlatform/kubernetes/issues/2072)
* `ReplicationControllers` will not know about the auto-scaler; they are the target of the auto-scaler. The `ReplicationController` responsibilities are
-constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](http://docs.k8s.io/replication-controller.md#responsibilities-of-the-replication-controller)
+constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](../replication-controller.md#responsibilities-of-the-replication-controller)
* Auto-scalers will be loosely coupled with data gathering components in order to allow a wide variety of input sources
* Auto-scalable resources will support a scale verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629))
such that the auto-scaler does not directly manipulate the underlying resource.
@@ -42,7 +42,7 @@ applications will expose one or more network endpoints for clients to connect to
balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to
server traffic for applications. This is the primary, but not sole, source of data for making decisions.

-Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-vips)
+Within Kubernetes a [kube proxy](../services.md#ips-and-vips)
running on each node directs service requests to the underlying implementation.

While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage
@@ -225,7 +225,7 @@ or down as appropriate. In the future this may be more configurable.

### Interactions with a deployment

-In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](http://docs.k8s.io/replication-controller.md#rolling-updates)
+In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](../replication-controller.md#rolling-updates)
there will be multiple replication controllers, with one scaling up and another scaling down. This means that an
auto-scaler must be aware of the entire set of capacity that backs a service so it does not fight with the deployer. `AutoScalerSpec.MonitorSelector`
is what provides this ability. By using a selector that spans the entire service the auto-scaler can monitor capacity
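To make the "selector that spans the entire service" idea concrete, a hedged sketch with assumed label names: during a rolling deployment there might be two replication controllers whose pods differ only by a version label, so a monitor selector omits that label:

```sh
# Assumed labels: name=frontend on all pods, version=v1/v2 per controller.
kubectl get pods -l name=frontend              # all capacity backing the service
kubectl get pods -l name=frontend,version=v2   # only the new controller's pods
```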
@@ -102,7 +102,7 @@ scp host2:/path/to/home2/.kube/config path/to/other/.kube/config

export KUBECONFIG=path/to/other/.kube/config
```
-Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](http://docs.k8s.io/kubeconfig-file.md).
+Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](./kubeconfig-file.md).
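As a hedged illustration of the merging rules referenced above, `KUBECONFIG` accepts a colon-separated list of files, and `kubectl config view` shows the merged result (paths reuse the example's placeholders):

```sh
# Merge the default config with the copied one; earlier files win on conflicts.
export KUBECONFIG=$HOME/.kube/config:path/to/other/.kube/config
kubectl config view   # flattened view of the merged configuration
```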