move admin related docs into docs/admin
@@ -62,7 +62,7 @@ Definition of columns:
 - **OS** is the base operating system of the nodes.
 - **Config. Mgmt** is the configuration management system that helps install and maintain kubernetes software on the
 nodes.
-- **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type
+- **Networking** is what implements the [networking model](../../docs/admin/networking.md). Those with networking type
 _none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
 - **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
 tests for supporting the API and base features of Kubernetes v1.0.0.
@@ -27,7 +27,7 @@ Getting started on [Fedora](http://fedoraproject.org)
 
 This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
 
-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
 
 The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
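The master/node split described in the hunk above is wired together through a shared settings file under /etc/kubernetes. A minimal sketch only — the file layout and variable names below are assumed from the Fedora packaging of that era and do not appear in this diff:

```sh
# /etc/kubernetes/config -- settings shared by every kubernetes service
# on both fed-master and fed-node (names assumed, not from this diff).
KUBE_LOGTOSTDERR="--logtostderr=true"           # log to stderr rather than files
KUBE_LOG_LEVEL="--v=0"                          # verbosity of log messages
KUBE_ALLOW_PRIV="--allow_privileged=false"      # disallow privileged containers
KUBE_MASTER="--master=http://fed-master:8080"   # fed-master is the guide's master hostname
```

Each systemd unit then sources this file plus its own per-service file (e.g. /etc/kubernetes/kubelet), so flags live in one central place.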
@@ -35,7 +35,7 @@ Here is the same information in a picture which shows how the pods might be plac
 
 This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
-[cluster DNS service](../../docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
+[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
 
 To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
 ```
@@ -82,7 +82,7 @@ on how flags are set on various components.
 have identical configurations.
 
 ### Network
-Kubernetes has a distinctive [networking model](../networking.md).
+Kubernetes has a distinctive [networking model](../admin/networking.md).
 
 Kubernetes allocates an IP address to each pod. When creating a cluster, you
 need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
@@ -252,7 +252,7 @@ The admin user (and any users) need:
 
 Your tokens and passwords need to be stored in a file for the apiserver
 to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
-The format for this file is described in the [authentication documentation](../authentication.md).
+The format for this file is described in the [authentication documentation](../admin/authentication.md).
 
 For distributing credentials to clients, the convention in Kubernetes is to put the credentials
 into a [kubeconfig file](../kubeconfig-file.md).
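For reference, the token file named in the hunk above is plain CSV with one line per user, in the form `token,user,uid` (per the authentication doc the hunk links to). A minimal sketch with invented placeholder values:

```
Abcd1234InventedToken,admin,1
Wxyz5678InventedToken,alice,2
```

A client presents its token as an HTTP bearer token, and the apiserver matches it against this file, which it reads at startup.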
@@ -378,7 +378,7 @@ Arguments to consider:
 - `--docker-root=`
 - `--root-dir=`
 - `--configure-cbr0=` (described above)
-- `--register-node` (described in [Node](../node.md) documentation.)
+- `--register-node` (described in [Node](../admin/node.md) documentation.)
 
 ### kube-proxy
 
@@ -398,7 +398,7 @@ Each node needs to be allocated its own CIDR range for pod networking.
 Call this `NODE_X_POD_CIDR`.
 
 A bridge called `cbr0` needs to be created on each node. The bridge is explained
-further in the [networking documentation](../networking.md). The bridge itself
+further in the [networking documentation](../admin/networking.md). The bridge itself
 needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
 this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
 then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
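The convention in this hunk — first IP of the node's pod CIDR, keeping the prefix length — can be sketched in shell, using the example CIDR from the text:

```sh
# Derive the cbr0 bridge address from a node's pod CIDR by taking the
# first usable IP and keeping the prefix length. This simple string
# arithmetic assumes the CIDR base ends in .0, as in the text's example.
NODE_X_POD_CIDR="10.0.0.0/16"
base="${NODE_X_POD_CIDR%/*}"      # 10.0.0.0
prefix="${NODE_X_POD_CIDR#*/}"    # 16
last="${base##*.}"                # final octet: 0
NODE_X_BRIDGE_ADDR="${base%.*}.$((last + 1))/${prefix}"
echo "$NODE_X_BRIDGE_ADDR"        # 10.0.0.1/16
```

The same value is then assigned to `cbr0` on that node, so every pod on the node routes through an address inside its own CIDR block.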
@@ -444,7 +444,7 @@ traffic to the internet, but have no problem with them inside your GCE Project.
 ### Using Configuration Management
 The previous steps all involved "conventional" system administration techniques for setting up
 machines. You may want to use a Configuration Management system to automate the node configuration
-process. There are examples of [Saltstack](../salt.md), Ansible, Juju, and CoreOS Cloud Config in the
+process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
 various Getting Started Guides.
 
 ## Bootstrapping the Cluster
@@ -463,7 +463,7 @@ You will need to run one or more instances of etcd.
 - Alternative: run 3 or 5 etcd instances.
 - Log can be written to non-durable storage because storage is replicated.
 - run a single apiserver which connects to one of the etcd nodes.
-See [Availability](../availability.md) for more discussion on factors affecting cluster
+See [Availability](../admin/availability.md) for more discussion on factors affecting cluster
 availability.
 
 To run an etcd instance:
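The hunk is truncated at this point, so the guide's actual steps are not shown. As a hedged sketch only — flags taken from etcd's own CLI documentation, hostnames and paths invented for illustration:

```sh
# Single etcd instance on the master host, with its log and data on
# durable storage as recommended above. Not the guide's actual steps.
etcd --name kube-etcd \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379
```

The apiserver would then be pointed at the advertised client URL.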
@@ -489,7 +489,7 @@ Here are some apiserver flags you may need to set:
 - `--tls-cert-file=/srv/kubernetes/server.cert`
 - `--tls-private-key-file=/srv/kubernetes/server.key`
 - `--admission-control=$RECOMMENDED_LIST`
-- See [admission controllers](../admission_controllers.md) for recommended arguments.
+- See [admission controllers](../admin/admission-controllers.md) for recommended arguments.
 - `--allow-privileged=true`, only if you trust your cluster user to run pods as root.
 
 If you are following the firewall-only security approach, then use these arguments: