Merge pull request #11452 from thockin/docs-munge-headerlines
Munge headerlines
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Cluster Admin Guide

The cluster admin guide is for anyone creating or administering a Kubernetes cluster.

@@ -72,6 +73,7 @@ If you are modifying an existing guide which uses Salt, this document explains [
project.](salt.md).

## Upgrading a cluster

[Upgrading a cluster](cluster-management.md).

## Managing nodes
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Configuring APIserver ports

This document describes what ports the kubernetes apiserver

@@ -42,6 +43,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).

## Ports and IPs Served On

The Kubernetes API is served by the Kubernetes APIServer process. Typically,
there is one of these running on a single kubernetes-master node.
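For a quick sanity check, you can probe these ports from the master itself; the port numbers below are the defaults of this era (8080 for the localhost port, 6443 for the secure port) and may differ in your setup:

```
# served only on the master's loopback interface by default
curl http://localhost:8080/version

# list the ports the apiserver process is actually bound to
sudo netstat -tlnp | grep kube-apiserver
```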
@@ -93,6 +95,7 @@ variety of uses cases:
setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth.

## Expected changes

- Policy will limit the actions kubelets can do via the authed port.
- Scheduler and Controller-manager will use the Secure Port too. They
  will then be able to run on different machines than the apiserver.
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Admission Controllers

**Table of Contents**
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Authentication Plugins

Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.
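As an illustrative sketch (not taken from this diff): token authentication in this era was driven by a CSV file handed to the apiserver; the flag spelling and the exact columns are assumptions and should be checked against your release:

```
# tokens.csv columns are assumed to be: token,user,uid
echo 'supersecrettoken,alice,1001' > /srv/kubernetes/tokens.csv

# assumed flag; some releases spell it --token_auth_file (other required flags omitted)
kube-apiserver --token-auth-file=/srv/kubernetes/tokens.csv
```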
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Authorization Plugins

@@ -53,6 +54,7 @@ The following implementations are available, and are selected by flag:
`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.

## ABAC Mode

### Request Attributes

A request has 4 attributes that can be considered for authorization:
@@ -105,6 +107,7 @@ To permit any user to do something, write a policy with the user property unset.
A policy with an unset namespace applies regardless of namespace.

### Examples

1. Alice can do anything: `{"user":"alice"}`
2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}`
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
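As a minimal sketch of how such lines are used (the file path is a placeholder and the flag spellings are assumptions for this era), the policies go into a file with one JSON object per line, which is handed to the apiserver:

```
cat > /srv/kubernetes/abac-policy.jsonl <<'EOF'
{"user":"alice"}
{"user":"kubelet", "resource": "pods", "readonly": true}
{"user":"kubelet", "resource": "events"}
EOF

# assumed flags; other required apiserver flags omitted
kube-apiserver --authorization-mode=ABAC \
  --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
```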
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Cluster Admin Guide: Cluster Components

This document outlines the various binary components that need to run to

@@ -92,6 +93,7 @@ These controllers include:
selects a node for them to run on.

### addons

Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services do run on the master VM. See:
@@ -30,9 +30,11 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Large Cluster

## Support

At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).

## Setup

@@ -59,6 +61,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.

### Addon Resources

To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).

For example:
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Management

This doc is in progress.
@@ -30,13 +30,16 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Troubleshooting

This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit the [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster

The first thing to debug in your cluster is whether your nodes are all registered correctly.

Run

@@ -48,15 +51,18 @@ kubectl get nodes
And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state.
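For instance, a healthy three-node cluster might look roughly like this (names are placeholders and the column layout varies by release):

```
$ kubectl get nodes
NAME      LABELS                            STATUS
node-1    kubernetes.io/hostname=node-1     Ready
node-2    kubernetes.io/hostname=node-2     Ready
node-3    kubernetes.io/hostname=node-3     Ready
```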
## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)

### Master

* /var/log/kube-apiserver.log - API Server, responsible for serving the API
* /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
* /var/log/kube-controller-manager.log - Controller that manages replication controllers

### Worker Nodes

* /var/log/kubelet.log - Kubelet, responsible for running containers on the node
* /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
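For example, to read these logs (the unit name is an assumption; it varies by distribution):

```
# on a node that writes plain log files
tail -n 100 /var/log/kubelet.log

# on a systemd-based node, the same information is in the journal
journalctl -u kubelet --since "1 hour ago"
```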
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# DNS Integration with Kubernetes

As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# High Availability Kubernetes Clusters

**Table of Contents**

@@ -44,6 +45,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: GENERATED_TOC -->

## Introduction

This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up, such as
the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),

@@ -53,6 +55,7 @@ Also, at this time high availability support for Kubernetes is not continuously
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.

## Overview

Setting up a truly reliable, highly available distributed system requires a number of steps; it is akin to
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
of these steps in detail, but a summary is given here to help guide and orient the user.
@@ -69,6 +72,7 @@ Here's what the system should look like when it's finished:
Ready? Let's get started.

## Initial set-up

The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
Likewise, this setup should work whether you are running in a public or private cloud provider, or if you are running

@@ -79,6 +83,7 @@ instructions at [https://get.k8s.io](https://get.k8s.io)
describe easy installation for single-master clusters on a variety of platforms.

## Reliable nodes

On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
the ```kubelet``` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can

@@ -99,6 +104,7 @@ On systemd systems you ```systemctl enable kubelet``` and ```systemctl enable do
## Establishing a redundant, reliable data storage layer

The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
done.

@@ -110,6 +116,7 @@ size of the cluster from three to five nodes. If that is still insufficient, yo
[even more redundancy to your storage layer](#even-more-reliable-storage).

### Clustering etcd

The full details of clustering etcd are beyond the scope of this document; lots of details are given on the
[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
a simple cluster set up, using etcd's built-in discovery to build our cluster.

@@ -131,6 +138,7 @@ for ```${NODE_IP}``` on each machine.
#### Validating your cluster

Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
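For example (etcdctl v2 syntax assumed), you can list the members and check cluster health from any of the three nodes:

```
etcdctl member list
etcdctl cluster-health
```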
@@ -147,6 +155,7 @@ You can also validate that this is working with ```etcdctl set foo bar``` on one
on a different node.
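Concretely, that cross-node check looks like this (etcdctl v2 syntax assumed):

```
# on the first master
etcdctl set foo bar

# on a different master; this should print "bar"
etcdctl get foo
```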
### Even more reliable storage

Of course, if you are interested in increased data reliability, there are further options which make the place where etcd
stores its data even more reliable than regular disks (belts *and* suspenders, ftw!).
@@ -163,9 +172,11 @@ for each node. Throughout these instructions, we assume that this storage is mo

## Replicated API Servers

Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.

### Installing configuration files

First you need to create the initial log file, so that Docker mounts a file instead of a directory:
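A minimal sketch of that step, assuming the log path matches the master log location listed in the troubleshooting section above:

```
touch /var/log/kube-apiserver.log
```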
@@ -184,12 +195,14 @@ Next, you need to create a ```/srv/kubernetes/``` directory on each node. This
The easiest way to create this directory may be to copy it from the master node of a working cluster, or you can manually generate these files yourself.

### Starting the API Server

Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node.

The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified
in the file.
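Sketching that step with the locations named above (the verification command is an assumption, not part of the original guide):

```
cp kube-apiserver.yaml /etc/kubernetes/manifests/

# after a short wait, the kubelet should have started the container
docker ps | grep kube-apiserver
```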
### Load balancing

At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
up a load balancer will depend on the specifics of your platform, for example instructions for the Google Cloud

@@ -204,6 +217,7 @@ For external users of the API (e.g. the ```kubectl``` command line interface, co
them to talk to the external load balancer's IP address.
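For example, a client could be repointed at the balanced endpoint roughly like this (the names and address are placeholders, and the exact kubeconfig flags should be checked against your kubectl version):

```
kubectl config set-cluster ha-cluster --server=https://LOAD_BALANCER_IP --insecure-skip-tls-verify=true
kubectl config set-context ha --cluster=ha-cluster
kubectl config use-context ha
```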
## Master elected components

So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform

@@ -227,6 +241,7 @@ by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kub
directory.

### Running the podmaster

Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/```

As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```.
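A short sketch of that step (the verification command is an assumption):

```
cp podmaster.yaml /etc/kubernetes/manifests/

# on each master, check whether this node currently won the election
docker ps | grep -E "scheduler|controller-manager"
```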
@@ -237,6 +252,7 @@ the kubelet will restart them. If any of these nodes fail, the process will mov
node.

## Conclusion

At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).

If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
@@ -245,7 +261,7 @@ restarting the kubelets on each node.
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the ```--apiserver``` flag to your replicated endpoint.

-##Vagrant up!
+## Vagrant up!

We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-apiserver

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-controller-manager

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-proxy

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-scheduler

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kubelet
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Considerations for running multiple Kubernetes clusters

You may want to set up multiple kubernetes clusters, both to

@@ -65,6 +66,7 @@ Reasons to have multiple clusters include:
- test clusters to canary new Kubernetes releases or other cluster software.

## Selecting the right number of clusters

The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
By contrast, the number of nodes in a cluster and the number of pods in a service may change frequently according to
load and growth.
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Networking in Kubernetes

**Table of Contents**

@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Node

**Table of Contents**
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes OpenVSwitch GRE/VxLAN networking

This document describes how OpenVSwitch is used to set up networking between pods across nodes.
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Administering Resource Quotas

Kubernetes can limit both the number of objects created in a namespace, and the

@@ -49,7 +50,8 @@ Resource Quota is enforced in a particular namespace when there is a
See [ResourceQuota design doc](../design/admission_control_resource_quota.md) for more information.

## Object Count Quota

The number of objects of a given type can be restricted. The following types
are supported:

@@ -65,7 +67,8 @@ are supported:
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.

## Compute Resource Quota

The total sum of compute resources requested in a namespace can be restricted. The following resource types
are supported:
@@ -83,6 +86,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming
This means the resource must have a fully-qualified name (e.g. mycompany.org/shinynewresource)

## Viewing and Setting Quotas

Kubectl supports creating, updating, and viewing quotas:
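For instance (an illustrative session; the manifest name is a placeholder and `quota` is assumed to be a valid abbreviation for `resourcequota` in your release):

```
kubectl create -f ./quota.yaml
kubectl get quota
kubectl describe quota myquota
```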
@@ -123,6 +127,7 @@ services 3 5

## Quota and Cluster Capacity

Resource Quota objects are independent of the Cluster Capacity. They are
expressed in absolute units.

@@ -136,6 +141,7 @@ writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace.

## Example

See a [detailed example for how to use resource quota](../user-guide/resourcequota/).
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Using Salt to configure Kubernetes

The Kubernetes cluster can be configured using Salt.
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Admin Guide to Service Accounts

*This is a Cluster Administrator guide to service accounts. It assumes knowledge of

@@ -57,7 +58,7 @@ for a number of reasons:
accounts for components of that system. Because service accounts can be created
ad-hoc and have namespaced names, such config is portable.

## Service account automation

Three separate components cooperate to implement the automation around service accounts:
- A Service account admission controller
@@ -78,6 +79,7 @@ It acts synchronously to modify pods as they are created or updated. When this p
6. It adds a `volumeSource` to each container of the pod mounted at `/var/run/secrets/kubernetes.io/serviceaccount`.
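The effect of that mount can be seen from a shell inside any container of such a pod:

```
ls /var/run/secrets/kubernetes.io/serviceaccount/
cat /var/run/secrets/kubernetes.io/serviceaccount/token
```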
### Token Controller

TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
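One way to watch this behavior (names are illustrative) is to list service accounts and the token Secrets generated for them:

```
kubectl get serviceaccounts
kubectl get secrets   # a ServiceAccountToken secret should appear for each service account
```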