Update cluster management doc.
@@ -70,7 +70,7 @@ uses OpenVSwitch to set up networking between pods across
Kubernetes nodes.

If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
-project.](salt.md).
+project](salt.md).

## Managing a cluster, including upgrades
@@ -33,7 +33,9 @@ Documentation for other releases can be found at
# Cluster Management

This doc is in progress.

## Creating and configuring a Cluster

To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.md) depending on your environment.

## Upgrading a cluster
@@ -73,7 +75,7 @@ If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardwa
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
-pods are replicated, upgrades can be done without special coordination.
+pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.

If you want more control over the upgrading process, you may use the following workflow:

1. Mark the node to be rebooted as unschedulable:
@@ -82,7 +84,7 @@ If you want more control over the upgrading process, you may use the following w
1. Get the pods off the machine, via any of the following strategies:
   1. wait for finite-duration pods to complete
   1. delete pods with `kubectl delete pods $PODNAME`
-   1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+   1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
   1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
1. Work on the node
1. Make the node schedulable again:
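
As a rough, hedged sketch of steps 1 and 3, marking a node unschedulable and then schedulable again can be done with `kubectl patch` (the node name is a placeholder, and the exact field path may vary between releases):

```
# Illustrative only: keep new pods off the node during maintenance.
kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": true}}'

# ... reboot or otherwise work on the node ...

# Illustrative only: allow scheduling onto the node again.
kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": false}}'
```

Note that marking a node unschedulable does not evict pods already running on it; that is what step 2 is for.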
@@ -46,7 +46,7 @@ Documentation for other releases can be found at
- [Node Info](#node-info)
- [Node Management](#node-management)
- [Node Controller](#node-controller)
-- [Self-Registration of nodes](#self-registration-of-nodes)
+- [Self-Registration of Nodes](#self-registration-of-nodes)
- [Manual Node Administration](#manual-node-administration)
- [Node capacity](#node-capacity)
@@ -177,7 +177,7 @@ join a node to a Kubernetes cluster, you as an admin need to make sure proper se
running in the node. In the future, we plan to automatically provision some node
services.

-### Self-Registration of nodes
+### Self-Registration of Nodes

When kubelet flag `--register-node` is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
@@ -191,6 +191,16 @@ For self-registration, the kubelet is started with the following options:
Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies
its own. (In the future, we plan to limit authorization to only allow a kubelet to modify its own Node resource.)

If your cluster runs short on resources and it is running in Node self-registration mode, you can easily add more machines. If you're using GCE or GKE, this is done by resizing the Instance Group that manages your Nodes. You can do this by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` page of the [Google Cloud Console](https://console.developers.google.com), or by using the gcloud CLI:

```
gcloud preview managed-instance-groups --zone compute-zone resize my-cluster-minion-group --new-size 42
```

The Instance Group will take care of putting the appropriate image on the new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.

In other environments you may need to configure the machine yourself, and you will need to tell the Kubelet on which machine the API server is running.
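
A minimal sketch of what that can look like, assuming a kubelet of this era with an `--api-servers` flag (the flag name and the address below are illustrative and may differ in your release):

```
# Illustrative only: tell the kubelet where the API server is and let it self-register.
# Adjust flag names and the address to match your kubelet version and cluster.
kubelet --api-servers=https://<master-ip>:6443 --register-node=true
```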

#### Manual Node Administration

A cluster administrator can create and modify Node objects.
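
For example, a minimal sketch of creating a Node object by hand (the name and label below are placeholders, not taken from this document):

```
# Illustrative only: manually register a machine by creating a Node object.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Node
metadata:
  name: 10.1.2.3
  labels:
    name: my-manually-added-node
EOF
```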