Update cluster management doc.

gmarek committed 2015-07-07 14:52:14 +02:00
parent ddda661991
commit de07cbda3a
4 changed files with 32 additions and 6 deletions


@@ -33,7 +33,9 @@ Documentation for other releases can be found at
# Cluster Management
This doc is in progress.
## Creating and configuring a Cluster
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.md) depending on your environment.
## Upgrading a cluster
@@ -73,7 +75,7 @@ If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardwa
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
-pods are replicated, upgrades can be done without special coordination.
+pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
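Before relying on this behavior, it can help to confirm that the pods on the node you plan to reboot really are covered by a replication controller with more than one replica. A minimal check, assuming a `kubectl` configured against the cluster (`$NODENAME` is a placeholder for the node in question):

```sh
# List replication controllers and their desired replica counts;
# the pods you care about should belong to an RC with more than one replica.
kubectl get rc

# Show pods together with the node each one is scheduled on, so you can
# see which pods will be affected when $NODENAME is rebooted.
kubectl get pods -o wide | grep $NODENAME
```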
If you want more control over the upgrading process, you may use the following workflow (a consolidated command sketch follows the list):
1. Mark the node to be rebooted as unschedulable:
@@ -82,7 +84,7 @@ If you want more control over the upgrading process, you may use the following w
1. Get the pods off the machine, via any of the following strategies:
1. wait for finite-duration pods to complete
1. delete pods with `kubectl delete pods $PODNAME`
-1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
1. Work on the node
1. Make the node schedulable again:
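Taken together, the workflow above might look like the sketch below. This is one possible sequence, not the only supported one; it assumes a reasonably recent `kubectl`, and `$NODENAME` / `$PODNAME` are placeholders. Scheduling is controlled through the node object's `spec.unschedulable` field.

```sh
# Mark the node unschedulable so no new pods are scheduled onto it.
kubectl patch node $NODENAME -p '{"spec":{"unschedulable":true}}'

# Inspect what is still running there and delete pods that are safe to remove;
# pods managed by a replication controller will be recreated on other nodes.
kubectl get pods -o wide | grep $NODENAME
kubectl delete pods $PODNAME

# ... reboot or otherwise work on the node ...

# Make the node schedulable again once it is back.
kubectl patch node $NODENAME -p '{"spec":{"unschedulable":false}}'
```

Newer kubectl releases wrap these steps as `kubectl cordon $NODENAME`, `kubectl drain $NODENAME`, and `kubectl uncordon $NODENAME`.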