Drain pods created from ReplicaSets

Marc Lough
2016-03-31 19:50:09 +01:00
committed by Marc Lough
parent f4473af950
commit fdf409861a
5 changed files with 75 additions and 20 deletions


@@ -23,7 +23,8 @@ without \-\-ignore\-daemonsets, and regardless it will not delete any
DaemonSet\-managed pods, because those pods would be immediately replaced by the
DaemonSet controller, which ignores unschedulable markings. If there are any
pods that are neither mirror pods nor managed\-\-by ReplicationController,
-DaemonSet or Job\-\-, then drain will not delete any pods unless you use \-\-force.
+ReplicaSet, DaemonSet or Job\-\-, then drain will not delete any pods unless you
+use \-\-force.
.PP
When you are ready to put the node back into service, use kubectl uncordon, which
@@ -33,7 +34,7 @@ will make the node schedulable again.
.SH OPTIONS
.PP
\fB\-\-force\fP=false
-Continue even if there are pods not managed by a ReplicationController, Job, or DaemonSet.
+Continue even if there are pods not managed by a ReplicationController, ReplicaSet, Job, or DaemonSet.
.PP
\fB\-\-grace\-period\fP=\-1
@@ -147,10 +148,10 @@ will make the node schedulable again.
.RS
.nf
-# Drain node "foo", even if there are pods not managed by a ReplicationController, Job, or DaemonSet on it.
+# Drain node "foo", even if there are pods not managed by a ReplicationController, ReplicaSet, Job, or DaemonSet on it.
$ kubectl drain foo \-\-force
-# As above, but abort if there are pods not managed by a ReplicationController, Job, or DaemonSet, and use a grace period of 15 minutes.
+# As above, but abort if there are pods not managed by a ReplicationController, ReplicaSet, Job, or DaemonSet, and use a grace period of 15 minutes.
$ kubectl drain foo \-\-grace\-period=900
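
Taken together, the documented flow is: drain the node, perform the maintenance, then uncordon it so it becomes schedulable again. A minimal sketch of that flow, assuming a node named "foo" as in the examples above:

# Evict or delete the pods on node "foo", continuing even if some pods are not
# managed by a ReplicationController, ReplicaSet, Job, or DaemonSet, and allowing
# up to 15 minutes for graceful termination.
$ kubectl drain foo --force --grace-period=900

# ...perform the node maintenance...

# Make node "foo" schedulable again once maintenance is finished.
$ kubectl uncordon foo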