kubernetes/pkg/kubelet/status
Clayton Coleman ad3d8949f0
kubelet: Preserve existing container status when pod terminated
The kubelet must not allow a container that was reported failed in a
restartPolicy=Never pod to be reported to the apiserver as success.
If a client deletes a restartPolicy=Never pod, dispatchWork and the
status manager race to update the container status. When dispatchWork
(specifically podIsTerminated) returns true, it means all containers
are stopped, which means the container status is accurate. However,
the TerminatePod method then clears this status. This results in a
pod that has been reported with status.phase=Failed getting reset to
status.phase=Succeeded, which is a violation of the guarantees around
terminal phase.

Ensure the kubelet never reports that a container succeeded when it
has not run or been executed, by guarding the TerminatePod loop from
ever reporting exit code 0 in the absence of an existing container status.
2020-03-04 13:34:24 -05:00
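The guard described in the commit message can be sketched as follows. This is a minimal illustration, not the actual kubelet code: the struct types are simplified stand-ins for the `k8s.io/api/core/v1` container status types, and the exit code 137 used for never-reported containers is a hypothetical marker chosen here for the example.

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes API types (the real ones in
// k8s.io/api/core/v1 carry many more fields).
type ContainerStateTerminated struct{ ExitCode int32 }

type ContainerState struct {
	Terminated *ContainerStateTerminated
}

type ContainerStatus struct {
	Name  string
	State ContainerState
}

// terminateContainers mirrors the guarded loop described above: a container
// is only given a new terminated state if it has no existing terminal
// status. A previously reported failure (non-zero exit code) is preserved,
// so a restartPolicy=Never pod cannot flip from Failed to Succeeded.
func terminateContainers(statuses []ContainerStatus) []ContainerStatus {
	for i := range statuses {
		if statuses[i].State.Terminated != nil {
			// Existing terminal status is accurate; never overwrite it,
			// and in particular never replace a failure with exit code 0.
			continue
		}
		// Container never produced a status: mark it terminated without
		// claiming a success it never earned (137 is a hypothetical
		// "killed" marker for this sketch).
		statuses[i].State.Terminated = &ContainerStateTerminated{ExitCode: 137}
	}
	return statuses
}

func main() {
	statuses := []ContainerStatus{
		{Name: "failed", State: ContainerState{Terminated: &ContainerStateTerminated{ExitCode: 1}}},
		{Name: "unreported"},
	}
	for _, s := range terminateContainers(statuses) {
		fmt.Printf("%s exit=%d\n", s.Name, s.State.Terminated.ExitCode)
	}
	// prints:
	// failed exit=1
	// unreported exit=137
}
```

The key design point is the early `continue`: termination only fills in status that is missing, never rewrites status that was already reported, which is what keeps the terminal phase stable.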
testing Fix golint failures of pkg/kubelet/status/... 2019-09-21 23:43:37 +08:00
BUILD kubelet: Preserve existing container status when pod terminated 2020-03-04 13:34:24 -05:00
generate_test.go Generate ContainersReady condition 2018-06-05 11:10:38 -07:00
generate.go Fix golint failures of pkg/kubelet/status/... 2019-09-21 23:43:37 +08:00
status_manager_test.go kubelet: Preserve existing container status when pod terminated 2020-03-04 13:34:24 -05:00
status_manager.go kubelet: Preserve existing container status when pod terminated 2020-03-04 13:34:24 -05:00