Automatic merge from submit-queue

reset resultRun on pod restart

xref https://bugzilla.redhat.com/show_bug.cgi?id=1455056

There is currently an issue where, if the pod is restarted because liveness probe failures exceeded `failureThreshold`, the failure count is not reset on the probe worker. After the pod restarts, if the liveness probe fails even once, the pod is restarted again, without honoring `failureThreshold` on the restart.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      timeoutSeconds: 1
      periodSeconds: 3
      successThreshold: 1
      failureThreshold: 5
  terminationGracePeriodSeconds: 0
```

Before this PR:

```
$ kubectl create -f busybox-probe-fail.yaml
pod "busybox" created
$ kubectl get pod -w
NAME      READY     STATUS             RESTARTS   AGE
busybox   1/1       Running            0          4s
busybox   1/1       Running            1          24s
busybox   1/1       Running            2          33s
busybox   0/1       CrashLoopBackOff   2          39s
```

After this PR:

```
$ kubectl create -f busybox-probe-fail.yaml
$ kubectl get pod -w
NAME      READY     STATUS              RESTARTS   AGE
busybox   0/1       ContainerCreating   0          2s
busybox   1/1       Running             0          4s
busybox   1/1       Running             1          27s
busybox   1/1       Running             2          45s
```

Restarts now happen at even intervals.

```release-note
Fix kubelet to reset the liveness probe failure count across pod restart boundaries
```

@derekwaynecarr
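The idea behind the fix can be sketched as follows (a minimal illustration, not the actual kubelet code; the type and field names here are hypothetical, though `resultRun` mirrors the counter named in the PR title): the probe worker remembers which container ID it last probed and clears its consecutive-result counter when the ID changes, i.e. when the container has been restarted.

```go
package main

import "fmt"

// worker is a minimal sketch of a liveness probe worker.
// Field and method names are illustrative, not the real kubelet identifiers.
type worker struct {
	containerID      string
	lastResult       bool // last probe outcome (true = success)
	resultRun        int  // consecutive occurrences of lastResult
	failureThreshold int
}

// doProbe records one probe result for the given container and reports
// whether the failure threshold has been reached.
func (w *worker) doProbe(containerID string, success bool) bool {
	// The container ID changes when the container is restarted.
	// Resetting resultRun here is the essence of the fix: without it,
	// the stale failure count carries over into the new container.
	if containerID != w.containerID {
		w.containerID = containerID
		w.resultRun = 0
	}
	if success == w.lastResult {
		w.resultRun++
	} else {
		w.lastResult = success
		w.resultRun = 1
	}
	return !success && w.resultRun >= w.failureThreshold
}

func main() {
	w := &worker{failureThreshold: 3}
	// Three consecutive failures on container "c1" reach the threshold...
	for i := 0; i < 3; i++ {
		w.doProbe("c1", false)
	}
	fmt.Println(w.resultRun) // 3
	// ...but after a restart ("c2"), the count starts over at 1.
	exceeded := w.doProbe("c2", false)
	fmt.Println(w.resultRun, exceeded) // 1 false
}
```

With the reset in place, a restarted container gets the full `failureThreshold` budget again, which is why the restarts in the "after" output above are evenly spaced.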