During HA API server hiccups we saw a prow job fail with this error:
fail [k8s.io/kubernetes@v1.19.0/test/e2e/e2e.go:284]: Sep 30 17:06:08.313: Error waiting for all pods to be running and ready: 0 / 0 pods in namespace "kube-system" are NOT in RUNNING and READY state in 10m0s
POD NODE PHASE GRACE CONDITIONS
The failure should include the last error from the API server, if it's
available:
fail [k8s.io/kubernetes@v1.19.0/test/e2e/e2e.go:284]: Oct 1 11:29:45.220: Error waiting for all pods to be running and ready: 0 / 0 pods in namespace "kube-system" are NOT in RUNNING and READY state in 10m0s
Last error: Get "https://localhost:6443/api/v1/namespaces/kube-system/replicationcontrollers": dial tcp [::1]:6443: connect: connection refused
POD NODE PHASE GRACE CONDITIONS
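
A minimal sketch of the idea: remember the most recent error returned by the
API server while polling and attach it to the timeout failure. Helper names
such as listPods and allRunning are placeholders for this illustration, not
the actual framework helpers.

package example

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPodsReady polls listPods until every pod is running, remembering
// the last error returned by the API server so it can be reported on timeout.
func waitForPodsReady(listPods func() (*v1.PodList, error), timeout time.Duration) error {
	var lastAPIError error
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := listPods()
		if err != nil {
			// Remember the error but keep polling; a transient API
			// server hiccup should not abort the wait immediately.
			lastAPIError = err
			return false, nil
		}
		lastAPIError = nil
		return allRunning(pods), nil
	})
	if err != nil && lastAPIError != nil {
		// Surface the last API error alongside the timeout.
		return fmt.Errorf("%v\nLast error: %v", err, lastAPIError)
	}
	return err
}

// allRunning is a placeholder readiness check for this sketch.
func allRunning(pods *v1.PodList) bool {
	for i := range pods.Items {
		if pods.Items[i].Status.Phase != v1.PodRunning {
			return false
		}
	}
	return len(pods.Items) > 0
}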
Some of the tests are negative test cases, which are supposed to ensure that
invalid use cases are handled properly.
However, some of these tests are false positives: they can pass for the wrong reasons.
One such example is: "should fail substituting values in a volume subpath with absolute path".
This test can pass if:
- the Pod cannot start for various reasons (e.g., the container image cannot be pulled or does
not exist).
- the Pod ran to completion, even though the container was not supposed to start in the first place.
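
A hedged sketch of how such a negative test could be made stricter by
inspecting the container statuses: the reason strings below are the usual
kubelet ones, but treat the exact checks as an illustration rather than the
real test code.

package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// verifySubpathFailure returns an error unless the container failed for a
// reason related to the invalid subpath, catching both kinds of false
// positives described above.
func verifySubpathFailure(pod *v1.Pod) error {
	for _, status := range pod.Status.ContainerStatuses {
		// The container ran to completion: the invalid subpath was
		// silently accepted, which is a real test failure.
		if t := status.State.Terminated; t != nil && t.ExitCode == 0 {
			return fmt.Errorf("container %q ran to completion unexpectedly", status.Name)
		}
		// The container never started because the image could not be
		// pulled: the subpath logic was never exercised at all.
		if w := status.State.Waiting; w != nil &&
			(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
			return fmt.Errorf("container %q never started: %s", status.Name, w.Reason)
		}
	}
	return nil
}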
Organized functions that abstract access to
k8s.io/apimachinery/pkg/runtime.Objects into a framework subpackage.
Signed-off-by: Jorge Alarcon Ochoa <alarcj137@gmail.com>
Because of a := assignment, the anonymous function assigned the pod
list to a new local variable instead of the
WaitForPodsWithLabelRunningReady named return value, which therefore
was always nil.
The correct code uses an = assignment, as in WaitForPodsWithLabelScheduled.
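
A self-contained illustration of the shadowing bug with simplified names
(poll, buggyWait and fixedWait are stand-ins for this sketch, not the real
framework code):

package main

import "fmt"

type podList struct{ items []string }

// poll stands in for the wait helper: it runs the condition once and
// returns its error.
func poll(cond func() (bool, error)) error {
	_, err := cond()
	return err
}

// buggyWait shows the broken pattern: inside the closure, := declares new
// local variables that shadow the named return values, so the caller always
// receives a nil pod list.
func buggyWait(list func() (*podList, error)) (pods *podList, err error) {
	err = poll(func() (bool, error) {
		pods, err := list() // BUG: shadows the outer pods and err
		return pods != nil, err
	})
	return pods, err // pods is still nil here
}

// fixedWait assigns the named return value with =, the pattern used by
// WaitForPodsWithLabelScheduled.
func fixedWait(list func() (*podList, error)) (pods *podList, err error) {
	err = poll(func() (bool, error) {
		var listErr error
		pods, listErr = list() // assigns the outer named return value
		return pods != nil, listErr
	})
	return pods, err
}

func main() {
	list := func() (*podList, error) {
		return &podList{items: []string{"pod-a"}}, nil
	}
	p1, _ := buggyWait(list)
	p2, _ := fixedWait(list)
	fmt.Println(p1 == nil, p2 == nil) // prints: true false
}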