kubelet sometimes calls NodeStageVolume and NodePublishVolume too
often, which breaks this test and leads to flakiness. The test isn't
about call counts, so we can relax the check while still covering what
it was meant to cover.
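For illustration only, a minimal sketch of what a relaxed check can look like (the function and call names below are made up for this example, not the actual mock driver test code): require each expected call to be observed at least once, in order, and tolerate repetitions.

```go
package main

import "fmt"

// relaxedCheck only requires that each expected CSI call was observed at
// least once, in order; extra repetitions from kubelet retries are tolerated.
func relaxedCheck(gotCalls, expected []string) error {
	i := 0
	for _, call := range gotCalls {
		if i < len(expected) && call == expected[i] {
			i++ // matched the next expected call; duplicates are fine
		}
	}
	if i != len(expected) {
		return fmt.Errorf("expected calls %v not all observed in %v", expected, gotCalls)
	}
	return nil
}

func main() {
	// Kubelet retried NodeStageVolume; the relaxed check still passes.
	got := []string{"NodeStageVolume", "NodeStageVolume", "NodePublishVolume"}
	fmt.Println(relaxedCheck(got, []string{"NodeStageVolume", "NodePublishVolume"}))
}
```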
collectPodsAndNetworkPolicies() is called to collect diagnostics
after a failure. Previously, if it failed to get the logs, it would
call Failf(), immediately discarding the rest of the diagnostics.
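A rough sketch of the intended behavior (the types and function names are illustrative, not the real e2e framework code): log the per-source error and keep collecting instead of failing immediately.

```go
package diagnostics

import "log"

// diagSource is a single piece of diagnostic output we want to capture.
type diagSource struct {
	name string
	get  func() (string, error)
}

// collectAll logs per-source failures and keeps going, instead of aborting
// the whole collection (as a Failf() call would) on the first error.
func collectAll(sources []diagSource) []string {
	var collected []string
	for _, s := range sources {
		out, err := s.get()
		if err != nil {
			log.Printf("failed to collect %s: %v (continuing)", s.name, err)
			continue
		}
		collected = append(collected, out)
	}
	return collected
}
```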
Following changes in #87730, Kubelet now calls hcsshim directly to gather stats.
However, unlike the `docker stats` API that was used before, hcsshim does not
keep information about exited containers.
When the Kubelet lists containers (`docker_container.go:ListContainers()`),
it sets `All: true`, retrieving non-running containers as well.
When `docker stats` is called with such a container ID, it returns valid JSON
with all values set to 0; the non-running containers are filtered out later in the process.
When hcsshim is called with such a container ID, it returns an error, effectively
stopping stats retrieval for all containers.
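A rough sketch of the shape of the fix (the types and helpers are hypothetical, not the actual dockershim code): skip non-running containers before requesting stats, and treat a per-container error as non-fatal.

```go
package winstats

import "log"

type container struct {
	ID      string
	Running bool
}

type containerStats struct {
	ID       string
	CPUUsage uint64
}

// listStats asks for stats only for running containers and treats a
// per-container error as non-fatal, so one exited container cannot abort
// stats retrieval for everything else.
func listStats(containers []container, statsFor func(id string) (containerStats, error)) []containerStats {
	var result []containerStats
	for _, c := range containers {
		if !c.Running {
			// hcsshim keeps no data for exited containers, so skip them.
			continue
		}
		s, err := statsFor(c.ID)
		if err != nil {
			log.Printf("skipping stats for container %s: %v", c.ID, err)
			continue
		}
		result = append(result, s)
	}
	return result
}
```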
"Volumes GlusterFS should be mountable" is a bit flaky in a downstream CI.
This PR make "should be mountable" test on par with the other GlusterFS
tests (in_tree.go: DeleteVolume())
commit 43c56eb403 introduced a change
where CPUAccounting and TasksAccounting are enabled for
the systemd service.
This causes a regression on RHEL 7.8, where systemd-run doesn't allow
setting TasksAccounting.
Since Delegate= already enables all the controllers, it is superfluous
to specify them.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In caf0d1d61874a2c8687b7deb773eca30ddaee5b6 we documented a policy to
ensure that conformance tests do not rely on the existence or direct use
of kubelet APIs. Based on that policy, we should drop conformance for
the two tests here that use the "/logs" endpoint directly.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
The test "should not change the subpath mount on a container restart if the environment variable changes"
creates a pod with the liveness probe: cat /volume_mount/test.log. The test then
deletes that file, which causes the probe to fail and the container to be restarted.
After which it recreates the file by exec-ing into the pod, but there is a chance
that the container was not created yet, or it did not start yet.
This commit adds a few retries to the exec command.
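A minimal sketch of such a retry loop (the execInPod helper, the command, and the attempt count are placeholders, not the actual test code):

```go
package subpath

import (
	"fmt"
	"time"
)

// recreateProbeFile retries the exec a few times because, right after the
// probe-triggered restart, the container may not be created or started yet.
func recreateProbeFile(execInPod func(cmd string) error) error {
	const attempts = 5
	var err error
	for i := 0; i < attempts; i++ {
		if err = execInPod("touch /volume_mount/test.log"); err == nil {
			return nil
		}
		// Back off before trying again; the container may still be starting.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("failed to recreate probe file after %d attempts: %v", attempts, err)
}
```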
this is mainly to ensure integration tests (which all end in _test)
are properly bossed around for their imports.
I had to adjust some of the _test files to adhere to existing
reverse_rules specified elsewhere.
specifically:
- cmd/kubeadm/.import-restrictions
  - we don't need to explicitly allow k8s.io repos (external or published)
- rm pkg/controller/.import-restrictions
  - pkg/client/unversioned was removed in 59042
- pkg/kubectl/.import-restrictions
  - pkg/printers is no longer used
  - pkg/api was masking all of the pkg/apis prefixes
- rm staging/src/k8s.io/code-generator/cmd/lister-gen/.import-restrictions
  - noop / empty file
- test/e2e/framework/.import-restrictions
  - we don't need to explicitly allow k8s.io repos (external or published)
yaml has comments, so we can explain why we have certain rules or
certain prefixes.
For those files that weren't already commented yaml, I converted them to
yaml and took a best guess at comments based on the PRs that introduced
or updated them.
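For illustration only, a sketch of what a commented rules file can look like; the selectors and prefixes below are made up for the example and not taken from any of the converted files:

```yaml
# illustrative .import-restrictions, converted to commented yaml
rules:
  # integration tests may depend on the e2e framework
  - selectorRegexp: k8s[.]io/kubernetes
    allowedPrefixes:
      - k8s.io/kubernetes/test/e2e/framework
    # pkg/client/unversioned was removed, so it must not be imported again
    forbiddenPrefixes:
      - k8s.io/kubernetes/pkg/client/unversioned
```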
When a test pattern or storage class uses late binding, the cleanup
code didn't know about the PV that may have been created for the PVC
after it was set up, and thus also didn't wait for PV deletion.
This is problematic for test isolation because the next test was
allowed to start before cleanup had fully finished. Worse, if the driver
gets removed after the test, the volume might never get deleted.
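A minimal sketch of the intended cleanup flow, assuming a client-go clientset (the helper name and timeouts are illustrative, not the actual test framework code): read the bound PV name from the PVC before deleting it, then wait for that PV to disappear.

```go
package cleanup

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupPVC deletes a (possibly late-binding) PVC and then waits for the
// PV that was bound to it to be deleted as well, so the next test starts
// from a clean state while the driver is still available.
func cleanupPVC(ctx context.Context, c kubernetes.Interface, ns, pvcName string) error {
	// The PV name is only known once the PVC has been bound (late binding),
	// so read it from the PVC right before deleting it.
	pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, pvcName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvName := pvc.Spec.VolumeName

	if err := c.CoreV1().PersistentVolumeClaims(ns).Delete(ctx, pvcName, metav1.DeleteOptions{}); err != nil {
		return err
	}
	if pvName == "" {
		return nil // never bound, nothing more to wait for
	}

	// Poll until the PV is gone.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := c.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("PV %s was not deleted in time", pvName)
}
```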