Automatic merge from submit-queue
Update coreos node e2e image to a version that uses cgroupfs
Temporary fix for #28192. This PR updates the coreos node e2e image to a version that uses cgroupfs.
cc @vishh @yifan-gu
Automatic merge from submit-queue
Node E2E: Disable kubenet for local node e2e test.
After https://github.com/kubernetes/kubernetes/pull/28196, we must manually set up CNI and nsenter on the local node to run `make test_e2e_node`, which may not be necessary for local development.
I've tried moving the CNI download logic into `BeforeSuite`, but it is still hard to decide who should install nsenter: should every developer install it manually, should it go into the `setup_host.sh` script, or into `BeforeSuite`?
This PR:
* Added a flag to disable kubenet and disabled kubenet in the local test (a rough sketch of such a switch follows below).
* Cleaned up the CNI installation logic a bit.
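A rough sketch of what such a switch could look like; the flag name, default, and kubelet arguments below are illustrative assumptions, not the PR's actual code:

```go
package e2enode

import "flag"

// disableKubenet is a hypothetical flag: when set, the local run skips kubenet
// (and therefore the CNI installation) entirely.
var disableKubenet = flag.Bool("disable-kubenet", false,
	"If true, do not configure kubenet as the kubelet network plugin.")

// kubeletNetworkArgs returns the networking flags to pass to the kubelet.
func kubeletNetworkArgs() []string {
	if *disableKubenet {
		// Fall back to the kubelet's default network plugin, so local
		// developers do not need CNI binaries or nsenter set up by hand.
		return nil
	}
	return []string{"--network-plugin=kubenet", "--network-plugin-dir=/opt/cni/bin"}
}
```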
/cc @yujuhong @freehan
Automatic merge from submit-queue
E2E: Add UpdatePod function in e2e framework and change the test to use it.
Fix https://github.com/kubernetes/kubernetes/issues/28096.
Some e2e tests need to update a pod, but the update is a bit involved because of potential resource-version conflicts. #28096 happened precisely because the test called pod `Update` only once.
This PR moves the pod update logic into a util function `UpdatePod` in the e2e framework and changes the tests to use it.
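For illustration, here is a minimal sketch of the fetch-mutate-update-retry pattern such a helper uses; it assumes a modern client-go clientset and is not the framework's actual `UpdatePod` implementation:

```go
package e2eutil

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// UpdatePodWithRetry re-fetches the pod and re-applies the mutation whenever
// the Update call fails with a resource-version conflict, instead of giving
// up after a single attempt.
func UpdatePodWithRetry(c clientset.Interface, ns, name string, mutate func(*v1.Pod)) error {
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		mutate(pod)
		if _, err := c.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
			if apierrors.IsConflict(err) {
				// Another writer raced us; poll again with a fresh copy.
				return false, nil
			}
			return false, err
		}
		return true, nil
	})
}
```

Tests then pass a mutation closure instead of calling `Update` directly, so a conflict triggers a retry rather than a failure.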
Marking P2 because the original issue is P0, but in practice it does not happen very frequently. :)
Automatic merge from submit-queue
Add test/test_owners.csv, for automatic assignment of test failures.
This file will be read by the munger -- see kubernetes/contrib#1264
This also includes a simple script to do minor automatic updates to the CSV.
I'd like to get `update_owners.py` into a more usable state -- right now the CSV is based directly on the Google Sheets data. It has 9 outdated tests and is missing 80 new tests.
I can randomly assign new tests to people on kubernetes-maintainers, but are there any caveats to how the assignment should work? Should they be load balanced? Should some people in the group not receive issues? Etc.
Automatic merge from submit-queue
Fix node e2e issues on selinux enabled systems
It fixes the following 3 node e2e tests:
```
[Fail] [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits [It] it should run with the expected status [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:114
[Fail] [k8s.io] Kubelet metrics api when querying /stats/summary [It] it should report resource usage through the stats api
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:158
```
```
[Fail] [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits [It] should report termination message if TerminationMessagePath is set [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:150
```
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
e2e: increase timeout when waiting for deployment pods to be deleted
Use the same timeout as the one used for waiting for the deployment
reaper to complete.
Takes a stab at https://github.com/kubernetes/kubernetes/issues/28067
@kubernetes/deployment PTAL
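For context, the wait in question is essentially a poll-until-gone loop over the deployment's pods; a simplified sketch (assuming a modern client-go clientset, not the exact e2e helper), where the fix amounts to passing the reaper's longer timeout:

```go
package e2eutil

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPodsGone polls until no pods match the label selector in the
// namespace, or the timeout expires.
func waitForPodsGone(c clientset.Interface, ns, labelSelector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}
```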
Automatic merge from submit-queue
Reorganize volume controllers and manager
* Move both PV and attach/detach volume controllers to `controllers/volume` (closes #26222)
* Rename `kubelet/volume` to `kubelet/volumemanager`
* Add/update OWNER files
Automatic merge from submit-queue
Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, allowing the user to specify the number of seconds to wait after a pod becomes "ready" (i.e., its readiness probe has passed) before it is counted as available.
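Roughly, the availability check boils down to comparing the age of the pod's `Ready` condition against MinReadySeconds; the helper below is an illustrative sketch, not the actual RollingUpdater code:

```go
package rollingupdate

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podAvailable treats a pod as available only once its Ready condition has
// been true for at least minReadySeconds.
func podAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
			return now.Time.Sub(cond.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}
```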
Automatic merge from submit-queue
Fix node conformance test
Fixes https://github.com/kubernetes/kubernetes/issues/28255, https://github.com/kubernetes/kubernetes/issues/28250, https://github.com/kubernetes/kubernetes/issues/28341.
The main reason for the flake is that the failing test expects the `PodPhase` to stay `Pending`: it did an `Eventually` check and then a `Consistently` check for 5 seconds. However, the default `PodPhase` is `Pending`, so even when the check passes, the `PodStatus` could still be in its default state.
After that, the test expects the container status to be `Waiting`, which may not be the case, because the default `ContainerStatuses` is empty and the pod could still be in the default state.
This PR changes the test to ensure `ContainerStatuses` is populated first and only then check the `PodPhase`.
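A sketch of the ordering the fix enforces (the helper name is illustrative): only once container statuses are populated does asserting on the phase mean anything, since `Pending` is also the zero-value default.

```go
package conformance

import v1 "k8s.io/api/core/v1"

// containerStatusSettled reports whether the kubelet has populated the pod's
// container statuses with a Waiting state. A test should Eventually() wait
// for this before running its Consistently() check on pod.Status.Phase.
func containerStatusSettled(pod *v1.Pod) bool {
	return len(pod.Status.ContainerStatuses) > 0 &&
		pod.Status.ContainerStatuses[0].State.Waiting != nil
}
```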
Mark P1 because the test fails relatively frequently and does block some PRs.
@pwittrock
/cc @liangchenye @ncdc