They are not needed for any of the tests and in practice apparently
caused enough overhead that even unrelated tests timed out. For
example, in one run of the pull-kubernetes-e2e-kind job, 43 out of
5771 tests failed, including tests from sig-node, sig-cli,
sig-api-machinery, and sig-network.
Mirroring the various YAML files by hand is tedious. The new
update-hostpath.sh does all the necessary steps automatically.
The result is now more consistent with the upstream repos in the
sense that the original file names and paths of the RBAC YAML files
are preserved.
The csi-hostpath-testing.yaml is included for the sake of
completeness, but not used during E2E testing.
The new hostpath driver release is v1.6.2, which adds the
external-health-monitor for the first time.
We were bypassing the scheduler by setting pod.Spec.NodeName
directly. However, once we switched to using a node selector, the
no_snat e2e test started to fail because it tried to schedule pods
on nodes with taints.
This is based on this comment in
ea07644522/test/e2e/framework/pod/node_selection.go (L96-L101):
// pod.Spec.NodeName should not be set directly because
// it will bypass the scheduler, potentially causing
// kubelet to Fail the pod immediately if it's out of
// resources. Instead, we want the pod to remain
// pending in the scheduler until the node has resources
// freed up.
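
For illustration, here is a minimal sketch (a hypothetical helper
built from plain k8s.io/api types, not the framework's own helper)
of pinning a pod to a node through the scheduler instead of via
pod.Spec.NodeName:

    import v1 "k8s.io/api/core/v1"

    // pinPodToNode asks the scheduler to place the pod on the given
    // node. Unlike setting pod.Spec.NodeName, normal scheduling
    // checks such as taints and resource availability still apply.
    func pinPodToNode(pod *v1.Pod, nodeName string) {
        pod.Spec.Affinity = &v1.Affinity{
            NodeAffinity: &v1.NodeAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
                    NodeSelectorTerms: []v1.NodeSelectorTerm{{
                        MatchFields: []v1.NodeSelectorRequirement{{
                            Key:      "metadata.name",
                            Operator: v1.NodeSelectorOpIn,
                            Values:   []string{nodeName},
                        }},
                    }},
                },
            },
        }
    }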
* Use deep copies in `PrepareForUpdate()`
* Preserve select metadata from new pod
* Use patch to add ephemeral containers in `kubectl debug` (see the
  sketch after this list)
* Distinguish between pod NotFound and /ephemeralcontainers NotFound
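
As a rough sketch of the patch-based approach using client-go (the
pod and container names here are hypothetical):

    import (
        "context"
        "encoding/json"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // addDebugContainer appends an ephemeral container by patching
    // the pod's /ephemeralcontainers subresource. Strategic merge
    // patch merges the list by container name, so existing entries
    // are preserved.
    func addDebugContainer(ctx context.Context, c kubernetes.Interface, ns, podName string) error {
        ec := v1.EphemeralContainer{
            EphemeralContainerCommon: v1.EphemeralContainerCommon{
                Name:  "debugger", // hypothetical
                Image: "busybox",
                Stdin: true,
                TTY:   true,
            },
        }
        patch, err := json.Marshal(map[string]interface{}{
            "spec": map[string]interface{}{
                "ephemeralContainers": []v1.EphemeralContainer{ec},
            },
        })
        if err != nil {
            return err
        }
        _, err = c.CoreV1().Pods(ns).Patch(ctx, podName,
            types.StrategicMergePatchType, patch, metav1.PatchOptions{},
            "ephemeralcontainers")
        return err
    }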
This replaces the check of mount propagation to/from the host OS
mount namespace with a similar check against the mount namespace
where kubelet is running (which may or may not be the same as the
host OS mount namespace).
This addresses issue #100259
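
To illustrate why the two namespaces can differ (kubelet may itself
run in a container), here is a small Linux-only sketch, not the
actual kubelet check, that tests whether the current process shares
PID 1's mount namespace:

    import "os"

    // sameMountNamespaceAsHost reports whether this process runs in
    // the same mount namespace as PID 1. /proc/<pid>/ns/mnt is a
    // symlink with a target like "mnt:[4026531840]"; equal targets
    // mean the same namespace.
    func sameMountNamespaceAsHost() (bool, error) {
        host, err := os.Readlink("/proc/1/ns/mnt")
        if err != nil {
            return false, err
        }
        self, err := os.Readlink("/proc/self/ns/mnt")
        if err != nil {
            return false, err
        }
        return host == self, nil
    }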
This changes the `/ephemeralcontainers` subresource of `/pods` to use
the `Pod` kind rather than `EphemeralContainers`.
When designing this API initially it seemed preferable to create a new
kind containing only the pod's ephemeral containers, similar to how
binding and scaling work.
It later became clear that this made admission control more difficult
because the controller wouldn't be presented with the entire Pod, so we
updated this to operate on the entire Pod, similar to how `/status`
works.
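
In client-go terms, a sketch of the flow (assuming the post-change
PodInterface, where the subresource helper takes and returns a full
Pod; imports as in the earlier patch sketch, names hypothetical):

    // The caller mutates the full Pod and sends the whole object;
    // the server only acts on changes to spec.ephemeralContainers.
    func addEphemeral(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Spec.EphemeralContainers = append(pod.Spec.EphemeralContainers,
            v1.EphemeralContainer{
                EphemeralContainerCommon: v1.EphemeralContainerCommon{
                    Name:  "debugger", // hypothetical
                    Image: "busybox",
                },
            })
        _, err = c.CoreV1().Pods(ns).UpdateEphemeralContainers(ctx, name, pod, metav1.UpdateOptions{})
        return err
    }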
To allow the tests to run in a single-node cluster, create two pods
that each consume 2/5 of the extended resource instead of one pod
consuming 2/3. With the low-priority pod consuming only 2/5 of the
extended resource, a high-priority pod consuming another 2/5 can
still be scheduled even when there is only a single node. This
ensures that once the preemptor pod requesting 2/5 of the extended
resource is scheduled, only the low-priority pod gets preempted
while the high-priority pod stays untouched.
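
To make the arithmetic concrete, assume the node advertises 5 units
of the extended resource:

    capacity:           5 units
    low-priority pod:   2 units
    high-priority pod:  2 units
    preemptor pod:      2 units (pending: 2 + 2 + 2 = 6 > 5)

Exactly one victim is needed, and the scheduler evicts the
low-priority pod, leaving 2 (high) + 2 (preemptor) = 4 <= 5.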