Tests with pods using hostNetwork need to bind host ports for the
test. Since they use hostNetwork, the available ports are limited; hence, if
more than one such test runs in parallel, one of them is going to fail because
it will not be able to bind its port.
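For illustration only, this is roughly the kind of pod such tests create;
the image and the port 8080 below are hypothetical placeholders, not taken
from the actual tests. With HostNetwork enabled, two such pods on the same
node cannot both bind the same port, which is why parallel runs collide:

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostNetworkPod is an illustrative sketch, not the real test code:
    // the container binds its port directly on the node because the pod
    // shares the node's network namespace.
    func hostNetworkPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "hostnetwork-test"},
            Spec: v1.PodSpec{
                HostNetwork: true, // use the node's network namespace
                Containers: []v1.Container{{
                    Name:  "server",
                    Image: "example.com/http-server:latest", // hypothetical image
                    Args:  []string{"--port=8080"},          // fixed host port, hence the conflict
                }},
            },
        }
    }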
The test verifies a specific feature that requires GPUs and therefore cannot
be run in most testing environments, so we should exclude it from most test jobs.
We do this by adding the [Feature:GPUDevicePlugin] tag (which is also
used by test/e2e/scheduling/nvidia-gpus.go) and then adding it to the ginkgo skip regex.
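For context, a hedged sketch of how such a feature tag looks in a ginkgo spec
and how jobs can skip it; the spec wording below is illustrative, not the exact
text in the test file:

    import "github.com/onsi/ginkgo"

    // Illustrative only: tagging the spec with [Feature:GPUDevicePlugin]
    // lets jobs without GPUs exclude it, e.g. by running the suite with
    // --ginkgo.skip='\[Feature:GPUDevicePlugin\]'.
    var _ = ginkgo.Describe("[Feature:GPUDevicePlugin] GPU device plugin", func() {
        ginkgo.It("schedules a pod that requests an NVIDIA GPU", func() {
            // ... test body that requests nvidia.com/gpu resources ...
        })
    })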
The pause image should not run sleep commands; doing so causes pods to fail
to start correctly.
See details in issue #100047. We discovered some unexpected behavior in
certain cases, such as new pods failing to start correctly; this will be investigated further.
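As a hedged sketch (not the actual change), this is how a pod normally consumes
the pause image, relying on its own /pause entrypoint instead of overriding it
with a sleep command; GetPauseImageName is assumed here to be the e2e image
utility for resolving the pause image reference:

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        imageutils "k8s.io/kubernetes/test/utils/image"
    )

    // pausePod is an illustrative sketch: no Command or Args are set, so
    // the image's own /pause entrypoint keeps the container alive.
    func pausePod(name string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "pause",
                    Image: imageutils.GetPauseImageName(), // assumed e2e helper for the pause image
                }},
            },
        }
    }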
Change-Id: I9761bbefa694f6fe51a6f1e7561fa7e566ce4d8f
We can support topology detection for all drivers with a single
topology key by allowing different nodes to have the same topology
value. This makes the test a bit more generic and its configuration
simpler.
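As a rough illustration of that generic configuration (the topology key below
is a hypothetical example, in the test it comes from the driver's
configuration): with a single key, several nodes may report the same value,
and grouping nodes by that value yields the topology segments the test has to
cover.

    import v1 "k8s.io/api/core/v1"

    // topologyKey is a hypothetical example key.
    const topologyKey = "example.com/zone"

    // nodesPerTopologyValue groups node names by their value for the
    // single topology key. Different nodes sharing a value end up in
    // the same group.
    func nodesPerTopologyValue(nodes []v1.Node) map[string][]string {
        perValue := map[string][]string{}
        for _, node := range nodes {
            value := node.Labels[topologyKey]
            perValue[value] = append(perValue[value], node.Name)
        }
        return perValue
    }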
Drivers need to opt into the new test. Depending on how the driver
describes its behavior, the check can be more specific. Currently it
distinguishes between getting any kind of information about the
storage class (i.e. assuming that capacity is not exhausted) and
getting one object per node (for local storage). Discovering nodes
only works for CSI drivers.
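Roughly the kind of check described above, assuming capacity is published as
CSIStorageCapacity objects (shown here via the storage.k8s.io/v1beta1 client);
the real test is more involved, this is only a sketch:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkCapacityObjects is an illustrative sketch: it lists the
    // CSIStorageCapacity objects published for a storage class and
    // either requires at least one (capacity not exhausted) or, when
    // expectedNodes > 0, exactly one object per node (local storage).
    func checkCapacityObjects(ctx context.Context, cs kubernetes.Interface, namespace, storageClassName string, expectedNodes int) (bool, error) {
        capacities, err := cs.StorageV1beta1().CSIStorageCapacities(namespace).List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        matching := 0
        for _, capacity := range capacities.Items {
            if capacity.StorageClassName == storageClassName {
                matching++
            }
        }
        if expectedNodes > 0 {
            return matching == expectedNodes, nil
        }
        return matching > 0, nil
    }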
The immediate usage of this new test is for csi-driver-host-path with
the deployment for distributed provisioning and storage capacity
tracking. Periodic kubernetes-csi Prow and pre-merge jobs can run this
test.
The alternative would have been to write a test that manages the
deployment of the csi-driver-host-path driver itself, i.e. use the E2E
manifests. But that would have implied duplicating the
deployments (in-tree and in csi-driver-host-path) and then changing
kubernetes-csi Prow jobs to somehow use the in-tree driver definition
with newer components, something that currently isn't done. The test
would then also not be applicable to out-of-tree driver deployments.
Yet another alternative would be to create a separate E2E test suite
either in csi-driver-host-path or external-provisioner. The advantage
of such an approach is that the test can be written exactly for the
expected behavior of a deployment and thus be more precise than the
generic version of the test in k/k. But this again wouldn't be
reusable for other drivers, and it would also be a lot of work to set up,
as no such E2E test suite currently exists.
Removing myself from OWNERS as I haven't had the bandwidth to go over
these reviews for a long time.
This should have been part of #99718, which has already been merged.
The image "e2eteam/powershell-helper:6.2.7-linux-cache" is a Linux image. Because we're running "docker buildx build --platform windows/amd64", docker buildx will consider it as a Windows image unless we explicitly specify otherwise. If the image's platform is not correctly identified, we can run into problems when trying to build the image.
We are already doing something similar with the windows-servercore-cache image.