In order to build Windows container images for multiple OS versions,
--isolation=hyperv is required. However, not all clouds / nodes support
it or have it enabled by default, which is why we are going to rely on
having multiple nodes to build the Windows images until this issue
is addressed.
This commit adds support for building test images for multiple
Windows versions, as we have to support both the LTSC and SAC channels.
With this, the format for Windows images in the BASEIMAGE files is:
OS/ARCH/OS_VERSION
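For illustration, entries in this format could look like the following
(the Windows OS_VERSION value is just an example):

    linux/amd64
    windows/amd64/1809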
Also adds --isolation=hyperv to the Windows docker build command, making sure
that container images for multiple OS versions can be built using the same
Windows node.
Adds Windows support to the test/images/image-util.sh script.
A Windows node with Docker installed is required to build Windows images.
The connection URL to it must be set in the REMOTE_DOCKER_URL env variable.
Additionally, the authentication to the remote docker node is done through
certificates, which must be found in ~/.docker.
By default, the REMOTE_DOCKER_URL env variable is set to "" in the Makefile,
and because of this, the image-util.sh script will skip building and pushing
Windows images.
Added a GOOS argument to the go build process in order to be able to build
Windows binaries. Additionally, an OS env variable was added to the images'
Makefiles (default value is "linux") in order to maintain the default behaviour.
Some images require a different Dockerfile for their Windows variant, since they
have different ways of installing dependencies. Because of this, if an image
needs to be built for Windows, the build will first look for a Dockerfile_windows
file instead of the default one. If there isn't one, it means that the
same Dockerfile can be used for both Windows and Linux.
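The selection rule boils down to the sketch below, written in Go purely
for illustration (the real logic lives in the bash image-util.sh script,
and the function name here is hypothetical):

    package images

    import (
        "os"
        "path/filepath"
    )

    // dockerfileFor returns the Dockerfile to use for a given image directory
    // and target OS: Dockerfile_windows if the target is Windows and such a
    // file exists, otherwise the default Dockerfile.
    func dockerfileFor(imageDir, targetOS string) string {
        if targetOS == "windows" {
            candidate := filepath.Join(imageDir, "Dockerfile_windows")
            if _, err := os.Stat(candidate); err == nil {
                return candidate
            }
        }
        return filepath.Join(imageDir, "Dockerfile")
    }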
All Windows images will be based on the image
"mcr.microsoft.com/windows/servercore:ltsc2019". There are a couple of features
that are needed from this image, especially PowerShell.
Added a busybox image for Windows. Most Windows images will be based on it, which
will help reduce the command line differences between Linux and Windows, though
not eliminate them entirely.
Added Windows support for agnhost image.
Changes the image naming template from:
$REGISTRY/$image-$arch:$TAG
to
$REGISTRY/$image:$TAG-$os_name-$arch
The previous naming template would generate a plethora of image names
(A1 + A2 + ... + AN of them, where Ai is the number of OS/arch combinations
supported by image i and N is the number of images), while the new naming
template reduces the number of image names to N.
The new template also includes the OS name, as we plan to integrate Windows
images into the manifest lists as well.
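For illustration, an image that used to be published under a name like
(image name and tag below are only examples):

    gcr.io/kubernetes-e2e-test-images/agnhost-amd64:2.8

is now published as:

    gcr.io/kubernetes-e2e-test-images/agnhost:2.8-linux-amd64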
When building images, their REGISTRY can be set to a custom
one, instead of the default "gcr.io/kubernetes-e2e-test-images" or
"us.gcr.io/k8s-artifacts-prod/e2e-test-images".
Some images are based on other images we're already building
(e.g.: kitten, nautilus), but their base images
are hard-coded to the default registry, which can be undesirable.
This commit addresses this issue.
Windows images will require other base images, and thus, we will need
to explicitly specify the OS type a base image is for in order to
avoid confusion or errors.
The way the images are built is going to change, and in order to avoid
overwriting and breaking the current images, the image versions are bumped.
Similar functionality is required across e2e tests for RuntimeClass.
Let's create runtimeclass as part of the framework/node package.
Signed-off-by: Eric Ernst <eric.ernst@intel.com>
Use bytes instead of strings, and an in-place slice filter
(see https://github.com/golang/go/wiki/SliceTricks#filter-in-place)
to avoid copying strings around.
In my benchmark it shows almost 2x improvement:
BenchmarkString-8 1477207 10198 ns/op
BenchmarkBuffer-8 1561291 7622 ns/op
BenchmarkInPlace-8 2295714 5202 ns/op
String is the original implementation, Buffer is an intermediary
one that uses strings.Builder, and InPlace is the one from this commit.
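For reference, the in-place filter pattern from the SliceTricks page looks
roughly like this generic sketch (not the exact code from this commit):

    package util

    // filterInPlace keeps only the elements of lines for which keep returns
    // true, reusing the backing array instead of allocating a new slice.
    func filterInPlace(lines [][]byte, keep func([]byte) bool) [][]byte {
        n := 0
        for _, l := range lines {
            if keep(l) {
                lines[n] = l
                n++
            }
        }
        return lines[:n]
    }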
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Add a new e2e test to test the Except clauses in IPBlock CIDR
based NetworkPolicies. This test adds an egress rule which
allows the client to connect to a CIDR which includes the
ServerPod's IP, but carves out an except subnet which excludes
this ServerPod.
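The shape of such a rule, using the k8s.io/api/networking/v1 types (the
helper name below is hypothetical, not the exact test code):

    package netpol

    import (
        networkingv1 "k8s.io/api/networking/v1"
    )

    // egressToCIDRExcept builds an egress rule that allows traffic to cidr
    // while carving out the given except subnets (e.g. the ServerPod's /32).
    func egressToCIDRExcept(cidr string, except []string) networkingv1.NetworkPolicyEgressRule {
        return networkingv1.NetworkPolicyEgressRule{
            To: []networkingv1.NetworkPolicyPeer{{
                IPBlock: &networkingv1.IPBlock{
                    CIDR:   cidr,
                    Except: except,
                },
            }},
        }
    }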
The test "should enforce egress policy allowing traffic to a server in a
different namespace based on PodSelector and NamespaceSelector
[Feature:NetworkPolicy]" is flaky because it doesn't wait for the server
Pod to be ready before testing traffic via its service, then even the
NetworkPolicy allows it, the SYN packets will be rejected by iptables
because the service has no endpoints at that moment.
This PR fixes it by making it wait for Pods to be ready like other
tests.
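The e2e framework already provides helpers to wait for this; the readiness
condition being waited on boils down to a check like the following
illustrative sketch:

    package e2e

    import (
        v1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the Pod has its Ready condition set to True,
    // which is when it starts being listed as an endpoint of its Service.
    func isPodReady(pod *v1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }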
Due to an oversight, the e2e topology manager tests
were leaking a configmap and a serviceaccount.
This patch ensures proper cleanup.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Up until now, the test validated the alignment of resources
only in the first container of a pod. That was just an oversight.
With this patch, we validate all the containers in a given pod.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Add autodetection code to figure out which NUMA node the
devices are attached to.
This autodetection works under the assumption that all the VFs in
the system are to be used for the tests.
Should this not be the case, or in general to handle non-trivial
configurations, we keep the annotations mechanism added to the
SRIOV device plugin config map.
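On Linux, the NUMA affinity of a PCI device can be read from sysfs; a
minimal sketch of that lookup is below (the detection code in the test may
differ in the details):

    package sriov

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // numaNodeOfPCIDevice returns the NUMA node a PCI device (e.g. an SRIOV VF)
    // is attached to, as reported by sysfs. A value of -1 means the kernel did
    // not report any NUMA affinity for the device.
    func numaNodeOfPCIDevice(pciAddr string) (int, error) {
        data, err := os.ReadFile(fmt.Sprintf("/sys/bus/pci/devices/%s/numa_node", pciAddr))
        if err != nil {
            return -1, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }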
Signed-off-by: Francesco Romani <fromani@redhat.com>
On single-NUMA-node systems the numa_node of sriov devices was
sometimes reported as "-1" instead of, say, 0. This makes some
tests that should succeed[0] fail unexpectedly.
The reporting works as expected on real multi-NUMA-node systems.
A small workaround had been added to handle this corner case,
but overall it makes the code less readable and a bit too lenient,
hence we remove it.
+++
[0] on a single-NUMA-node system some resources are obviously
always aligned if the pod can be admitted. It boils down to the
node capacity at pod admission time.
Signed-off-by: Francesco Romani <fromani@redhat.com>
The e2e_node topology_manager suite has an early, quick check
to rule out systems without SRIOV devices, thus skipping the tests.
The first version of the check detected PFs (Physical Functions),
under the assumption that VFs (Virtual Functions) had already been
created. This works because, obviously, you can't have VFs without PFs.
However, it's a little safer and easier to understand if we check
directly for VFs, bailing out on systems which don't provide them.
Nothing changes for properly configured test systems.
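One way to check directly for VFs is to look for PCI devices that expose a
"physfn" link in sysfs, which only Virtual Functions have; the sketch below
illustrates that idea (the actual check in the test may be implemented
differently):

    package sriov

    import (
        "os"
        "path/filepath"
    )

    // countSRIOVVFs counts PCI devices that look like SRIOV Virtual Functions:
    // on Linux, a VF exposes a "physfn" symlink pointing back to its parent
    // Physical Function, while PFs and regular devices do not.
    func countSRIOVVFs() (int, error) {
        devices, err := filepath.Glob("/sys/bus/pci/devices/*")
        if err != nil {
            return 0, err
        }
        vfs := 0
        for _, dev := range devices {
            if _, err := os.Lstat(filepath.Join(dev, "physfn")); err == nil {
                vfs++
            }
        }
        return vfs, nil
    }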
Signed-off-by: Francesco Romani <fromani@redhat.com>
This is useful for logs from daemonset pods, because for those it is
often relevant which node they ran on, since they interact with
resources or other pods on that host.
To keep the log prefix short, it gets limited to a maximum length of
10 characters.
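A minimal sketch of the truncation (the helper name is hypothetical):

    package logs

    // logPrefix derives a short prefix from a node name, truncating it to at
    // most 10 characters so the prefix stays compact.
    func logPrefix(nodeName string) string {
        const maxLen = 10
        if len(nodeName) > maxLen {
            nodeName = nodeName[:maxLen]
        }
        return nodeName
    }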
Quite a few images are only used a few times in a few tests. Thus,
the images are being centralized into the agnhost image, reducing
the number of images that have to be pulled and used.
This PR replaces the usage of the following images with agnhost:
- resource-consumer-controller
- test-webserver
Commit 69a473be3 broke the test case (which was checking
that a file from a volume can't be read) by adding a
(wrong) assumption that the error should be nil.
Fix the assumption (we do expect an error here).
Note that we were checking the file contents before; now that
we check the error from the read, checking the contents is
redundant.
The other issue with the test is that SELinux should be in enforcing
mode for this to work, so let's check that first to avoid
false positives.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>