This moves adding a pod to `ReservedFor` out of the main scheduling cycle into PreBind, where it is done concurrently in separate goroutines. For claims that were allocated specifically for a pod (the most common case), this usually makes no difference because the claim is already reserved. It starts to matter when that pod then cannot be scheduled for other reasons, because the claim then gets unreserved to allow deallocating it. It also matters for claims that are created separately and then used multiple times by different pods.

Because multiple pods might get added to the same claim rapidly and independently of each other, it makes sense to do all claim status updates via patching: an up-to-date copy of the claim is then no longer needed, because the patch operation succeeds if (and only if) the patched claim is valid. Server-side apply cannot be used for this because a client always has to send the full list of all entries that it wants to be set, i.e. it cannot add one entry unless it knows the full list.
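The patch-based update described above can be sketched as follows. The `status.reservedFor` path matches the `resource.k8s.io` ResourceClaim API, but the helper, its field set, and the UID value are illustrative, not the scheduler's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// reservedForEntry is a simplified stand-in for the API type that
// identifies one consumer of a ResourceClaim.
type reservedForEntry struct {
	Resource string `json:"resource"`
	Name     string `json:"name"`
	UID      string `json:"uid"`
}

// reservedForPatch builds a JSON patch (RFC 6902) that appends one pod
// to the claim's status.reservedFor list. Because it only appends, the
// client does not need an up-to-date copy of the whole list; the API
// server rejects the patch if the resulting claim would be invalid.
func reservedForPatch(pod reservedForEntry) ([]byte, error) {
	patch := []map[string]interface{}{
		{"op": "add", "path": "/status/reservedFor/-", "value": pod},
	}
	return json.Marshal(patch)
}

func main() {
	p, err := reservedForPatch(reservedForEntry{Resource: "pods", Name: "my-pod", UID: "1234"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(p))
}
```

Sending such a patch for each pod lets concurrent PreBind goroutines add different pods to the same claim without coordinating on a shared, current copy of the object.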
## Overview
The tests in this directory cover dynamic resource allocation support in Kubernetes. They do not test the correct behavior of arbitrary dynamic resource allocation drivers.
If such a driver is needed, then the in-tree `test/e2e/dra/test-driver` is used, with a slight twist: instead of deploying that driver directly in the cluster, the necessary sockets for interaction with kubelet (registration and dynamic resource allocation) get proxied into the e2e.test binary. This reuses the work done for CSI mock testing. The advantage is that no separate images are needed for the test driver and that the e2e test has full control over all gRPC calls, in case it needs that for operations like error injection or checking calls.
## Cluster setup preparation
The container runtime must support CDI. CRI-O supports CDI starting from release 1.23; Containerd supports CDI starting from release 1.7. To bring up a Kind cluster with Containerd, two things are needed:
NB: Kind switched to a worker-node base image with Containerd 1.7 by default starting from release 0.20; build Kind from the latest main branch sources or use a Kind release binary 0.20 or later.
### Build kind node image
After building Kubernetes, build a new node image in the Kubernetes source tree:
```bash
$ kind build node-image --image dra/node:latest $(pwd)
```
### Bring up a Kind cluster
```bash
$ kind create cluster --config test/e2e/dra/kind.yaml --image dra/node:latest
```
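For orientation, a Kind cluster config along these lines enables CDI in Containerd 1.7 via a config patch. This is only a sketch; `test/e2e/dra/kind.yaml` in the Kubernetes tree is the authoritative version:

```yaml
# Hypothetical minimal Kind config enabling CDI in Containerd.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  # Turn on CDI device injection in the CRI plugin (Containerd >= 1.7).
  - |-
    [plugins."io.containerd.grpc.v1.cri"]
      enable_cdi = true
nodes:
  - role: control-plane
  - role: worker
```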
## Run tests
- Build ginkgo

  NB: If you are using a go workspace you must disable it: `GOWORK=off make ginkgo`

  ```bash
  $ make ginkgo
  ```
- Run e2e tests for the `Dynamic Resource Allocation` feature:

  ```bash
  $ KUBECONFIG=~/.kube/config _output/bin/ginkgo -p -v -focus=Feature:DynamicResourceAllocation ./test/e2e
  ```