This is in preparation for revamping the resource.k8s.io API group completely. Because
there will be no support for transitioning from v1alpha2 to v1alpha3, the
roundtrip test data for that API in 1.29 and 1.30 gets removed.
Repeating the version in the import name of the API packages is not really
required. It was done for a while to support simpler grepping for usage of
alpha APIs, but there are better ways for that now. So during this transition,
"resourceapi" gets used instead of "resourcev1alpha3" and the version gets
dropped from informer and lister imports. The advantage is that the next bump
to v1beta1 will affect fewer source code lines.
Only source code where the version really matters (like API registration)
retains the versioned import.
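For illustration, the convention looks roughly like this (the informer and
lister alias names are illustrative):

    import (
        // The alias drops the version: the next bump to v1beta1 only
        // changes these import lines, not every usage in the file.
        resourceapi "k8s.io/api/resource/v1alpha3"
        resourceinformers "k8s.io/client-go/informers/resource/v1alpha3"
        resourcelisters "k8s.io/client-go/listers/resource/v1alpha3"
    )

    // Usage then reads the same regardless of the current version:
    var claim *resourceapi.ResourceClaim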
Dropping the error returned by allocateOne hides the reason *why* allocation
failed. Including the UID is "too much information" for an error message
(usually the user cares about the name, not the exact identity), and the
claim name can and will be added by the caller.
Before:
controller.go:373: E0625 16:04:12.140953] test-driver.cdi.k8s.io/resource controller: processing failed err="claim test-dramq9jv-resource-h72pg: failed allocating claim 8551afba-3c9a-4a8a-8633-6fad6c4b9e42" key="schedulingCtx:test/test-dramq9jv"
event.go:377: I0625 16:04:12.141031] test-driver.cdi.k8s.io/resource controller: Event(v1.ObjectReference{Kind:"PodSchedulingContext", Namespace:"test", Name:"test-dra65gfw", UID:"6be9ba57-31da-4fef-b61d-b0468d71afcf", APIVersion:"resource.k8s.io/v1alpha3", ResourceVersion:"197", FieldPath:""}): type: 'Warning' reason: 'Failed' claim test-dra65gfw-resource-zpzrj: failed allocating claim f98a32e1-ab7d-4b34-a258-6d8224aa9006
After:
controller.go:373: E0625 16:02:54.248059] test-driver.cdi.k8s.io/resource controller: processing failed err="claim test-dram98ll-resource-nvsbj: device selectors are not supported" key="schedulingCtx:test/test-dram98ll"
event.go:377: I0625 16:02:54.248163] test-driver.cdi.k8s.io/resource controller: Event(v1.ObjectReference{Kind:"PodSchedulingContext", Namespace:"test", Name:"test-dratpt77", UID:"24010402-b026-4fe4-a535-e1dab69db8c0", APIVersion:"resource.k8s.io/v1alpha3", ResourceVersion:"298", FieldPath:""}): type: 'Warning' reason: 'Failed' claim test-dratpt77-resource-vlgrv: device selectors are not supported
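The resulting pattern is that allocateOne returns just the reason and the
caller prepends the claim name; roughly (simplified sketch, not the actual
controller code):

    allocation, err := allocateOne(ctx, claim)
    if err != nil {
        // The caller knows the claim, so it adds the name; the UID is
        // not included at all.
        return fmt.Errorf("claim %s: %w", claim.Name, err)
    }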
Several enhancements:
- `--resource-config` is now listed under `controller` options instead of
`leader election`: merely a cosmetic change
- The driver name can be configured as part of the resource config. The
command line flag overrides the config, but only when set explicitly
(see the sketch after this list).
This makes it possible to pre-define complete driver setups where the
name is associated with certain resource availability. This will be
used for testing cluster autoscaling.
- The set of nodes where resources are available can optionally be specified
via node labels. This will be used for testing cluster autoscaling.
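The "only when set explicitly" part can be implemented with pflag's Changed
check; a hypothetical sketch (flag, field, and function names are
illustrative):

    // The value from the resource config file acts as the default; an
    // explicitly passed --driver-name flag wins over it.
    func effectiveDriverName(flags *pflag.FlagSet, fromFlag, fromConfig string) string {
        if flags.Changed("driver-name") {
            return fromFlag
        }
        if fromConfig != "" {
            return fromConfig
        }
        return fromFlag // the flag's built-in default
    }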
Analyzing the CPU profile of
go test -timeout=0 -count=5 -cpuprofile profile.out -bench=BenchmarkPerfScheduling/.*Claim.* -benchtime=1ns -run=xxx ./test/integration/scheduler_perf
showed that a significant amount of time was spent iterating over allocated
claims to determine how many were allocated per node. That "naive" approach was
taken to avoid maintaining a redundant data structure. Now that performance
measurements show that this comes at a cost, introducing such a second field
is no longer "premature optimization".
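A minimal sketch of the idea (type and field names are illustrative, not the
actual plugin code):

    // Instead of iterating over all allocated claims whenever a node gets
    // evaluated, keep a redundant counter that is updated when claims get
    // allocated or deallocated.
    type allocations struct {
        perNode map[string]int // node name -> number of allocated claims
    }

    func (a *allocations) add(nodeName string)    { a.perNode[nodeName]++ }
    func (a *allocations) remove(nodeName string) { a.perNode[nodeName]-- }

    // O(1) per lookup instead of O(number of allocated claims).
    func (a *allocations) onNode(nodeName string) int { return a.perNode[nodeName] }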
The average scheduling throughput in
SchedulingWithResourceClaimTemplate/2000pods_100nodes increases from 16.4
pods/s to 19.2 pods/s.
The recommendation and default in the controller helper code is to set
ReservedFor to the pod which triggered delayed allocation. However, this
is neither required nor enforced. Therefore we should also test the fallback
path where kube-scheduler itself adds the pod to ReservedFor.
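In that fallback path, kube-scheduler itself appends the pod as a consumer of
the claim, roughly like this (simplified, assuming the resourceapi alias for
the resource.k8s.io API):

    // The scheduler reserves the claim for the pod when the driver or
    // controller did not do it already.
    claim.Status.ReservedFor = append(claim.Status.ReservedFor,
        resourceapi.ResourceClaimConsumerReference{
            Resource: "pods",
            Name:     pod.Name,
            UID:      pod.UID,
        })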
When running as part of the scheduler_perf benchmark testing, we want to print
less information by default, so we should use V to limit verbosity.
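A minimal example of that pattern, assuming the klog/v2 API:

    // Always printed.
    klog.InfoS("benchmark iteration done", "scheduledPods", count)

    // Only printed when running with -v=5 or higher, which keeps the
    // benchmark output small by default.
    klog.V(5).InfoS("scheduled", "pod", klog.KObj(pod), "node", nodeName)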
Pretty-printing doesn't belong in "application" code. I am moving that into
the ktesting formatting (https://github.com/kubernetes/kubernetes/pull/116180).
The check for "resources available on a node" must treat nodes that are not
listed as "no resources available". The previous logic only worked because all
nodes were listed during E2E testing. The upcoming integration testing
covers additional scenarios and triggered this broken case.
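The fix boils down to treating a missing entry as "no resources"; an
illustrative sketch:

    // A node without an entry in the per-node map has no resources
    // available; previously the code assumed every node was listed.
    func resourcesForNode(perNode map[string][]string, nodeName string) []string {
        resources, listed := perNode[nodeName]
        if !listed {
            return nil // not listed -> nothing available on this node
        }
        return resources
    }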
The recently introduced failure handling in ExpectNoError depends on error
wrapping: if an error prefix gets added with `fmt.Errorf("foo: %v", err)`, then
ExpectNoError cannot detect that the root cause is an assertion failure; it
adds another useless "unexpected error" prefix and does not dump the
additional failure information (currently the backtrace inside the E2E
framework).
Instead of manually deciding on a case-by-case basis where %w is needed, all
error wrapping was updated automatically with
sed -i "s/fmt.Errorf\(.*\): '*\(%s\|%v\)'*\",\(.* err)\)/fmt.Errorf\1: %w\",\3/" $(git grep -l 'fmt.Errorf' test/e2e*)
This may be unnecessary in some cases, but it's not wrong.
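For context, the difference that matters here is standard library behavior:

    base := errors.New("assertion failure")

    // %v flattens the cause into a string; errors.Is/errors.As can no
    // longer find it, so ExpectNoError only sees an opaque error.
    flattened := fmt.Errorf("create pod: %v", base)
    _ = errors.Is(flattened, base) // false

    // %w keeps the error chain intact.
    wrapped := fmt.Errorf("create pod: %w", base)
    _ = errors.Is(wrapped, base) // true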
The driver can be used manually against a cluster started with
local-up-cluster.sh and is also used for E2E testing. Because the tests proxy
connections from the nodes into the e2e.test binary and create/delete files via
the equivalent of "kubectl exec dd/rm", they can be run against arbitrary
clusters. Each test gets its own driver instance and resource class, therefore
they can run in parallel.