snuck in there while I was working on the test, but is ultimately not necessary to test the functionality.
Skipping the healthz check resulted in leaking goroutines from post-start hooks.
Don't implement interfaces that trigger tests with in-line and
pre-provisioned vSphere volumes.
With the cloud provider removal, the in-tree vSphere tests won't be able to
create volumes in vSphere, and thus can't test in-line volumes in Pods or
pre-provisioned PVs. Only dynamically provisioned volumes can be used for
testing, because they're provisioned by the vSphere CSI driver.
Refactor the code that creates an internal load balancer in the e2e tests for network load balancers. The provider check no longer includes "azure": the test now only runs when the cluster uses the "gke" or "gce" provider. The counterpart test lives in the out-of-tree cloud provider azure.
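A minimal sketch of the adjusted guard, assuming the framework's e2eskipper
helper (the actual call site may differ):

    // Run only on GCE/GKE; the Azure counterpart of this test lives in
    // the out-of-tree cloud provider azure repository.
    e2eskipper.SkipUnlessProviderIs("gce", "gke")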
This moves adding a pod to ReservedFor out of the main scheduling cycle into
PreBind. There it is done concurrently in different goroutines. For claims
which were specifically allocated for a pod (the most common case), that
usually makes no difference because the claim is already reserved.
It starts to matter when that pod then cannot be scheduled for other reasons,
because then the claim gets unreserved to allow deallocating it. It also
matters for claims that are created separately and then get used multiple times
by different pods.
Because multiple pods might get added to the same claim rapidly independently
from each other, it makes sense to do all claim status updates via patching:
then it is no longer necessary to have an up-to-date copy of the claim because
the patch operation will succeed if (and only if) the patched claim is valid.
Server-side apply cannot be used for this because a client always has to send
the full list of entries that it wants set, i.e. it cannot add one entry
unless it knows the full list.
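For illustration, a sketch of such a patch, assuming the resource.k8s.io
v1alpha2 client; this is not the actual scheduler code, and a real
implementation would also have to handle a claim whose reservedFor list
doesn't exist yet:

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // reserveForPod appends one entry to status.reservedFor without
    // sending the full list; the apiserver rejects the patch if the
    // resulting claim would be invalid.
    func reserveForPod(ctx context.Context, cs kubernetes.Interface,
        claim *resourcev1alpha2.ResourceClaim, pod *v1.Pod) error {
        patch := fmt.Sprintf(
            `[{"op": "add", "path": "/status/reservedFor/-",`+
                ` "value": {"resource": "pods", "name": %q, "uid": %q}}]`,
            pod.Name, pod.UID)
        _, err := cs.ResourceV1alpha2().ResourceClaims(claim.Namespace).Patch(ctx,
            claim.Name, types.JSONPatchType, []byte(patch),
            metav1.PatchOptions{}, "status")
        return err
    }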
ginkgo.GinkgoHelper is a recent addition to ginkgo which allows functions to
mark themselves as helpers. This then changes which call stack gets reported
for failures. It makes sense to support the same mechanism also for logging.
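For illustration, a made-up helper that marks itself this way, so that a
failed assertion inside it gets reported at the caller's line:

    import (
        "github.com/onsi/ginkgo/v2"
        "github.com/onsi/gomega"
        v1 "k8s.io/api/core/v1"
    )

    // expectRunning is hypothetical; GinkgoHelper tells ginkgo to skip
    // this frame when determining the failure location.
    func expectRunning(pod *v1.Pod) {
        ginkgo.GinkgoHelper()
        gomega.Expect(pod.Status.Phase).To(gomega.Equal(v1.PodRunning))
    }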
There's also no reason why framework.Logf should produce output that is in a
different format than klog log entries. Having time stamps formatted
differently makes it hard to read test output which uses a mixture of both.
Another user-visible advantage is that the error log entry from
framework.ExpectNoError now references the test source code.
With textlogger there is a simple replacement for klog that can be reconfigured
to let the caller handle stack unwinding. klog itself doesn't support that,
and modifying it to support it will have to wait (feature freeze).
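Roughly how that wiring could look; a sketch assuming textlogger's Output
and Backtrace config options, with unwind standing in for the framework's
own stack unwinding:

    import (
        "runtime"

        "github.com/onsi/ginkgo/v2"
        "k8s.io/klog/v2/textlogger"
    )

    // unwind is a placeholder; it returns the source location that
    // should be reported for a log entry.
    func unwind(skip int) (string, int) {
        _, file, line, _ := runtime.Caller(skip + 1)
        return file, line
    }

    var logger = textlogger.NewLogger(textlogger.NewConfig(
        // Route output through ginkgo so it interleaves with test output.
        textlogger.Output(ginkgo.GinkgoWriter),
        // Let the framework decide which stack frame to report.
        textlogger.Backtrace(unwind),
    ))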
Emitting printf-style output via that logger would work, but would become less
readable because the message string would get quoted instead of being printed
verbatim as before. So instead, the traditional klog header gets reproduced
in the framework code. In this example, the first line is from klog, the second
from Logf:
I0111 11:00:54.088957 332873 factory.go:193] Registered Plugin "containerd"
...
I0111 11:00:54.987534 332873 util.go:506] >>> kubeConfig: /var/run/kubernetes/admin.kubeconfig
Indentation is a bit different because the initial output is printed before
installing the logger, which writes through ginkgo.GinkgoWriter.
One welcome side effect is that now "go vet" detects mismatched parameters for
framework.Logf because fmt.Sprintf is called without mangling the format
string. Some of the calls were incorrect.
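For example, a call like this one now gets flagged (vet message paraphrased):

    // go vet: Logf format %s reads arg 2, but call has only 1 arg
    framework.Logf("created pod %s in namespace %s", pod.Name)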
A stand-alone binary shouldn't import test/e2e/framework, which is targeted
at usage in a Ginkgo test suite. This currently works, but will break once
test/e2e/framework becomes more opinionated about how to configure logging.
The simplest solution is to duplicate the one short framework function that
the binary was calling.
Now that we have it (8a89a1f5a5), let's also make sure that
the new WithFlaky is used everywhere instead of [Flaky]. This way it can be
used for filtering by label.
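Sketch of the intended usage, assuming the framework's It wrapper that
understands these label decorators (exact call sites vary):

    var _ = framework.It("should recover after disruption", framework.WithFlaky(),
        func(ctx context.Context) {
            // ... test body ...
        })

Tests tagged this way can then be selected or excluded with ginkgo's
--label-filter.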