Primarily this protects against accidentally polling with the default interval
of 10ms. Setting these defaults may also make some tests simpler because they
don't need to override the defaults.
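As an illustration, a minimal sketch of such defaults, assuming they are applied
through gomega's setter functions; the concrete durations are only examples, not
the framework's actual values:

```go
import (
	"time"

	"github.com/onsi/gomega"
)

// setDefaultTimeouts gives Eventually/Consistently saner defaults than
// gomega's built-in 10ms polling interval and 1s timeout. The durations
// below are illustrative only.
func setDefaultTimeouts() {
	gomega.SetDefaultEventuallyTimeout(5 * time.Minute)
	gomega.SetDefaultEventuallyPollingInterval(2 * time.Second)
	gomega.SetDefaultConsistentlyDuration(15 * time.Second)
	gomega.SetDefaultConsistentlyPollingInterval(2 * time.Second)
}
```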
Various tests all have their own poll intervals. As a start towards
consolidating that, the interval from test/e2e/framework/pod (as one of the
most common cases for polling) is moved into the framework.
Changing other helper packages and tests needs to follow.
This consolidates timeout handling. In the future, configuration of all
timeouts via a configuration file might get added. For now, the same three
legacy command line flags for the timeouts that get moved continue to be
supported.
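A rough, hypothetical sketch of how such legacy flags can stay supported while
the values live in the shared timeout configuration (flag and field names are
illustrative):

```go
// Hypothetical sketch: the legacy flags keep working by writing directly
// into the shared TimeoutContext instead of into package-local variables.
func registerLegacyTimeoutFlags(flags *flag.FlagSet, t *framework.TimeoutContext) {
	flags.DurationVar(&t.PodStart, "pod-start-timeout", t.PodStart,
		"Timeout for waiting for a pod to start running.")
	// ... the other legacy timeout flags are wired up the same way ...
}
```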
All usage of the builder pattern can be converted to cpuset.New()
with the same or fewer lines of code.
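For illustration, a before/after sketch of such a conversion (assuming the old
NewBuilder/Add/Result API):

```go
// Before: build the set imperatively via the builder.
builder := cpuset.NewBuilder()
builder.Add(0, 1, 2, 3)
fromBuilder := builder.Result()

// After: construct the immutable set in one call.
fromNew := cpuset.New(0, 1, 2, 3)
```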
Migrate Builder.Add to a private method of CPUSet, with a comment
that it is only intended for internal use, to preserve the immutability
of the exported interface.
This also removes the dependency on the 'require' library, avoiding
non-standard-library usage.
The 'set' package also provides conversions to slice, but under different names:
ToSliceNoSort() -> UnsortedList()
ToSlice() -> List()
Reimplement List() in terms of UnsortedList to save some duplication.
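A minimal sketch of that reimplementation, assuming CPUSet keeps its elements in
a map field named elems (the real field name may differ):

```go
// UnsortedList returns the elements in no particular order.
func (s CPUSet) UnsortedList() []int {
	result := make([]int, 0, len(s.elems))
	for cpu := range s.elems {
		result = append(result, cpu)
	}
	return result
}

// List returns the elements in ascending order, reusing UnsortedList
// instead of duplicating the map iteration.
func (s CPUSet) List() []int {
	result := s.UnsortedList()
	sort.Ints(result)
	return result
}
```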
Removes exit/fatal from the cpuset library.
Its use in the podresources test was not necessary.
The library reference in cpu_manager_test was moved to a local function and
converted to use the e2e test framework's error handling.
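A sketch of what such a local helper might look like (the helper name is
hypothetical), letting the framework fail the test instead of the library
exiting:

```go
// mustParseCPUSet is a hypothetical local test helper: parse errors are
// reported through the e2e framework rather than via a fatal exit inside
// the cpuset library.
func mustParseCPUSet(s string) cpuset.CPUSet {
	res, err := cpuset.Parse(s)
	framework.ExpectNoError(err, "failed to parse cpuset %q", s)
	return res
}
```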
Before, RunPostFilterPlugins did not distinguish between unschedulable and
unresolvable because there was only one postFilterPlugin by default. Now that
there are at least two, we should make sure that once a postFilterPlugin
returns unresolvable, we return directly.
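A simplified, self-contained sketch of that behavior (stand-in types, not the
scheduler framework's actual code):

```go
type code int

const (
	success code = iota
	unschedulable
	unschedulableAndUnresolvable
)

type postFilterPlugin interface {
	PostFilter() code
}

// runPostFilterPlugins returns as soon as a plugin either succeeds or
// reports that the pod is unresolvable; only plain "unschedulable"
// results fall through to the next plugin.
func runPostFilterPlugins(plugins []postFilterPlugin) code {
	for _, pl := range plugins {
		switch s := pl.PostFilter(); s {
		case success, unschedulableAndUnresolvable:
			return s
		}
		// unschedulable: give the remaining plugins a chance.
	}
	return unschedulable
}
```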
Signed-off-by: Kante Yin <kerthcet@gmail.com>
If we were to add new fields in TimeoutContext, the current users of
NewFrameworkWithCustomTimeouts might run into failures unless they get modified
to also set those new fields. This is error-prone.
A better approach is to let users of NewFrameworkWithCustomTimeouts override
fields by setting just those and use the normal defaults for the others.
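One possible shape of that approach, sketched with a single field (field and
helper usage are illustrative):

```go
// Sketch: start from a fresh copy of the defaults and override only the
// fields the caller actually set, so new TimeoutContext fields pick up
// their defaults automatically.
func NewFrameworkWithCustomTimeouts(baseName string, timeouts *TimeoutContext) *Framework {
	f := NewDefaultFramework(baseName)
	defaults := NewTimeoutContext()
	if timeouts.PodStart != 0 {
		defaults.PodStart = timeouts.PodStart
	}
	// ... repeat for the other timeout fields ...
	f.Timeouts = defaults
	return f
}
```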
Ginkgo relies on all workers defining all tests in exactly the same order. This
wasn't guaranteed for these tests, with the result that some tests might have
been executed more than once and others not at all when running in parallel.
This was noticed when some of these tests started to flake and then were
reported both as failure and success, as if they had been retried.
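A sketch of the pitfall and the fix (not the actual tests): defining specs while
ranging over a map gives every worker a different order, whereas iterating over
a sorted slice keeps all workers in sync:

```go
var _ = ginkgo.Describe("example", func() {
	cases := map[string]func(){
		"first":  func() { /* ... */ },
		"second": func() { /* ... */ },
	}

	// Bad: map iteration order is randomized, so each worker would define
	// the specs in a different order:
	//   for name, body := range cases { ginkgo.It(name, body) }

	// Good: iterate in a fixed, sorted order so all workers agree.
	names := make([]string, 0, len(cases))
	for name := range cases {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		ginkgo.It(name, cases[name])
	}
})
```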
In v1.26.0, these tests only used the timeout context while waiting for a CSI
call. This restores that behavior, just in case it is relevant. No test
flakes are known because of this.
The intent of timeout handling (for the entire "It" and not just a few calls)
becomes more obvious and simpler when using ginkgo.NodeTimeout as a decorator.
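For example (the helper being waited for is hypothetical):

```go
// The timeout applies to the whole "It": the context passed in is
// cancelled when ginkgo.NodeTimeout expires, so every context-aware call
// inside the spec stops early instead of hanging.
ginkgo.It("handles the CSI call", ginkgo.NodeTimeout(5*time.Minute), func(ctx ginkgo.SpecContext) {
	err := waitForCSICall(ctx) // hypothetical helper
	framework.ExpectNoError(err)
})
```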
It doesn't make sense for the E2E framework to have command line options that
don't do anything because then all test suites built with the framework inherit
those options.
For -list-images and -list-conformance-tests the solution is to move the
implementation into the framework (-list-images) and to move the flag into
test/e2e (-list-conformance-tests), respectively.
The placement was decided based on the observation that image patching is
common functionality while conformance testing is specific to one test suite.
The "[sig-network] DNS HostNetwork should resolve DNS of partial qualified
names for services on hostNetwork pods with dnsPolicy:
ClusterFirstWithHostNet" test assumes that a service named "kube-dns"
exists in the "kube-system" namespace. This assumption is valid if the
cluster was configured using kubeadm, but the assumption may be invalid
otherwise.
As the test uses dnsPolicy: ClusterFirst (as opposed to dnsPolicy: None),
it does not need to specify the name server in dnsConfig. Omitting
dnsConfig.nameservers obviates the need to look up the service.
Follow-up to commit add4652352.
* test/e2e/network/dns.go: Don't look up or use the kube-dns cluster IP
address as it might not exist on clusters that were not configured using
kubeadm.
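A sketch of the relevant part of the pod spec after this change (container
details omitted; v1 is k8s.io/api/core/v1):

```go
pod := &v1.Pod{
	Spec: v1.PodSpec{
		HostNetwork: true,
		DNSPolicy:   v1.DNSClusterFirstWithHostNet,
		// No DNSConfig.Nameservers entry: the cluster DNS is used
		// automatically, so there is no need to look up the kube-dns
		// cluster IP.
	},
}
```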
Bring back the number of test specs, which was dropped earlier.
It's now available in the reporting node of `ReportBeforeSuite` by extracting
the number from report.PreRunStats.SpecsThatWillBeRun.
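A sketch of how that looks with ginkgo v2's reporting API (field names as
defined in ginkgo's types package):

```go
var _ = ginkgo.ReportBeforeSuite(func(report ginkgo.Report) {
	// PreRunStats is filled in before any spec runs, so the total spec
	// count is available again at this point.
	framework.Logf("Will run %d of %d specs",
		report.PreRunStats.SpecsThatWillRun, report.PreRunStats.TotalSpecs)
})
```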
Signed-off-by: Dave Chen <dave.chen@arm.com>