The user's expectation when calling this method is that the pod should
be ready for the test; however, it only checks that the pod is running,
which causes timing issues in busy environments.
For example, if the pod is not ready, kube-proxy or other service
implementations will not forward traffic to it.
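A minimal sketch of the intended behaviour, waiting on the pod's Ready condition with client-go instead of only the Running phase (this is an illustration, not the framework's actual helper):

```go
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls until the pod's Ready condition is True, so the test
// only proceeds once kube-proxy (and other consumers of readiness) will route
// traffic to the pod.
func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```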
Instead of allowing the cloud provider to guess at the zones that
should apply to a cluster under test, allow an explicit list of zones
to be passed as a new test context flag, -gce-zones.
Only the GCE test cloud provider recognizes this value, because only
the GCE test cloud provider makes assumptions about zones when verifying
values, and the default GKE assumptions do not always match non-GKE
providers.
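A hedged sketch of how such a flag could be wired into the test context; only the -gce-zones flag name comes from this change, while the field name, function name, and comma-separated format below are assumptions:

```go
package example

import "flag"

// testContextType is a cut-down stand-in for the e2e test context; the
// GCEZones field name is an assumption.
type testContextType struct {
	GCEZones string
}

var testContext testContextType

// registerGCEFlags wires the explicit zone list in as a flag value.
func registerGCEFlags(fs *flag.FlagSet) {
	fs.StringVar(&testContext.GCEZones, "gce-zones", "",
		"Comma-separated list of zones the GCE provider should consider for the cluster under test.")
}
```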
A number of e2e tests are useful to run after the system has been
disrupted or is in the process of being disrupted, but the current
suite and test logic block progress by waiting for all nodes to be
healthy.
By passing -1 to the --minStartupPods or --allowed-not-ready-nodes flags,
the caller can bypass the wait logic before and after test suites that
would otherwise prevent running e2e tests during disruption. This allows
parts of the e2e suite to be used during cluster duress to verify that
controllers or components still function.
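A minimal sketch of the bypass semantics, using illustrative names rather than the framework's actual wait code:

```go
package example

// waitForHealthyCluster sketches the new behaviour: a negative flag value
// disables the corresponding wait entirely, so the suite can start against a
// cluster that is still being disrupted.
func waitForHealthyCluster(minStartupPods, allowedNotReadyNodes int) error {
	if minStartupPods >= 0 {
		// Wait until at least minStartupPods system pods are running and ready.
	}
	if allowedNotReadyNodes >= 0 {
		// Wait until all but allowedNotReadyNodes nodes report Ready.
	}
	// With both flags set to -1, no waiting happens before or after the suite.
	return nil
}
```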
Both of these are explicit arguments and are more elegantly captured by
having the test framework log the arguments passed to the test.
The namespaces to be deleted are already logged inside
WaitForNamespacesDeleted.
Extract TestSuite, TestDriver, TestPattern, TestConfig,
VolumeResource, and SnapshotVolumeResource from the testsuites
package and put them into a new package called api.
The ultimate goal is to make the testsuites package as clean
as possible, containing only the test suites themselves.
WaitForPodSuccessInNamespace[Slow] are replaced by WaitForPodSuccessInNamespaceTimeout(),
so that custom timeouts are used instead of the hardcoded ones.
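A hedged example of the replacement; the function name comes from this change, but the exact argument order of WaitForPodSuccessInNamespaceTimeout is an assumption:

```go
package example

import (
	"time"

	clientset "k8s.io/client-go/kubernetes"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

// waitForTestPodSuccess illustrates the change: the caller supplies its own
// timeout instead of relying on the old hardcoded values in
// WaitForPodSuccessInNamespace[Slow].
func waitForTestPodSuccess(c clientset.Interface, podName, ns string, timeout time.Duration) error {
	return e2epod.WaitForPodSuccessInNamespaceTimeout(c, podName, ns, timeout)
}
```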
Since we added tests that check connectivity against pods with
hostNetwork: true, there is a possibility that those pods
fail to run because the port is already in use on the host.
The current tests were using ports 8080, 8081, and 8082, which are
commonly used on hosts by other applications.
If the service is not ready after a certain time, and we are using
pods with hostNetwork: true, we assume that there is a conflict
and skip the test.
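A hedged sketch of that skip behaviour, with an illustrative helper name:

```go
package example

import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// skipOnLikelyHostPortConflict skips (rather than fails) the test when pods
// use hostNetwork: true and the service never became ready within the
// timeout, treating it as a probable port conflict on the host.
func skipOnLikelyHostPortConflict(serviceReady, usesHostNetwork bool) {
	if !serviceReady && usesHostNetwork {
		e2eskipper.Skipf("pods with hostNetwork: true never became ready, likely a host port conflict; skipping")
	}
}
```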
Dual-stack services can have two ClusterIPs; we already have tests that
exercise connectivity from different scenarios to the first
ClusterIP of the service.
This PR adds new functionality to the e2e network utilities to enable
dual-stack services and replicates the same tests against the
secondary ClusterIP, so that we cover connectivity to both cluster IPs.
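A minimal sketch of that idea, probing every address in Spec.ClusterIPs; the probe callback stands in for the existing per-IP test logic:

```go
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkAllClusterIPs runs the same connectivity probe against every ClusterIP
// of a dual-stack Service, not just the first one.
func checkAllClusterIPs(svc *corev1.Service, probe func(ip string) error) error {
	for _, ip := range svc.Spec.ClusterIPs {
		if err := probe(ip); err != nil {
			return fmt.Errorf("connectivity to ClusterIP %s of service %s/%s failed: %w", ip, svc.Namespace, svc.Name, err)
		}
	}
	return nil
}
```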
We hardcode the index number in the KubeProxy/Conntrack e2e tests, but
CollectAddresses returns 4 mixed IP-family addresses in a dual-stack
cluster. This change ensures that serverNodeInfo.nodeIP contains only
addresses valid for the expected IP family of each test case.
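A hedged sketch of that filtering, assuming the node addresses are available as NodeAddress values (the framework's CollectAddresses may expose them differently):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	netutils "k8s.io/utils/net"
)

// filterNodeIPsByFamily keeps only the node InternalIP addresses that match
// the IP family a test case expects, instead of indexing into a
// mixed-family list on dual-stack clusters.
func filterNodeIPsByFamily(addrs []corev1.NodeAddress, wantIPv6 bool) []string {
	var ips []string
	for _, a := range addrs {
		if a.Type != corev1.NodeInternalIP {
			continue
		}
		if netutils.IsIPv6String(a.Address) == wantIPv6 {
			ips = append(ips, a.Address)
		}
	}
	return ips
}
```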
Signed-off-by: Christopher M. Luciano <cmluciano@us.ibm.com>