Some storage tests use commands that are not available on Windows. Mark
them as LinuxOnly for now; we will check later whether equivalent
Windows commands are available.
Change-Id: I41b5668c855b2754a2e332cff4e90ebf2981aca0
Removes a comment from the daemons function that previously indicated a
check was being run to make sure the Docker daemon was running.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
The e2e test, included as part of Conformance,
"validates that there is no conflict between
pods with same hostPort but different hostIP and protocol"
was only verifying that the pods were scheduled without conflict,
but it never exercised the functionality itself.
The test should check that pods with containers forwarding the same
hostPort can be scheduled without conflict, and that the exposed
hostPorts actually forward traffic to the corresponding pods.
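In sketch form (Go, core v1 API types; hostIPA and hostIPB stand in
for two distinct node addresses, and the port numbers are
illustrative):

    // Every container asks for the same hostPort, but each pair of
    // entries differs in hostIP or protocol, so all three must be
    // schedulable on one node and each must forward to its own pod.
    ports := []v1.ContainerPort{
        {ContainerPort: 80, HostPort: 54323, HostIP: hostIPA, Protocol: v1.ProtocolTCP},
        {ContainerPort: 80, HostPort: 54323, HostIP: hostIPB, Protocol: v1.ProtocolTCP},
        {ContainerPort: 80, HostPort: 54323, HostIP: hostIPA, Protocol: v1.ProtocolUDP},
    }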
The predicate tests were using loopback addresses for the hostPort
test; however, those have different semantics depending on the IP
family: you cannot bind to ::1 and ::2 simultaneously, and IP
forwarding from localhost to localhost does not work in IPv6, since
the kernel route_localnet hack has no IPv6 equivalent.
- as soon as a request is received by the apiserver, determine the
timeout of the request and set a new request context with that
deadline.
- the timeout filter that times out non-long-running requests should
use the request context instead of the fixed 60s wait used today.
- the admission and storage layers use the same request context with
the deadline specified, as sketched below.
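A minimal sketch of the idea in Go (not the actual apiserver filter;
the handler wiring and timeout computation are simplified):

    import (
        "context"
        "net/http"
        "time"
    )

    // withRequestDeadline derives a per-request context carrying the
    // deadline, so admission and storage observe the same timeout.
    func withRequestDeadline(h http.Handler, timeout time.Duration) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
            ctx, cancel := context.WithTimeout(req.Context(), timeout)
            defer cancel()
            h.ServeHTTP(w, req.WithContext(ctx))
        })
    }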
Instead of hardcoding fedora:latest, use one of our e2e images as
the image for the created pods. This will allow users who test with
this data outside of integration environments to reference a real
image and avoid spurious errors.
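For example (imageutils refers to the existing
k8s.io/kubernetes/test/utils/image helpers; which image to pick is an
assumption):

    // Instead of:
    //     Image: "fedora:latest",
    // resolve one of the registered e2e images:
    Image: imageutils.GetE2EImage(imageutils.BusyBox),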
Relaxes matching of the pod_memory_working_set_bytes metric so that
we won't error due to the presence of other pods.
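A hedged sketch of the relaxation using gomega/gstruct (IgnoreExtras
is the real gstruct option; podID and boundedSample are hypothetical
helpers):

    // IgnoreExtras tolerates samples coming from pods this test does
    // not own, instead of failing on any unexpected element.
    matcher := gstruct.MatchElements(podID, gstruct.IgnoreExtras, gstruct.Elements{
        "test-pod": boundedSample,
    })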
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
The e2e topology manager tests want to exercise resource alignment
using devices, and the easiest devices to use at the moment are the
SRIOV devices.
The resource alignment test cases are run in a loop, once for each
supported policy.
The tests manage the SRIOV device plugin; up until now, the plugin
was set up and torn down in each loop iteration.
There is no real need for that. Each iteration must reconfigure (and
thus restart) the kubelet, but the device plugin can be set up and
torn down just once for all the policies.
The kubelet can reconnect just fine to a running device plugin.
This way, we greatly reduce the interactions and the complexity of the
test environment, making it easier to understand and more robust, and
we trim some minutes off the execution time.
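In sketch form (helper names are hypothetical, not the literal test
code):

    // Set up the SRIOV device plugin once for the whole suite...
    sd := setupSRIOVDevicePlugin(f)
    defer teardownSRIOVDevicePlugin(f, sd)
    for _, policy := range policies {
        // ...while the kubelet is reconfigured (and restarted) per
        // policy; it reconnects to the running device plugin.
        configureTopologyManagerInKubelet(f, policy)
        runResourceAlignmentTests(f, sd, policy)
    }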
However, this patch also hides (not solves) a test flake we observed
in some environments. The issue is hardly reproducible and not well
understood, but it seems to be caused by doing the sriov dp
setup/teardown in each policy testing loop.
Investigation so far suggests that the kubelet sometimes has stale
state after the sriovdp teardown/setup cycle, leading to flakes and
false negatives.
We tried to address this in https://github.com/kubernetes/kubernetes/pull/95611
with no conclusive results yet.
This patch was posted because, overall, we believe its gains exceed
its drawbacks (hiding the aforementioned flake), and because
understanding the potential interaction issues between the sriovdp
and the kubelet deserves a separate test.
Signed-off-by: Francesco Romani <fromani@redhat.com>
A suite of e2e tests was created for Topology Manager
to test the pod scope alignment feature.
Co-authored-by: Pawel Rapacz <p.rapacz@partner.samsung.com>
Co-authored-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
Signed-off-by: Cezary Zukowski <c.zukowski@samsung.com>
Since we added tests that check connectivity against pods with
hostNetwork: true, there is the possibility that those pods fail to
run because the port is already in use on the host.
The current tests were using ports 8080, 8081 and 8082, which are
commonly used on hosts by other applications.
If the service is not ready after a certain time and we are using
pods with hostNetwork: true, we assume that there is a conflict and
skip the test.
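Sketched below (e2eskipper is the e2e framework skipper package; the
readiness condition is simplified):

    // If a hostNetwork pod never became ready, the port is probably
    // already taken on the host: skip rather than fail.
    if usesHostNetwork && !endpointsReady {
        e2eskipper.Skipf("pods with hostNetwork never became ready, likely a hostPort conflict")
    }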
Dual-stack services can have two ClusterIPs, and we already have
tests that exercise the connectivity from different scenarios to the
first ClusterIP of the service.
This PR adds new functionality to the e2e network utils to enable
dual-stack services, and replicates the same tests against the
secondary ClusterIP, so we cover connectivity to both cluster IPs.
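The secondary ClusterIP comes straight from the API object; in
sketch form (checkReachable stands in for the existing connectivity
helpers, and servicePort is assumed):

    // Spec.ClusterIPs carries both families for a dual-stack Service;
    // exercise connectivity against each assigned ClusterIP.
    for _, clusterIP := range svc.Spec.ClusterIPs {
        checkReachable(net.JoinHostPort(clusterIP, strconv.Itoa(servicePort)))
    }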
Add a separate in-tree gcepd driver for Windows clusters, because
the Windows driver does not support certain features that the Linux
driver does.
Change-Id: I2fca86b3f32f17db7703c46a36944d9ee51f355f
We hardcode the index number in the KubeProxy/Conntrack e2es, and
CollectAddresses returns 4 mixed IP family addresses in a dualstack
cluster. This change ensures that serverNodeInfo.nodeIP contains only
valid addresses for the expected IP family per test case.
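A self-contained sketch of the per-family selection (function name
hypothetical):

    import "net"

    // firstAddressOfFamily returns the first address that matches
    // the IP family the test case expects.
    func firstAddressOfFamily(addrs []string, wantIPv6 bool) string {
        for _, a := range addrs {
            ip := net.ParseIP(a)
            if ip != nil && (ip.To4() == nil) == wantIPv6 {
                return a
            }
        }
        return ""
    }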
Signed-off-by: Christopher M. Luciano <cmluciano@us.ibm.com>