As described in 8c76845b03 ("test/e2e/network: fix a bug in the hostport e2e
test"), if we have two pods with the same hostPort and hostIP but different
protocols, a buggy CNI may decide to forward all traffic to only one of
these pods. Add a check that we are receiving responses from different pods.
Co-authored-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>
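A minimal sketch of such a check, assuming the /hostname replies gathered from the TCP and UDP probes are collected into a slice (the helper name is illustrative, not the actual e2e code):

    // Hedged sketch: after probing the TCP and UDP hostPorts, require that the
    // observed /hostname replies name at least two distinct pods.
    package main

    import (
        "fmt"
        "strings"
    )

    // requireTwoBackends returns an error unless at least two different pod
    // hostnames were observed among the responses.
    func requireTwoBackends(responses []string) error {
        seen := map[string]struct{}{}
        for _, r := range responses {
            seen[strings.TrimSpace(r)] = struct{}{}
        }
        if len(seen) < 2 {
            return fmt.Errorf("expected replies from 2 different pods, got %d: %v", len(seen), responses)
        }
        return nil
    }

    func main() {
        fmt.Println(requireTwoBackends([]string{"pod-a", "pod-b"})) // <nil>
        fmt.Println(requireTwoBackends([]string{"pod-a", "pod-a"})) // error: all traffic hit one pod
    }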
The hostport e2e test (sonobuoy run --e2e-focus 'validates that there is no
conflict between pods with same hostPort but different hostIP and protocol')
checks, in particular, that two pods with the same hostPort, the same hostIP,
but different L4 protocols can coexist on one node.
In order to do this, the test creates two pods with the same hostIP:hostPort,
one TCP-based, the other UDP-based. However, both pods listen on both protocols:
netexec --http-port=8080 --udp-port=8080
It can happen that a CNI which doesn't distinguish between TCP and UDP
hostPorts forwards all traffic, TCP or UDP, to the same pod. As this pod
listens on both protocols, it will reply to both requests, and the test
will think that everything works properly while the second pod is in fact
unreachable. Fix this by executing different commands in the two pods:
TCP: netexec --http-port=8080 --udp-port=-1
UDP: netexec --http-port=8008 --udp-port=8080
The TCP pod now doesn't listen on UDP, and the UDP pod doesn't listen on TCP on
the target hostPort. The UDP pod still needs to listen on TCP on another port
so that a pod readiness check can be made.
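A hedged sketch of how the two pods can declare the same hostIP:hostPort with different L4 protocols while running the commands above; the image tag, the hostPort value, and the names are illustrative assumptions, not the exact e2e fixture:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // hostPortContainer builds a container that binds the shared hostIP:hostPort
    // with the given L4 protocol and runs the given netexec command.
    func hostPortContainer(name string, args []string, proto corev1.Protocol, containerPort int32) corev1.Container {
        return corev1.Container{
            Name:  name,
            Image: "registry.k8s.io/e2e-test-images/agnhost:2.45", // illustrative tag
            Args:  args,
            Ports: []corev1.ContainerPort{{
                ContainerPort: containerPort,
                HostPort:      54323,       // same hostPort on both pods (illustrative value)
                HostIP:        "127.0.0.1", // same hostIP on both pods
                Protocol:      proto,       // the only difference: TCP vs UDP
            }},
        }
    }

    func main() {
        tcpPod := hostPortContainer("tcp-pod",
            []string{"netexec", "--http-port=8080", "--udp-port=-1"}, corev1.ProtocolTCP, 8080)
        udpPod := hostPortContainer("udp-pod",
            []string{"netexec", "--http-port=8008", "--udp-port=8080"}, corev1.ProtocolUDP, 8080)
        fmt.Println(tcpPod.Ports[0].Protocol, udpPod.Ports[0].Protocol)
    }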
The test is about SCTP and the accessed Service only forwarded SCTP
traffic to the server Pod, but the client Pod used the TCP protocol, so the
test traffic never reached the server Pod and the test NetworkPolicy
was never enforced. This led to test success even if the default-deny
policy was implemented incorrectly. In some cases the test could even fail
if an external server had the same IP as the cluster IP and was listening
on TCP port 80.
Signed-off-by: Quan Tian <qtian@vmware.com>
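A hedged sketch of building a protocol-matched client probe; the agnhost connect flags shown are assumptions about that subcommand rather than a verbatim quote from the test:

    package main

    import "fmt"

    // connectCommand builds a client probe that speaks the same L4 protocol as
    // the Service under test ("tcp", "udp" or "sctp").
    func connectCommand(serviceIP string, port int, protocol string) []string {
        return []string{
            "/agnhost", "connect",
            fmt.Sprintf("%s:%d", serviceIP, port),
            fmt.Sprintf("--protocol=%s", protocol),
            "--timeout=5s",
        }
    }

    func main() {
        // For the SCTP test both sides must speak SCTP; probing TCP 80 instead can
        // hit an unrelated listener that happens to share the cluster IP.
        fmt.Println(connectCommand("10.96.0.10", 80, "sctp"))
    }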
Currently, internalStaticIP is hard-coded to "10.240.11.11". Such an IP works
for aks-engine clusters but not for CAPZ ones (node subnet 10.1.0.0/16).
Signed-off-by: Zhecheng Li <zhechengli@microsoft.com>
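One possible direction, as a hedged sketch: derive the static test IP from the node CIDR instead of hard-coding it, so both subnet layouts get a valid in-subnet address. The helper name and the offset are illustrative.

    package main

    import (
        "fmt"
        "net"
    )

    // staticIPInSubnet returns the address at the given host offset within cidr.
    func staticIPInSubnet(cidr string, offset uint32) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("only IPv4 CIDRs are handled in this sketch: %s", cidr)
        }
        base := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
        candidate := base + offset
        out := net.IPv4(byte(candidate>>24), byte(candidate>>16), byte(candidate>>8), byte(candidate))
        if !ipnet.Contains(out) {
            return nil, fmt.Errorf("offset %d falls outside %s", offset, cidr)
        }
        return out, nil
    }

    func main() {
        // .11.11 within a /16: yields 10.240.11.11 and 10.1.11.11 respectively.
        for _, cidr := range []string{"10.240.0.0/16", "10.1.0.0/16"} {
            ip, err := staticIPInSubnet(cidr, 11*256+11)
            fmt.Println(cidr, ip, err)
        }
    }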
Some of these tests could not be run previously, especially on Windows
Docker containers. But now, by using Windows Containerd, we can finally
run them (a pod-spec sketch of the relevant fields follows the list):
- HostNetwork=true tests: This can now be enabled on Windows Privileged Containers.
- /etc/hosts related tests: These were not supported because they required single
file mappings, which are possible in Containerd.
- termination message as non-root user: Requires RunAsUsername and single file
mappings.
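A hedged sketch of the pod-spec fields these re-enabled tests rely on; the account name and field values are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        user := "ContainerUser" // an illustrative non-root Windows account
        pod := corev1.PodSpec{
            HostNetwork: true, // usable now that Windows privileged containers are supported
            SecurityContext: &corev1.PodSecurityContext{
                WindowsOptions: &corev1.WindowsSecurityContextOptions{
                    RunAsUserName: &user, // needed for the non-root termination-message test
                },
            },
        }
        fmt.Println(pod.HostNetwork, *pod.SecurityContext.WindowsOptions.RunAsUserName)
    }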
We have no guarantee that agnhost's `/hostname` endpoint returns
a hostname rather than an FQDN. We also have no guarantee that a hostname
gets passed to the execHostnameTest() function for comparison.
So make sure we're comparing hostnames in execHostnameTest().
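A hedged sketch of the intended comparison: reduce both sides to the short hostname so an FQDN on either side does not cause a spurious mismatch (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // shortHostname strips any domain suffix,
    // e.g. "webserver-0.default.svc.cluster.local" -> "webserver-0".
    func shortHostname(name string) string {
        return strings.Split(strings.TrimSpace(name), ".")[0]
    }

    func main() {
        expected := "webserver-0.default.svc.cluster.local"
        got := "webserver-0"
        fmt.Println(shortHostname(expected) == shortHostname(got)) // true
    }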
The number of workers was set to 1 because parallel probing on Windows is
flakier and the network policy tests may get stuck. This symptom disappears on
the newest Kubernetes, where the network policy tests run very well with 3 workers.
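A hedged sketch of a probe pool whose concurrency can be dropped to 1 on Windows; the function and variable names are illustrative, not the test's actual code:

    package main

    import (
        "fmt"
        "sync"
    )

    // probeAll runs probe() for every target using at most `workers` goroutines.
    func probeAll(targets []string, workers int, probe func(string) error) map[string]error {
        jobs := make(chan string)
        results := make(map[string]error, len(targets))

        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for t := range jobs {
                    err := probe(t)
                    mu.Lock()
                    results[t] = err
                    mu.Unlock()
                }
            }()
        }
        for _, t := range targets {
            jobs <- t
        }
        close(jobs)
        wg.Wait()
        return results
    }

    func main() {
        workers := 1 // keep 1 on older Windows setups; 3 works on recent Kubernetes
        res := probeAll([]string{"pod-a", "pod-b", "pod-c"}, workers, func(t string) error {
            fmt.Println("probing", t)
            return nil
        })
        fmt.Println(len(res), "targets probed")
    }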
* It should check one Node in a zone instead of
each Node and its fromZone.
* Check whether the Nodes' CPUs are equivalent (see the sketch below).
Signed-off-by: Zhecheng Li <zhechengli@microsoft.com>
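A hedged sketch of the intended selection and check, with illustrative types: pick one Node per zone and verify that their CPU capacities match.

    package main

    import "fmt"

    type node struct {
        name string
        zone string
        cpus int64 // allocatable CPU, in whole cores for this sketch
    }

    func main() {
        nodes := []node{
            {"node-a", "zone-1", 4},
            {"node-b", "zone-1", 4},
            {"node-c", "zone-2", 4},
        }

        // One representative Node per zone instead of every Node and its fromZone.
        perZone := map[string]node{}
        for _, n := range nodes {
            if _, ok := perZone[n.zone]; !ok {
                perZone[n.zone] = n
            }
        }

        // The representatives' CPUs must be equivalent.
        var cpus int64 = -1
        equivalent := true
        for _, n := range perZone {
            if cpus == -1 {
                cpus = n.cpus
            } else if n.cpus != cpus {
                equivalent = false
            }
        }
        fmt.Println(len(perZone), "zones, CPUs equivalent:", equivalent)
    }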
The e2e test checks that the component implementing Kubernetes Services
interprets ClusterIPs with leading zeros as decimal; otherwise the
cluster will be exposed to CVE-2021-29923.
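A small illustration of what is at stake: an octet written with a leading zero must be read as decimal, because an octal reading sends traffic to a different address.

    package main

    import "fmt"

    func main() {
        octet := "011"

        var decimal, octal int
        fmt.Sscanf(octet, "%d", &decimal) // 11 - the interpretation Kubernetes expects
        fmt.Sscanf(octet, "%o", &octal)   // 9  - the legacy BSD inet_aton behaviour

        fmt.Printf("%q as decimal: %d, as octal: %d\n", octet, decimal, octal)
    }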
The tests were asserting that, after a NodePort Service was removed,
no traffic was still reaching the endpoints.
However, the number of tries was so large that another test running
in parallel could create a working Service on that NodePort, making
the test fail.
Use only 10 tries to confirm that the Service stopped working.
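A hedged sketch of such a bounded check (the address and the helper name are illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // nodePortStillServing dials the node port a fixed number of times and
    // reports whether any attempt still connected.
    func nodePortStillServing(addr string, tries int) bool {
        for i := 0; i < tries; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return true
            }
            time.Sleep(time.Second)
        }
        return false
    }

    func main() {
        // 10 tries instead of a long polling loop that could overlap with a new
        // Service created by a parallel test on the same NodePort.
        fmt.Println(nodePortStillServing("192.0.2.10:30080", 10))
    }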