The current tests for Services with endpoints only validate the API objects; they don't validate that the implementation of Services actually works. This means that a cluster without any component implementing Services can pass Conformance.
Based on this comment in
ea07644522/test/e2e/framework/pod/node_selection.go (L96-L101)
// pod.Spec.NodeName should not be set directly because
// it will bypass the scheduler, potentially causing
// kubelet to Fail the pod immediately if it's out of
// resources. Instead, we want the pod to remain
// pending in the scheduler until the node has resources
// freed up.
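For reference, node_selection.go steers pods with node affinity instead of setting pod.Spec.NodeName directly; a minimal sketch of that pattern (illustrative names, not the framework's exact helper):

package example

import v1 "k8s.io/api/core/v1"

// pinPodToNode constrains a pod to a specific node through node affinity,
// so the pod still goes through the scheduler instead of bypassing it via
// pod.Spec.NodeName.
func pinPodToNode(pod *v1.Pod, nodeName string) {
	pod.Spec.Affinity = &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					// Match on the node's name field rather than a label.
					MatchFields: []v1.NodeSelectorRequirement{{
						Key:      "metadata.name",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{nodeName},
					}},
				}},
			},
		},
	}
}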
The e2e test validates the following Service status endpoints (a client-go sketch follows the list):
- patchCoreV1NamespacedServiceStatus
- readCoreV1NamespacedServiceStatus
- replaceCoreV1NamespacedServiceStatus
It also includes the previously untested Service endpoint:
- patchCoreV1NamespacedService
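A minimal sketch of exercising the status subresource with client-go; namespace, service name and condition values are placeholders, not the exact test code:

package example

import (
	"context"
	"encoding/json"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	clientset "k8s.io/client-go/kubernetes"
)

// patchServiceStatus merge-patches a condition into .status of a Service.
func patchServiceStatus(cs clientset.Interface, ns, name string) (*v1.Service, error) {
	patch, err := json.Marshal(map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": []metav1.Condition{{
				Type:               "StatusUpdate",
				Status:             metav1.ConditionTrue,
				Reason:             "E2E",
				Message:            "Set from e2e test",
				LastTransitionTime: metav1.Now(),
			}},
		},
	})
	if err != nil {
		return nil, err
	}
	// The trailing "status" argument targets the status subresource
	// (patchCoreV1NamespacedServiceStatus); UpdateStatus on the same client
	// corresponds to replaceCoreV1NamespacedServiceStatus.
	return cs.CoreV1().Services(ns).Patch(context.TODO(), name,
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
}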
The sig-network e2e tests related to Services have more than 3k lines. Some of those e2e tests are related to load balancers, which are cloud-provider specific and have special requirements.
We split up the services file and keep the load balancer e2e tests in their own file, with their own tag, so they are easier to skip for people who don't run e2e tests on cloud providers.
The ESIPP tests use a function to poll an HTTP endpoint. This function failed the framework if the request to the HTTP endpoint timed out, causing a panic that ginkgo couldn't recover from.
Also, this function is used inside a PollImmediate loop, so it should return the error instead of failing.
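A sketch of the pattern, with an illustrative helper name: the condition function returns the error to wait.PollImmediate instead of failing the framework, so a timed-out request simply triggers another poll iteration.

package example

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollHTTPEndpoint keeps retrying until the endpoint answers with 200 OK.
// Returning the error (rather than calling a framework Fail helper) lets
// PollImmediate decide whether to retry or give up when the timeout expires.
func pollHTTPEndpoint(url string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			// Transient failure (e.g. request timeout): log and retry.
			fmt.Printf("request to %s failed, retrying: %v\n", url, err)
			return false, nil
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}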
The test creates a pod with a hostPort to expose an SCTP port, then checks whether the iptables rules were installed correctly on the host.
The iptables rules MUST be checked on the same host where the pod is running :)
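A minimal sketch of the idea, with a hypothetical runOnNode helper: look up the node that is actually running the pod and check the iptables rules there.

package example

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// checkSCTPHostPortRules verifies the iptables rules on the node running the
// pod. runOnNode is an injected helper (hypothetical) that executes a command
// on a given node, e.g. via a hostexec pod.
func checkSCTPHostPortRules(cs clientset.Interface, ns, podName string,
	runOnNode func(nodeName, cmd string) (string, error)) error {

	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// The rules must be checked on pod.Spec.NodeName, the host where the
	// hostPort is actually wired up.
	out, err := runOnNode(pod.Spec.NodeName, "iptables-save")
	if err != nil {
		return err
	}
	if !strings.Contains(out, "-p sctp") {
		return fmt.Errorf("no SCTP hostPort rules found on node %s", pod.Spec.NodeName)
	}
	return nil
}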
This reverts commit 0ed8fd6dc9.
It turns out that ExternalIPs are not meant to be reachable from pods until the IP is present on the node.
However, due to a kube-proxy limitation, this was working in environments that used CNIs without bridges for the pods.
Service has had a problem since forever:
- User creates a service type=LoadBalancer
- We silently allocate them a NodePort
- User changes type to ClusterIP
- We fail the operation because they did not clear NodePort
They never asked for or used the NodePort!
Dual-stack introduced some dependent fields that get auto-wiped on
updates. This carries it further.
If you squint, you can see Service as a big, messy discriminated union,
with type as the discriminator. Ignoring fields for non-selected
union-modes seems right.
This introduces the potential for an apply loop. Specifically, we will
accept YAML that we did not previously accept. Apply could see the
field in local YAML and not in the server and repeatedly try to patch it
in. But since that YAML is currently an error, it seems like a very low
risk. Almost nobody actually specifies their own NodePort values.
To mitigate this somewhat, we only auto-wipe on updates. The same YAML
would fail to create. This is a little inconsistent. We could
auto-wipe on create, too, at the risk of more potential impact.
To do this properly, we need to know the old and new values, which means we cannot do it in defaulting or conversion. So we do it in strategy.
This change also adds unit tests and updates e2e tests to rely on and
verify this behavior.
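A minimal sketch of the wiping step described above, using the public API types and illustrative function names rather than the actual strategy code:

package example

import v1 "k8s.io/api/core/v1"

// needsNodePorts reports whether this Service type allocates node ports.
func needsNodePorts(svc *v1.Service) bool {
	return svc.Spec.Type == v1.ServiceTypeNodePort ||
		svc.Spec.Type == v1.ServiceTypeLoadBalancer
}

// wipeNodePortsOnUpdate only runs on update, where both the old and the new
// object are in hand, and clears node ports the user never asked for once
// the new type no longer uses them.
func wipeNodePortsOnUpdate(newSvc, oldSvc *v1.Service) {
	if needsNodePorts(oldSvc) && !needsNodePorts(newSvc) {
		for i := range newSvc.Spec.Ports {
			newSvc.Spec.Ports[i].NodePort = 0
		}
	}
}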
NetworkingTest is used to test different network scenarios.
Since new capabilities and scenarios keep being added, like SCTP or HostNetwork for pods, we need a way to configure it with minimal disruption and code changes.
The idiomatic Go way to achieve this is functional options.
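A sketch of the functional-options pattern with hypothetical option names (the real NetworkingTest configuration differs):

package example

// Config holds the knobs a network test scenario may tweak.
type Config struct {
	Protocol    string // e.g. "TCP", "UDP" or "SCTP"
	HostNetwork bool
}

// Option mutates a Config; new scenarios add new options without
// touching existing call sites.
type Option func(*Config)

// WithSCTP switches the scenario to SCTP (hypothetical option name).
func WithSCTP() Option {
	return func(c *Config) { c.Protocol = "SCTP" }
}

// WithHostNetwork runs the test pods on the host network (hypothetical).
func WithHostNetwork() Option {
	return func(c *Config) { c.HostNetwork = true }
}

// NewConfig applies defaults first, then any options the caller passed.
func NewConfig(opts ...Option) *Config {
	cfg := &Config{Protocol: "TCP"}
	for _, opt := range opts {
		opt(cfg)
	}
	return cfg
}

A test could then call NewConfig(WithSCTP(), WithHostNetwork()) while existing call sites keep compiling unchanged.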
- Due to performance issues, service controller updates are slow in large clusters, causing tests to fail. The tag can be removed once the performance issues are resolved.
A previous commit created a few agnhost-related functions that create agnhost pods / containers for general purposes.
This refactors tests to use those functions.
The test can execute whether the hosts have SSH or not.
Relevant cases:
"should be able to up and down services"
"should implement service.kubernetes.io/service-proxy-name"
"should implement service.kubernetes.io/headless"
Deflake the current e2e test "should be able to preserve UDP traffic when server pod cycles for a NodePort service" and reorganize the code in the e2e framework.
Signed-off-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>
Ensure that when a pod servicing UDP traffic is deleted, the conntrack entries are cleaned up and another backend can pick up the traffic with minimal interruption.
When using NodePort services with long-running connections, stale conntrack entries left behind on pod deletion can halt the flow of traffic. Add a test case to check that the conntrack entries are cleaned up.
There were several sshPort values in e2e test packages because we've migrated code from the e2e framework by copying and pasting.
This adds a common SSHPort to the e2essh package to reduce such duplicated code.
RestartControllerManager() is a kube-controller-manager specific function, and it is better to separate it out as a subpackage of the e2e test framework.
In addition, the function introduced an invalid dependency on e2essh.
So this separates the function into the e2ekubesystem subpackage.
The function is called from the e2e/network tests only, so this moves the function into those tests, reducing the e2e/framework/util.go code and removing an invalid dependency in the e2e test framework.
These helpers made an invalid dependency on a sub e2e framework package from the core framework.
We can use e2epod.WaitTimeoutForPodReadyInNamespace instead to remove the invalid dependency.
The main purpose of this PR is to handle the core framework package's dependency on the pod subpackage.
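A hedged usage sketch of the helper named above; the exact signature has varied across releases (newer versions take a context.Context as the first argument), so treat the argument list as illustrative:

package example

import (
	"time"

	clientset "k8s.io/client-go/kubernetes"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

// waitForServerPod waits for a test pod to become Ready using the pod
// subpackage helper rather than a framework-local readiness check.
func waitForServerPod(cs clientset.Interface, ns, podName string) error {
	return e2epod.WaitTimeoutForPodReadyInNamespace(cs, podName, ns, 5*time.Minute)
}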
Most of these could have been refactored automatically, but the result would not have been any less ugly: the unsophisticated tooling left lots of unnecessary struct -> pointer -> struct transitions.
Service session affinity allows setting the maximum session sticky timeout.
This commit adds e2e tests to check that the session is sticky before the timeout expires and is no longer sticky after it.
Executing commands in pods is expensive in terms of time, and the execution time is unpredictable and random.
The session affinity tests send several HTTP requests from a pod to check that the session is sticky. Instead of executing one HTTP request at a time, we can execute several requests from the pod in one shot and process the output.
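A sketch of the batching idea, assuming the backend serves its hostname at "/" (as the agnhost-based server pods in these tests do); execInPod is an injected helper (hypothetical) that runs a shell command inside the client pod and returns its stdout:

package example

import (
	"fmt"
	"strings"
)

// checkAffinity issues all the requests in a single exec instead of one
// exec per request, then inspects which backends answered.
func checkAffinity(execInPod func(cmd string) (string, error), serviceIPPort string, requests int) (bool, error) {
	// One shell invocation sends the requests back to back; each response
	// (the serving pod's hostname) lands on its own line.
	cmd := fmt.Sprintf(
		"for i in $(seq 1 %d); do curl -q -s --connect-timeout 2 http://%s/; echo; done",
		requests, serviceIPPort)
	out, err := execInPod(cmd)
	if err != nil {
		return false, err
	}
	hosts := strings.Fields(out)
	if len(hosts) == 0 {
		return false, fmt.Errorf("no responses from %s", serviceIPPort)
	}
	// The session is sticky when every response came from the same backend.
	for _, h := range hosts {
		if h != hosts[0] {
			return false, nil
		}
	}
	return true, nil
}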