The e2e test validates the following Service status endpoints:
- patchCoreV1NamespacedServiceStatus
- readCoreV1NamespacedServiceStatus
- replaceCoreV1NamespacedServiceStatus
It also covers the previously untested Service endpoint:
- patchCoreV1NamespacedService
The sig-network e2e tests related to Services have more than 3k lines.
Some of those e2e tests cover loadbalancers, which are cloud provider
specific and have special requirements.
We split up the services file and keep the loadbalancer e2e tests
in their own file with their own tag, so they are easier to skip
for people who don't run e2e tests on cloud providers.
The ESIPP tests use a function to poll an HTTP endpoint.
This function failed the framework if the request to the HTTP endpoint
timed out, causing a panic that Ginkgo couldn't recover from.
Also, this function was used inside a PollImmediate loop, so it should
return the error instead of failing.
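A minimal sketch of that pattern, with illustrative names (getHTTPContent
and pollForContent are not the actual e2e helpers): the HTTP helper returns
an error on timeout instead of failing the framework, and the PollImmediate
loop treats it as a retryable condition.

```go
package sketch

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// getHTTPContent fetches an endpoint and returns an error (including
// timeouts) instead of failing the test framework, so callers can retry.
func getHTTPContent(host string, port int, timeout time.Duration, path string) (string, error) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d%s", host, port, path))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

// pollForContent retries until the endpoint serves the expected body;
// transient errors no longer panic the Ginkgo runner.
func pollForContent(host string, port int, path, want string) error {
	return wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		body, err := getHTTPContent(host, port, 5*time.Second, path)
		if err != nil {
			return false, nil // retry on timeouts and transient errors
		}
		return body == want, nil
	})
}
```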
The test creates a pod with a hostPort to expose an SCTP port, then
checks whether the iptables rules were installed correctly on the host.
The iptables rules MUST be checked on the same host where the pod
is running :)
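An illustrative pod spec for this scenario, built with client-go types;
this is not the actual test code, and the image tag and port number are
assumptions:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sctpHostPortPod declares an SCTP hostPort, which the node's runtime/CNI
// translates into iptables rules on the host that runs the pod.
func sctpHostPortPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostport-sctp"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // assumed tag
				Ports: []v1.ContainerPort{{
					ContainerPort: 5060, // assumed port
					HostPort:      5060,
					Protocol:      v1.ProtocolSCTP,
				}},
			}},
		},
	}
}
```

Once the pod is scheduled, its Spec.NodeName identifies the host whose
iptables rules the test has to inspect.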
This reverts commit 0ed8fd6dc9.
It turns out that ExternalIPs are not meant to be reachable from
pods until the IP is present on the node.
However, due to a kube-proxy limitation, it was working in environments
that used CNIs without bridges for the pods.
Service has had a problem since forever:
- User creates a service type=LoadBalancer
- We silently allocate them a NodePort
- User changes type to ClusterIP
- We fail the operation because they did not clear NodePort
They never asked for or used the NodePort!
Dual-stack introduced some dependent fields that get auto-wiped on
updates. This carries it further.
If you squint, you can see Service as a big, messy discriminated union,
with type as the discriminator. Ignoring fields for non-selected
union-modes seems right.
This introduces the potential for an apply loop. Specifically, we will
accept YAML that we did not previously accept. Apply could see the
field in local YAML and not in the server and repeatedly try to patch it
in. But since that YAML is currently an error, it seems like a very low
risk. Almost nobody actually specifies their own NodePort values.
To mitigate this somewhat, we only auto-wipe on updates. The same YAML
would fail to create. This is a little inconsistent. We could
auto-wipe on create, too, at the risk of more potential impact.
To do this properly, we need to know the old and new values, which means
we cannot do it in defaulting or conversion. So we do it in strategy.
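A minimal sketch of the strategy-level wipe, using hypothetical helper
names (the real code lives in the Service strategy's update preparation
and handles more cases):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// needsNodePort reports whether a Service of this type allocates NodePorts.
func needsNodePort(svc *v1.Service) bool {
	return svc.Spec.Type == v1.ServiceTypeNodePort ||
		svc.Spec.Type == v1.ServiceTypeLoadBalancer
}

// dropNodePortsOnTypeChange clears node ports when an update moves the
// Service type away from one that uses them. Having both the old and the
// new object is exactly what strategy gives us and defaulting/conversion
// do not.
func dropNodePortsOnTypeChange(newSvc, oldSvc *v1.Service) {
	if needsNodePort(oldSvc) && !needsNodePort(newSvc) {
		for i := range newSvc.Spec.Ports {
			newSvc.Spec.Ports[i].NodePort = 0
		}
	}
}
```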
This change also adds unit tests and updates e2e tests to rely on and
verify this behavior.
NetworkingTest is used to test different network scenarios.
As new capabilities and scenarios are added, like SCTP or HostNetwork
for pods, we need a way to configure it with minimal disruption and code
changes.
The idiomatic Go way to achieve this is functional options.
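A minimal sketch of the pattern, with illustrative names (NetworkConfig
and these options are not the actual e2e framework API):

```go
package sketch

// NetworkConfig holds the knobs a network scenario can tweak.
type NetworkConfig struct {
	Protocol    string // "TCP", "UDP", or "SCTP"
	HostNetwork bool
}

// Option mutates a NetworkConfig.
type Option func(*NetworkConfig)

// WithSCTP switches the scenario to SCTP.
func WithSCTP() Option {
	return func(c *NetworkConfig) { c.Protocol = "SCTP" }
}

// WithHostNetwork runs the test pods on the host network.
func WithHostNetwork() Option {
	return func(c *NetworkConfig) { c.HostNetwork = true }
}

// NewNetworkConfig applies options over defaults, so adding a scenario
// means adding an Option, not touching every existing call site.
func NewNetworkConfig(opts ...Option) *NetworkConfig {
	c := &NetworkConfig{Protocol: "TCP"}
	for _, o := range opts {
		o(c)
	}
	return c
}
```

Existing call sites stay NewNetworkConfig(), while a new SCTP
host-network scenario becomes NewNetworkConfig(WithSCTP(), WithHostNetwork()).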
- Due to performance issues, service controller updates are slow
in large clusters, causing tests to fail. The tag can be removed once
the performance issues are resolved.
A previous commit created a few agnhost-related functions that create
agnhost pods / containers for general purposes.
This refactors tests to use those functions.
Tests can execute whether or not the hosts have SSH.
Relevant cases:
"should be able to up and down services"
"should implement service.kubernetes.io/service-proxy-name"
"should implement service.kubernetes.io/headless"