* Rename const for topology.../zone
* Rename const for topology.../region
* Rename const for failure-domain.../zone
* Rename const for failure-domain.../region
* Restore old names for compat
This reverts commit 0ed8fd6dc9.
It turns out that ExternalIPs are not supposed to be reachable from
pods until the IP is present on the node.
However, due to a kube-proxy limitation it was working in environments
that use CNIs without bridges for the pods.
Service has had a problem since forever:
- User creates a service type=LoadBalancer
- We silently allocate them a NodePort
- User changes type to ClusterIP
- We fail the operation because they did not clear NodePort
They never asked for or used the NodePort!
Dual-stack introduced some dependent fields that get auto-wiped on
updates. This carries it further.
If you squint, you can see Service as a big, messy discriminated union,
with type as the discriminator. Ignoring fields for non-selected
union-modes seems right.
This introduces the potential for an apply loop. Specifically, we will
accept YAML that we did not previously accept. Apply could see the
field in the local YAML but not on the server and repeatedly try to patch it
in. But since that YAML is currently an error, it seems like a very low
risk. Almost nobody actually specifies their own NodePort values.
To mitigate this somewhat, we only auto-wipe on updates. The same YAML
would fail to create. This is a little inconsistent. We could
auto-wipe on create, too, at the risk of more potential impact.
To do this properly, we need to know the old and new values, which means
we cannot do it in defaulting or conversion. So we do it in strategy.
This change also adds unit tests and updates e2e tests to rely on and
verify this behavior.
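
A minimal sketch of the strategy-level wipe, using hypothetical helper
names (the real change lives in the Service REST strategy and works on
the internal API types; core/v1 types are used here for brevity):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// needsNodePorts reports whether this Service type allocates node ports.
func needsNodePorts(svc *corev1.Service) bool {
	return svc.Spec.Type == corev1.ServiceTypeNodePort ||
		svc.Spec.Type == corev1.ServiceTypeLoadBalancer
}

// dropNodePortsOnTypeChange is called on update only: if the old Service
// needed node ports and the new one does not (e.g. type changed from
// LoadBalancer to ClusterIP), clear the ports the system allocated
// instead of failing validation.
func dropNodePortsOnTypeChange(newSvc, oldSvc *corev1.Service) {
	if needsNodePorts(oldSvc) && !needsNodePorts(newSvc) {
		for i := range newSvc.Spec.Ports {
			newSvc.Spec.Ports[i].NodePort = 0
		}
	}
}
```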
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kube-proxy: call both proxiers at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on the IPFamily field AND add dual-stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change in the API;
basically, it introduces dual-stack Services. The main changes are
(a sketch of the resulting Service follows the list):
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field, IPFamilyPolicy (of type IPFamilyPolicyType),
that can take 3 values to express the "dual-stack(mad)ness" of the
cluster: SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
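
To make the pluralized fields concrete, here is a hedged sketch of a
dual-stack Service built with the core/v1 Go types (field names as
introduced by this change; ClusterIPs is normally filled in by the
apiserver):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dualStackService asks for both families, IPv4 first.
func dualStackService() *corev1.Service {
	policy := corev1.IPFamilyPolicyRequireDualStack
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
		Spec: corev1.ServiceSpec{
			Selector:       map[string]string{"app": "example"},
			Ports:          []corev1.ServicePort{{Port: 80}},
			IPFamilyPolicy: &policy, // SingleStack | PreferDualStack | RequireDualStack
			IPFamilies:     []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
			// ClusterIPs (the plural of ClusterIP) is allocated by the
			// apiserver, one IP per requested family.
		},
	}
}
```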
The goal is to add coverage for the Services API operations, taking into
account the 6 different modes a cluster can have (sketched below):
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
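
A hypothetical sketch of that matrix (names and CIDRs are illustrative
only, not the actual test fixtures):

```go
package sketch

// clusterMode is a hypothetical descriptor for the service-CIDR setup
// each integration test run exercises.
type clusterMode struct {
	name         string
	serviceCIDRs []string
}

// The six modes referenced above.
var clusterModes = []clusterMode{
	{"single-stack IPv4", []string{"10.0.0.0/16"}},
	{"single-stack IPv6", []string{"2001:db8::/108"}},
	{"dual-stack gate on, IPv4 only", []string{"10.0.0.0/16"}},
	{"dual-stack gate on, IPv6 only", []string{"2001:db8::/108"}},
	{"dual-stack IPv4,IPv6", []string{"10.0.0.0/16", "2001:db8::/108"}},
	{"dual-stack IPv6,IPv4", []string{"2001:db8::/108", "10.0.0.0/16"}},
}
```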
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>
The kube_proxy SIGDescribe previously only had "Network" in the title,
which made it more difficult to select just the test cases in the
kube_proxy file: focusing on "Network" would end up running any
SIGDescribe e2e test with "Network" in its text.
Signed-off-by: Christopher M. Luciano <cmluciano@us.ibm.com>
CRUD operations are the extent of conformance testing that we can add
for the NetworkPolicy tests, since we require a 3rd-party plugin
(e.g. a CNI network plugin) for enforcement.
Signed-off-by: Christopher M. Luciano <cmluciano@us.ibm.com>
The CreateSync method includes waiting for the pod to become Running
and returns a fresh pod instance.
In addition, errors are asserted in the method.
Therefore, there is no need for the callers to repeat these operations.
Some, like the error assertions, would never be reached anyway: if an
error occurs, the method itself fails first.
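
A hedged before/after fragment, assuming the e2e framework's PodClient
and a hypothetical newAgnhostPod helper (exact signatures vary between
releases):

```go
// Before: every caller repeated the wait and the error assertion.
pod := f.PodClient().Create(newAgnhostPod())
framework.ExpectNoError(e2epod.WaitForPodRunningInNamespace(f.ClientSet, pod))

// After: CreateSync creates the pod, waits for it to become Running,
// asserts errors itself, and returns a freshly fetched pod.
pod = f.PodClient().CreateSync(newAgnhostPod())
```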
Signed-off-by: Edward Haas <edwardh@redhat.com>
- Due to performance issues, service controller updates are slow
in large clusters, causing tests to fail. The tag can be removed once
the performance issues are resolved
The NetworkPolicy tests work by trying to connect to a service by its
name, which means that for the tests that involved creating egress
policies, it had to always create an extra rule allowing egress for
DNS, but this assumed that DNS was running on UDP port 53. If it was
running somewhere else (e.g. if you changed the CoreDNS pods to use port
5353 to avoid needing to give them the NET_BIND_SERVICE capability)
then the NetworkPolicy tests would fail.
Fix this by making the tests connect to their services by IP rather
than by name, and removing all the DNS special-case rules. There are
other tests that ensure that Service DNS works.
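
For context, this is roughly the special-case egress rule each test had
to carry, sketched with the networking/v1 Go types; it bakes in the
UDP-53 assumption that this change removes:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// dnsEgressRule allows egress to DNS so that connecting to services by
// name keeps working; it silently assumes DNS listens on UDP 53.
func dnsEgressRule() networkingv1.NetworkPolicyEgressRule {
	udp := corev1.ProtocolUDP
	dnsPort := intstr.FromInt(53)
	return networkingv1.NetworkPolicyEgressRule{
		Ports: []networkingv1.NetworkPolicyPort{
			{Protocol: &udp, Port: &dnsPort},
		},
	}
}
```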
A previous commit created a few agnhost-related functions that create
agnhost pods / containers for general purposes.
Refactors tests to use those functions.
Having framework.DumpDebugInfo after the Failf was a noop, and we were
losing those potentially helpful logs when we need them the most (on a
failure).
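
A sketch of the reordering (the DumpDebugInfo arguments shown are
assumed):

```go
// Before: Failf aborts the test, so the dump below it never runs.
framework.Failf("pods never became ready: %v", err)
framework.DumpDebugInfo(cs, ns) // unreachable

// After: dump the debug info first, then fail.
framework.DumpDebugInfo(cs, ns)
framework.Failf("pods never became ready: %v", err)
```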
Signed-off-by: Jamo Luhrsen <jluhrsen@redhat.com>