Terminal pods, whose phase is Failed or Succeeded, are guaranteed
never to regress and to be stopped, so their IPs should never be
published in Endpoints.
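A minimal sketch of the check involved, using the core v1 pod phases (the function name here is illustrative, not the controller's actual helper):

```go
package endpointutil

import v1 "k8s.io/api/core/v1"

// isTerminal reports whether a pod has permanently stopped running.
// Terminal pods never regress to a running phase, so their IPs can be
// safely dropped from Endpoints.
func isTerminal(pod *v1.Pod) bool {
	return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed
}
```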
The field is not used anywhere, and its value may be stale, since
Endpoints and EndpointSlice won't be updated if the only change is the
Pod's ResourceVersion.
When comparing EndpointSubsets and Endpoints, we ignore differences in
the Pods' ResourceVersion to avoid unnecessary updates caused by Pod
updates that we don't care about, e.g. annotation updates.
Otherwise the periodic Service resync would intensively update
Endpoints or EndpointSlices whose Pods had irrelevant changes between
two resyncs, delaying the processing of newly created Services. In a
large cluster with thousands of such Endpoints, we observed a 2-minute
delay when the resync happens.
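A minimal sketch of such a comparison, assuming each address records its Pod in TargetRef (the helper names are illustrative, not the controller's actual code):

```go
package endpointutil

import (
	v1 "k8s.io/api/core/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// subsetsEqualIgnoringResourceVersion compares two sets of endpoint
// subsets while ignoring the Pod ResourceVersion stored in each
// address's TargetRef, so irrelevant Pod updates don't force a write.
func subsetsEqualIgnoringResourceVersion(a, b []v1.EndpointSubset) bool {
	return apiequality.Semantic.DeepEqual(stripResourceVersions(a), stripResourceVersions(b))
}

func stripResourceVersions(subsets []v1.EndpointSubset) []v1.EndpointSubset {
	out := make([]v1.EndpointSubset, 0, len(subsets))
	for i := range subsets {
		s := *subsets[i].DeepCopy() // never mutate the cached object
		for j := range s.Addresses {
			if s.Addresses[j].TargetRef != nil {
				s.Addresses[j].TargetRef.ResourceVersion = ""
			}
		}
		for j := range s.NotReadyAddresses {
			if s.NotReadyAddresses[j].TargetRef != nil {
				s.NotReadyAddresses[j].TargetRef.ResourceVersion = ""
			}
		}
		out = append(out, s)
	}
	return out
}
```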
* set `endpoints.kubernetes.io/over-capacity` to "truncated" when the
  number of addresses has been truncated to 1000
* ready addresses are prioritized over non-ready addresses
* addresses are proportionally truncated across subsets
Now that the EndpointSlice API and controllers are GA, the Endpoints
controller will use this annotation to warn when Endpoints are over
capacity. In a future release, this warning will be replaced with
truncation.
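A simplified sketch of the truncation described above. The real controller truncates proportionally across subsets; this version truncates greedily, but it keeps the two key properties from the list: ready addresses win over non-ready ones, and the annotation records what happened.

```go
package endpointutil

import v1 "k8s.io/api/core/v1"

const (
	overCapacityAnnotation = "endpoints.kubernetes.io/over-capacity"
	maxAddresses           = 1000
)

func truncateEndpoints(ep *v1.Endpoints) {
	total := 0
	for _, s := range ep.Subsets {
		total += len(s.Addresses) + len(s.NotReadyAddresses)
	}
	if total <= maxAddresses {
		return
	}
	// First pass: keep ready addresses, up to the cap.
	remaining := maxAddresses
	for i := range ep.Subsets {
		s := &ep.Subsets[i]
		if len(s.Addresses) > remaining {
			s.Addresses = s.Addresses[:remaining]
		}
		remaining -= len(s.Addresses)
	}
	// Second pass: non-ready addresses only fill whatever is left.
	for i := range ep.Subsets {
		s := &ep.Subsets[i]
		if len(s.NotReadyAddresses) > remaining {
			s.NotReadyAddresses = s.NotReadyAddresses[:remaining]
		}
		remaining -= len(s.NotReadyAddresses)
	}
	if ep.Annotations == nil {
		ep.Annotations = map[string]string{}
	}
	ep.Annotations[overCapacityAnnotation] = "truncated"
}
```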
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kube-proxy: call both proxiers at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on the IPFamily field AND add dual-stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change in the API;
basically, it introduces dual-stack Services. The main changes (see
the sketch after this list) are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field, IPFamilyPolicy (of type
  IPFamilyPolicyType), that can take 3 values to express the
  "dual-stack(mad)ness" of the cluster:
  SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
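A field-level sketch of what the pluralized API looks like on a Service object (values here are illustrative; clusterIPs are normally allocated by the API server rather than set by hand):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	policy := v1.IPFamilyPolicyPreferDualStack
	svc := v1.Service{
		Spec: v1.ServiceSpec{
			// The singular IPFamily field is gone; IPFamilies lists the
			// requested families in preference order.
			IPFamilies:     []v1.IPFamily{v1.IPv4Protocol, v1.IPv6Protocol},
			IPFamilyPolicy: &policy,
			// ClusterIP remains for compatibility; ClusterIPs holds one
			// IP per assigned family, with ClusterIPs[0] == ClusterIP.
			ClusterIP:  "10.0.0.10",
			ClusterIPs: []string{"10.0.0.10", "fd00::10"},
		},
	}
	fmt.Println(svc.Spec.IPFamilies, *svc.Spec.IPFamilyPolicy, svc.Spec.ClusterIPs)
}
```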
The goal is to add coverage for the Services API operations, taking
into account the 6 different modes a cluster can have (enumerated in
the sketch below):
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
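One way to enumerate those modes, e.g. for a table-driven test (an illustrative sketch, not the actual test code):

```go
package servicetests

import v1 "k8s.io/api/core/v1"

// clusterModes lists the six configurations a cluster can be in once
// dual-stack is taken into account.
var clusterModes = []struct {
	name      string
	dualStack bool
	families  []v1.IPFamily
}{
	{"single-stack IPv4", false, []v1.IPFamily{v1.IPv4Protocol}},
	{"single-stack IPv6", false, []v1.IPFamily{v1.IPv6Protocol}},
	{"dual-stack, IPv4 only", true, []v1.IPFamily{v1.IPv4Protocol}},
	{"dual-stack, IPv6 only", true, []v1.IPFamily{v1.IPv6Protocol}},
	{"dual-stack, IPv4 primary", true, []v1.IPFamily{v1.IPv4Protocol, v1.IPv6Protocol}},
	{"dual-stack, IPv6 primary", true, []v1.IPFamily{v1.IPv6Protocol, v1.IPv4Protocol}},
}
```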
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>
Also mark the reason for lint errors in:
pkg/controller/endpoint/config/v1alpha1,
pkg/controller/endpointslice/config/v1alpha1,
pkg/controller/endpointslicemirroring/config/v1alpha1
EndpointController was accidentally requiring all headless services to
be IPv4-only in clusters with IPv6DualStack enabled.
This still leaves "legacy" (i.e., IPFamily-less) headless services as
always IPv4-only because the controller doesn't currently have easy
access to the information that would allow it to fix that.
(EndpointSliceController had the same problem already, and still
does.) This can be fixed, if needed, by manually setting IPFamily,
and the proposed API for 1.20 will handle this situation better.
Rewrite some of the test helpers to better support single-stack IPv4
vs single-stack IPv6 vs dual-stack IPv4 primary vs dual-stack IPv6
primary, and update TestPodToEndpointAddressForService to test some
more cases.
The endpoint controllers responded to Pod changes by trying to figure
out if the generated endpoint resource would change, rather than just
checking whether the Pod had changed; but since the set of Pod fields
that needs to be checked depends on the Service and Node as well, the
code ended up only checking for a subset of the changes it should have.
In particular, EndpointSliceController ended up only looking at IPv4
Pod IPs when processing Pod update events, so when a Pod went from
having no IP to having only an IPv6 IP, EndpointSliceController would
think it hadn't changed.
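A minimal sketch of the direct Pod comparison the fix moves toward; the names are illustrative, and the real helper checks more fields:

```go
package endpointutil

import v1 "k8s.io/api/core/v1"

func isPodReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// podChangedForEndpoints reports whether a pod update could change the
// endpoints generated for it, by comparing the pods directly instead
// of trying to predict the generated resource.
func podChangedForEndpoints(oldPod, newPod *v1.Pod) bool {
	if isPodReady(oldPod) != isPodReady(newPod) {
		return true
	}
	// Compare the full list of pod IPs, not just the primary one, so
	// an IPv6-only change is noticed too.
	if len(oldPod.Status.PodIPs) != len(newPod.Status.PodIPs) {
		return true
	}
	for i := range oldPod.Status.PodIPs {
		if oldPod.Status.PodIPs[i].IP != newPod.Status.PodIPs[i].IP {
			return true
		}
	}
	return oldPod.Spec.NodeName != newPod.Spec.NodeName ||
		(oldPod.DeletionTimestamp == nil) != (newPod.DeletionTimestamp == nil)
}
```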
We should clear the EndpointsLastChangeTriggerTime annotation when
there is no new trigger time to export. Not clearing it may result in
kube-proxy exporting a wrong Network Programming Latency SLI.
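A minimal sketch of the intended behavior, using the annotation key constant from the core v1 API (the function name is illustrative):

```go
package endpointutil

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

func setTriggerTimeAnnotation(ep *v1.Endpoints, triggerTime time.Time) {
	if ep.Annotations == nil {
		ep.Annotations = map[string]string{}
	}
	if triggerTime.IsZero() {
		// No new trigger time to export: clear the annotation rather
		// than leaving the previous, now-wrong value behind.
		delete(ep.Annotations, v1.EndpointsLastChangeTriggerTime)
		return
	}
	ep.Annotations[v1.EndpointsLastChangeTriggerTime] = triggerTime.UTC().Format(time.RFC3339Nano)
}
```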
The requested Service protocol is checked against the protocols supported by the GCE Internal LB, which are TCP and UDP.
SCTP is not supported by OpenStack LBaaS. If SCTP is requested in a Service with type=LoadBalancer, the request is rejected. Comment style is also corrected.
SCTP is not allowed for LoadBalancer Services or for HostPort. Kube-proxy can be configured not to start listening on the host port for SCTP: see the new SCTPUserSpaceNode parameter.
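A hedged sketch of the rejection pattern these load-balancer changes share (the function name is hypothetical):

```go
package cloudprovider

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// rejectSCTPForLoadBalancer refuses SCTP ports on LoadBalancer
// Services, mirroring the checks described above.
func rejectSCTPForLoadBalancer(svc *v1.Service) error {
	if svc.Spec.Type != v1.ServiceTypeLoadBalancer {
		return nil
	}
	for _, port := range svc.Spec.Ports {
		if port.Protocol == v1.ProtocolSCTP {
			return fmt.Errorf("SCTP is not supported for port %d on a Service of type=LoadBalancer", port.Port)
		}
	}
	return nil
}
```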
Changed the vendored github.com/nokia/sctp to github.com/ishidawataru/sctp, i.e. from now on we use the upstream version.
netexec.go compilation fixed. Various test cases fixed.
SCTP-related conformance tests removed. Netexec's pod definition and Dockerfile are updated to expose the new SCTP port (8082).
SCTP-related e2e test cases are removed, as the e2e test systems do not support SCTP.
SCTP-related firewall config is removed from cluster/gce/util.sh. The variable name sctp_addr is corrected to sctpAddr in pkg/proxy/ipvs/proxier.go.
cluster/gce/util.sh is copied from master.
As noted in
https://github.com/kubernetes/dns/issues/174, this is documented to
work, and I don't see why it shouldn't. We allowed the definition of
headless services without ports, but apparently nobody tested it very
well.
Manually tested clusterIP services with no ports: validation error.
Manually tested services with negative ports: validation error.
New tests failed; output was inspected and verified. They now pass.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix a bug where a headless service without ports fails to have endpoints created.
**What this PR does / why we need it**:
Follow-up of https://github.com/kubernetes/kubernetes/pull/47250. A headless service without ports fails to have its corresponding endpoint created, because the endpoint controller deliberately attaches a dummy endpointPort with portNum=0, which fails the API validation check. Error as below:
```
endpoints_controller.go:375] Error syncing endpoints for service "default/XXX": Endpoints "XXX" is invalid: subsets[0].ports[0].port: Invalid value: 0: must be between 1 and 65535, inclusive
```
This PR makes the endpoint controller not attach the dummy endpointPort for headless services, as sketched below.
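A minimal sketch of the fix (not the controller's exact code): for a headless service without ports, emit a subset with no ports at all instead of a dummy port 0 that fails validation.

```go
package endpointutil

import v1 "k8s.io/api/core/v1"

func buildSubset(addresses []v1.EndpointAddress, svc *v1.Service) v1.EndpointSubset {
	subset := v1.EndpointSubset{Addresses: addresses}
	if len(svc.Spec.Ports) == 0 {
		// Headless service without ports: leave subset.Ports empty
		// rather than appending {Port: 0}, which API validation rejects.
		return subset
	}
	for _, p := range svc.Spec.Ports {
		subset.Ports = append(subset.Ports, v1.EndpointPort{
			Name:     p.Name,
			Port:     p.TargetPort.IntVal, // simplified: assumes a numeric targetPort
			Protocol: p.Protocol,
		})
	}
	return subset
}
```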
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55158, fixes #62440
**Special notes for your reviewer**:
cc @xiangpengzhao
**Release note**:
```release-note
Fix a bug where a headless service without ports fails to have endpoints created.
```
A pod status change of unready -> ready results in a move from
the endpoint's unready addresses to its ready addresses, so if a
pod update contains an unready -> ready status change, the endpoint
needs to be updated.
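A small sketch of why the move matters at the data level: the same address lands in a different list depending on readiness, so a readiness flip always changes the generated object.

```go
package endpointutil

import v1 "k8s.io/api/core/v1"

// placeAddress files an address under Addresses or NotReadyAddresses
// depending on the pod's readiness.
func placeAddress(subset *v1.EndpointSubset, addr v1.EndpointAddress, ready bool) {
	if ready {
		subset.Addresses = append(subset.Addresses, addr)
	} else {
		subset.NotReadyAddresses = append(subset.NotReadyAddresses, addr)
	}
}
```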