- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
When a Pod references a Node that doesn't exist in the local
informer cache, the previous behavior was to return an error so the
sync would be retried later, and to stop processing.
However, this can leave a Slice stuck when a Node is missing: it can
neither reflect other changes nor be created at all.
This also doesn't respect the publishNotReadyAddresses option on
Services, which considers it acceptable to publish Pod addresses that
are known not to be ready.
The new behavior keeps retrying the problematic Service, but it also
keeps processing the updates, reflecting the current state in the
EndpointSlice. If publishNotReadyAddresses is set, a missing Node on
a Pod is not treated as an error.
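A minimal sketch of the new behavior, assuming a hypothetical helper and
lister shape (not the controller's actual code):

```go
// Hypothetical helper: tolerate a missing Node when the Service publishes
// not-ready addresses; otherwise surface an error but let the caller keep
// processing the remaining Pods.
package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeGetter stands in for the informer lister used by the controller.
type nodeGetter func(name string) (*v1.Node, error)

func podNodeAvailable(pod *v1.Pod, svc *v1.Service, getNode nodeGetter) (bool, error) {
	if _, err := getNode(pod.Spec.NodeName); err != nil {
		if svc.Spec.PublishNotReadyAddresses {
			// A missing Node is not an error: the address is published anyway.
			return false, nil
		}
		// Report the error so the Service is retried, but the sync keeps
		// reflecting the rest of the current state in the EndpointSlice.
		return false, fmt.Errorf("pod %s/%s: node %q not found: %v",
			pod.Namespace, pod.Name, pod.Spec.NodeName, err)
	}
	return true, nil
}
```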
There is always a placeholder slice.
The ServicePortCache logic was always assuming one EndpointSlice per
Endpoint, but if there are multiple empty Endpoints we use just one
placeholder slice, not multiple placeholder slices.
Fixes Issue 108231 by checking `slicesToDelete` in the EndpointSlice
reconciler for a pre-existing placeholder slice.
Also adds a helper function for comparing the slices.
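A hedged sketch of that check and the comparison helper, written against
the current k8s.io/api/discovery/v1 types; the helper names here are
illustrative:

```go
// Illustrative helpers for reusing a pre-existing placeholder slice.
package sketch

import (
	discovery "k8s.io/api/discovery/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// placeholderSliceEqual compares the fields that matter for a placeholder,
// ignoring metadata that changes on every write (e.g. resourceVersion).
func placeholderSliceEqual(a, b *discovery.EndpointSlice) bool {
	return apiequality.Semantic.DeepEqual(a.Endpoints, b.Endpoints) &&
		apiequality.Semantic.DeepEqual(a.Ports, b.Ports) &&
		apiequality.Semantic.DeepEqual(a.Labels, b.Labels)
}

// keepExistingPlaceholder scans slicesToDelete for a pre-existing
// placeholder; if one is found it is removed from the delete list and
// reused, so no new placeholder has to be created.
func keepExistingPlaceholder(placeholder *discovery.EndpointSlice,
	slicesToDelete []*discovery.EndpointSlice) ([]*discovery.EndpointSlice, bool) {
	for i, s := range slicesToDelete {
		if placeholderSliceEqual(s, placeholder) {
			return append(slicesToDelete[:i], slicesToDelete[i+1:]...), true
		}
	}
	return slicesToDelete, false
}
```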
This updates the EndpointSlice controller to make use of the
EndpointSlice tracker to identify when expected changes are not present
in the cache yet. If this is detected, the controller will wait to sync
until all expected updates have been received. This should help avoid
race conditions that would result in duplicate EndpointSlices or failed
attempts to update stale EndpointSlices. To simplify this logic, this
also moves the EndpointSlice tracker from relying on resource versions
to generations.
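A minimal sketch of a generation-based expectation tracker; all names here
are illustrative, not the controller's actual API:

```go
// Illustrative only: expectations are keyed by Service, and staleness is
// decided by comparing metadata.generation instead of resourceVersion.
package sketch

import (
	"sync"

	"k8s.io/apimachinery/pkg/types"
)

type tracker struct {
	mu sync.Mutex
	// Service key -> EndpointSlice UID -> generation last written by us.
	expected map[types.NamespacedName]map[types.UID]int64
}

// stale reports whether the cache still lags behind what the controller
// last wrote; if so, the sync is deferred until the update propagates,
// avoiding duplicate slices or writes against stale copies.
func (t *tracker) stale(svc types.NamespacedName, uid types.UID, observedGeneration int64) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	expected, ok := t.expected[svc][uid]
	return ok && observedGeneration < expected
}
```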
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kube-proxy: call both proxiers at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on IPFamily field AND add dual stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change to the API;
basically, it introduces dual-stack Services. The main changes (see
the sketch after this list) are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field, IPFamilyPolicy (of type
IPFamilyPolicyType), that can take 3 values to express the
"dual-stack(mad)ness" of the cluster: SingleStack, PreferDualStack,
and RequireDualStack.
- It pluralizes ClusterIP to ClusterIPs.
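A short sketch of the pluralized fields, built against the current
k8s.io/api/core/v1 types (exact field types at the time of this change
may have differed):

```go
// Sketch of a dual-stack ServiceSpec using the new plural fields.
package sketch

import v1 "k8s.io/api/core/v1"

func dualStackServiceSpec() v1.ServiceSpec {
	policy := v1.IPFamilyPolicyPreferDualStack
	return v1.ServiceSpec{
		// ClusterIP is pluralized to ClusterIPs; the first entry mirrors
		// the legacy singular ClusterIP field.
		ClusterIPs: []string{"10.0.0.10", "fd00::10"},
		// IPFamily is pluralized to IPFamilies; order matches ClusterIPs.
		IPFamilies: []v1.IPFamily{v1.IPv4Protocol, v1.IPv6Protocol},
		// IPFamilyPolicy: SingleStack, PreferDualStack, or RequireDualStack.
		IPFamilyPolicy: &policy,
	}
}
```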
The goal is to add coverage to the services API operations,
taking into account the 6 different modes a cluster can have:
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>
Implement, in the EndpointSlice controller, the same logic used for
labels in the legacy Endpoints controller.
The labels on the EndpointSlice and on its parent Service must be
equivalent. Headless services add the well-known IsHeadlessService
label. Slices must have two well-known labels: LabelServiceName and
LabelManagedBy.
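A hedged sketch of these label rules, using the well-known label
constants from k8s.io/api; the controllerName value is an assumption
here:

```go
// Sketch of the label rules for EndpointSlices managed by the controller.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1"
)

const controllerName = "endpointslice-controller.k8s.io" // assumed value

// desiredSliceLabels mirrors the parent Service's labels and adds the
// required well-known labels; headless Services also get the
// IsHeadlessService label.
func desiredSliceLabels(svc *v1.Service) map[string]string {
	labels := make(map[string]string, len(svc.Labels)+3)
	for k, v := range svc.Labels {
		labels[k] = v
	}
	labels[discovery.LabelServiceName] = svc.Name
	labels[discovery.LabelManagedBy] = controllerName
	if svc.Spec.ClusterIP == v1.ClusterIPNone {
		labels[v1.IsHeadlessService] = ""
	}
	return labels
}
```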
This fixes a bug that occurred when a Service was rapidly recreated.
This relied on an unfortunate series of events:
1. When the Service is deleted, the EndpointSlice controller removes it
from the EndpointSliceTracker along with any associated EndpointSlices.
2. When the Service is recreated, the EndpointSlice controller sees that
there are still appropriate EndpointSlices for the Service and does
nothing. (They have not yet been garbage collected).
3. When the EndpointSlice is deleted, the EndpointSlice controller
checks with the EndpointSliceTracker to see if it thinks we should have
this EndpointSlice. This check was intended to ensure we wouldn't
requeue a Service every time we delete an EndpointSlice for it. Here,
though, the tracker no longer knew about the slice, so the deletion was
ignored and the recreated Service was left without EndpointSlices.
This adds a check in reconciler to ensure that EndpointSlices it is
working with are owned by a Service with a matching UID. If not, it will
mark those EndpointSlices for deletion (assuming they're about to be
garbage collected anyway) and create new EndpointSlices.
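A minimal sketch of the ownership check described above (illustrative,
not necessarily the reconciler's exact helper):

```go
// Illustrative ownership check against the Service's UID.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1"
)

// ownedBy reports whether the EndpointSlice has an owner reference that
// matches the given Service, including its UID. A name match with a
// different UID means the slice belongs to a deleted incarnation of the
// Service and should be marked for deletion and replaced.
func ownedBy(slice *discovery.EndpointSlice, svc *v1.Service) bool {
	for _, ref := range slice.OwnerReferences {
		if ref.Kind == "Service" && ref.Name == svc.Name && ref.UID == svc.UID {
			return true
		}
	}
	return false
}
```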
Previously the controllers would proceed with additional creates,
updates, or deletes if one failed. That could result in scenarios where
an EndpointSlice create or update failed while a delete succeeded. This
updates the logic so that removals will not happen if additions fail.
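A rough sketch of the ordering guarantee, with hypothetical operation
lists rather than the reconciler's actual signatures:

```go
// Hypothetical shape: creates/updates run first, and removals are skipped
// entirely if any addition failed.
package sketch

import utilerrors "k8s.io/apimachinery/pkg/util/errors"

type op func() error

func applyChanges(additions, removals []op) error {
	var errs []error
	for _, add := range additions {
		if err := add(); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		// Deleting while additions fail could leave a Service without
		// endpoints, so stop here and retry the whole set later.
		return utilerrors.NewAggregate(errs)
	}
	for _, del := range removals {
		if err := del(); err != nil {
			errs = append(errs, err)
		}
	}
	return utilerrors.NewAggregate(errs)
}
```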
EndpointController was accidentally requiring all headless services to
be IPv4-only in clusters with IPv6DualStack enabled.
This still leaves "legacy" (i.e., IPFamily-less) headless services as
always IPv4-only because the controller doesn't currently have easy
access to the information that would allow it to fix that.
(EndpointSliceController had the same problem already, and still
does.) This can be fixed, if needed, by manually setting IPFamily,
and the proposed API for 1.20 will handle this situation better.
During EndpointSlice reconciliation, the EndpointSliceTracker is
supposed to track the expected EndpointSlice resource versions so that
external changes to them can be detected. But it actually tracked the
stale resource version, which resulted in every Service being handled
twice: the controller always received an EndpointSlice update with a
different resource version, even though that EndpointSlice had been
created/updated by the controller itself during the first processing.
This adds a new EndpointSlice tracker to keep track of the expected resource versions of EndpointSlices associated with each Service managed by the EndpointSlice controller. This should prevent a potential race where a syncService call could happen with an incomplete view of EndpointSlices if additions or deletions hadn't fully propagated to the cache yet. Additionally, this ensures that external changes to EndpointSlices will be handled by the EndpointSlice controller.
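A minimal sketch of tracking expected resource versions per Service; the
type and method names are illustrative only:

```go
// Illustrative names only: expected resource versions are recorded per
// Service so external changes (or a lagging cache) can be detected.
package sketch

import (
	"sync"

	discovery "k8s.io/api/discovery/v1"
	"k8s.io/apimachinery/pkg/types"
)

type rvTracker struct {
	mu sync.Mutex
	// Service key -> EndpointSlice name -> resource version last written.
	expected map[types.NamespacedName]map[string]string
}

// has reports whether the observed slice matches what this controller
// last wrote; a mismatch means an external change or a not-yet-updated
// cache, and the Service should be resynced once the view is consistent.
func (t *rvTracker) has(svc types.NamespacedName, slice *discovery.EndpointSlice) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	rv, ok := t.expected[svc][slice.Name]
	return ok && rv == slice.ResourceVersion
}
```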
This was an oversight in the initial EndpointSlice release. This update
will ensure that Endpoints and EndpointSlices use the same logic to set
the Hostname attribute.
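A sketch of the shared hostname rule, mirroring the Endpoints
controller's behavior as described (hypothetical helper name):

```go
// Hypothetical helper: publish pod.Spec.Hostname only when the Pod's
// subdomain matches the Service name in the same namespace.
package sketch

import v1 "k8s.io/api/core/v1"

func endpointHostname(pod *v1.Pod, svc *v1.Service) *string {
	if pod.Spec.Hostname != "" &&
		pod.Spec.Subdomain == svc.Name &&
		pod.Namespace == svc.Namespace {
		h := pod.Spec.Hostname
		return &h
	}
	return nil
}
```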
The Service spec includes a PublishNotReadyAddresses field, which the
Endpoints controller has used to report all matching endpoints as ready. This may
or may not have been the initial purpose of the field, but given the
desire to provide backwards compatibility with the Endpoints API here,
it seems to make sense to continue to provide the same functionality.
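A hedged sketch of how the field feeds readiness; endpointReady is a
hypothetical helper, while podutil.IsPodReady is the readiness utility
that exists in kubernetes/kubernetes:

```go
// Sketch: PublishNotReadyAddresses overrides per-Pod readiness.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

func endpointReady(pod *v1.Pod, svc *v1.Service) bool {
	// With PublishNotReadyAddresses set, every matching Pod is reported
	// ready, matching the legacy Endpoints behavior.
	return svc.Spec.PublishNotReadyAddresses || podutil.IsPodReady(pod)
}
```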