If different Pods with the same address are exposed by the same
Service, some of the EndpointSlice endpoints are overwritten. This
change adds the pod name to the hash function to ensure that all the
endpoints are present.
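As a rough illustration, a minimal sketch of the idea with simplified
types and a plain FNV hash (not the controller's actual hash
implementation): mixing the pod's identity into the key makes two pods
that share an address hash differently, so neither overwrites the other.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// endpoint is a simplified stand-in for the controller's endpoint type.
type endpoint struct {
	Addresses    []string
	PodNamespace string
	PodName      string
}

// hashEndpoint folds the pod's identity into the key, so two pods that
// happen to share an address no longer collide.
func hashEndpoint(ep endpoint) uint64 {
	h := fnv.New64a()
	h.Write([]byte(strings.Join(ep.Addresses, ",")))
	h.Write([]byte("|" + ep.PodNamespace + "/" + ep.PodName))
	return h.Sum64()
}

func main() {
	a := endpoint{Addresses: []string{"10.0.0.1"}, PodNamespace: "default", PodName: "pod-a"}
	b := endpoint{Addresses: []string{"10.0.0.1"}, PodNamespace: "default", PodName: "pod-b"}
	// Same address, different pods: distinct hashes, so both endpoints survive.
	fmt.Println(hashEndpoint(a) == hashEndpoint(b)) // false
}
```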
Signed-off-by: Enrique Llorente <ellorent@redhat.com>
Since https://github.com/kubernetes/kubernetes/pull/112648, we can
efficiently handle selectors from pre-existing `map[string]string`,
making the cache obsolete.
Benchmark:
```
name                         old time/op    new time/op    delta
GetPodServiceMemberships-48    189µs ± 1%     193µs ± 1%  +2.10%  (p=0.000 n=10+10)

name                         old alloc/op   new alloc/op   delta
GetPodServiceMemberships-48   59.0kB ± 0%    58.9kB ± 0%  -0.09%  (p=0.000 n=9+9)

name                         old allocs/op  new allocs/op  delta
GetPodServiceMemberships-48    1.02k ± 0%     1.02k ± 0%     ~     (all equal)
```
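For illustration, a minimal sketch of matching pod labels against a
service's selector map with the `k8s.io/apimachinery/pkg/labels`
package; the exact helper the PR relies on may differ, but the point is
that an already-validated `map[string]string` can be used as a selector
directly, with no per-service cache.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	serviceSelector := map[string]string{"app": "web"}
	podLabels := map[string]string{"app": "web", "tier": "frontend"}

	// Wrap the already-validated map directly; no parsing, no cache.
	selector := labels.SelectorFromValidatedSet(labels.Set(serviceSelector))
	fmt.Println(selector.Matches(labels.Set(podLabels))) // true
}
```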
Update the maximum sync backoff value to 1000s to match the sequence of
delays expected by the endpointslice controller when syncing Services:
Before this change the sequence was:
> 1s, 2s, 4s, 8s, 16s, 32s, 64s, 100s
Now it is:
> 1s, 2s, 4s, 8s, 16s, 32s, 64s, 128s, 256s, 512s, 1000s
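A sketch of how a per-Service backoff with the new cap behaves,
assuming the standard client-go workqueue exponential rate limiter:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Exponential per-item backoff: 1s base, doubled per failure, capped at 1000s.
	limiter := workqueue.NewItemExponentialFailureRateLimiter(1*time.Second, 1000*time.Second)

	key := "default/my-service"
	for i := 0; i < 11; i++ {
		fmt.Println(limiter.When(key)) // 1s, 2s, 4s, ..., 512s, then 16m40s (1000s cap)
	}
}
```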
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
When a Pod references a Node that doesn't exist in the local
informer cache, the previous behavior was to return an error, so the
sync would be retried later, and to stop processing.
However, this could leave a Slice stuck on a missing node: the Slice
could neither reflect other changes nor be created at all.
It also didn't respect the publishNotReadyAddresses option on
Services, which considers it acceptable to publish pod addresses
that are known to not be ready.
The new behavior keeps retrying the problematic Service, but it also
keeps processing updates, reflecting the current state in the
EndpointSlice. If publishNotReadyAddresses is set, a missing node on
a Pod is not treated as an error.
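A sketch of the new flow with simplified, hypothetical helpers (the
real controller builds discovery.Endpoint values rather than plain
addresses): errors are aggregated so the Service is re-queued, but the
loop never aborts early.

```go
package endpointslice

import (
	v1 "k8s.io/api/core/v1"
	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// podsToAddresses keeps processing every Pod and aggregates errors
// instead of aborting the sync on the first missing node.
func podsToAddresses(pods []*v1.Pod, svc *v1.Service, getNode func(name string) (*v1.Node, error)) ([]string, error) {
	var addresses []string
	var errs []error
	for _, pod := range pods {
		if _, err := getNode(pod.Spec.NodeName); err != nil {
			if !svc.Spec.PublishNotReadyAddresses {
				// Record the error so the Service is retried, but keep going.
				errs = append(errs, err)
				continue
			}
			// publishNotReadyAddresses: publish the address anyway.
		}
		addresses = append(addresses, pod.Status.PodIP)
	}
	// A non-nil aggregate re-queues the Service, while the addresses above
	// still reflect everything that could be computed on this pass.
	return addresses, utilerrors.NewAggregate(errs)
}
```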
There is always a placeholder slice.
The ServicePortCache logic assumed there was always one EndpointSlice
per Endpoint, but when there are multiple empty Endpoints we use a
single placeholder slice, not multiple placeholder slices.
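A hypothetical illustration of the corrected accounting (names and
shape invented for this sketch, not the actual cache code): non-empty
endpoint groups each need ceil(n/maxPerSlice) slices, while all empty
groups together share exactly one placeholder slice.

```go
package endpointslice

// expectedSlices counts the slices a set of endpoint groups should
// produce under the corrected placeholder accounting.
func expectedSlices(endpointsPerGroup []int, maxPerSlice int) int {
	total := 0
	for _, n := range endpointsPerGroup {
		total += (n + maxPerSlice - 1) / maxPerSlice // ceil division
	}
	if total == 0 {
		return 1 // a single placeholder slice, regardless of group count
	}
	return total
}
```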
Fixes Issue 108231 by checking `slicesToDelete` in the EndpointSlice
reconciler for a pre-existing placeholder slice.
Also adds a helper function for comparing the slices.
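A simplified sketch of the check, with hypothetical helper names: if a
slice already queued for deletion is equivalent to the desired
placeholder, keep it instead of deleting it and creating a new one.

```go
package endpointslice

import (
	discovery "k8s.io/api/discovery/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// reconcilePlaceholder reuses an equivalent placeholder from the delete
// list when one exists, avoiding a needless delete-and-recreate.
func reconcilePlaceholder(placeholder *discovery.EndpointSlice, slicesToDelete []*discovery.EndpointSlice) (toCreate *discovery.EndpointSlice, toDelete []*discovery.EndpointSlice) {
	for i, slice := range slicesToDelete {
		if placeholderSlicesEqual(slice, placeholder) {
			// Keep the existing placeholder; drop it from the delete list.
			return nil, append(slicesToDelete[:i], slicesToDelete[i+1:]...)
		}
	}
	return placeholder, slicesToDelete
}

// placeholderSlicesEqual compares only the fields that matter for an
// empty placeholder slice.
func placeholderSlicesEqual(a, b *discovery.EndpointSlice) bool {
	return a.AddressType == b.AddressType &&
		apiequality.Semantic.DeepEqual(a.Labels, b.Labels) &&
		len(a.Endpoints) == 0 && len(b.Endpoints) == 0 &&
		len(a.Ports) == 0 && len(b.Ports) == 0
}
```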
The field is not used anywhere, and its value may be stale, since
Endpoints and EndpointSlices won't be updated if only the Pod's
ResourceVersion changes.
The bug could result in the EndpointSlice controller unnecessarily updating
EndpointSlices associated with a Service that had Topology Aware Hints enabled.
This updates the StaleSlices() method in EndpointSliceTracker to also
ensure that the tracker does not have more slices than have been
provided.
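A simplified sketch of the extended check; the shape is hypothetical
(the real tracker keys by Service and stores per-slice generations),
but it shows the added condition: a tracked slice that was not provided
means the tracker knows about more slices than the caller.

```go
package endpointslicetracker

import (
	discovery "k8s.io/api/discovery/v1"
)

// staleSlices reports whether the provided slices disagree with what
// the tracker expects. tracked maps slice name to expected generation.
func staleSlices(tracked map[string]int64, provided []*discovery.EndpointSlice) bool {
	providedNames := make(map[string]bool, len(provided))
	for _, s := range provided {
		providedNames[s.Name] = true
		if gen, ok := tracked[s.Name]; ok && gen != s.Generation {
			return true // a provided slice doesn't match the expected generation
		}
	}
	// The added check: the tracker must not have more slices than were
	// provided; a leftover tracked slice means the data is stale.
	for name := range tracked {
		if !providedNames[name] {
			return true
		}
	}
	return false
}
```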
Co-Authored-By: Swetha Repakula <srepakula@google.com>