- set a higher severity and log level when unmanaged pods are found, and improve testing
- do not mention the unsupported controller when triggering an event for
unmanaged pods (this is already covered by the CalculateExpectedPodCountFailed
event)
- test the unsupported controller case
- make testing for events non-blocking when the event is not found (see the sketch below)
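A minimal sketch of the non-blocking event check, assuming the tests use client-go's `record.FakeRecorder`; the helper `expectEvent` is illustrative, not the actual test code:
```
package main

import (
	"strings"
	"time"

	"k8s.io/client-go/tools/record"
)

// expectEvent drains the fake recorder for a bounded period and reports whether
// an event containing the given substring was emitted. It never blocks the test
// forever: if the event is not found, it simply returns false.
func expectEvent(recorder *record.FakeRecorder, substr string, timeout time.Duration) bool {
	deadline := time.After(timeout)
	for {
		select {
		case e := <-recorder.Events:
			if strings.Contains(e, substr) {
				return true
			}
		case <-deadline:
			return false
		}
	}
}

func main() {
	recorder := record.NewFakeRecorder(10)
	recorder.Eventf(nil, "Warning", "CalculateExpectedPodCountFailed", "found %d unmanaged pods", 3)
	_ = expectEvent(recorder, "CalculateExpectedPodCountFailed", 100*time.Millisecond)
}
```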
If different pods with the same address are exposed by the same service,
some of the EndpointSlice endpoints are overwritten. This change adds the
pod name to the hash function to ensure that all the endpoints are in
place.
Signed-off-by: Enrique Llorente <ellorent@redhat.com>
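A rough sketch of the idea: mixing the target pod name into the endpoint hash so two pods that share an address no longer collide. The helper `endpointKey` and its inputs are illustrative, not the controller's actual hash function:
```
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	v1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1"
)

// endpointKey returns a stable key for an endpoint. Including the target pod
// name means two pods that happen to share the same address no longer collide
// and overwrite each other in the endpoint set.
func endpointKey(ep discovery.Endpoint) string {
	h := sha256.New()
	for _, addr := range ep.Addresses {
		h.Write([]byte(addr))
		h.Write([]byte{0})
	}
	if ep.TargetRef != nil && ep.TargetRef.Kind == "Pod" {
		// The fix described above: mix the pod name into the hash.
		h.Write([]byte(ep.TargetRef.Name))
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := discovery.Endpoint{Addresses: []string{"10.0.0.1"}, TargetRef: &v1.ObjectReference{Kind: "Pod", Name: "pod-a"}}
	b := discovery.Endpoint{Addresses: []string{"10.0.0.1"}, TargetRef: &v1.ObjectReference{Kind: "Pod", Name: "pod-b"}}
	fmt.Println(endpointKey(a) != endpointKey(b)) // true: the endpoints no longer overwrite each other
}
```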
When using these controllers in test/integration/scheduler_perf, the goroutine
leak check there pointed out that the broadcaster.Shutdown function wasn't called
and thus goroutines leaked during a test.
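A minimal sketch of the shape of the fix, assuming the test owns the event broadcaster; the wiring is illustrative:
```
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/record"
)

func main() {
	// StartStructuredLogging / StartRecordingToSink start background goroutines.
	broadcaster := record.NewBroadcaster()
	// Without this call the goroutines outlive the test and trip the leak check.
	defer broadcaster.Shutdown()

	broadcaster.StartStructuredLogging(0)
	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "scheduler-perf-test"})

	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default", UID: "uid-1"}}
	recorder.Eventf(pod, v1.EventTypeNormal, "Scheduled", "placeholder event for %q", pod.Name)
}
```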
Fixes the issue caused when multiple ClusterCIDR objects have the same
nodeSelector values: the order of the requirements in the nodeSelector is
not preserved when the nodeSelector is marshalled and converted to a string.
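One way to build a stable key, sketched under the assumption that the key is derived from the nodeSelector requirements; the helper `stableNodeSelectorKey` is illustrative, not the controller's actual code:
```
package main

import (
	"fmt"
	"sort"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// stableNodeSelectorKey builds a deterministic string key for a NodeSelector by
// sorting requirements (and their values) before serializing, so two ClusterCIDRs
// with semantically identical selectors always map to the same key.
func stableNodeSelectorKey(ns *v1.NodeSelector) string {
	var terms []string
	for _, term := range ns.NodeSelectorTerms {
		var reqs []string
		for _, r := range term.MatchExpressions {
			vals := append([]string(nil), r.Values...)
			sort.Strings(vals)
			reqs = append(reqs, fmt.Sprintf("%s %s [%s]", r.Key, r.Operator, strings.Join(vals, ",")))
		}
		sort.Strings(reqs)
		terms = append(terms, strings.Join(reqs, ";"))
	}
	sort.Strings(terms)
	return strings.Join(terms, "|")
}

func main() {
	a := &v1.NodeSelector{NodeSelectorTerms: []v1.NodeSelectorTerm{{MatchExpressions: []v1.NodeSelectorRequirement{
		{Key: "zone", Operator: v1.NodeSelectorOpIn, Values: []string{"us-east-1a", "us-east-1b"}},
		{Key: "arch", Operator: v1.NodeSelectorOpIn, Values: []string{"amd64"}},
	}}}}
	b := &v1.NodeSelector{NodeSelectorTerms: []v1.NodeSelectorTerm{{MatchExpressions: []v1.NodeSelectorRequirement{
		{Key: "arch", Operator: v1.NodeSelectorOpIn, Values: []string{"amd64"}},
		{Key: "zone", Operator: v1.NodeSelectorOpIn, Values: []string{"us-east-1b", "us-east-1a"}},
	}}}}
	fmt.Println(stableNodeSelectorKey(a) == stableNodeSelectorKey(b)) // true
}
```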
Fixes the deletion of a ClusterCIDR object when a Node is associated with it
(i.e. has Pod CIDRs allocated from this ClusterCIDR). Currently the
ClusterCIDR finalizer is never cleaned up, as there is no reconciliation
happening after the associated Node has been deleted. This commit fixes
the issue by adding work items from all events to a work queue and
reconciling until the deletion is successful.
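A minimal sketch of that work-queue pattern using client-go's rate-limited queue; the key and the `reconcile` stub are illustrative:
```
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// reconcile would re-check whether the ClusterCIDR can be released and remove
// its finalizer once no Node holds Pod CIDRs from it; here it is just a stub.
func reconcile(key string) error {
	fmt.Println("reconciling", key)
	return nil
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()

	// Event handlers (ClusterCIDR and Node add/update/delete) only enqueue
	// keys; all reconciliation happens in the worker below.
	queue.Add("example-clustercidr")

	key, quit := queue.Get()
	if quit {
		return
	}
	defer queue.Done(key)

	if err := reconcile(key.(string)); err != nil {
		// Requeue with backoff and keep retrying until the deletion succeeds.
		queue.AddRateLimited(key)
		return
	}
	queue.Forget(key)
}
```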
The two methods nextScheduledTimeDuration and getNextScheduleTime have a
lot of similarities, so this commit squashes the common parts together
along with getMostRecentScheduleTime to avoid code duplication.
This commit makes the job controller re-honor exponential backoff for
failed pods. Before this commit, the controller created replacement pods
without any backoff. This is a regression: the controller previously
created them with an exponential backoff delay (10s, 20s, 40s, ...).
The issue occurs only when the JobTrackingWithFinalizers feature is
enabled (which is enabled by default right now). With this feature, we
get an extra pod update event when the finalizer of a failed pod is
removed.
Note that pod failure detection and new pod creation happen in the
same reconcile loop, so the 2nd pod is created immediately after the 1st
pod fails. The backoff is only applied from the 2nd pod failure onwards,
which means that the 3rd pod is created 10s after the 2nd pod, the 4th pod
20s after the 3rd pod, and so on.
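For illustration, the delay sequence above corresponds to an exponential backoff of the form base * 2^(n-1); the constants and helper below are illustrative, not the controller's exact code:
```
package main

import (
	"fmt"
	"time"
)

const (
	baseBackoff = 10 * time.Second  // first delay after a failure
	maxBackoff  = 360 * time.Second // cap so the delay does not grow forever
)

// backoffFor returns the delay before creating the next replacement pod,
// given how many consecutive pod failures have been observed.
func backoffFor(failures int) time.Duration {
	if failures <= 1 {
		// The 2nd pod is created immediately after the 1st failure, because
		// failure detection and pod creation share a reconcile loop.
		return 0
	}
	d := baseBackoff << (failures - 2) // 10s, 20s, 40s, ...
	if d > maxBackoff {
		return maxBackoff
	}
	return d
}

func main() {
	for f := 1; f <= 6; f++ {
		fmt.Printf("after failure %d: wait %v\n", f, backoffFor(f))
	}
}
```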
This commit fixes a few bugs:
1. Right now, each time `uncounted != nil` and the job does not see a
_new_ failure, `forget` is set to true and the job is removed from the
queue. This means the condition is also triggered each time the
finalizer of a failed pod is removed, so `NumRequeues` is reset and the
backoff drops to 0s.
2. Updates `updatePod` to only apply backoff when a particular pod is
seen to have failed for the first time. This is necessary to ensure that
the controller does not apply backoff when it sees a pod update event
for the finalizer removal of an already-failed pod (a sketch of this
check follows the list).
3. If the `JobsReadyPods` feature is enabled and the backoff is 0s, the job is
now enqueued after `podUpdateBatchPeriod` seconds, instead of 0s. The
unit test for this check also had a few bugs:
- `DefaultJobBackOff` is overwritten to 0 in certain unit tests, which
meant the backoff under test was effectively 0 and no meaningful
checks were run.
- `JobsReadyPods` was not enabled for test cases that required the
feature gate to be on.
- The comparison of expected and actual backoff used incorrect
calculations.
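A rough sketch of the check described in item 2 above, comparing the old and new pod in the update handler; the function and its wiring are illustrative:
```
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// failedForTheFirstTime reports whether this update is the transition into the
// Failed phase. Later updates of the same failed pod (e.g. the event emitted
// when its finalizer is removed) return false, so no extra backoff is applied.
func failedForTheFirstTime(oldPod, newPod *v1.Pod) bool {
	return oldPod.Status.Phase != v1.PodFailed && newPod.Status.Phase == v1.PodFailed
}

func main() {
	oldPod := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodRunning}}
	newPod := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodFailed}}
	fmt.Println(failedForTheFirstTime(oldPod, newPod)) // true: apply backoff

	// A later update (finalizer removal) of the already-failed pod:
	fmt.Println(failedForTheFirstTime(newPod, newPod)) // false: no backoff
}
```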
Marking the pods not ready on a node requires looping over them and
updating each pod's status one at a time. This is performed serially,
and can take a while if we're processing each node serially as well.
Since the time is spent waiting on I/O, there's an opportunity to go
faster by processing multiple nodes concurrently. This change modifies
the loop to process nodes in parallel, using the same number of workers
as doNodeProcessingPassWorker.
This change also introduces histogram metrics to better observe
monitorNodeHealth.
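A minimal sketch of the concurrency pattern using a plain worker pool; the worker count and the `markPodsNotReady` stub are illustrative:
```
package main

import (
	"fmt"
	"sync"
)

const nodeUpdateWorkers = 8 // illustrative; the change reuses the doNodeProcessingPassWorker count

// markPodsNotReady stands in for the per-node work: listing the node's pods and
// updating each pod's Ready condition, which is dominated by API I/O.
func markPodsNotReady(node string) {
	fmt.Println("marking pods not ready on", node)
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3", "node-4"}

	nodeCh := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < nodeUpdateWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for node := range nodeCh {
				markPodsNotReady(node)
			}
		}()
	}
	for _, n := range nodes {
		nodeCh <- n
	}
	close(nodeCh)
	wg.Wait()
}
```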
The fake client doesn't guarantee that the informer cache is updated.
If it's not up-to-date, the controller always tries to set the
StartTime, leading to a broken test.
Change-Id: I71f26d46ea44beff88f0d03517985348654aec95
* Add tracker types and tests
* Modify ResourceEventHandler interface's OnAdd member
* Add additional ResourceEventHandlerDetailedFuncs struct
* Fix SharedInformer to let users track HasSynced for their handlers (see the sketch after this list)
* Fix in-tree controllers which weren't computing HasSynced correctly
* Deprecate the cache.Pop function
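A rough sketch of how the handler-level HasSynced tracking can be used, assuming a client-go release that includes these changes; the controller wiring is illustrative:
```
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()
	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer()

	// ResourceEventHandlerDetailedFuncs exposes the new OnAdd signature, which
	// tells the handler whether the object is part of the initial list.
	reg, err := podInformer.AddEventHandler(cache.ResourceEventHandlerDetailedFuncs{
		AddFunc: func(obj interface{}, isInInitialList bool) {
			fmt.Println("add, initial list:", isInInitialList)
		},
	})
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	factory.Start(ctx.Done())

	// Wait for this handler (not just the informer) to have processed the
	// initial set of objects before starting workers.
	if !cache.WaitForCacheSync(ctx.Done(), reg.HasSynced) {
		panic("timed out waiting for handler sync")
	}
	fmt.Println("handler has synced")
}
```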
Since https://github.com/kubernetes/kubernetes/pull/112648, we can
efficiently handle selectors from pre-existing `map[string]string`,
making the cache obsolete.
Benchmark:
```
name old time/op new time/op delta
GetPodServiceMemberships-48 189µs ± 1% 193µs ± 1% +2.10% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
GetPodServiceMemberships-48 59.0kB ± 0% 58.9kB ± 0% -0.09% (p=0.000 n=9+9)
name old allocs/op new allocs/op delta
GetPodServiceMemberships-48 1.02k ± 0% 1.02k ± 0% ~ (all equal)
```
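For context, a sketch of the kind of direct conversion that makes the cache unnecessary, assuming the selector map was already validated by the API server; the example is illustrative, not the exact code from the PR:
```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// A Service's spec.selector as stored: a plain map[string]string that the
	// API server has already validated.
	serviceSelector := map[string]string{"app": "web", "tier": "frontend"}

	// Convert it directly to a Selector without re-validating or caching it.
	selector := labels.Set(serviceSelector).AsSelectorPreValidated()

	podLabels := labels.Set{"app": "web", "tier": "frontend", "pod-template-hash": "abc123"}
	fmt.Println(selector.Matches(podLabels)) // true
}
```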
Endpoints generated by the endpoints controller are in canonical form;
custom endpoints, however, may not be in canonical format. (There was a
time when they were canonicalized in the apiserver, but this caused
performance issues because the endpoints controller kept updating them,
since the endpoints it created differed from the stored ones due to the
canonicalization.)
There are cases where a custom endpoint causes the controller to generate
multiple slices, for example when the same address is present in
different subsets.
The endpointslice mirroring controller should canonicalize the endpoint
subsets before it starts processing them, so that the generated slices
are consistent. There is no risk of hot-looping because the Endpoints
object is only used as input.
Change-Id: I2a8cd53c658a640aea559a88ce33e857fa98cc5c
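A rough sketch of what canonicalizing the subsets before processing can look like: sorting addresses and ports within each subset so identical inputs always produce the same slices. The helper is illustrative; the real controller may canonicalize differently:
```
package main

import (
	"fmt"
	"sort"

	v1 "k8s.io/api/core/v1"
)

// canonicalizeSubsets sorts the addresses and ports of each subset in place so
// that two semantically identical Endpoints objects are processed identically.
// The Endpoints object is only used as input, so mutating the local copy does
// not cause update loops against the API server.
func canonicalizeSubsets(subsets []v1.EndpointSubset) {
	for i := range subsets {
		sort.Slice(subsets[i].Addresses, func(a, b int) bool {
			return subsets[i].Addresses[a].IP < subsets[i].Addresses[b].IP
		})
		sort.Slice(subsets[i].NotReadyAddresses, func(a, b int) bool {
			return subsets[i].NotReadyAddresses[a].IP < subsets[i].NotReadyAddresses[b].IP
		})
		sort.Slice(subsets[i].Ports, func(a, b int) bool {
			return subsets[i].Ports[a].Port < subsets[i].Ports[b].Port
		})
	}
}

func main() {
	subsets := []v1.EndpointSubset{{
		Addresses: []v1.EndpointAddress{{IP: "10.0.0.2"}, {IP: "10.0.0.1"}},
		Ports:     []v1.EndpointPort{{Name: "https", Port: 443}, {Name: "http", Port: 80}},
	}}
	canonicalizeSubsets(subsets)
	fmt.Println(subsets[0].Addresses[0].IP, subsets[0].Ports[0].Port) // 10.0.0.1 80
}
```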
This ensures that the daemonset controller updates daemonset statuses in
a best-effort manner even if syncDaemonSet fails.
In order to add an integration test, this also replaces
`cmd/kube-apiserver/app/testing.StartTestServer` with
`test/integration/framework.StartTestServer` and adds
`setupWithServerSetup` to configure the admission control of the
apiserver.