This updates the EndpointSlice controller to make use of the
EndpointSlice tracker to identify when expected changes are not yet
present in the cache. When this is detected, the controller will wait to sync
until all expected updates have been received. This should help avoid
race conditions that would result in duplicate EndpointSlices or failed
attempts to update stale EndpointSlices. To simplify this logic, this
also moves the EndpointSlice tracker from relying on resource versions
to generations.
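
A minimal sketch of the generation-based staleness check (names here are
hypothetical, not the controller's actual API). Because generations are
monotonically increasing integers, the tracker can tell whether the cached
copy of a slice is older than the update the controller just made, which
opaque resource versions cannot express:

    import "sync"

    type tracker struct {
        mu       sync.Mutex
        expected map[string]int64 // slice name -> generation of our last write
    }

    // ExpectGeneration records the generation returned by a create or update call.
    func (t *tracker) ExpectGeneration(name string, generation int64) {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.expected[name] = generation
    }

    // StaleSlices reports whether any cached slice is older than our last
    // write; if so, the sync is deferred until the cache catches up.
    func (t *tracker) StaleSlices(cached map[string]int64) bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        for name, want := range t.expected {
            if got, ok := cached[name]; !ok || got < want {
                return true
            }
        }
        return false
    }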
A CounterVector with status as a label may create unnecessary overhead,
and representing the success case with an empty label value was
awkward. It's better to have two separate counters: one for the total
number of calls and one for failed calls.
As discussed during the production readiness review, a metric for
PVC create operations is useful. The "ephemeral_volume" workqueue
metrics were already added in the initial implementation.
The new code follows the example set by the endpoints controller.
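
The two-counter pattern, sketched with the Prometheus client library
(metric names are placeholders, not the registered Kubernetes names):

    import "github.com/prometheus/client_golang/prometheus"

    var (
        createTotal = prometheus.NewCounter(prometheus.CounterOpts{
            Name: "ephemeral_volume_create_total",
            Help: "Total number of PVC create calls for ephemeral volumes.",
        })
        createFailures = prometheus.NewCounter(prometheus.CounterOpts{
            Name: "ephemeral_volume_create_failures_total",
            Help: "Number of PVC create calls that failed.",
        })
    )

    func init() {
        prometheus.MustRegister(createTotal, createFailures)
    }

    // recordCreate counts every attempt and, on error, the failure as well.
    func recordCreate(err error) {
        createTotal.Inc()
        if err != nil {
            createFailures.Inc()
        }
    }

The failure ratio is then a simple query over the two series, with no
status label and no empty label value for the success case.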
When the feature is disabled either in the scheduler or the CSIDriver,
the scheduler is expected to schedule pods without considering whether
storage capacity is available.
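
That behavior amounts to a small predicate; this is a simplified sketch,
not the scheduler's actual code path:

    import storagev1 "k8s.io/api/storage/v1"

    // capacityCheckRequired reports whether the scheduler should consider
    // storage capacity for volumes provisioned by the given driver.
    func capacityCheckRequired(featureEnabled bool, driver *storagev1.CSIDriver) bool {
        if !featureEnabled {
            // Feature gate disabled in the scheduler: schedule as before.
            return false
        }
        // The CSIDriver must also opt in via spec.storageCapacity.
        return driver.Spec.StorageCapacity != nil && *driver.Spec.StorageCapacity
    }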
The nodeShouldRunDaemonPod method does not need to return an error
because there is no scenario under which it can fail. Remove the
error return path from its direct callers as well.
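
The shape of the change, illustrated with simplified signatures and a
hypothetical checkNodeFitness helper:

    // Before: an error return that could never be non-nil.
    func nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet) (bool, bool, error) {
        shouldRun, shouldContinueRunning := checkNodeFitness(node, ds)
        return shouldRun, shouldContinueRunning, nil // always nil
    }

    // After: the dead error path is gone, and so is the error handling
    // at every direct call site.
    func nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet) (bool, bool) {
        return checkNodeFitness(node, ds)
    }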
In order to maintain the correct invariants, the existing maxUnavailable
logic calculated the same data several times in different ways. Leverage
the simpler structure from maxSurge and calculate pod availability only
once, as well as perform only a single pass over all the pods in the
daemonset. This changes no behavior of the current controller and
gives the code a structure almost identical to the maxSurge path.
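
A simplified sketch of that single pass (hypothetical helpers;
availability is computed exactly once per pod):

    var candidates, toDelete []string
    numUnavailable := 0
    for _, p := range pods {
        available := isPodAvailable(p) // computed once, reused below
        switch {
        case !isOldPod(p):
            if !available {
                numUnavailable++
            }
        case !available:
            // Already unavailable: deleting it costs no extra budget.
            toDelete = append(toDelete, p.Name)
            numUnavailable++
        default:
            candidates = append(candidates, p.Name)
        }
    }
    // Delete available old pods only while the budget allows it.
    for _, name := range candidates {
        if numUnavailable >= maxUnavailable {
            break
        }
        toDelete = append(toDelete, name)
        numUnavailable++
    }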
If MaxSurge is set, the controller will attempt to double up nodes
with a new pod, up to the allowed limit, and then, once the newest
(by hash) pod is ready, trigger deletion of the old pod. If the old
pod goes unready before the new pod is ready, the old pod is immediately
deleted. If an old pod goes unready before a new pod is placed on that
node, a new pod is immediately added for that node even past the MaxSurge
limit.
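
Sketch of the per-node decision described above (simplified types; the
real controller works from node-to-pod mappings and controller hashes):

    type pod struct {
        name      string
        available bool
    }

    type nodeState struct {
        oldPod, newPod *pod
    }

    func surgeDecisions(nodes []nodeState, maxSurge int) (createOn []int, deleteOld []string) {
        surging := 0
        for i, n := range nodes {
            switch {
            case n.oldPod == nil:
                // Only the new revision (or nothing) on this node: no-op.
            case n.newPod == nil:
                if !n.oldPod.available {
                    // Old pod went unready before a new pod was placed:
                    // replace it immediately, even beyond the surge limit.
                    createOn = append(createOn, i)
                } else if surging < maxSurge {
                    createOn = append(createOn, i)
                    surging++
                }
            default:
                if n.newPod.available || !n.oldPod.available {
                    // New pod ready, or old pod already unready:
                    // the old pod can be deleted now.
                    deleteOld = append(deleteOld, n.oldPod.name)
                } else {
                    surging++ // doubled-up node still waiting on its new pod
                }
            }
        }
        return createOn, deleteOld
    }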
The backoff clock is used consistently throughout the daemonset controller
as an injectable clock for the purposes of testing.
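
The pattern, sketched with the clock interfaces from k8s.io/utils/clock
(the controller type here is simplified):

    import (
        "testing"
        "time"

        "k8s.io/utils/clock"
        clocktesting "k8s.io/utils/clock/testing"
    )

    type controller struct {
        clock clock.Clock // real clock in production, fake clock in tests
    }

    func (c *controller) backoffExpired(start time.Time, backoff time.Duration) bool {
        return c.clock.Since(start) >= backoff
    }

    func TestBackoffExpiry(t *testing.T) {
        fake := clocktesting.NewFakeClock(time.Now())
        c := &controller{clock: fake}
        start := fake.Now()
        fake.Step(30 * time.Second) // advance time deterministically
        if !c.backoffExpired(start, 10*time.Second) {
            t.Fatal("expected backoff to have expired")
        }
    }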
It is too easy to omit checking the return value of the
syncAndValidateDaemonSet test method in large suites. Switch the
method to be a test helper that fatals/errors directly. Also rename
a method that still referenced the old name 'Rollback' instead of
'RollingUpdate'.
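
Sketch of the helper conversion (simplified validation; the real helper
syncs the daemonset and checks expected creates and deletes):

    // Before: func syncAndValidateDaemonSets(...) error, whose result
    // callers could silently drop. After: the helper takes *testing.T
    // and fails the test itself; t.Helper() attributes the failure to
    // the caller's line.
    func expectSyncDaemonSets(t *testing.T, gotCreates, wantCreates int) {
        t.Helper()
        if gotCreates != wantCreates {
            t.Fatalf("unexpected number of creates: got %d, want %d", gotCreates, wantCreates)
        }
    }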