What type of PR is this?
/kind cleanup
What this PR does / why we need it:
The disruption controller resyncs all of its watched objects every 30 seconds. This is unnecessary: it deepens the disruption workqueue and can delay the processing of actual updates when a large number of disruptions exist.
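A minimal sketch of the knob in question, assuming the controller builds its informers through a client-go SharedInformerFactory (the client setup here is illustrative); a resync period of 0 disables the periodic full resync:

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Illustrative client setup; in-cluster config is just one option.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Old behavior: every informer re-delivers its entire cache every
	// 30 seconds, re-enqueueing objects even when nothing changed.
	_ = informers.NewSharedInformerFactory(client, 30*time.Second)

	// New behavior: a resync period of 0 disables the periodic resync,
	// so the workqueue only sees real add/update/delete events.
	_ = informers.NewSharedInformerFactory(client, 0)
}
```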
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Disruption controllers no longer force a resync every 30 seconds when nothing has changed.
This changes the retry logic in the DisruptionController so that it
reconciles update conflicts. In the old behavior, any PDB status update
failure was retried with the same status, regardless of the error.
Now there is no retry logic around the status update itself; the error
is passed up the stack, where the PDB can be requeued for processing.
If the PDB status update error is a conflict error, there are some new
special cases (see the sketch after this list):
- failSafe is not triggered, since a conflict is considered a retryable error
- the PDB is requeued immediately (bypassing the rate limiter), because we
assume the conflict can be resolved by fetching the latest version
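A sketch of the resulting error handling, assuming a standard client-go rate-limiting workqueue; the pared-down DisruptionController type and the handleSyncErr name are illustrative, not the actual code:

```go
package disruption

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/workqueue"
)

// DisruptionController is a pared-down stand-in for the real type.
type DisruptionController struct {
	queue workqueue.RateLimitingInterface
}

// handleSyncErr mirrors the new behavior: status-update errors are no
// longer retried in place, they bubble up and drive the requeue.
func (dc *DisruptionController) handleSyncErr(key string, err error) {
	if err == nil {
		dc.queue.Forget(key)
		return
	}
	if apierrors.IsConflict(err) {
		// Conflicts are retryable: failSafe is not triggered, and the
		// key is requeued immediately (bypassing the rate limiter) on
		// the assumption that re-reading the latest PDB resolves it.
		dc.queue.Add(key)
		return
	}
	// All other errors take the normal rate-limited retry path.
	dc.queue.AddRateLimited(key)
}
```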
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we call it where necessary
- we update the other vendored repositories that made a similar
change from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() functions (see the sketch below)
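For illustration: glog registered its flags in its own init(), while klog requires the caller to do it explicitly. A minimal sketch of the new wiring, which is also what the test fix above amounts to:

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func init() {
	// klog, unlike glog, does not register flags implicitly; passing
	// nil registers them on the global flag.CommandLine set.
	klog.InitFlags(nil)
}

func main() {
	flag.Parse()
	klog.Info("logging via klog instead of glog")
	klog.Flush()
}
```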
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
adding comments stating that returned pods should be used as read-only objects
fixing typo
avoiding an unnecessary loop to copy the listed pods; see #46433
fixing fmt
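A sketch of the read-only contract on listed pods, assuming a client-go PodLister; List returns pointers into the shared informer cache, so a caller copies an individual pod only when it actually needs to mutate it, instead of looping to copy the whole list up front (markPod is a hypothetical example):

```go
package example

import (
	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
)

// markPod illustrates the contract: the listed pods are shared cache
// objects and must be treated as read-only.
func markPod(lister listersv1.PodLister, name string) error {
	pods, err := lister.List(labels.Everything())
	if err != nil {
		return err
	}
	for _, pod := range pods {
		if pod.Name != name {
			continue
		}
		// Mutating `pod` directly would corrupt the shared cache.
		// DeepCopy only the one object we intend to change.
		podCopy := pod.DeepCopy()
		if podCopy.Labels == nil {
			podCopy.Labels = map[string]string{}
		}
		podCopy.Labels["marked"] = "true"
		_ = podCopy // hand podCopy to an update call in real code
	}
	return nil
}
```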
A Warning-type event should be recorded when the controller fails to
calculate the number of expected pods.
The same applies to the DaemonSet controller when it fails to place a pod.
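A sketch of emitting such an event with client-go's EventRecorder; the reason string and the getExpectedPodCount helper are illustrative, not the controller's actual names:

```go
package example

import (
	"fmt"

	apps "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// checkExpectedPods records a Warning event when the expected pod count
// cannot be calculated, instead of failing silently.
func checkExpectedPods(recorder record.EventRecorder, ds *apps.DaemonSet) error {
	expected, err := getExpectedPodCount(ds) // illustrative helper
	if err != nil {
		recorder.Eventf(ds, v1.EventTypeWarning, "FailedPodCount",
			"failed to calculate the number of expected pods: %v", err)
		return fmt.Errorf("couldn't get expected pods for %s: %v", ds.Name, err)
	}
	_ = expected
	return nil
}

func getExpectedPodCount(ds *apps.DaemonSet) (int, error) {
	// Stub for the sketch; the real controller derives the count from
	// node and scheduling constraints.
	return 0, nil
}
```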