Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)
Update deployment equality helper
@mfojtik @janetkuo this is split out of https://github.com/kubernetes/kubernetes/pull/38714 to reduce the size of that PR, ptal
Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)
Made cache.Controller an interface.
**What this PR does / why we need it**:
#37504
Automatic merge from submit-queue
replace global registry in apimachinery with global registry in k8s.io/kubernetes
We'd like to remove all globals, but our immediate problem is that a shared registry between k8s.io/kubernetes and k8s.io/client-go doesn't work. Since client-go makes a copy, we can actually keep a global registry with other globals in pkg/api for now.
@kubernetes/sig-api-machinery-misc @lavalamp @smarterclayton @sttts
Automatic merge from submit-queue (batch tested with PRs 39661, 39740, 39801, 39468, 39743)
fix the condition check when the nodeStatusUpdateRetry count is exceeded
When tryUpdateNodeStatus() returns a non-nil error but the subsequent nc.kubeClient.Core().Nodes().Get() succeeds, err ends up nil.
So after the loop has run nodeStatusUpdateRetry times, err == nil when the for loop ends, the error is never logged, and `continue` never runs; the condition check may therefore be wrong.
This may have caused #38671.
When tryUpdateNodeStatus() returns a non-nil error but nc.kubeClient.Core().Nodes().Get() succeeds, err is reset to nil.
So after the loop has run nodeStatusUpdateRetry times, err == nil when the for loop ends, the error is never logged, and `continue` never runs; the condition check is wrong.
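A minimal sketch of the retry pattern the fix is aiming for (names are illustrative, not the actual node controller code): track success explicitly instead of relying on whatever err value the last call inside the loop body left behind.

```go
package main

import "fmt"

const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the real update; assume it can fail transiently.
func tryUpdateNodeStatus(node string) error {
	return fmt.Errorf("conflict updating %s", node)
}

func monitorNodeStatus(nodes []string) {
	for _, node := range nodes {
		updated := false
		for i := 0; i < nodeStatusUpdateRetry; i++ {
			if err := tryUpdateNodeStatus(node); err == nil {
				updated = true
				break
			}
			// A refresh of the node object here may succeed and reset a shared err,
			// which is exactly why success must be tracked separately.
		}
		if !updated {
			fmt.Printf("Update status of Node %v exceeded retry count\n", node)
			continue
		}
	}
}

func main() {
	monitorNodeStatus([]string{"node-1"})
}
```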
Automatic merge from submit-queue (batch tested with PRs 39483, 39088, 38787)
daemonset: differentiate between cases in nodeShouldRun
Specifically, we need to differentiate between wanting to run, should run, and
should continue running. This is required to support all taint effects and will
improve reporting and end-user debuggability.
fixes https://github.com/kubernetes/kubernetes/issues/28839 among other things
Specifically, we need to differentiate between wanting to run, should run, and
should continue running. This is required to support all taint effects and will
improve reporting and end-user debuggability.
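A hedged sketch of the three-way distinction described above (illustrative only; the real controller derives these answers from node conditions, taints, and available resources):

```go
package daemon

// nodeState is a stand-in for the inputs the real nodeShouldRun check inspects.
type nodeState struct {
	SchedulableByTaints bool // new daemon pods may be placed here
	EvictedByTaints     bool // a NoExecute-style taint: running pods must leave
	OutOfResources      bool
}

// nodeShouldRunDaemonPod separates "we want this pod here" from "we may schedule
// it now" and "an already-running pod may keep running".
func nodeShouldRunDaemonPod(n nodeState) (wantToRun, shouldSchedule, shouldContinueRunning bool) {
	wantToRun = true
	shouldSchedule = n.SchedulableByTaints && !n.OutOfResources
	shouldContinueRunning = !n.EvictedByTaints
	return
}
```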
Automatic merge from submit-queue (batch tested with PRs 39684, 39577, 38989, 39534, 39702)
kubelet: request client auth certificates from certificate API.
This fixes kubeadm and --experiment-kubelet-bootstrap.
cc @liggitt
Automatic merge from submit-queue (batch tested with PRs 39694, 39383, 39651, 39691, 39497)
HPA Controller: Check for 0-sum request value
In certain conditions in which the set of metrics returned by Heapster
is completely disjoint from the set of pods returned by the API server,
we can have a request sum of zero, which can cause a panic (due to
division by zero). This checks for that condition.
Fixes #39680
**Release note**:
```release-note
Fixes an HPA-related panic due to division-by-zero.
```
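A minimal sketch of the guard (the helper name and aggregation are illustrative, not the HPA controller's actual code):

```go
package hpautil

import "fmt"

// utilizationRatio refuses to divide when the request sum is zero, which can happen
// when the metrics set and the pod set are completely disjoint.
func utilizationRatio(metricsSum, requestSum int64) (float64, error) {
	if requestSum == 0 {
		return 0, fmt.Errorf("no metrics returned matched known pods; request sum is zero")
	}
	return float64(metricsSum) / float64(requestSum), nil
}
```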
Automatic merge from submit-queue
certificates: add a signing profile to the internal types
Here is a strawman of a CertificateSigningProfile type which would be used by the certificates controller when configuring cfssl. Side question: what magnitude of change warrants a design proposal?
@liggitt @gtank
In certain conditions in which the set of metrics returned by Heapster
is completely disjoint from the set of pods returned by the API server,
we can have a request sum of zero, which can cause a panic (due to
division by zero). This checks for that condition.
Fixes #39680
Automatic merge from submit-queue (batch tested with PRs 39628, 39551, 38746, 38352, 39607)
Increasing sync times on reconciling volumes to fix the impact on AWS.
**What this PR does / why we need it**:
We are currently blocked by API timeouts with PV volumes. See https://github.com/kubernetes/kubernetes/issues/39526. This is a workaround, not a fix.
**Special notes for your reviewer**:
A second PR will be dropped with CLI cobra options in it, but we are starting with increasing the reconciliation periods. I am dropping this without major testing and will test on our AWS account. Will be marked WIP until I run smoke tests.
**Release note**:
```release-note
Provide kubernetes-controller-manager flags to control volume attach/detach reconciler sync. The duration of the syncs can be controlled, and the syncs can be shut off as well.
```
Automatic merge from submit-queue (batch tested with PRs 37845, 39439, 39514, 39457, 38866)
Log a warning message when the kind for a resource cannot be found in the garbage collector controller
At this time, I do not think third-party API group/version resources should be handled by the garbage collector controller, and this call will actually fail: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/garbagecollector/garbagecollector.go#L565; as a result, the garbage collector controller fails to start.
Automatic merge from submit-queue
Remove jobs that do not exist from active list of CronJob
**What this PR does / why we need it**: This PR modifies the controller for CronJob to remove from the active job list any job that does not exist anymore, to avoid staying blocked in active state forever. See #37957.
**Which issue this PR fixes**: fixes #37957
**Special notes for your reviewer**:
**Release note**:
```
```
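A hedged sketch of the pruning described above (the helper and types are illustrative, not the CronJob controller's API): drop references to jobs that no longer exist from the Active list.

```go
package cronjob

// objectRef is a stand-in for the object references kept in CronJob.Status.Active.
type objectRef struct {
	Namespace string
	Name      string
}

// pruneActiveList returns the active list with references to deleted jobs removed.
// In the real controller, jobExists would be backed by a Jobs().Get call that
// treats a NotFound error as "gone".
func pruneActiveList(active []objectRef, jobExists func(objectRef) bool) []objectRef {
	kept := active[:0]
	for _, ref := range active {
		if jobExists(ref) {
			kept = append(kept, ref)
		}
	}
	return kept
}
```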
Automatic merge from submit-queue (batch tested with PRs 39284, 39367)
Remove HostRecord annotation (beta feature)
The annotation has made it to GA so this code should be deleted.
**Release note**:
```release-note
The 'endpoints.beta.kubernetes.io/hostnames-map' annotation is no longer supported. Users can use the 'Endpoints.subsets[].addresses[].hostname' field instead.
```
Automatic merge from submit-queue (batch tested with PRs 39092, 39126, 37380, 37093, 39237)
Endpoints with the TolerateUnready annotation should list Pods in state Terminating
**What this PR does / why we need it**:
We are using preStop lifecycle hooks to gracefully remove a node from a cluster. This hook is potentially long-running, and after the preStop hook has fired, DNS resolution of the soon-to-be-stopped Pod starts failing, which causes the hook to fail.
**Special notes for your reviewer**:
It would be great to backport this to 1.4 and 1.3.
**Release note**:
```release-note
Endpoints that tolerate unready Pods now also list Pods in state Terminating
```
@bprashanth
Automatic merge from submit-queue (batch tested with PRs 39075, 39350, 39353)
Move pkg/api.{Context,RequestContextMapper} into pkg/genericapiserver/api/request
**Based on #39350**
Automatic merge from submit-queue
DaemonSet ObservedGeneration
Extracting the ObservedGeneration part from #31693. It also implements #7328 for DaemonSets.
cc @kargakis
Automatic merge from submit-queue (batch tested with PRs 39150, 38615)
Add work queues to PV controller
PV controller should not use Controller.Requeue, as it is not available in
shared informers. We need to implement our own work queues instead, where we
can enqueue volumes/claims as we want.
PV controller should not use Controller.Requeue, as it is not available in
shared informers. We need to implement our own work queues instead, where we
can enqueue volumes/claims as we want.
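A rough sketch of the work-queue pattern that replaces Controller.Requeue, using the client-go workqueue package (the PV controller's real wiring differs): informer event handlers enqueue keys and a worker drains them.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	claimQueue := workqueue.New()
	defer claimQueue.ShutDown()

	// The shared informer's add/update/delete handlers would call Add with a key.
	claimQueue.Add("default/my-claim")

	// Worker loop: take a key, sync it, and mark it done so it can be enqueued again later.
	for {
		key, shutdown := claimQueue.Get()
		if shutdown {
			return
		}
		fmt.Printf("syncing claim %v\n", key)
		claimQueue.Done(key)
		break // one iteration is enough for the sketch
	}
}
```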
Automatic merge from submit-queue
Avoid unnecessary memory allocations
Low-hanging fruit for saving memory allocations. During our 5000-node kubemark runs I've seen this:
ControllerManager:
- 40.17% k8s.io/kubernetes/pkg/util/system.IsMasterNode
- 19.04% k8s.io/kubernetes/pkg/controller.(*PodControllerRefManager).Classify
Scheduler:
- 42.74% k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates.(*MaxPDVolumeCountChecker).filterVolumes
This PR eliminates all of those.
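An illustrative sketch of the kind of low-cost change involved (the actual edits in this PR may differ): replace a per-call regular-expression match with a plain suffix comparison, avoiding both the compilation and the intermediate allocations.

```go
package system

import "strings"

// isMasterNode-style check: compiling a regexp on every call allocates and burns CPU;
// a string suffix comparison does neither.
func isMasterNode(nodeName string) bool {
	return nodeName == "master" || strings.HasSuffix(nodeName, "-master")
}
```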
Automatic merge from submit-queue (batch tested with PRs 39093, 34273)
start breaking up controller manager into two pieces
This PR addresses: https://github.com/kubernetes/features/issues/88
This commit starts breaking the controller manager into two pieces, namely:
1. a cloudprovider-dependent piece
2. a cloudprovider-agnostic piece
The controller manager has the following control loops -
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController
etc..
among the above controller loops,
- nodeController
- volumeController
- routeController
- serviceController
are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloud providers (currently 8) to make changes and iterate quickly. Moreover, the
cloud providers are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository and will be maintained by the cloud providers themselves.
I have added a new cloud provider called "external", which signals the controller-manager that
cloud provider specific loops are being run by another controller. I have added these changes in such
a way that the existing cloud providers are not affected. This change is completely backwards compatible, and does not require any changes to the way kubernetes is run today.
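A minimal sketch, assuming a hypothetical startCloudDependentControllers helper, of how the "external" provider name can signal the controller-manager to skip its cloud-dependent loops:

```go
package app

import "log"

const externalCloudProvider = "external"

// startCloudDependentControllers starts the node/route/service/volume loops only when
// the cloud provider is managed in-tree; "external" means a separate, provider-owned
// controller runs them.
func startCloudDependentControllers(cloudProviderName string) {
	if cloudProviderName == externalCloudProvider {
		log.Println("cloud provider is external; skipping cloud-dependent control loops")
		return
	}
	log.Printf("starting cloud-dependent controllers for provider %q", cloudProviderName)
}
```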
Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will be addressed in a different commit/issue.
@alena1108 @ibuildthecloud @thockin @dchen1107
**Special notes for your reviewer**:
@thockin - I'm making this **WIP** PR to ensure that I don't stray too far from everyone's view of how we should make this change. As you can see, only one controller, namely `nodecontroller`, can be disabled with the `--cloudprovider=external` flag at the moment. I'm working on cleaning up the `rancher-controller-manager` that I wrote to test this.
Secondly, I'd like to use this PR to address cloudprovider specific code in kubelet and api-server.
**Kubelet**
Kubelet uses provider specific code for node registration and for checking node-status. I thought of two ways to divide the kubelet:
- We could start a cloud provider specific kubelet on each host as a part of kubernetes, and this cloud-specific-kubelet does node registration and node-status checks.
- Create a kubelet plugin for each provider, which will be started by kubelet as a long running service. This plugin can be packaged as a binary.
I'm leaning towards the first option. That way, kubelet does not have to manage another process, and we can offload the process management of the cloud-provider-specific-kubelet to something like systemd.
@dchen1107 @thockin what do you think?
**Kube-apiserver**
Kube-apiserver uses provider specific code for distributing ssh keys to all the nodes of a cluster. Do you have any suggestions about how to address this?
**Release note**:
```release-note
```
Addresses: kubernetes/features#88
This commit starts breaking the controller manager into two pieces, namely:
1. a cloudprovider-dependent piece
2. a cloudprovider-agnostic piece
The controller manager has the following control loops -
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController etc..
among the above controller loops,
- nodeController
- volumeController
- routeController
- serviceController
are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloud providers (currently 8) to make changes and iterate quickly. Moreover, the
cloud providers are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository and will be maintained by the cloud providers themselves.
Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will
be addressed in a different commit/issue.
Automatic merge from submit-queue
Fix DaemonSet cache mutation
**What this PR does / why we need it**: stops the DaemonSetController from mutating the DaemonSet shared informer cache
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #38985
cc @deads2k @mikedanese @lavalamp @smarterclayton
Automatic merge from submit-queue (batch tested with PRs 36888, 38180, 38855, 38590)
Fix variable shadowing in exponential backoff when deleting volumes
While https://github.com/kubernetes/kubernetes/pull/38339 implemented exponential backoff on
volume deletion, that PR suffers from a minor bug: when the error thrown on volume deletion is anything other than a `VolumeInUse` error, exponential backoff does not kick in.
This PR fixes that. It also makes the unit tests more deterministic, because exponential backoff changed the way operations are permitted.
CC @jsafrane @childsb @wongma7
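An illustrative reproduction of the shadowing bug class being fixed (simplified, not the controller's actual code): a `:=` inside the block creates a new err, so the outer err that drives the backoff decision never sees the failure.

```go
package main

import (
	"errors"
	"fmt"
)

func deleteVolume() error { return errors.New("provider error") }

// reconcile shows the corrected pattern: assign to the outer err so callers see
// failures other than VolumeInUse. The buggy variant used `err := deleteVolume()`
// inside the block, shadowing the outer variable and leaving it nil.
func reconcile(shouldDelete bool) error {
	var err error
	if shouldDelete {
		err = deleteVolume()
	}
	return err
}

func main() {
	fmt.Println(reconcile(true)) // prints the provider error instead of <nil>
}
```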
Automatic merge from submit-queue (batch tested with PRs 38426, 38917, 38891, 38935)
if statement must be true
**What this PR does / why we need it**:
If len(metrics.Items) == 0, the function has already returned, so the check `if len(metrics.Items) > 0` is redundant; it is always true.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
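A tiny illustration of the simplification (hypothetical helper, not the metrics client's code): once the early return has fired, the length is known to be positive, so no second check is needed before dividing.

```go
package metricsutil

import "fmt"

func averageOf(items []int64) (int64, error) {
	if len(items) == 0 {
		return 0, fmt.Errorf("no metrics returned")
	}
	// len(items) > 0 is guaranteed here; re-checking it would be redundant.
	var sum int64
	for _, v := range items {
		sum += v
	}
	return sum / int64(len(items)), nil
}
```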
Add dsStoreSynced so we also wait on this cache when starting the
DaemonSetController.
Switch to using a fake clientset in the unit tests.
Fix TestNumberReadyStatus so it doesn't expect the cache to be mutated.
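A hedged sketch of waiting on the DaemonSet store along with the other caches before doing any work (field names are illustrative; the import path is shown as in current client-go):

```go
package daemon

import "k8s.io/client-go/tools/cache"

type daemonSetsController struct {
	podStoreSynced  cache.InformerSynced
	nodeStoreSynced cache.InformerSynced
	dsStoreSynced   cache.InformerSynced
}

// Run refuses to start workers until every relevant cache, including the DaemonSet
// store itself, has synced, so the controller never acts on a half-filled cache.
func (dsc *daemonSetsController) Run(stopCh <-chan struct{}) {
	if !cache.WaitForCacheSync(stopCh, dsc.podStoreSynced, dsc.nodeStoreSynced, dsc.dsStoreSynced) {
		return
	}
	// worker loops would start here
	<-stopCh
}
```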
Automatic merge from submit-queue (batch tested with PRs 34353, 33837, 38878)
Revert "daemonset: bail out after we enqueue once"
I get overzealous sometimes.
Reverts kubernetes/kubernetes#38780
Automatic merge from submit-queue
Fix Recreate for Deployments and stop using events in e2e tests
Fixes https://github.com/kubernetes/kubernetes/issues/36453 by removing events from the deployment tests. The test about events during a Rolling deployment is redundant so I just removed it (we already have another test specifically for Rolling deployments).
Closes https://github.com/kubernetes/kubernetes/issues/32567 (preferred to use pod LISTs instead of a new status API field for replica sets that would add many more writes to replica sets).
@kubernetes/deployment
Automatic merge from submit-queue
daemonset: bail out after we enqueue once
This isn't terrible because we dedup in the queue but it's a waste of
cycles.
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)
Rename "release_1_5" clientset to just "clientset"
We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset", clientset development will be done in this directory.
@kubernetes/sig-api-machinery @deads2k
```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
Automatic merge from submit-queue
controller: adopt pods only when controller is not deleted
When a replica set is deleted, it will continue adopting pods, thus driving the worker that handles it to error out, because the adoption is [always cancelled](59c313730c/pkg/controller/controller_ref_manager.go (L110)) in the controller reference manager.
```
E1212 14:40:31.245773 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.258462 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259131 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259149 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
I1212 14:40:31.268012 7964 deployment_controller.go:314] Error syncing deployment e2e-tests-deployment-2rr3m/test-rollover-deployment: Operation cannot be fulfilled on deployments.extensions "test-rollover-deployment": the object has been modified; please apply your changes to the latest version and try again
E1212 14:40:31.277252 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277276 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277287 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289148 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-b6s4x_82fa8343-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289169 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289176 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289181 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
```
@kubernetes/deployment @caesarxuchao
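A minimal sketch of the guard described above (names are illustrative, not the controller-ref manager API): skip adoption entirely once the controlling object is being deleted, instead of attempting and then cancelling every adoption.

```go
package controller

import "time"

// replicaSetMeta is a stand-in for the object metadata the real check inspects.
type replicaSetMeta struct {
	DeletionTimestamp *time.Time
}

// shouldAdoptOrphans returns false while the ReplicaSet is being deleted, so the
// sync loop never even attempts an adoption that would only be cancelled.
func shouldAdoptOrphans(rs replicaSetMeta) bool {
	return rs.DeletionTimestamp == nil
}
```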
Automatic merge from submit-queue
Curating Owners: pkg/controller
cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
perfectly: people that have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is editable; please edit the OWNERS file to add
the names of people that should be reviewing code in the future in the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed. Being an approver or reviewer
of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
bump log level on service status update
ref: https://github.com/kubernetes/kubernetes/issues/38349
I tried to reproduce the problem in #38349 and failed. I am not sure why the service status update failed and the service controller skipped the status update in the next round. What I have observed is that if the service status update fails due to a conflict, the next round of processServiceUpdate will correct it.
Bumping log level to get a better signal when it occurs.
Automatic merge from submit-queue (batch tested with PRs 38608, 38299)
controller: set unavailableReplicas correctly when scaling down
```
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
```
The validation error above occurs usually when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid api call.
@kubernetes/deployment
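A small sketch of the fix (simplified; the real code works on the deployment's replica sets): clamp the computed value at zero so a scale-down can never yield a negative, and therefore invalid, unavailableReplicas.

```go
package deploymentutil

// unavailableReplicas never returns a negative count, which the API would reject.
func unavailableReplicas(totalReplicas, availableReplicas int32) int32 {
	if unavailable := totalReplicas - availableReplicas; unavailable > 0 {
		return unavailable
	}
	return 0
}
```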
Automatic merge from submit-queue
Remove json serialization annotations from internal types
fixes #3933
Internal types should never be serialized, and including json serialization tags on them makes it possible to accidentally do that without realizing it.
fixes in this PR:
* types
* [x] remove json tags from internal types
* [x] fix references from serialized types to internal ObjectMeta
* generation
* [x] remove generated json codecs for internal types (they should never be used)
* kubectl
* [x] fix `apply` to operate on versioned object
* [x] fix sorting by field to operate on versioned object
* [x] fix `--record` to build annotation patch using versioned object
* hpa
* [x] fix unmarshaling to internal CustomMetricTargetList in validation
* thirdpartyresources
* [x] fix encoding API responses using internal ObjectMeta
* tests
* [x] fix tests to use versioned objects when checking encoded content
* [x] fix tests passing internal objects to generic printers
follow ups (will open tracking issues or additional PRs):
- [ ] remove json tags from internal kubeconfig types (`kubectl config set` pathfinding needs to work against external type)
- [ ] HPA should version CustomMetricTargetList serialization in annotations
- [ ] revisit how TPR resthandlers encode objects
- [ ] audit and add tests for printer use (human-readable printer requires internal versions, generic printers require external versions)
- [ ] add static analysis tests preventing new internal types from adding tags
- [ ] add static analysis tests requiring json tags on external types (and enforcing lower-case first letter)
- [ ] add more tests for `kubectl get` exercising known and unknown types with all output options
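An illustrative contrast (made-up types, not the real api package) of the convention this PR enforces: internal types carry no json tags because they are never serialized, while the versioned types do.

```go
package example

// Internal representation: no serialization tags, never written to the wire.
type WidgetInternal struct {
	Name     string
	Replicas int32
}

// Versioned (external) representation: the only form that should be marshalled.
type WidgetV1 struct {
	Name     string `json:"name"`
	Replicas int32  `json:"replicas"`
}
```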
Automatic merge from submit-queue (batch tested with PRs 38354, 38371)
Add GetOptions parameter to Get() calls in client library
Ref #37473
This PR is super mechanical - the non-trivial commits are:
- Update client generator
- Register GetOptions in batch/v2alpha1 group
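A schematic illustration of the call-shape change (a hypothetical PodGetter interface, not the real client-go types): every Get() now takes an explicit options struct.

```go
package clientshape

// GetOptions mirrors the new parameter added to every Get() call.
type GetOptions struct {
	ResourceVersion string
}

// PodGetter stands in for a typed client of that era.
type PodGetter interface {
	// Before this PR: Get(name string)
	// After this PR:  Get(name string, options GetOptions)
	Get(name string, options GetOptions) (interface{}, error)
}
```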
Automatic merge from submit-queue (batch tested with PRs 38278, 37770)
Refactor REST storage to use generic defaults
This removes the repetition in the REST storage builders by moving the logic to `restoptions.ApplyOptions`. `registry.StorageWithCacher`/`generic.StorageDecorator` no longer assume that they can build the `keyFunc` for arbitrary objects. `restoptions.ApplyOptions` uses the `registry.Store`'s `KeyFunc` for its call to `generic.StorageDecorator`.
```release-note
Cluster federation servers have changed the location in etcd where federated services are stored, so existing federated services must be deleted and recreated. Before upgrading, export all federated services from the federation server and delete the services. After upgrading the cluster, recreate the federated services from the exported data.
```
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)
Exponential back off when volume delete fails
**What this PR does / why we need it**:
This PR implements the ability for pv_controller to back off when deleting a volume fails at the plugin API.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Partly fixes #38295, but I think volume deletion is the most problematic thing happening in pv_controller without any sort of backoff.
After this change the attempts of volume deletion look like:
```
controller : I1208 00:18:35.532061 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:20:50.578325 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:23:05.563488 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:25:20.599158 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:27:35.560009 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:29:50.594967 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:32:05.539168 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:34:20.581665 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
```
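A generic sketch of the back-off behaviour visible in the log above, with delays roughly doubling between delete attempts (the controller's actual schedule and plumbing differ):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

func deleteVolume() error { return errors.New("VolumeInUse: volume is currently attached") }

func main() {
	delay := 15 * time.Second
	const maxDelay = 10 * time.Minute

	for attempt := 1; attempt <= 5; attempt++ {
		if err := deleteVolume(); err == nil {
			return
		}
		fmt.Printf("delete attempt %d failed, retrying in %v\n", attempt, delay)
		time.Sleep(delay)
		// Double the wait, capped at maxDelay, before the next attempt.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```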
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)
controller: sync stuck deployments in a secondary queue
@kubernetes/deployment this makes Deployments not depend on a tight resync interval in order to estimate progress.
This implements exponential backoff in pv_controller
when deleting a volume fails in the cloud API. It ensures that
we aren't making too many calls to the cloud API.
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
The validation error above occurs usually when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid api call.
Automatic merge from submit-queue
Add integration tests for desired state of world populator
Add integration tests for desired state of world populator
This adds tests for the code introduced here:
https://github.com/kubernetes/kubernetes/issues/26994
Via integration tests we can now verify that if a pod delete
event is somehow missed by the AttachDetach controller, it still
gets cleaned up by the Desired State of World populator.
This adds tests for the code introduced here:
https://github.com/kubernetes/kubernetes/issues/26994
Via integration tests we can now verify that if a pod delete
event is somehow missed by the AttachDetach controller, it still
gets cleaned up by the Desired State of World populator.