When we observe valid coordinates for a previously virtual node,
if there are dependents that do not agree with those coordinates,
add them to the attemptToDelete queue.
This queue will check the dependent's ownerReferences using the coordinates specified by the dependent.
If all of the owners can be verified absent, the dependent will be deleted.
If some are still present, or if there are errors looking them up, the dependent will not be deleted.
If the verified owner is namespaced, and the dependent is not in the same namespace,
an event will be recorded for user visibility, since cross-namespace ownerReferences are not supported.
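A minimal sketch of that verification step, assuming a hypothetical lookup callback; the real logic lives in the garbage collector's attemptToDelete worker and handles more cases:

```go
package sketch

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// lookupOwner fetches an owner at the exact coordinates recorded in the
// dependent's ownerReference (hypothetical callback, not the real client code).
type lookupOwner func(ref metav1.OwnerReference, namespace string) error

// ownersVerifiedAbsent returns true only if every owner lookup returns 404.
// An owner that is still present, or any other lookup error, blocks deletion.
func ownersVerifiedAbsent(refs []metav1.OwnerReference, namespace string, lookup lookupOwner) (bool, error) {
	for _, ref := range refs {
		err := lookup(ref, namespace)
		switch {
		case err == nil:
			return false, nil // owner still present: do not delete the dependent
		case apierrors.IsNotFound(err):
			continue // this owner is verified absent
		default:
			return false, fmt.Errorf("unable to verify owner %s %s: %v", ref.Kind, ref.Name, err)
		}
	}
	return true, nil
}
```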
If a virtual delete event is received for a node whose dependents disagree on the parent's coordinates:
1. propagate the delete to children that matched the verified absent coordinates
2. if the existing node is virtual, select a new set of coordinates from the remaining dependents
3. do not delete the parent node from the graph if the parent node is non-virtual,
or if there are dependents that do not agree with the virtual delete event coordinates
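A rough sketch of the three rules above, using simplified placeholder types; the real graph_builder node carries more state, and the helper names here are invented for illustration:

```go
package sketch

// coordinates identify an owner as recorded by a dependent's ownerReference.
type coordinates struct{ APIVersion, Kind, Namespace, Name string }

// node is a stand-in for the GC graph node; only the fields needed here are shown.
type node struct {
	identity   coordinates
	virtual    bool
	dependents map[*node]coordinates // dependent -> coordinates it used for this owner
}

// onVirtualDelete applies the rules above when a delete is verified for `absent`.
func onVirtualDelete(n *node, absent coordinates, propagateDelete func(*node), removeNode func(*node)) {
	var remaining []*node
	for dep, seen := range n.dependents {
		if seen == absent {
			propagateDelete(dep) // 1. matching children observe the owner's deletion
		} else {
			remaining = append(remaining, dep)
		}
	}
	if n.virtual && len(remaining) > 0 {
		n.identity = n.dependents[remaining[0]] // 2. adopt coordinates from a remaining dependent
	}
	if !n.virtual || len(remaining) > 0 {
		return // 3. keep the parent node in the graph
	}
	removeNode(n)
}
```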
When adding a dependent to the graph, we ensure there is a node representing each owner reference,
and add the dependent to each parent node.
If the parent node already exists, and the dependent's ownerReference
coordinates disagree with the verified coordinates, add the dependent to the attemptToDelete queue.
This queue will check the dependent's ownerReferences using the coordinates specified by the dependent.
If all of the owners can be verified absent, the dependent will be deleted.
If some are still present, or if there are errors looking them up, the dependent will not be deleted.
If the parent node has been observed via informer event (so we know the coordinates are accurate),
and the verified owner is namespaced, and the dependent is not in the same namespace,
an event will be recorded for user visibility, since cross-namespace ownerReferences are not supported.
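A hedged sketch of that namespace check; the event reason and helper names are illustrative rather than the exact graph_builder code:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// reportInvalidCrossNamespaceRef records a warning event when a namespaced,
// verified owner is referenced by a dependent living in a different namespace.
func reportInvalidCrossNamespaceRef(recorder record.EventRecorder, dependent runtime.Object,
	dependentNamespace string, owner metav1.OwnerReference, ownerNamespace string, ownerIsNamespaced bool) {
	if ownerIsNamespaced && dependentNamespace != ownerNamespace {
		recorder.Eventf(dependent, v1.EventTypeWarning, "OwnerRefInvalidNamespace",
			"ownerRef %s/%s does not exist in namespace %q", owner.Kind, owner.Name, dependentNamespace)
	}
}
```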
If the graph contains a virtual node (because some child object referenced it in an OwnerRef),
and a real informer event is observed for that uid at different coordinates,
we want to fix the coordinates of the node in the graph to match the actual coordinates.
The safe way to do this is to clone the node, replace the identity in the clone,
then replace the node with the clone.
Modifying the identity directly is not safe because it is accessed lock-free from many code paths.
Replacing the node in the graph from processGraphChanges is safe because it is the only graph writer.
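A minimal sketch of that clone-and-replace step, with an invented node type; in the real code this happens inside processGraphChanges, which is the only writer to the graph:

```go
package sketch

import "k8s.io/apimachinery/pkg/types"

type identity struct {
	UID                               types.UID
	APIVersion, Kind, Namespace, Name string
}

// node is read lock-free elsewhere, so its identity must never be mutated in place.
type node struct {
	identity identity
	// ... dependents, owners, etc.
}

// replaceIdentity clones the node, swaps the identity on the clone, and returns
// the clone so the single graph writer can swap it into the graph's uid map.
func replaceIdentity(old *node, observed identity) *node {
	clone := *old             // shallow copy of the existing node
	clone.identity = observed // fix coordinates on the clone only
	return &clone
}
```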
Virtual nodes can be added to the GC graph in order to represent objects
which have not been observed via an informer, but are referenced via ownerReferences.
These virtual nodes are requeued into attemptToDelete until they are observed via an informer,
or successfully verified absent via a live lookup. Previously, both of those code paths
called markObserved() to stop requeuing into attemptToDelete.
Because it is useful to know whether a particular node has been observed via
a real informer event, this commit does the following:
* adds a `virtual bool` attribute to graph events so we know which ones came from a real informer
* limits the markObserved() call to the code path where a real informer event is observed
* uses an alternative mechanism to stop requeuing into attemptToDelete when a virtual node is verified absent via a live lookup
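A simplified sketch of the event-level flag described in the bullets above (the real event type carries the full object and more fields):

```go
package sketch

// graphEvent is a stand-in for the graph builder's internal event type.
type graphEvent struct {
	obj interface{}
	// virtual is false only for events generated by a real informer; events
	// synthesized for virtual (never-observed) nodes set it to true.
	virtual bool
}

// node is a stand-in for the GC graph node.
type node struct{ observed bool }

func (n *node) markObserved() { n.observed = true }

// processEvent marks a node observed only for real informer events, so virtual
// nodes keep being requeued into attemptToDelete until verified absent.
func processEvent(ev graphEvent, n *node) {
	if !ev.virtual {
		n.markObserved()
	}
	// ... continue applying the event to the graph ...
}
```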
Before deleting an object based on absent owners, GC verifies absence of those owners with a live lookup.
The coordinates used to perform that live lookup are the ones specified in the ownerReference of the child.
In order to performantly delete multiple children from the same parent (e.g. 1000 pods from a replicaset),
a 404 response to a lookup is cached in absentOwnerCache.
Previously, the cache was a simple uid set. However, since children can disagree on the coordinates
that should be used to look up a given uid, the cache should record the exact coordinates verified absent.
This is a [apiVersion, kind, namespace, name, uid] tuple.
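A sketch of the richer cache key; the cache is shown as a plain set for brevity, whereas the real absentOwnerCache is bounded:

```go
package sketch

import "k8s.io/apimachinery/pkg/types"

// absentOwnerKey records the exact coordinates that were verified absent, not
// just the uid, so children that disagree on coordinates do not share entries.
type absentOwnerKey struct {
	APIVersion string
	Kind       string
	Namespace  string
	Name       string
	UID        types.UID
}

type absentOwnerCache map[absentOwnerKey]struct{}

func (c absentOwnerCache) Add(k absentOwnerKey)      { c[k] = struct{}{} }
func (c absentOwnerCache) Has(k absentOwnerKey) bool { _, ok := c[k]; return ok }
```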
Before this change, an object would be added to the `attemptToDelete` queue only if the GC detected the deletion transition, simply by checking whether `deletionTimestamp` was set. After the change, the GC also checks whether the `foregroundDeletion` finalizer is set before adding the item to the queue.
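A hedged sketch of the described check; the helper name is illustrative, and metav1.FinalizerDeleteDependents is the constant for the foregroundDeletion finalizer:

```go
package sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// shouldQueueForForegroundDeletion reflects the post-change behavior: the object
// must be deleting AND carry the foregroundDeletion finalizer before it is added
// to attemptToDelete (previously only deletionTimestamp was checked).
func shouldQueueForForegroundDeletion(obj metav1.Object) bool {
	if obj.GetDeletionTimestamp() == nil {
		return false
	}
	for _, f := range obj.GetFinalizers() {
		if f == metav1.FinalizerDeleteDependents { // "foregroundDeletion"
			return true
		}
	}
	return false
}
```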
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we call it where necessary
- update the other vendored repositories that made a similar
change from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
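For example, a test package that needs the klog flags registered can call InitFlags explicitly (klog.InitFlags is the real entry point; passing nil registers the flags on the global flag set):

```go
package sometest

import "k8s.io/klog"

func init() {
	// Unlike glog, klog does not register its flags automatically; tests that
	// rely on the -v / -logtostderr flags register them explicitly.
	klog.InitFlags(nil)
}
```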
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Automatic merge from submit-queue (batch tested with PRs 63386, 64624, 62297, 64847). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Let the garbage collector use json merge patch when SMP is not supported
**What this PR does / why we need it**:
Let the garbage collector fall back to a JSON merge patch when a strategic merge patch returns 415 (Unsupported Media Type). This enables orphan deletion of custom resources.
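A hedged sketch of the fallback, with the client call abstracted behind a callback (the patch-type constants and the 415 check are real apimachinery APIs; the helper names are illustrative):

```go
package sketch

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
)

// applyPatch abstracts whichever client call sends the orphan-finalizer patch.
type applyPatch func(pt types.PatchType, data []byte) error

// patchWithFallback tries a strategic merge patch first and retries as a JSON
// merge patch when the server answers 415, which is what custom resources return.
func patchWithFallback(patch applyPatch, smpData, jsonMergeData []byte) error {
	err := patch(types.StrategicMergePatchType, smpData)
	if err != nil && apierrors.IsUnsupportedMediaType(err) {
		return patch(types.MergePatchType, jsonMergeData)
	}
	return err
}
```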
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56348
**Special notes for your reviewer**:
This PR is developed based on https://github.com/kubernetes/kubernetes/pull/56595. Ref https://github.com/kubernetes/kubernetes/pull/56606 for more information.
**Release note**:
```release-note
Orphan delete is now supported for custom resources
```
/sig api-machinery
Automatic merge from submit-queue (batch tested with PRs 55908, 55829, 55293, 55653, 55665). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
[bug fix] TestCascadingDeletion is flaky during pull-kubernetes-unit-test
**What this PR does / why we need it**:
Fix the pull-kubernetes-unit-test flake.
```
go test -v k8s.io/kubernetes/test/integration/garbagecollector -run TestCascadingDeletion$
panic: close of closed channel
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x111
/usr/local/go/src/runtime/panic.go:491 +0x283
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:259 +0xb80
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:123 +0x39
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:211 +0x20f
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x5e
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:172 +0xd9
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/garbagecollector/garbage_collector_test.go:268 +0xc9
from junit_3a3d564eebb1750e5c904cc525d117617fc0af51_20171113-090812.xml
```
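The panic above is the classic double close of a stop channel; a common guard against it (illustrative only, not necessarily the exact fix applied to graph_builder.go) is to funnel the close through sync.Once:

```go
package sketch

import "sync"

// safeCloser makes Stop idempotent so concurrent or repeated shutdown paths
// cannot trigger "close of closed channel".
type safeCloser struct {
	once sync.Once
	ch   chan struct{}
}

func newSafeCloser() *safeCloser { return &safeCloser{ch: make(chan struct{})} }

func (s *safeCloser) Stop()                 { s.once.Do(func() { close(s.ch) }) }
func (s *safeCloser) Done() <-chan struct{} { return s.ch }
```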
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55652
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Enhance the garbage collector to periodically refresh the resources it
monitors (via discovery) to enable custom resource definition GC.
This implementation caches Unstructured structs for any kinds not
covered by a shared informer. The existing meta-only codec only supports
compiled types; an improved codec that supports arbitrary types could
be introduced so the cache stores only metadata for all non-informer types.
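A rough sketch of the periodic refresh, assuming a discovery client and a resync hook; the names are illustrative and the real controller does more filtering and diffing:

```go
package sketch

import (
	"reflect"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
)

// runDiscoverySync re-lists preferred resources on an interval and invokes
// resync when the set of deletable resources changes.
func runDiscoverySync(dc discovery.DiscoveryInterface, resync func([]*metav1.APIResourceList), period time.Duration, stopCh <-chan struct{}) {
	var last []*metav1.APIResourceList
	wait.Until(func() {
		all, err := dc.ServerPreferredResources()
		if err != nil {
			return // keep monitoring the previous set on discovery errors
		}
		deletable := discovery.FilteredBy(discovery.SupportsAllVerbs{Verbs: []string{"delete", "list", "watch"}}, all)
		if !reflect.DeepEqual(deletable, last) {
			last = deletable
			resync(deletable)
		}
	}, period, stopCh)
}
```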
This adds an interface form of dynamic.Client and
dynamic.ResourceClient, making those two follow the general client
conventions: `Interface` is an interface, and `Client` is the concrete
implementation. `ClientPool` retains its interface status.
This allows us to create a fake implementation of dynamic.Interface,
dynamic.ResourceInterface, and dynamic.ClientPool for testing.
Allow the list of resources the garbage collector controller should
ignore to be customizable, so downstream integrators can add their own
resources to the list, if necessary.
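For illustration, the ignore list can be modeled as a set keyed by group/resource that a downstream integrator extends before constructing the controller; the helper and field names here are assumptions, not the exact constructor signature:

```go
package sketch

import "k8s.io/apimachinery/pkg/runtime/schema"

// withExtraIgnored merges a downstream integrator's resources into the default
// ignore list before the garbage collector controller is constructed.
func withExtraIgnored(defaults map[schema.GroupResource]struct{}) map[schema.GroupResource]struct{} {
	ignored := map[schema.GroupResource]struct{}{
		{Group: "example.io", Resource: "widgets"}: {}, // hypothetical custom resource
	}
	for gr := range defaults {
		ignored[gr] = struct{}{}
	}
	return ignored
}
```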
Automatic merge from submit-queue (batch tested with PRs 40055, 42085, 44509, 44568, 43956)
Fix gofmt errors
**What this PR does / why we need it**:
There were some gofmt errors on master. Ran the following to fix:
```
hack/verify-gofmt.sh | grep ^diff | awk '{ print $2 }' | xargs gofmt -w -s
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```