Without this change, leaked goroutines were sometimes reported for
test/integration/scheduler_perf. The one that caused the cleanup to get delayed
was this one:
goleak.go:50: found unexpected goroutines:
[Goroutine 2704 in state chan receive, 2 minutes, with k8s.io/client-go/tools/cache.(*Reflector).watch on top of the stack:
goroutine 2704 [chan receive, 2 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).watch(0xc00453f590, {0x0, 0x0}, 0x1f?, 0xc00a128080?)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:388 +0x5b3
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc00453f590, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:324 +0x3bd
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:279 +0x45
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc007aafee0)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x49
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003e18150?, {0x75e37c0, 0xc00389c280}, 0x1, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xcf
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00453f590, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:278 +0x257
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:58 +0x3f
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:75 +0x74
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0xe5
watch() was stuck waiting for an exponential backoff timer to expire. Logging confirmed that:
I0309 21:14:21.756149 1572727 reflector.go:387] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://127.0.0.1:38269/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 127.0.0.1:38269: connect: connection refused - backing off
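One way to keep such goroutines from being reported as leaks is to have the test cleanup wait for the informer machinery to shut down before the leak check runs. A minimal sketch of that pattern, assuming a plain shared informer factory (`startInformers` is a hypothetical helper; the actual scheduler_perf wiring differs):

```go
package example

import (
	"context"
	"testing"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startInformers starts a shared informer factory and registers a cleanup
// that waits for all informer goroutines, including reflectors sitting in
// a backoff wait, to exit before the leak check runs.
func startInformers(tb testing.TB, client kubernetes.Interface) informers.SharedInformerFactory {
	ctx, cancel := context.WithCancel(context.Background())
	factory := informers.NewSharedInformerFactory(client, 0)
	factory.Start(ctx.Done())
	tb.Cleanup(func() {
		cancel()           // signal reflectors and their watches to stop
		factory.Shutdown() // block until every started goroutine has returned
	})
	return factory
}
```

`SharedInformerFactory.Shutdown` blocks until all goroutines started via `Start` have terminated, so goleak no longer observes them after cleanup.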
* Enable plugin resolution as subcommand for selected builtin commands
This PR adds external plugin resolution as a subcommand for selected builtin
commands when the subcommand does not exist as a builtin.
In its alpha stage, this is only enabled for the create command, and the
feature is hidden behind the `KUBECTL_ENABLE_CMD_SHADOW` environment variable.
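A rough sketch of the resolution idea (a hypothetical helper, not kubectl's actual implementation): when the builtin has no matching subcommand and the gate is set, look for an external plugin named after the full command path, following kubectl's usual `kubectl-<cmd>-<subcmd>` plugin naming:

```go
package example

import (
	"fmt"
	"os"
	"os/exec"
)

// resolvePluginFallback is a hypothetical sketch: given a builtin command
// such as "create" and an unknown subcommand "foo", it looks for an
// executable named "kubectl-create-foo" on PATH.
func resolvePluginFallback(builtin, subcommand string) (string, bool) {
	// Opt-in while alpha; the exact accepted values are an assumption here.
	if os.Getenv("KUBECTL_ENABLE_CMD_SHADOW") != "true" {
		return "", false
	}
	name := fmt.Sprintf("kubectl-%s-%s", builtin, subcommand)
	path, err := exec.LookPath(name)
	if err != nil {
		return "", false // no plugin found; fall back to the normal error path
	}
	return path, true
}
```

For example, `kubectl create foo` would fall through to an executable named `kubectl-create-foo` on PATH if one exists.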
* Rename parameter to exactMatch to better reflect its purpose
This PR gives the `--share-processes` flag higher priority than the
provided profile.
For example, if the user runs copy-to debugging with the restricted
profile, the shared process namespace should be false when the user explicitly
disables it via `--share-processes=false`.
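A minimal sketch of that precedence rule, assuming the cobra/pflag flag handling kubectl uses (`shareProcesses` is a hypothetical helper, not the actual kubectl debug code):

```go
package example

import "github.com/spf13/pflag"

// shareProcesses is a hypothetical sketch of flag-over-profile precedence:
// the profile supplies a default, but a value the user set explicitly
// (detected via pflag's Changed) always overrides it.
func shareProcesses(flags *pflag.FlagSet, profileDefault bool) (bool, error) {
	if flags.Changed("share-processes") {
		return flags.GetBool("share-processes") // explicit user choice wins
	}
	return profileDefault, nil
}
```

`pflag.FlagSet.Changed` reports whether the user set the flag explicitly, which distinguishes a default `false` from an explicit `--share-processes=false`.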