Automatic merge from submit-queue (batch tested with PRs 53047, 54861, 55413, 55395, 55308). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Switch internal scale type to autoscaling, enable apps/v1 scale subresources
xref #49504
* Switch workload internal scale type to autoscaling.Scale (internal-only change)
* Enable scale subresources for apps/v1 deployments, replicasets, statefulsets (see the client-go sketch below)
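With the subresource enabled, generic clients can read and write replica counts through the `/scale` endpoint without knowing the workload's full schema. A minimal client-go sketch of that interaction (a sketch only: it assumes recent client-go signatures that take a `context`, and the namespace and deployment name `my-deploy` are placeholders):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; error handling kept minimal.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// GET .../apis/apps/v1/namespaces/default/deployments/my-deploy/scale
	// returns an autoscaling/v1 Scale regardless of the workload's API group,
	// which is the type this change standardizes on internally.
	scale, err := client.AppsV1().Deployments("default").
		GetScale(context.TODO(), "my-deploy", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("current replicas: %d\n", scale.Spec.Replicas)

	// Writing through the same subresource scales the workload without
	// touching the rest of its spec.
	scale.Spec.Replicas = 3
	if _, err := client.AppsV1().Deployments("default").
		UpdateScale(context.TODO(), "my-deploy", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```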
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix influxdb e2e test failure.
In scalability testing, influxdb was recently disabled, but we still try to execute the corresponding test, so it fails every time.
Skip the test if influxdb is disabled.
Fixes #54636
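The guard follows the usual Ginkgo pattern of skipping in `BeforeEach` when a required addon is absent. A self-contained sketch of that pattern, assuming Ginkgo v1 (the environment-variable check is a hypothetical stand-in for the real cluster-configuration check):

```go
package monitoring_test

import (
	"os"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestMonitoring(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Monitoring Suite")
}

var _ = Describe("Monitoring", func() {
	BeforeEach(func() {
		// Hypothetical guard: the real test checks which monitoring sinks the
		// cluster was brought up with; an env var stands in for that here.
		if os.Getenv("KUBE_ENABLE_CLUSTER_MONITORING") != "influxdb" {
			Skip("influxdb monitoring is disabled, skipping test")
		}
	})

	It("should query metrics from influxdb", func() {
		// ...the original influxdb assertions run only when not skipped...
	})
})
```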
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler
This PR adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler (one case is sketched after the list):
- shouldn't scale up when expendable pod is created
- should scale up when non expendable pod is created
- shouldn't scale up when expendable pod is preempted
- should scale down when expendable pod is running
- shouldn't scale down when non expendable pod is running
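Each case has the same shape: create a pod above or below the autoscaler's expendable-pods priority cutoff, then assert that the node count does or does not change. A hedged sketch of the first case; the helpers and suite variables used here (`createPriorityClass`, `createPendingPodWithPriority`, `f`, `nodeCount`, `scaleUpTimeout`) are illustrative stand-ins, not the PR's exact identifiers:

```go
It("shouldn't scale up when expendable pod is created", func() {
	// A priority below --expendable-pods-priority-cutoff marks the pod
	// as expendable, so it must not trigger a scale-up.
	createPriorityClass(f, "expendable", -10)              // hypothetical helper
	createPendingPodWithPriority(f, "pod", "expendable")   // hypothetical helper

	// Give the autoscaler a full scan window, then verify the node count
	// stayed flat despite the pending pod.
	time.Sleep(scaleUpTimeout)
	nodes, err := f.ClientSet.CoreV1().Nodes().List(metav1.ListOptions{})
	Expect(err).NotTo(HaveOccurred())
	Expect(len(nodes.Items)).To(Equal(nodeCount))
})
```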
Automatic merge from submit-queue (batch tested with PRs 55265, 54092, 55353, 53733, 55385). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
E2E Performance test to print latency numbers for vsphere volume lifecycle operations
**What this PR does / why we need it**:
This PR introduces a test that prints latency numbers for volume lifecycle operations; a minimal timing sketch follows the list of steps below.
The operations that are evaluated are:
1. Create n number of PVCs
2. Create pods with these PVCs and ensure pods are in ready state
3. Delete pods
4. Delete the PVCs
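The reported latencies are plain wall-clock timings around each of these phases, averaged over the iterations. A minimal stand-alone sketch of that measurement pattern (the stub below stands in for the real e2e framework calls):

```go
package main

import (
	"fmt"
	"time"
)

// measure times a single phase of the volume lifecycle and reports it in
// microseconds, matching the units printed in the test logs below.
func measure(name string, op func() error) (time.Duration, error) {
	start := time.Now()
	err := op()
	elapsed := time.Since(start)
	fmt.Printf("%s: %d microseconds\n", name, elapsed.Microseconds())
	return elapsed, err
}

func main() {
	// Stub phase; the real test runs PVC creation and waits for Bound here.
	createPVCs := func() error {
		time.Sleep(25 * time.Millisecond)
		return nil
	}
	if _, err := measure("Creating 12 PVCs and waiting for bound phase", createPVCs); err != nil {
		panic(err)
	}
}
```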
**Which issue this PR fixes** : fixes vmware#292
**Special notes for your reviewer**:
1. This PR has some duplicate code changes with existing open PRs that add e2e tests. If those PRs are merged first, I'll rebase this PR to avoid redundant changes.
2. Following are the test logs with a total of 12 volumes, 4 volumes per pod, and 3 test iterations.
<details>
Test logs:
```
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build041717622/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:11:29 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:11:29 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:11:29 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:11:29 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:11:29 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:11:29 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:11:29 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 280.313212ms
2017/10/16 15:11:29 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:11:30 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 156.135002ms
2017/10/16 15:11:30 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:11:30.867: INFO: Overriding default scale value of zero to 1
Oct 16 15:11:30.867: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:11:30.981146 6068 e2e.go:383] Starting e2e run "f687717b-b2be-11e7-b207-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508191890 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:11:31.007: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:11:31.018: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:11:31.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:11:31.155: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:11:31.155: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:11:31.163: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:11:31.163: INFO: Dumping network health container logs from all nodes...
Oct 16 15:11:31.177: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:11:31.181: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:11:31.183: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:11:31.708: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5rrtp to have phase Bound
Oct 16 15:11:31.718: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:33.730: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:35.737: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:37.747: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:39.753: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:41.763: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:43.774: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:45.814: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:47.839: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:49.852: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:51.869: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:53.877: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:55.888: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:57.896: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:59.904: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:01.916: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:03.941: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:05.947: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:07.957: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:09.985: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:12.002: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:14.009: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:16.017: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:18.026: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:20.034: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:22.096: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:24.116: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:26.124: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:28.134: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:30.147: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:32.153: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:34.162: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:36.177: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:38.185: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:40.193: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:42.203: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:44.210: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:46.217: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:48.227: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:50.236: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:52.242: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:54.258: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:56.268: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:58.290: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:00.304: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:02.321: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:04.330: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:06.338: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:08.345: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:10.351: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:12.367: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:14.384: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:16.394: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:18.410: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:20.421: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:22.430: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:24.439: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:26.448: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:28.465: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:30.473: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:32.482: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:34.490: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:36.500: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:38.510: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:40.517: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:42.527: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
^C2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 2m13.976765704s
2017/10/16 15:13:43 main.go:260: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance: signal: killed]
2017/10/16 15:13:43 e2e.go:79: err: exit status 1
exit status 1
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$ make
+++ [1016 15:14:25] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [1016 15:14:25] Generating bindata:
test/e2e/generated/gobindata_util.go
~/k8s/kubernetes_2 ~/k8s/kubernetes_2/test/e2e/generated
~/k8s/kubernetes_2/test/e2e/generated
+++ [1016 15:14:26] Building go targets for darwin/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/cloud-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/hyperkube
vendor/k8s.io/kube-aggregator
vendor/k8s.io/apiextensions-apiserver
plugin/cmd/kube-scheduler
cmd/kubectl
federation/cmd/kubefed
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
federation/cmd/genfeddocs
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
cmd/gke-certificates-controller
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build763038738/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:16:03 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:16:03 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:16:03 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:16:03 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:16:03 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:16:03 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:16:03 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 163.149145ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:16:03 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 168.158343ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:16:04.325: INFO: Overriding default scale value of zero to 1
Oct 16 15:16:04.325: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:16:04.425919 8714 e2e.go:383] Starting e2e run "9984ec93-b2bf-11e7-810d-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508192163 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:16:04.443: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:16:04.453: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:16:04.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:16:04.598: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:16:04.598: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:16:04.607: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:16:04.607: INFO: Dumping network health container logs from all nodes...
Oct 16 15:16:04.626: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:16:04.631: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:16:04.632: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:16:05.313: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-l9tg4 to have phase Bound
Oct 16 15:16:05.359: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:07.381: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:09.389: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:11.404: INFO: PersistentVolumeClaim pvc-l9tg4 found and phase=Bound (6.090428509s)
Oct 16 15:16:11.462: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-j9m85 to have phase Bound
Oct 16 15:16:11.476: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:13.489: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:15.502: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:17.509: INFO: PersistentVolumeClaim pvc-j9m85 found and phase=Bound (6.046381507s)
Oct 16 15:16:17.543: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mc77p to have phase Bound
Oct 16 15:16:17.558: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:19.592: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:21.598: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:23.609: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:25.618: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:27.655: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:29.699: INFO: PersistentVolumeClaim pvc-mc77p found and phase=Bound (12.155659079s)
Oct 16 15:16:29.801: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2j86v to have phase Bound
Oct 16 15:16:29.815: INFO: PersistentVolumeClaim pvc-2j86v found and phase=Bound (14.767532ms)
Oct 16 15:16:29.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-q7rsq to have phase Bound
Oct 16 15:16:29.882: INFO: PersistentVolumeClaim pvc-q7rsq found but phase is Pending instead of Bound.
Oct 16 15:16:31.896: INFO: PersistentVolumeClaim pvc-q7rsq found and phase=Bound (2.048751822s)
Oct 16 15:16:31.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qsh8l to have phase Bound
Oct 16 15:16:31.943: INFO: PersistentVolumeClaim pvc-qsh8l found and phase=Bound (14.944175ms)
Oct 16 15:16:31.975: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-52pcj to have phase Bound
Oct 16 15:16:31.993: INFO: PersistentVolumeClaim pvc-52pcj found and phase=Bound (17.704673ms)
Oct 16 15:16:32.021: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-v5x89 to have phase Bound
Oct 16 15:16:32.043: INFO: PersistentVolumeClaim pvc-v5x89 found and phase=Bound (21.44398ms)
Oct 16 15:16:32.073: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-f9pnm to have phase Bound
Oct 16 15:16:32.096: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:34.163: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:36.174: INFO: PersistentVolumeClaim pvc-f9pnm found and phase=Bound (4.100911147s)
Oct 16 15:16:36.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-m5fqt to have phase Bound
Oct 16 15:16:36.239: INFO: PersistentVolumeClaim pvc-m5fqt found and phase=Bound (14.819033ms)
Oct 16 15:16:36.284: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mbsvx to have phase Bound
Oct 16 15:16:36.302: INFO: PersistentVolumeClaim pvc-mbsvx found and phase=Bound (18.02845ms)
Oct 16 15:16:36.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-s4sr2 to have phase Bound
Oct 16 15:16:36.352: INFO: PersistentVolumeClaim pvc-s4sr2 found and phase=Bound (17.921955ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:17:57.069: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:57.397: INFO: stderr: ""
Oct 16 15:17:57.397: INFO: stdout: ""
Oct 16 15:17:57.527: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:57.836: INFO: stderr: ""
Oct 16 15:17:57.836: INFO: stdout: ""
Oct 16 15:17:57.981: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:58.290: INFO: stderr: ""
Oct 16 15:17:58.290: INFO: stdout: ""
Oct 16 15:17:58.421: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:58.755: INFO: stderr: ""
Oct 16 15:17:58.755: INFO: stdout: ""
Oct 16 15:17:58.884: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:59.188: INFO: stderr: ""
Oct 16 15:17:59.188: INFO: stdout: ""
Oct 16 15:17:59.287: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:59.602: INFO: stderr: ""
Oct 16 15:17:59.602: INFO: stdout: ""
Oct 16 15:17:59.721: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:00.101: INFO: stderr: ""
Oct 16 15:18:00.101: INFO: stdout: ""
Oct 16 15:18:00.265: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:00.611: INFO: stderr: ""
Oct 16 15:18:00.611: INFO: stdout: ""
Oct 16 15:18:00.720: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:01.092: INFO: stderr: ""
Oct 16 15:18:01.092: INFO: stdout: ""
Oct 16 15:18:01.212: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:01.589: INFO: stderr: ""
Oct 16 15:18:01.589: INFO: stdout: ""
Oct 16 15:18:01.694: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:02.023: INFO: stderr: ""
Oct 16 15:18:02.023: INFO: stdout: ""
Oct 16 15:18:02.502: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:02.805: INFO: stderr: ""
Oct 16 15:18:02.805: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:18:02.807: INFO: Deleting pod "pvc-tester-hrfpv" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:02.842: INFO: Wait up to 5m0s for pod "pvc-tester-hrfpv" to be fully deleted
Oct 16 15:18:42.875: INFO: Deleting pod "pvc-tester-vkgvj" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:42.913: INFO: Wait up to 5m0s for pod "pvc-tester-vkgvj" to be fully deleted
Oct 16 15:19:24.937: INFO: Deleting pod "pvc-tester-wvnrg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:24.971: INFO: Wait up to 5m0s for pod "pvc-tester-wvnrg" to be fully deleted
Oct 16 15:19:56.990: INFO: Deleting pod "pvc-tester-vdb6s" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:57.025: INFO: Wait up to 5m0s for pod "pvc-tester-vdb6s" to be fully deleted
Oct 16 15:20:41.866: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a1d277f-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a21e539-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a287a26-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99f9f244-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fe7a20-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fff232-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a033865-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0813e3-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0a963e-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0f575d-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a12e997-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a17cfa2-b2bf-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:20:41.872: INFO: Deleting PersistentVolumeClaim "pvc-l9tg4"
Oct 16 15:20:41.919: INFO: Deleting PersistentVolumeClaim "pvc-j9m85"
Oct 16 15:20:41.975: INFO: Deleting PersistentVolumeClaim "pvc-mc77p"
Oct 16 15:20:42.027: INFO: Deleting PersistentVolumeClaim "pvc-2j86v"
Oct 16 15:20:42.082: INFO: Deleting PersistentVolumeClaim "pvc-q7rsq"
Oct 16 15:20:42.147: INFO: Deleting PersistentVolumeClaim "pvc-qsh8l"
Oct 16 15:20:42.224: INFO: Deleting PersistentVolumeClaim "pvc-52pcj"
Oct 16 15:20:42.259: INFO: Deleting PersistentVolumeClaim "pvc-v5x89"
Oct 16 15:20:42.316: INFO: Deleting PersistentVolumeClaim "pvc-f9pnm"
Oct 16 15:20:42.369: INFO: Deleting PersistentVolumeClaim "pvc-m5fqt"
Oct 16 15:20:42.409: INFO: Deleting PersistentVolumeClaim "pvc-mbsvx"
Oct 16 15:20:42.448: INFO: Deleting PersistentVolumeClaim "pvc-s4sr2"
STEP: Creating 12 PVCs
Oct 16 15:20:42.807: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85px8 to have phase Bound
Oct 16 15:20:42.832: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:44.845: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:46.943: INFO: PersistentVolumeClaim pvc-85px8 found and phase=Bound (4.13527333s)
Oct 16 15:20:47.032: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-npbn8 to have phase Bound
Oct 16 15:20:47.048: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:49.086: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:51.097: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:53.108: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:55.128: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:57.148: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:59.160: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:01.172: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:03.185: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:05.194: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:07.223: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:09.239: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:11.261: INFO: PersistentVolumeClaim pvc-npbn8 found and phase=Bound (24.228554172s)
Oct 16 15:21:11.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ts6b8 to have phase Bound
Oct 16 15:21:11.298: INFO: PersistentVolumeClaim pvc-ts6b8 found and phase=Bound (12.795195ms)
Oct 16 15:21:11.325: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-hqb5d to have phase Bound
Oct 16 15:21:11.336: INFO: PersistentVolumeClaim pvc-hqb5d found and phase=Bound (11.085933ms)
Oct 16 15:21:11.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pzlmw to have phase Bound
Oct 16 15:21:11.374: INFO: PersistentVolumeClaim pvc-pzlmw found and phase=Bound (14.757981ms)
Oct 16 15:21:11.400: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4mljw to have phase Bound
Oct 16 15:21:11.426: INFO: PersistentVolumeClaim pvc-4mljw found and phase=Bound (25.6641ms)
Oct 16 15:21:11.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mz5br to have phase Bound
Oct 16 15:21:11.462: INFO: PersistentVolumeClaim pvc-mz5br found and phase=Bound (11.515099ms)
Oct 16 15:21:11.492: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-7fk8x to have phase Bound
Oct 16 15:21:11.505: INFO: PersistentVolumeClaim pvc-7fk8x found and phase=Bound (13.387584ms)
Oct 16 15:21:11.530: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-cb2dp to have phase Bound
Oct 16 15:21:11.550: INFO: PersistentVolumeClaim pvc-cb2dp found and phase=Bound (19.152805ms)
Oct 16 15:21:11.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85sqf to have phase Bound
Oct 16 15:21:11.599: INFO: PersistentVolumeClaim pvc-85sqf found and phase=Bound (14.406407ms)
Oct 16 15:21:11.632: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8zdmg to have phase Bound
Oct 16 15:21:11.651: INFO: PersistentVolumeClaim pvc-8zdmg found and phase=Bound (18.063182ms)
Oct 16 15:21:11.683: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-nntqr to have phase Bound
Oct 16 15:21:11.694: INFO: PersistentVolumeClaim pvc-nntqr found and phase=Bound (10.97945ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:23:16.187: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:16.646: INFO: stderr: ""
Oct 16 15:23:16.646: INFO: stdout: ""
Oct 16 15:23:16.755: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:17.090: INFO: stderr: ""
Oct 16 15:23:17.090: INFO: stdout: ""
Oct 16 15:23:17.184: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:17.509: INFO: stderr: ""
Oct 16 15:23:17.510: INFO: stdout: ""
Oct 16 15:23:17.606: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:17.910: INFO: stderr: ""
Oct 16 15:23:17.910: INFO: stdout: ""
Oct 16 15:23:18.007: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:18.324: INFO: stderr: ""
Oct 16 15:23:18.324: INFO: stdout: ""
Oct 16 15:23:18.417: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:18.718: INFO: stderr: ""
Oct 16 15:23:18.719: INFO: stdout: ""
Oct 16 15:23:18.818: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:19.137: INFO: stderr: ""
Oct 16 15:23:19.137: INFO: stdout: ""
Oct 16 15:23:19.244: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:19.556: INFO: stderr: ""
Oct 16 15:23:19.556: INFO: stdout: ""
Oct 16 15:23:19.638: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:19.961: INFO: stderr: ""
Oct 16 15:23:19.961: INFO: stdout: ""
Oct 16 15:23:20.060: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:20.365: INFO: stderr: ""
Oct 16 15:23:20.365: INFO: stdout: ""
Oct 16 15:23:20.464: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:20.837: INFO: stderr: ""
Oct 16 15:23:20.838: INFO: stdout: ""
Oct 16 15:23:20.948: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:21.258: INFO: stderr: ""
Oct 16 15:23:21.258: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:23:21.258: INFO: Deleting pod "pvc-tester-dpsht" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:23:21.299: INFO: Wait up to 5m0s for pod "pvc-tester-dpsht" to be fully deleted
Oct 16 15:24:03.361: INFO: Deleting pod "pvc-tester-kt8wp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:03.397: INFO: Wait up to 5m0s for pod "pvc-tester-kt8wp" to be fully deleted
Oct 16 15:24:45.415: INFO: Deleting pod "pvc-tester-lckz2" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:45.452: INFO: Wait up to 5m0s for pod "pvc-tester-lckz2" to be fully deleted
Oct 16 15:25:23.476: INFO: Deleting pod "pvc-tester-vrjxc" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:25:23.510: INFO: Wait up to 5m0s for pod "pvc-tester-vrjxc" to be fully deleted
Oct 16 15:26:07.784: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f7e96b8-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f825cec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8627c5-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f89ca32-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8cd95e-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f900995-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6a76ec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6d2d17-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6f2a1a-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f72bfae-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f760aab-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f791671-b2c0-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:26:07.784: INFO: Deleting PersistentVolumeClaim "pvc-85px8"
Oct 16 15:26:07.854: INFO: Deleting PersistentVolumeClaim "pvc-npbn8"
Oct 16 15:26:07.900: INFO: Deleting PersistentVolumeClaim "pvc-ts6b8"
Oct 16 15:26:07.954: INFO: Deleting PersistentVolumeClaim "pvc-hqb5d"
Oct 16 15:26:08.003: INFO: Deleting PersistentVolumeClaim "pvc-pzlmw"
Oct 16 15:26:08.044: INFO: Deleting PersistentVolumeClaim "pvc-4mljw"
Oct 16 15:26:08.090: INFO: Deleting PersistentVolumeClaim "pvc-mz5br"
Oct 16 15:26:08.130: INFO: Deleting PersistentVolumeClaim "pvc-7fk8x"
Oct 16 15:26:08.183: INFO: Deleting PersistentVolumeClaim "pvc-cb2dp"
Oct 16 15:26:08.230: INFO: Deleting PersistentVolumeClaim "pvc-85sqf"
Oct 16 15:26:08.282: INFO: Deleting PersistentVolumeClaim "pvc-8zdmg"
Oct 16 15:26:08.337: INFO: Deleting PersistentVolumeClaim "pvc-nntqr"
STEP: Creating 12 PVCs
Oct 16 15:26:08.691: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwmql to have phase Bound
Oct 16 15:26:08.716: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:10.732: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:12.754: INFO: PersistentVolumeClaim pvc-jwmql found and phase=Bound (4.062803231s)
Oct 16 15:26:12.789: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jhrg7 to have phase Bound
Oct 16 15:26:12.801: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:14.817: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:16.834: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:18.854: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:20.871: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:22.888: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:24.901: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:26.918: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:28.929: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:30.941: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:32.958: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:34.976: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:37.013: INFO: PersistentVolumeClaim pvc-jhrg7 found and phase=Bound (24.222741938s)
Oct 16 15:26:37.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-lvvkl to have phase Bound
Oct 16 15:26:37.055: INFO: PersistentVolumeClaim pvc-lvvkl found and phase=Bound (12.935683ms)
Oct 16 15:26:37.078: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-bgkkc to have phase Bound
Oct 16 15:26:37.088: INFO: PersistentVolumeClaim pvc-bgkkc found and phase=Bound (9.861689ms)
Oct 16 15:26:37.109: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qt2lv to have phase Bound
Oct 16 15:26:37.126: INFO: PersistentVolumeClaim pvc-qt2lv found and phase=Bound (17.393667ms)
Oct 16 15:26:37.147: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pgs9s to have phase Bound
Oct 16 15:26:37.158: INFO: PersistentVolumeClaim pvc-pgs9s found but phase is Pending instead of Bound.
Oct 16 15:26:39.171: INFO: PersistentVolumeClaim pvc-pgs9s found and phase=Bound (2.023756794s)
Oct 16 15:26:39.217: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8h942 to have phase Bound
Oct 16 15:26:39.249: INFO: PersistentVolumeClaim pvc-8h942 found and phase=Bound (32.347782ms)
Oct 16 15:26:39.282: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-phtvg to have phase Bound
Oct 16 15:26:39.296: INFO: PersistentVolumeClaim pvc-phtvg found and phase=Bound (13.940285ms)
Oct 16 15:26:39.321: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ldv2f to have phase Bound
Oct 16 15:26:39.333: INFO: PersistentVolumeClaim pvc-ldv2f found and phase=Bound (11.888903ms)
Oct 16 15:26:39.360: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4v9hf to have phase Bound
Oct 16 15:26:39.375: INFO: PersistentVolumeClaim pvc-4v9hf found and phase=Bound (14.230796ms)
Oct 16 15:26:39.403: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jkfg5 to have phase Bound
Oct 16 15:26:39.419: INFO: PersistentVolumeClaim pvc-jkfg5 found and phase=Bound (15.47811ms)
Oct 16 15:26:39.449: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-87dwp to have phase Bound
Oct 16 15:26:39.463: INFO: PersistentVolumeClaim pvc-87dwp found and phase=Bound (13.680898ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:28:08.033: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:08.507: INFO: stderr: ""
Oct 16 15:28:08.507: INFO: stdout: ""
Oct 16 15:28:08.609: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:08.917: INFO: stderr: ""
Oct 16 15:28:08.917: INFO: stdout: ""
Oct 16 15:28:09.019: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:09.342: INFO: stderr: ""
Oct 16 15:28:09.342: INFO: stdout: ""
Oct 16 15:28:09.432: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:09.760: INFO: stderr: ""
Oct 16 15:28:09.760: INFO: stdout: ""
Oct 16 15:28:09.847: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:10.164: INFO: stderr: ""
Oct 16 15:28:10.164: INFO: stdout: ""
Oct 16 15:28:10.259: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:10.576: INFO: stderr: ""
Oct 16 15:28:10.576: INFO: stdout: ""
Oct 16 15:28:10.681: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:11.000: INFO: stderr: ""
Oct 16 15:28:11.000: INFO: stdout: ""
Oct 16 15:28:11.086: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:11.383: INFO: stderr: ""
Oct 16 15:28:11.383: INFO: stdout: ""
Oct 16 15:28:11.486: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:11.782: INFO: stderr: ""
Oct 16 15:28:11.782: INFO: stdout: ""
Oct 16 15:28:11.888: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:12.207: INFO: stderr: ""
Oct 16 15:28:12.207: INFO: stdout: ""
Oct 16 15:28:12.315: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:12.634: INFO: stderr: ""
Oct 16 15:28:12.634: INFO: stdout: ""
Oct 16 15:28:12.778: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:13.113: INFO: stderr: ""
Oct 16 15:28:13.113: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:28:13.113: INFO: Deleting pod "pvc-tester-n68rp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:13.157: INFO: Wait up to 5m0s for pod "pvc-tester-n68rp" to be fully deleted
Oct 16 15:28:53.195: INFO: Deleting pod "pvc-tester-qm7w8" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:53.224: INFO: Wait up to 5m0s for pod "pvc-tester-qm7w8" to be fully deleted
Oct 16 15:29:35.246: INFO: Deleting pod "pvc-tester-jslwg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:29:35.279: INFO: Wait up to 5m0s for pod "pvc-tester-jslwg" to be fully deleted
Oct 16 15:30:07.312: INFO: Deleting pod "pvc-tester-mcqqq" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:30:07.357: INFO: Wait up to 5m0s for pod "pvc-tester-mcqqq" to be fully deleted
Oct 16 15:31:03.595: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01aaa147-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ae1953-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b03dec-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b2ea3b-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b76412-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b8de3d-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01bd6a83-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c1b249-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c53dd9-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c941ba-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01caec5e-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ce2be9-b2c1-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:31:03.595: INFO: Deleting PersistentVolumeClaim "pvc-jwmql"
Oct 16 15:31:03.641: INFO: Deleting PersistentVolumeClaim "pvc-jhrg7"
Oct 16 15:31:03.681: INFO: Deleting PersistentVolumeClaim "pvc-lvvkl"
Oct 16 15:31:03.724: INFO: Deleting PersistentVolumeClaim "pvc-bgkkc"
Oct 16 15:31:03.771: INFO: Deleting PersistentVolumeClaim "pvc-qt2lv"
Oct 16 15:31:03.833: INFO: Deleting PersistentVolumeClaim "pvc-pgs9s"
Oct 16 15:31:03.887: INFO: Deleting PersistentVolumeClaim "pvc-8h942"
Oct 16 15:31:04.047: INFO: Deleting PersistentVolumeClaim "pvc-phtvg"
Oct 16 15:31:04.089: INFO: Deleting PersistentVolumeClaim "pvc-ldv2f"
Oct 16 15:31:04.153: INFO: Deleting PersistentVolumeClaim "pvc-4v9hf"
Oct 16 15:31:04.211: INFO: Deleting PersistentVolumeClaim "pvc-jkfg5"
Oct 16 15:31:04.263: INFO: Deleting PersistentVolumeClaim "pvc-87dwp"
Oct 16 15:31:04.317: INFO: Average latency for below operations
Oct 16 15:31:04.317: INFO: Creating 12 PVCs and waiting for bound phase: 30576919 microseconds
Oct 16 15:31:04.317: INFO: Creating 4 Pod: 97668230 microseconds
Oct 16 15:31:04.317: INFO: Deleting 4 Pod and waiting for disk to be detached: 154930158 microseconds
Oct 16 15:31:04.317: INFO: Deleting 12 PVCs: 660074 microseconds
[AfterEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:31:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vcp-performance-lfrbk" for this suite.
Oct 16 15:31:19.156: INFO: namespace: e2e-tests-vcp-performance-lfrbk, resource: bindings, ignored listing per whitelist
Oct 16 15:31:19.297: INFO: namespace e2e-tests-vcp-performance-lfrbk deletion completed in 14.690943637s
• [SLOW TEST:914.654 seconds]
[sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:31:19.305: INFO: Running AfterSuite actions on all node
Oct 16 15:31:19.305: INFO: Running AfterSuite actions on node 1
Ran 1 of 706 Specs in 914.851 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS
Ginkgo ran 1 suite in 15m15.380170791s
Test Suite Passed
2017/10/16 15:31:19 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 15m15.901302911s
2017/10/16 15:31:19 e2e.go:81: Done
```
</details>
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Deduplicate RC/RS controller code.
The code was already 99% similar between RC and RS. This is a wild idea to try to deduplicate the two controllers in a type-safe manner without adding tons of boilerplate, and without using code generation.
They are still separate resources and separate worker pools. This is a refactor that isn't intended to change any behavior.
```release-note
ReplicationController now shares its underlying controller implementation with ReplicaSet to reduce the maintenance burden going forward. However, they are still separate resources and there should be no externally visible effects from this change.
```
ref #49429
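A hedged Go sketch of the idea (illustrative types, not the PR's actual code): hide the two concrete types behind a small adapter interface so the sync logic is written once, while each controller keeps its own resource and worker pool.
```go
package main

import "fmt"

// replicaWorkload is the least common denominator the shared sync loop needs.
type replicaWorkload interface {
	Name() string
	Replicas() int32
}

type replicaSet struct {
	name     string
	replicas int32
}

type replicationController struct {
	name     string
	replicas int32
}

func (r replicaSet) Name() string               { return r.name }
func (r replicaSet) Replicas() int32            { return r.replicas }
func (c replicationController) Name() string    { return c.name }
func (c replicationController) Replicas() int32 { return c.replicas }

// syncReplicas is written once and shared; behavior is unchanged because
// each resource still flows through its own informer and queue.
func syncReplicas(w replicaWorkload) {
	fmt.Printf("syncing %s to %d replicas\n", w.Name(), w.Replicas())
}

func main() {
	syncReplicas(replicaSet{"rs-frontend", 3})
	syncReplicas(replicationController{"rc-frontend", 3})
}
```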
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
don't check in mounter binary
```release-note
GCI mounter is moved from the manifests tarball to the server tarball.
```
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add ephemeral storage e2e tests
Add e2e tests of limitrange/quota/downward_api for local ephemeral storage
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463
**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
**Release note**:
```release-note
Add limitrange/resourcequota/downward_api e2e tests for local ephemeral storage
```
/assign @jingxu97
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E scale test for vSphere Cloud Provider Volume lifecycle operations
This PR adds an E2E test for vSphere Cloud Provider which will create/attach/detach/delete volumes at scale, with multiple threads, based on user-configurable values for the number of volumes, volumes per pod, and number of threads. (Since this is a scale test, the number of threads is low; it is only used to speed up the operation.)
Test performs following tasks.
1. Create Storage Classes of 4 Categories (Default, SC with Non Default Datastore, SC with SPBM Policy, SC with VSAN Storage Capabilities).
2. Read VCP_SCALE_VOLUME_COUNT from System Environment.
3. Launch VCP_SCALE_INSTANCES goroutines for creating VCP_SCALE_VOLUME_COUNT volumes. Each goroutine is responsible for creating/attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes.
4. Read VCP_SCALE_VOLUMES_PER_POD from System Environment. Each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the goroutines are completed, we delete all the pods and volumes (see the sketch below).
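A minimal Go sketch of the fan-out in steps 2–3; the environment-variable names come from the description above, everything else is an illustrative assumption:
```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

// envInt reads an integer from the environment, with a default.
func envInt(key string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(key)); err == nil {
		return v
	}
	return def
}

func main() {
	volumes := envInt("VCP_SCALE_VOLUME_COUNT", 12)
	instances := envInt("VCP_SCALE_INSTANCES", 4)

	var wg sync.WaitGroup
	perRoutine := volumes / instances
	for i := 0; i < instances; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for j := 0; j < perRoutine; j++ {
				// Placeholder for the create/attach of one volume.
				fmt.Printf("worker %d: creating volume %d of %d\n", worker, j+1, perRoutine)
			}
		}(i)
	}
	wg.Wait() // once all goroutines finish, pods and volumes are deleted
}
```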
Which issue this PR fixes
fixes vmware#291
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add shyamjvs to test/OWNERS
I've been reviewing quite a few PRs recently and have reviewed many in the past. In addition, I have >80 commits in this code path (git log test | grep "shyamjvs@google.com"), touching various parts including e2e/framework, utils, perftype, kubemark, and e2e fixes from other SIGs (mostly in regard to scalability).
/cc @gmarek @spiffxp @krzyzacy @kubernetes/sig-testing-misc
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test to verify volume attach status after master kubelet restart
**What this PR does / why we need it**:
This PR adds test to verify volume remains attached after the kubelet is restarted on master node.
**Which issue this PR fixes** :
fixes vmware#274
**Special notes for your reviewer**:
This test does not run as part of the existing sig-storage test grid. It has been tested internally at VMware.
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57: The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58: The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022 29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs
Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command: curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout: ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s
• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1
Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **Deprecated** group clients are still used, replace them with versioned group clients.
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54493, 52501, 55172, 54780, 54819). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add integration test for deployment rolling update, rollback, rollover
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
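A minimal Go sketch of the tolerate-partial-discovery pattern described above, assuming a client-go discovery client; the function name and logging are illustrative, not the controller's exact code:
```go
package gc

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// deletableResources returns whatever discovery succeeded for. Per-group
// failures are logged and tolerated so the periodic resync can retry them,
// instead of crashing the controller manager.
func deletableResources(d discovery.DiscoveryInterface) []*metav1.APIResourceList {
	resources, err := d.ServerPreferredResources()
	if err != nil {
		if discovery.IsGroupDiscoveryFailedError(err) {
			// Partial failure: resources still holds the groups that resolved.
			log.Printf("failed to discover some groups: %v", err)
		} else {
			log.Printf("failed to discover preferred resources: %v", err)
			return nil
		}
	}
	return resources
}
```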
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Removes 'rwx' permissions for global users
- the tests make an assumption that the permissions on the /tmp dir have not
been altered
Signed-off-by: Brenda Chan <brchan@pivotal.io>
**What this PR does / why we need it**:
This PR modifies a conformance test that checks the file permissions when the `/tmp` dir is mounted.
The current tests make an assumption that the permissions on the `/tmp` dir on the host system have not been altered. We removed the check that global users need `rwx`, so the tests now only check for `dtrwxrwx`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: N/A
**Special notes for your reviewer**: N/A
**Release note**:
```release-note
NONE
```
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor kube-scheduler config API, command, and server setup
Refactor the kube-scheduler configuration API, command setup, and server setup according to the guidelines established in #32215 and using the kube-proxy refactor (#34727) as a model of a well factored component adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing modality derived from bools and string emptiness checks with an explicit AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
Fixes https://github.com/kubernetes/kubernetes/issues/52428.
@kubernetes/api-reviewers
@kubernetes/sig-cluster-lifecycle-pr-reviews
@kubernetes/sig-scheduling-pr-reviews
/cc @ncdc @timothysc @bsalamat
```release-note
The kube-scheduler command now supports a `--config` flag which is the location of a file containing a serialized scheduler configuration. Most other kube-scheduler flags are now deprecated.
```
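To make the first bullet concrete, here is a hedged Go sketch of an explicit algorithm-source hierarchy (field names modeled on the description, not necessarily the merged API): exactly one pointer is set, so there is no bool-plus-empty-string guessing.
```go
package config

// SchedulerAlgorithmSource names exactly one source for the scheduling
// algorithm; setting more than one (or none) is a validation error.
type SchedulerAlgorithmSource struct {
	Policy   *SchedulerPolicySource // load a scheduler policy
	Provider *string                // use a named algorithm provider
}

// SchedulerPolicySource in turn names exactly one place to load policy from.
type SchedulerPolicySource struct {
	File      *SchedulerPolicyFileSource
	ConfigMap *SchedulerPolicyConfigMapSource
}

type SchedulerPolicyFileSource struct {
	Path string
}

type SchedulerPolicyConfigMapSource struct {
	Namespace string
	Name      string
}
```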
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
This PR adds e2e tests for Node Autoprovisioning: …
- should create new node if there is no node for node selector
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
delete if-else branch
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The if-else branch is redundant.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate pod relevant e2e tests to sig node
**What this PR does / why we need it**:
Migrate pod relevant e2e tests to sig-node.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Refactor the kube-scheduler configuration API, command setup, and server
setup according to the guidelines established in #32215 and using the
kube-proxy refactor (#34727) as a model of a well factored component
adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing
modality derived from bools and string emptiness checks with an explicit
AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
Fixes #52428.
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove unused constant
**What this PR does / why we need it**:
This constant is never used, so we can remove it safely.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
Adds e2e tests for Node Autoprovisioning:
- shouldn't add new node group if not needed
- shouldn't scale up if cores limit too low, should scale up after limit is changed
Automatic merge from submit-queue (batch tested with PRs 55114, 52976, 54871, 55122, 55140). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't share nodePort service in session affinity tests
**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/54524, https://github.com/kubernetes/kubernetes/issues/54571.
Spent some time digging into it today; found this test is flaky mostly because it sends out service requests before kube-proxy reacts to the service session affinity update, hence multiple endpoints are responding instead of one. It is more flaky in alpha CIs, probably due to different test sequences.
This PR creates a separate service with `sessionAffinity=ClientIP`, as sketched below, so there wouldn't be a race between the test starting and kube-proxy reacting. On the other hand, it also seems inappropriate to tweak the `config.NodePortService`, which is shared by other networking tests.
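A hedged Go sketch of the dedicated service (name and selector are illustrative): its own NodePort service carries `sessionAffinity=ClientIP`, leaving the shared one untouched.
```go
package network

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityService builds an unshared NodePort service for the affinity test.
func affinityService() *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,
			SessionAffinity: v1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": "affinity"},
			Ports:           []v1.ServicePort{{Port: 80}},
		},
	}
}
```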
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # (will mark them fixed later).
**Special notes for your reviewer**:
/assign @m1093782566 @bowei
cc @spiffxp
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
e2e-node: the value of bestEffortCgroup is wrong
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The value of bestEffortCgroup is wrong in e2e-node; the test case is actually invalid.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs
The HPA e2es failed to actually set `apiVersion` on the created HPAs, which previously was ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).
Additionally, the `apps` resources have their own version of scale. Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.
Together, these broke some of the HPA e2es.
Fixes#54574
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
move KubeProxyConfiguration out of componentconfig API group
**What this PR does / why we need it**:
move KubeProxyConfiguration out of componentconfig API group
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53577
**Special notes for your reviewer**:
/cc @thockin @ncdc
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55034, 55068). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clarify what each "version" means.
Some folks were getting confused by this output.
Fixes #54821
```release-note
NONE
```
/area conformance
/sig architecture
/assign @timothysc @WilliamDenniss
This commit fixes an issue where in clusters which have FQDN as the node names,
one of the scheduling predicates tests will fail because it will try and run a
pod with a name that violates DNS-1123 rules. As an example, one such pod name
could look like "filler-pod-kube-node-0.kubelet.mesos".
Signed-off-by: Paulo Pires <pjpires@gmail.com>
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Workloads V1
**What this PR does / why we need it**: This PR promotes the Deployment, ReplicaSet, DaemonSet, StatefulSet, and ControllerRevision kinds to the apps/v1 group version.
https://github.com/kubernetes/features/issues/353
**Special notes for your reviewer**:
There will be at least two followups to this PR: the first to add a scale sub-resource when the correct location is resolved, and the second to deal with Conditions in the workloads API.
While it would have been preferable to move the kinds individually providing a lesser burden on reviewers, this proved impracticable due to the intricacies of version resolution in kubectl for objects of the different kinds in the same group.
```release-note
DaemonSet, Deployment, ReplicaSet, and StatefulSet have been promoted to GA and are available in the apps/v1 group version.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add prometheus-to-sd-exporter to metadata-proxy addon; bump to v0.1.4
**What this PR does / why we need it**: Add metrics exporters to the metadata-proxy addon for GCE. Work toward #8867.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove version check in kubectl e2e test.
**What this PR does / why we need it**:
We don't need to check these versions for kubectl e2e tests in current cycle.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55053
**Special notes for your reviewer**:
/cc @liggitt
since you're also from sig-cli-maintainers :)
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51401, 54056, 54977, 55017, 55052). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
extensions: remove TPR remnants
The extensions group still had the TPR types + generated client. Having this in the codebase doesn't create any problems, but it would be good to clean it up, especially since TPR access was removed in 1.8.
**Release note**:
```release-note
NONE
```
/assign @sttts @deads2k
Automatic merge from submit-queue (batch tested with PRs 55063, 54523, 55053). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't need to check version for auth e2e test
**What this PR does / why we need it**:
In 1.9 cycle, some e2e test don't need to run against so older versions.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55050
**Special notes for your reviewer**:
/cc @tallclair @liggitt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Node autoprovisioning e2e test.
This PR adds test scenario for cluster-autoscaler in GKE for node autoprovisioning.
apps/v1betaX inadvertently contains its own variant of Scale. In
order to support scaling Deployments, ReplicaSets, etc, we need to support
these versions of Scale as well.
Previously, the HPA controller ignored APIVersion when resolving the
scale subresource for a kind, meaning if it was set incorrectly in the
HPA's scaleTargetRef, it would not matter. This was the case for
several of the HPA e2e tests.
Since the polymorphic scale client merged, APIVersion now matters. This
updates the HPA e2es to care about APIVersion, by passing kind as a full
GroupVersionKind, and not just a string.
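A small Go sketch of that change (assumed shape, not the exact test code): carry the full group/version/kind instead of a bare kind string, so the scale client can resolve the right Scale variant per group.
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Before (version-less, now insufficient): kind := "ReplicaSet"
	// After: carry group and version explicitly for the scaleTargetRef.
	gvk := schema.FromAPIVersionAndKind("apps/v1beta2", "ReplicaSet")
	fmt.Println(gvk.Group, gvk.Version, gvk.Kind) // apps v1beta2 ReplicaSet
}
```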
Automatic merge from submit-queue (batch tested with PRs 52367, 53363, 54989, 54872, 54643). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Basic GCE PodSecurityPolicy Config
**What this PR does / why we need it**:
This PR lays the foundation for enabling PodSecurityPolicy in GCE and other default deployments. The 3 commits are:
1. Add policies, roles & bindings for the default addons on GCE.
2. Enable the PSP admission controller & load the addon policies when the `ENABLE_POD_SECURITY_POLICY=true` environment variable is set.
3. Support the PodSecurityPolicy in the E2E environment & add PSP tests.
NOTES:
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/52301 for privileged capabilities~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/52849 for sane mutations~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/53479 for aggregator tests to pass~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/54175 for dedicated fluentd service~~ account
- This PR is a fork of https://github.com/kubernetes/kubernetes/pull/46064, credit to @Q-Lee
**Which issue this PR fixes**: #43538
**Release note**:
```release-note
Add support for PodSecurityPolicy on GCE: `ENABLE_POD_SECURITY_POLICY=true` enables the admission controller, and installs policies for default addons.
```
Automatic merge from submit-queue (batch tested with PRs 54787, 51940). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate network partition test to sig apps
**What this PR does / why we need it**:
Migrate network partition relevant e2e tests to sig-apps.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 54895, 54449). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix a bug checking DaemonSet pods are updated in e2e test
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #50586
**Special notes for your reviewer**: @kubernetes/sig-apps-bugs
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add sig-storage prefix for common e2e tests
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54894, 54630, 54828, 54926, 54865). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
DaemonSet e2e should wait for history creation
**What this PR does / why we need it**:
Found a potential test flake while debugging #54575. ControllerRevisions are created separately from DaemonSet pods by the controller, so we should wait for their creation in e2e (see the sketch below).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**: @kubernetes/sig-apps-bugs
**Release note**:
```release-note
NONE
```
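A hedged Go sketch of such a wait, using the context-free client-go style of that era; the label selector, timeouts, and function name are illustrative assumptions:
```go
package apps

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForHistory polls until at least one ControllerRevision matching the
// DaemonSet's selector exists, so later assertions don't race the controller.
func waitForHistory(c kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		revs, err := c.AppsV1beta1().ControllerRevisions(ns).List(metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		return len(revs.Items) > 0, nil
	})
}
```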
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update volume OWNERS to reflect active sig-storage reviewers
**What this PR does / why we need it**:
Update sig-storage reviewers to add new members and remove those that don't have as much time to review storage PRs. Approvers are unchanged.
**Special notes for your reviewer**:
For all those that have been removed, please approve. If you want to remain as a reviewer, let me know and I will add you back.
**Release note**:
NONE
Automatic merge from submit-queue (batch tested with PRs 53190, 54790, 54445, 52607, 54801). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
remove created-by annotation
**What this PR does / why we need it**:
This PR removes `CreatedByAnnotation`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50720
**Release note**:
```release-note
The `kubernetes.io/created-by` annotation is no longer added to controller-created objects. Use the `metadata.ownerReferences` item that has `controller` set to `true` to determine which controller, if any, owns an object.
```
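A short Go sketch of the replacement the release note points to, using apimachinery's `metav1.GetControllerOf` helper (the surrounding function is illustrative):
```go
package owners

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// describeOwner reports which controller, if any, owns the object, by
// reading ownerReferences instead of the removed created-by annotation.
func describeOwner(obj metav1.Object) {
	if ref := metav1.GetControllerOf(obj); ref != nil {
		fmt.Printf("controlled by %s %s\n", ref.Kind, ref.Name)
		return
	}
	fmt.Println("no controller owner")
}
```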
Automatic merge from submit-queue (batch tested with PRs 54326, 54046). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
convert testOverlappingDeployment e2e test to integration test
**What this PR does / why we need it**:
This PR convert a deployment e2e test named "testOverlappingDeployment" to integration test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54774, 54820, 52192, 54827). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Initial integration test setup for DaemonSet controller
**What this PR does / why we need it**:
This PR setup and added some initial integration tests for the DaemonSet controller. All tests included were ported from their unit test counterparts that currently use fake client, informers, reactor, etc. Future PRs will port more tests over once this PR is approved and merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52191.
**Special notes for your reviewer**:
@kow3ns
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54774, 54820, 52192, 54827). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added a test for proper `%s` handling when display last applied confi…
**What this PR does / why we need it**:
Added a missing test which checks proper handling of `%s` in arguments for the command `kubectl apply view-last-applied`.
**Which issue this PR fixes**
#54645
**Special notes for your reviewer**:
Added a test case to cover the specific issue described.
It fails on versions `v1.7.3`..`v1.7.9` and passes since `v1.8.0`.
P.S. Not sure if there is already a lower-level test which covers this case in the k8s test suite. I would recommend adding this test so the issue does not reoccur.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54572, 54686). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix service session affinity e2e failure cases
**What this PR does / why we need it**:
Fix service session affinity e2e failure cases - debugging...
**Which issue this PR fixes**:
xref #54571, #54524
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig network
Automatic merge from submit-queue (batch tested with PRs 54728, 54818). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Metadata concealment e2e
**What this PR does / why we need it**: Add e2e for metadata concealment. Ref #8867.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54761, 54748, 53991, 54485, 46951). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
admission: unify plugin constructors
It's common in Go to return the actual object from constructors, not **one interface**
it implements. This allows us to implement multiple interfaces while having only
one constructor. Instead of returning private types from constructors, we export all plugin structs, of course with private fields.
Note: super interfaces do not work if there are overlapping methods.
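A brief Go illustration of the convention (names are illustrative, not a real admission plugin):
```go
package admission

// Plugin is exported, but its fields stay private.
type Plugin struct {
	allowAll bool
}

// NewPlugin returns the concrete *Plugin rather than one interface it
// implements, so a single constructor can back several interfaces.
func NewPlugin() *Plugin { return &Plugin{allowAll: true} }

// The same value can satisfy, e.g., both a mutating and a validating
// interface without a second constructor.
func (p *Plugin) Admit(resource string) error    { return nil }
func (p *Plugin) Validate(resource string) error { return nil }
```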
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix subresource discovery and versioning
Fixes https://github.com/kubernetes/kubernetes/issues/54684
Related to https://github.com/kubernetes/kubernetes/pull/54586
Allows distinct subresource group/version/kind to be used for each version (gives us a path to move to autoscaling/v1 for apps, or policy/v1 for eviction, etc)
Added tests to ensure scale subresources have expected discovery info, and that the object returned matches discovery, and that the endpoint accepts the advertised version
```release-note
Fixes discovery information for scale subresources in the apps API group
```
Automatic merge from submit-queue (batch tested with PRs 49762, 52256). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add node e2e tests for pulling images from credential providers
**What this PR does / why we need it**:
Add node e2e tests for pulling images from credential providers.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Refer https://github.com/kubernetes/kubernetes/pull/51870#issuecomment-328234010
**Special notes for your reviewer**:
/assign @yujuhong @Random-Liu
1. We still need to add ResetDefaultDockerProviderExpiration for facilitating tests
2. Do we need a separate image for pulling a private image from a credential provider?
3. Any suggestions on also adding this for sandbox images? The pause image is a global config of the kubelet, but we only need to set a private one for just one test case.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54165, 53909). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance test test.
Add new `test/conformance` subdir, add code to generate a list of conformance tests, and add a test that verifies the list of tests.
The intent is to move management of the definition of conformance to sig-architecture.
```release-note
NONE
```
ref. #54726
Automatic merge from submit-queue (batch tested with PRs 54331, 54655, 54320, 54639, 54288). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Ability to do object count quota for all namespaced resources
**What this PR does / why we need it**:
- Defines syntax for generic object count quota `count/<resource>.<group>`
- Migrates existing object count quotas to support the new syntax alongside the old syntax
- Adds support to quota all standard namespace resources
- Updates the controller to do discovery and replenishment on those resources
- Updates unit tests
- Tweaks admission configuration around quota
- Add e2e test for replicasets (demonstrate dynamic generic counting)
```
$ kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
resourcequota "test" created
$ kubectl run nginx --image=nginx --replicas=2
$ kubectl describe quota
Name: test
Namespace: default
Resource Used Hard
-------- ---- ----
count/deployments.extensions 1 2
count/pods 2 3
count/replicasets.extensions 1 4
count/secrets 1 4
```
**Special notes for your reviewer**:
- simple object count quotas no longer require writing code
- Deferring support for custom resources pending investigation about how to share caches with the garbage collector. In addition, I would like to see how this integrates with downstream quota usage in OpenShift.
**Release note**:
```release-note
Object count quotas supported on all standard resources using `count/<resource>.<group>` syntax
```
This test creates a golden list of existing conformance tests. Efforts
to add or remove conformance tests will require you to rebuild the
golden list, and changes to the golden list will be reviewed by
sig-architecture.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
convert testFailedDeployment e2e test to integration test
**What this PR does / why we need it**:
This PR convert a deployment e2e test named "testFailedDeployment" to integration test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Release note**:
```release-note
NONE
```
/assign
Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add probe, pre_stop, and networking related container annotations.
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add probe, pre_stop, and networking related container annotations.
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds probe, pre_stop, and networking related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for projected volume tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add projected volume related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds projected volume related related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add Windows support to the system verification check
**What this PR does / why we need it**: This PR (in conjunction with https://github.com/kubernetes/kubernetes/pull/53553 ) adds initial support for adding a Windows worker node to a Kubernetes cluster using
kubeadm. It was suggested on that PR to open a separate PR for the changes in test/e2e_node for review by sig-node devs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #364 in conjunction with #53553
**Special notes for your reviewer**:
**Release note**:
```release-note
Add Windows support to the system verification check
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add service latency and secret related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds service latency and secret related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for expansion and service tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds expansion and service test conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove federation
This PR removes the federation codebase and associated tooling from the tree.
The first commit just removes the `federation` path and should be uncontroversial. The second commit removes references and associated tooling and warrants careful review.
Requirements for merge:
- [x] Bazel jobs no longer hard-code federation as a target ([test infra #4983](https://github.com/kubernetes/test-infra/pull/4983))
- [x] `federation-e2e` jobs are not run by default for k/k
**Release note**:
```release-note
Development of Kubernetes Federation has moved to github.com/kubernetes/federation. This move out of tree also means that Federation will begin releasing separately from Kubernetes. The impact of this is Federation-specific behavior will no longer be included in kubectl, kubefed will no longer be released as part of Kubernetes, and the Federation servers will no longer be included in the hyperkube binary and image.
```
cc: @kubernetes/sig-multicluster-pr-reviews @kubernetes/sig-testing-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 54455, 54431). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for proxy and scheduler predicate tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds proxy and scheduler predicate related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54455, 54431). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate cluster dns test to sig network
**What this PR does / why we need it**:
Just migrate dns relevant e2e test files to sig network.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 53000, 52870, 53569). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fallback to internal addrs in e2e tests when no external addrs available
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests will always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
fixes #53568
**What this PR does / why we need it**:
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests will always
panic when no cloud provider is being used, or the cloud provider does
not have external addresses configured.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53568
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
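For illustration, here is a minimal Go sketch of the address fallback described above. It is not the merged framework code; the function name and signature are assumptions.
```go
package framework

import "fmt"

// pickNodeIP prefers an external node address and falls back to an
// internal address when the cloud provider supplies none (illustrative).
func pickNodeIP(externalAddrs, internalAddrs []string) (string, error) {
	if len(externalAddrs) > 0 {
		return externalAddrs[0], nil
	}
	if len(internalAddrs) > 0 {
		return internalAddrs[0], nil
	}
	return "", fmt.Errorf("no external or internal node addresses available")
}
```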
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix issue (#52994): kubectl set resource cannot update multiple resources in local mode
**What this PR does / why we need it**:
Fixes #52994
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54593, 54607, 54539, 54105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment e2e test for hash label adoption to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113
**Special notes for your reviewer**: depends on #53918
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53760, 48996, 51267, 54414). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix CI error for service session affinity
**What this PR does / why we need it**:
Fix CI error for service session affinity -- debug
**Which issue this PR fixes**:
fixes #53741
**Special notes for your reviewer**:
I removed the [slow] tag so that these test cases run as part of PR testing. We may need to add the [slow] tag back when this PR is ready to merge.
**Release note**:
```release-note
NONE
```
Pulled SysSpecs out of types.go and created two OS-specific implementations with build tags.
Similarly, created conditionally compiled implementations of KernelValidationHelper to get the kernel version in an OS-specific manner, as well as OS-specific docker endpoints (socket vs. named pipes).
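As a rough illustration of the build-tag approach (file, package, and identifier names here are assumptions, not the merged code):
```go
// sysspec_linux.go: the build tag restricts this file to Linux builds;
// a sibling sysspec_windows.go would carry "// +build windows".

// +build linux

package system

// SysSpec is stubbed here so the snippet stands alone.
type SysSpec struct{ OS string }

// DefaultSysSpec is the variant the compiler selects on Linux.
var DefaultSysSpec = SysSpec{OS: "Linux"}
```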
Automatic merge from submit-queue (batch tested with PRs 52868, 53196, 54207). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
eviction/detach test
**What this PR does / why we need it**:
e2e test for detach after a pod is evicted.
**Which issue this PR fixes**: fixes #52676
**Release note**:
```release-note
NONE
```
cc @jingxu97 @copejon
Automatic merge from submit-queue (batch tested with PRs 53051, 52489, 53920). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix todo
**What this PR does / why we need it**:
Fixes a TODO in the code. Thanks!
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53474, 54258, 54356). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Directly using std{in,out} for test helper subproc
**What this PR does / why we need it**
This fixes one TODO in the code of a test helper and is an extremely minor improvement
**Which issue this PR fixes**
Fixes issue #54258
**Special notes for your reviewer**
I'm using this to familiarize myself with the Kubernetes contribution process while being helpful in the process.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add e2e tests for NEG
This PR includes tests:
- ingress conformance test
- scaling up and down backends
- switching backend between IG and NEG
- rolling update backend should not cause service disruption
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix detach metric flake by not using exact equals
Also poll for detach value increase.
Fixes https://github.com/kubernetes/kubernetes/issues/52871
I have run these tests in a tight loop for more than 3 hours and did not see a flake. The changes here drop the exact-equality check and make sure we poll for an increase in the detach metric count.
```release-note
None
```
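A minimal sketch of the polling approach, assuming a hypothetical metric accessor (`getDetachCount` is a stand-in, not a real helper):
```go
package metricstest

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForDetachIncrease polls until the detach count rises above its
// baseline instead of asserting an exact value.
func waitForDetachIncrease(getDetachCount func() int64) error {
	baseline := getDetachCount()
	return wait.Poll(10*time.Second, 2*time.Minute, func() (bool, error) {
		return getDetachCount() > baseline, nil
	})
}
```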
Automatic merge from submit-queue (batch tested with PRs 54270, 54479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete maxNumPVs and maxNumPVCs const of persistent_volume.go
**What this PR does / why we need it**:
The constants maxNumPVs and maxNumPVCs in persistent_volume.go are unused, so delete them.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Speed up volume tests by reducing pod grace period
busybox-based pods don't react nicely to docker stop. By reducing the pod grace period we can save ~29 seconds per volume test.
```release-note
NONE
```
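For illustration, a sketch of a busybox test pod with a shortened grace period; the pod and container names are made up, but the fields are the core/v1 API:
```go
package volumetest

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testPod uses a 1-second grace period (an assumption; any small value
// works for a busybox container) so deletion does not wait the default 30s.
func testPod() *v1.Pod {
	grace := int64(1)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "volume-tester"},
		Spec: v1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []v1.Container{{Name: "c", Image: "busybox"}},
		},
	}
}
```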
Automatic merge from submit-queue (batch tested with PRs 52479, 53956). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update scheduler to use schedulerName selector
**What this PR does / why we need it**:
Update scheduler to use schedulerName selector when select pods in the podInformer
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52254
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
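A hedged sketch of what such a field selector can look like; the exact selector string the scheduler uses is an assumption here:
```go
package schedtest

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
)

// podSelector lists only pods that name this scheduler and are not yet
// terminated, so the podInformer filters on the server side.
func podSelector(schedulerName string) fields.Selector {
	return fields.ParseSelectorOrDie(
		"spec.schedulerName=" + schedulerName +
			",status.phase!=" + string(v1.PodSucceeded) +
			",status.phase!=" + string(v1.PodFailed))
}
```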
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment e2e test for rollback with no revision to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
NetworkPolicy e2e: use ClientSet and update to CoreV1 and NetworkingV1 apis.
**What this PR does / why we need it**:
Update NetworkPolicy e2e test: use the public ClientSet and update to CoreV1 and NetworkingV1 apis.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52556, 52897, 54342). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete unused yaml files
**What this PR does / why we need it**:
This PR is for removing some of these unused yaml files copied earlier from doc dir.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #54447
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Aggregator test uses framework namespace
**What this PR does / why we need it**:
Remove namespace duplicate in aggregator test, using test framework namespace instead of creating a new one.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53478
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig api-machinery
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add a notice for node e2e config files
ref https://github.com/kubernetes/kubernetes/pull/53542 and patched up with https://github.com/kubernetes/test-infra/pull/5107
So while migrating the jobs to prow, I haven't killed the `*.properties` files yet because some lingering jobs, and possibly local tests, are still using them. We have a copy of image-config.yaml in test-infra, and all `*.properties` files are merged into the job configs.
Add a notice to remind people to also update the job configs in test-infra. Also add myself as a reviewer here so I get notified. I'll remove them once I've cleaned up all the legacy files here.
/assign @yguo0905 @dashpole @yujuhong
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Updating E2E test for deleting PVC when PVC is in use
**What this PR does / why we need it**:
This test updates an existing e2e test and adds extra verification.
Updated workflow of the test is as below:
1. Create PVC; wait until PV is provisioned. Create pod using PVC.
2. Verify pod is running and PV is attached to the node.
3. Delete PVC.
4. Verify volume remains attached to the pod after deleting the claim.
5. Verify volume is accessible in the pod after deleting the claim.
6. Verify the associated PV is present and its status is Failed.
7. Delete pod and wait until PV is unmounted and detached from the node.
8. Wait and verify PV is deleted after the pod is deleted.
**Which issue this PR fixes**
fixes https://github.com/vmware/kubernetes/issues/279
**Special notes for your reviewer**:
Test logs
```
# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build371606839/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:42:40 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:42:40 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:42:40 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:42:40 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:42:40 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod...
2017/10/16 15:42:40 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:42:40 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 293.775296ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.913+297ab03890a6a7-dirty", GitCommit:"297ab03890a6a76f268eb5415e0fb16f20b2309e", GitTreeState:"dirty", BuildDate:"2017-10-16T20:50:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:42:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 317.940582ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod
Conformance test: not doing test setup.
Oct 16 15:42:42.327: INFO: Overriding default scale value of zero to 1
Oct 16 15:42:42.327: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:42:42.577720 8325 e2e.go:369] Starting e2e run "51f11717-b2c3-11e7-bd54-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508193761 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:42:42.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 16 15:42:42.686: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:42:42.724: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:42:42.883: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:42:42.883: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:42:42.891: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:42:42.891: INFO: Dumping network health container logs from all nodes...
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere
should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:42:42.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:48
Oct 16 15:42:42.994: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[BeforeEach] [sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:56
[It] should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
STEP: running testSetupVSpherePersistentVolumeReclaim
STEP: creating vmdk
STEP: creating the pv
STEP: creating the pvc
Oct 16 15:42:44.595: INFO: Waiting for PV vspherepv-ksccp to bind to PVC pvc-n4rq7
Oct 16 15:42:44.595: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-n4rq7 to have phase Bound
Oct 16 15:42:44.606: INFO: PersistentVolumeClaim pvc-n4rq7 found but phase is Pending instead of Bound.
Oct 16 15:42:47.625: INFO: PersistentVolumeClaim pvc-n4rq7 found and phase=Bound (3.029926391s)
Oct 16 15:42:47.625: INFO: Waiting up to 5m0s for PersistentVolume vspherepv-ksccp to have phase Bound
Oct 16 15:42:47.632: INFO: PersistentVolume vspherepv-ksccp found and phase=Bound (6.598243ms)
STEP: Creating the Pod
STEP: Deleting the Claim
Oct 16 15:42:59.709: INFO: Deleting PersistentVolumeClaim "pvc-n4rq7"
STEP: Verify the volume is attached to the node
STEP: Verify the volume is accessible and available in the pod
Oct 16 15:43:00.076: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/root/.kube/config exec pvc-tester-r9ww9 --namespace=e2e-tests-persistentvolumereclaim-6pfpf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:43:00.604: INFO: stderr: ""
Oct 16 15:43:00.604: INFO: stdout: ""
Oct 16 15:43:00.604: INFO: Verified that Volume is accessible in the POD after deleting PV claim
Oct 16 15:43:00.610: INFO: Waiting up to 1m0s for PersistentVolume vspherepv-ksccp to have phase Failed
Oct 16 15:43:00.619: INFO: PersistentVolume vspherepv-ksccp found and phase=Failed (9.016306ms)
STEP: Deleting the Pod
Oct 16 15:43:00.619: INFO: Deleting pod pvc-tester-r9ww9
Oct 16 15:43:00.650: INFO: Waiting up to 5m0s for pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" to be "terminated due to deadline exceeded"
Oct 16 15:43:00.668: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.507993ms
Oct 16 15:43:02.675: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 2.024854663s
Oct 16 15:43:04.682: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 4.03197856s
Oct 16 15:43:06.688: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 6.037718623s
Oct 16 15:43:08.697: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 8.047192574s
Oct 16 15:43:10.703: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 10.052754761s
Oct 16 15:43:12.708: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 12.057876018s
Oct 16 15:43:14.714: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 14.063962712s
Oct 16 15:43:16.719: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 16.068826626s
Oct 16 15:43:18.725: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.074735397s
Oct 16 15:43:20.730: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 20.080498293s
Oct 16 15:43:22.736: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 22.086586123s
Oct 16 15:43:24.742: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 24.092219324s
Oct 16 15:43:26.747: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 26.097385301s
Oct 16 15:43:28.753: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 28.103127591s
Oct 16 15:43:30.758: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 30.108014823s
Oct 16 15:43:32.764: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.113847674s
Oct 16 15:43:34.772: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.122010171s
Oct 16 15:43:36.787: INFO: Pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" not found. Error: pods "pvc-tester-r9ww9" not found
Oct 16 15:43:36.787: INFO: Ignore "not found" error above. Pod "pvc-tester-r9ww9" successfully deleted
STEP: Verify PV is detached from the node after Pod is deleted
Oct 16 15:43:46.913: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:43:56.918: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:44:06.905: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Verify PV should be deleted automatically
Oct 16 15:44:06.905: INFO: Waiting up to 30s for PersistentVolume vspherepv-ksccp to get deleted
Oct 16 15:44:06.909: INFO: PersistentVolume vspherepv-ksccp was removed
[AfterEach] [sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:62
STEP: running testCleanupVSpherePersistentVolumeReclaim
Oct 16 15:44:06.962: INFO: Deleting PersistentVolume "vspherepv-ksccp"
[AfterEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:44:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-persistentvolumereclaim-6pfpf" for this suite.
Oct 16 15:44:15.325: INFO: namespace: e2e-tests-persistentvolumereclaim-6pfpf, resource: bindings, ignored listing per whitelist
Oct 16 15:44:15.638: INFO: namespace e2e-tests-persistentvolumereclaim-6pfpf deletion completed in 8.651759385s
• [SLOW TEST:92.734 seconds]
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
[sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:44:15.651: INFO: Running AfterSuite actions on all node
Oct 16 15:44:15.651: INFO: Running AfterSuite actions on node 1
Ran 1 of 706 Specs in 92.974 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS
Ginkgo ran 1 suite in 1m33.830856163s
Test Suite Passed
2017/10/16 15:44:15 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod' finished in 1m34.75838192s
2017/10/16 15:44:15 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate resource relevant e2e test files to sig scheduling
**What this PR does / why we need it**:
Migrate resource-relevant e2e test files to sig-scheduling. I'm not fully sure whether these e2e files belong to sig-node or sig-scheduling; feel free to contact me if you have a better solution.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add PodDisruptionBudget to scheduler cache.
**What this PR does / why we need it**:
This is the first step to add support for PodDisruptionBudget during preemption. This PR adds PDB to scheduler cache.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: None
**Release note**:
```release-note
Add PodDisruptionBudget to scheduler cache.
```
ref #53913
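As a rough sketch, a cache surface like the following would support PDB-aware preemption; the method names are assumptions, not necessarily the merged interface:
```go
package cachetest

import (
	policy "k8s.io/api/policy/v1beta1"
	"k8s.io/apimachinery/pkg/labels"
)

// PDBCache lets preemption logic consult cached PodDisruptionBudgets
// instead of querying the API server on every scheduling cycle.
type PDBCache interface {
	AddPDB(pdb *policy.PodDisruptionBudget) error
	UpdatePDB(oldPDB, newPDB *policy.PodDisruptionBudget) error
	RemovePDB(pdb *policy.PodDisruptionBudget) error
	ListPDBs(selector labels.Selector) ([]*policy.PodDisruptionBudget, error)
}
```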
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add timothysc to test approvers
I've avoided this responsibility and leveraged super-powers, but I should own up and make it more legit. I've been working on the testing jiggery since epoch.
/cc @spiffxp @ixdy @fejta
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Try in-cluster config before using localhost:8080
**What this PR does / why we need it**:
When starting an e2e test in a pod in a cluster, if the host is
not specified in the command line, we default to using
'http://127.0.0.1:8080' currently. We should be discovering the
host/port using the in-cluster config and using that if
possible.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #53894
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
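A minimal sketch of the discovery order described above; illustrative, not the merged framework code:
```go
package e2ehost

import "k8s.io/client-go/rest"

// apiServerHost: an explicit --host flag wins, then in-cluster config,
// then the historical localhost default.
func apiServerHost(flagHost string) string {
	if flagHost != "" {
		return flagHost
	}
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg.Host
	}
	return "http://127.0.0.1:8080"
}
```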
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make scheduler integration test faster
Don't wait 30 seconds for every negative test case. This commit also reorganizes the test code to make it more readable.
It cuts the test time from 450s to 125s.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53302
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52794, 54243, 54248, 53491, 53841). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp deployment upgrade test
**What this PR does / why we need it**:
This PR revamps the existing deployment upgrade test, removing redundant steps that are covered by the replicaset upgrade test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Special notes for your reviewer**:
The replicaset upgrade test PR is here: #52449
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test for statefulsets for vsphere cloud provider
**What this PR does / why we need it**:
This PR adds a new e2e test for statefulsets for vSphere cloud Provider.
The test performs the following tasks:
- Create a storage class with thin diskformat.
- Create nginx service.
- Create nginx statefulsets with 3 replicas.
- Wait until all pods are ready and PVCs are bound to PVs.
- Verify volumes are accessible in all statefulset pods by creating an empty file.
- Scale down statefulsets to 2 replicas.
- Scale up statefulsets to 3 replicas.
- Scale down statefulsets to 0 replicas and delete all pods.
- Delete all PVCs from the test namespace.
- Delete the storage class.
**Which issue this PR fixes**
fixes https://github.com/vmware/kubernetes/issues/275
**Special notes for your reviewer**:
Test Logs
```
root@k8s-dev-vm-02:~/divyenp/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=vsphere\sstatefulset\stesting'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build247641121/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 19:24:33 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 19:24:33 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 19:24:33 e2e.go:57: The separator is required to use --get or --old flags
2017/10/18 19:24:33 e2e.go:58: The -- flag separator also suppresses this message
2017/10/18 19:24:33 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=vsphere\sstatefulset\stesting...
2017/10/18 19:24:33 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 19:24:34 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 290.682219ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1217+8b041da0f996c1-dirty", GitCommit:"8b041da0f996c185438a7ed8282f92734a2ed0e7", GitTreeState:"dirty", BuildDate:"2017-10-19T00:46:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1293+d462bac7805f53", GitCommit:"d462bac7805f536a43c7d5fb98aca138ba1237eb", GitTreeState:"clean", BuildDate:"2017-10-18T07:07:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 19:24:34 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 305.965323ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting
Conformance test: not doing test setup.
Oct 18 19:24:35.808: INFO: Overriding default scale value of zero to 1
Oct 18 19:24:35.808: INFO: Overriding default milliseconds value of zero to 5000
I1018 19:24:36.073718 7768 e2e.go:383] Starting e2e run "a63561de-b474-11e7-8f6b-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508379875 - Will randomize all specs
Will run 1 of 713 specs
Oct 18 19:24:36.132: INFO: >>> kubeConfig: /root/.kube/config
Oct 18 19:24:36.139: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 19:24:36.177: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 19:24:36.321: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 19:24:36.321: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 19:24:36.326: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 19:24:36.326: INFO: Dumping network health container logs from all nodes...
Oct 18 19:24:36.338: INFO: Client version: v1.9.0-alpha.1.1217+8b041da0f996c1-dirty
Oct 18 19:24:36.340: INFO: Server version: v1.9.0-alpha.1.1293+d462bac7805f53
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vsphere statefulset
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 19:24:36.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:63
[It] vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
STEP: Creating StorageClass for Statefulset
STEP: Creating statefulset
Oct 18 19:24:36.489: INFO: Parsing statefulset from test/e2e/testing-manifests/statefulset/nginx/statefulset.yaml
Oct 18 19:24:36.503: INFO: Parsing service from test/e2e/testing-manifests/statefulset/nginx/service.yaml
Oct 18 19:24:36.514: INFO: creating web service
Oct 18 19:24:36.527: INFO: creating statefulset e2e-tests-vsphere-statefulset-gnfmp/web with 3 replicas and selector &LabelSelector{MatchLabels:map[string]string{app: nginx,},MatchExpressions:[],}
Oct 18 19:24:36.561: INFO: Found 0 stateful pods, waiting for 3
Oct 18 19:24:46.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:24:56.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:06.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:16.566: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:26.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:36.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:25:56.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:46.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:06.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:16.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:26.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:36.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:46.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:16.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:36.574: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:06.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:26.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:46.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:30:06.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:36.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:46.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:56.566: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:36.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:46.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:56.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:16.571: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:32:26.605: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.170: INFO: stderr: ""
Oct 18 19:32:27.170: INFO: stdout of ls -idlh /usr/share/nginx/html on web-0: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:25 /usr/share/nginx/html
Oct 18 19:32:27.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.687: INFO: stderr: ""
Oct 18 19:32:27.688: INFO: stdout of ls -idlh /usr/share/nginx/html on web-1: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:29 /usr/share/nginx/html
Oct 18 19:32:27.688: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:28.177: INFO: stderr: ""
Oct 18 19:32:28.177: INFO: stdout of ls -idlh /usr/share/nginx/html on web-2: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:32 /usr/share/nginx/html
Oct 18 19:32:28.183: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:28.690: INFO: stderr: ""
Oct 18 19:32:28.690: INFO: stdout of find /usr/share/nginx/html on web-0: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:28.690: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.166: INFO: stderr: ""
Oct 18 19:32:29.166: INFO: stdout of find /usr/share/nginx/html on web-1: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.166: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.696: INFO: stderr: ""
Oct 18 19:32:29.696: INFO: stdout of find /usr/share/nginx/html on web-2: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.707: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.171: INFO: stderr: ""
Oct 18 19:32:30.171: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-0:
Oct 18 19:32:30.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.653: INFO: stderr: ""
Oct 18 19:32:30.653: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-1:
Oct 18 19:32:30.654: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:31.149: INFO: stderr: ""
Oct 18 19:32:31.150: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-2:
STEP: Scaling down statefulsets to number of Replica: 2
Oct 18 19:32:31.263: INFO: Scaling statefulset web to 2
Oct 18 19:32:51.314: INFO: Waiting for statefulset status.replicas updated to 2
STEP: Verify Volumes are detached from Nodes after Statefulsets is scaled down
Oct 18 19:32:51.524: INFO: Waiting for Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" to detach from Node: "kubernetes-node2"
Oct 18 19:33:01.657: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Scaling up statefulsets to number of Replica: 3
Oct 18 19:33:01.657: INFO: Scaling statefulset web to 3
Oct 18 19:33:11.731: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:33:11.747: INFO: Waiting for statefulset status.replicas updated to 3
STEP: Verify all volumes are attached to Nodes after Statefulsets is scaled up
Oct 18 19:33:13.823: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-a6cf15ef-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node4"
Oct 18 19:33:15.990: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-cfb65f92-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node3"
Oct 18 19:33:18.154: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node2"
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 19:33:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vsphere-statefulset-gnfmp" for this suite.
Oct 18 19:33:44.960: INFO: namespace: e2e-tests-vsphere-statefulset-gnfmp, resource: bindings, ignored listing per whitelist
Oct 18 19:33:44.960: INFO: namespace e2e-tests-vsphere-statefulset-gnfmp deletion completed in 26.620223678s
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:67
Oct 18 19:33:44.960: INFO: Deleting all statefulset in namespace: e2e-tests-vsphere-statefulset-gnfmp
• [SLOW TEST:548.654 seconds]
[sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 19:33:45.006: INFO: Running AfterSuite actions on all node
Oct 18 19:33:45.006: INFO: Running AfterSuite actions on node 1
Ran 1 of 713 Specs in 548.875 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 712 Skipped PASS
Ginkgo ran 1 suite in 9m9.728218415s
Test Suite Passed
2017/10/18 19:33:45 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting' finished in 9m10.656371481s
2017/10/18 19:33:45 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52753, 54034, 53982, 54209). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use multi-arch busybox for e2e
**What this PR does / why we need it**:
Since [multi-arch is supported already for Official images on Dockerhub](https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/), we can use `busybox` directly instead of having our own `GetBusyBoxImage` for multi-arch.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
xref #53958
**Special notes for your reviewer**:
/assign @mkumatag @ixdy
**Release note**:
```release-note
Use multi-arch busybox image for e2e
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Replace storage-class annotations with field in examples
**What this PR does / why we need it**:
Storage classes are already GA. Replace annotations with the `StorageClassName` field in the examples.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51435 (update: thanks @gyliu513 for the issue)
ref: https://github.com/kubernetes/kubernetes/pull/50654#discussion_r134954171
**Special notes for your reviewer**:
We may also want to remove the beta annotations in 1.8 since the field will have already been in two releases. If @kubernetes/sig-storage-api-reviews confirm this, I'd like to help remove it.
/cc @liggitt @jsafrane @msau42
**Release note**:
```release-note
NONE
```
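For illustration, a sketch of the field-based form; the claim and class names are made up:
```go
package scexample

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// claimWithClass sets the GA spec.storageClassName field where older
// examples used the beta volume.beta.kubernetes.io/storage-class annotation.
func claimWithClass(className string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "my-claim"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
		},
	}
}
```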
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
pkg/api: extract Scheme/Registry/Codecs into pkg/api/legacyscheme
This serves as
- a preparation for the pkg/api->pkg/apis/core move
- and makes the dependency on the scheme explicit when visualizing
remaining dependencies.
The latter helps with our efforts to split up the monolithic repo
into self-contained sub-repos, e.g. for kubectl, controller-manager
and kube-apiserver in the future.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add pod related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds pod related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
**Special notes for your reviewer**:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54045, 51375). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Upgrade to go1.9
**What this PR does / why we need it**:
Upgrade to go1.9. Upgrading is good. It's "the best golang release ever"!
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49484
**Special notes for your reviewer**:
**Release note**:
```release-note
Upgrade to go1.9
```
/assign @luxas @ixdy @wojtek-t
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
refactor pd.go for future tests
**What this PR does / why we need it**:
Refactored _test/e2e/storage/pd.go_ so that it will be easier to add new tests, which I plan on doing to address issue #52676. A sketch of the table-driven shape appears below.
1. Condenses 8 `It` blocks into 3 table-driven tests.
2. Adds several `By` descriptions and `Logf` messages.
3. Provides more consistent formatting and messages.
**Special notes for your reviewer**:
The diff is large but mostly I've not altered any test. The one semantic change I made was to remove the call to verify a write to a PD when, in fact, nothing had been written yet. This was essentially a no-op since the verify code returned immediately if the passed-in map was empty (which it was since nothing had been written).
```release-note
NONE
```
cc @jingxu97 @copejon
Automatic merge from submit-queue (batch tested with PRs 54036, 53739). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove Sprintf when there are no placeholders in the formatting.
**What this PR does / why we need it**:
Minor cleanup.
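For illustration, the kind of change involved (the message text is made up):
```go
package main

import "fmt"

func main() {
	// Before: Sprintf with no placeholders adds a needless formatting pass.
	before := fmt.Sprintf("waiting for pod to become ready")
	// After: the plain string literal is equivalent.
	after := "waiting for pod to become ready"
	fmt.Println(before == after) // prints: true
}
```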
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53978, 54008, 53037). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix expected result in Custom Metrics - Stackdriver Adapter e2e test
**What this PR does / why we need it**:
This PR fixes a bug in e2e tests for Custom Metrics - Stackdriver Adapter
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix kubemark, juju, and libvirt-coreos README (from minions to nodes)
**What this PR does / why we need it**:
This PR replaces the old name (minion) with the new name (node) in the kubemark, juju, and libvirt-coreos READMEs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53575, 53794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test case for downward API exposing pod UID
**What this PR does / why we need it**:
Pod UID was added as a downward API env var in #48125 for 1.8. This PR adds an e2e test case for it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: #48125
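A minimal sketch of the env var the test exercises, assuming the `k8s.io/api/core/v1` types; the `POD_UID` variable name is illustrative:
```go
package example

import (
	"k8s.io/api/core/v1"
)

// podUIDEnvVar exposes the pod's own UID to the container through the
// downward API field metadata.uid.
func podUIDEnvVar() v1.EnvVar {
	return v1.EnvVar{
		Name: "POD_UID", // illustrative name
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
}
```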
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48665, 52849, 54006, 53755). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add named-port ingress test
**What this PR does / why we need it**:
Validate correct behavior when a `NetworkPolicyIngressRule` refers to a named port rather than a numerical port, e.g. `serve-80` rather than `80`.
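A minimal sketch of the rule under test, assuming the `networking.k8s.io/v1` types; the helper name is illustrative:
```go
package example

import (
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// namedPortRule builds the kind of rule under test: the port is referenced
// by its container-port name ("serve-80") rather than the number 80.
func namedPortRule() networkingv1.NetworkPolicyIngressRule {
	port := intstr.FromString("serve-80")
	return networkingv1.NetworkPolicyIngressRule{
		Ports: []networkingv1.NetworkPolicyPort{{Port: &port}},
	}
}
```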
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53106, 52193, 51250, 52449, 53861). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add replicaset upgrade test
**What this PR does / why we need it**:
This PR adds an upgrade test for existing replicasets.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52118
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53106, 52193, 51250, 52449, 53861). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
bump CNI to v0.6.0
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49480
**Special notes for your reviewer**:
/assign @luxas @bboreham @feiskyer
**Release note**:
```release-note
bump CNI to v0.6.0
```
When starting an e2e test in a pod in a cluster, if the host is
not specified on the command line, we currently default to
'http://127.0.0.1:8080'. We should instead try the in-cluster
config, save it to a temporary file, and use that with kubectl.
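A hedged sketch of that fallback, assuming client-go's `rest.InClusterConfig` and `clientcmd.WriteToFile`; the context name, helper name, and token path are assumptions:
```go
package example

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// writeInClusterKubeconfig builds a kubeconfig from the in-cluster config
// and writes it to a file that kubectl can consume via --kubeconfig.
func writeInClusterKubeconfig(path string) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	kubeconfig := clientcmdapi.NewConfig()
	kubeconfig.Clusters["in-cluster"] = &clientcmdapi.Cluster{
		Server:               cfg.Host,
		CertificateAuthority: cfg.TLSClientConfig.CAFile,
	}
	kubeconfig.AuthInfos["in-cluster"] = &clientcmdapi.AuthInfo{
		// Assumption: the standard service account token mount path.
		TokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token",
	}
	kubeconfig.Contexts["in-cluster"] = &clientcmdapi.Context{
		Cluster:  "in-cluster",
		AuthInfo: "in-cluster",
	}
	kubeconfig.CurrentContext = "in-cluster"
	return clientcmd.WriteToFile(*kubeconfig, path)
}
```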
Automatic merge from submit-queue (batch tested with PRs 53507, 53772, 52903, 53543). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Split downward API e2e test case for pod/host IP into two
**What this PR does / why we need it**:
Split the test case so that the pod IP e2e test is not blocked by the version check needed for the newer host IP field.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: https://github.com/kubernetes/kubernetes/pull/42717#discussion_r144026427
**Special notes for your reviewer**:
/cc @timothysc @andrewsykim
Automatic merge from submit-queue (batch tested with PRs 53507, 53772, 52903, 53543). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e tests to verify vsphere volume lifecycle on a clustered datastore
**What this PR does / why we need it**:
This PR introduces tests for volume provisioning on a clustered datastore. It does so in three ways:
1. Static provisioning (create vsphere volume and then create a pod with it)
2. Dynamic provisioning (specify clustered datastore in storage class parameters)
3. Dynamic provisioning with spbm policy (specify storage policy name in storage class parameters. This policy is a tag based policy and tagged to a clustered datastore)
**Which issue this PR fixes** :
fixes vmware#278
**Special notes for your reviewer**:
Set the environment variables as in the following example, due to the need mentioned in the description:
```
export CLUSTER_DATASTORE="dscl1/sharedVmfs-1"
export VSPHERE_SPBM_POLICY_DS_CLUSTER="gold_cluster"
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 51840, 53542, 53857, 53831, 53702). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Kubelet Evictions take Priority into account
Issue: https://github.com/kubernetes/kubernetes/issues/22212
This implements the eviction strategy documented here: https://github.com/kubernetes/community/pull/1162, and discussed here: https://github.com/kubernetes/community/pull/846.
When priority is not enabled, all pods are treated as having equal priority.
This PR makes the following changes:
1. Changes the eviction ordering strategy to (usage < requests, priority, usage - requests); see the sketch after this list.
2. Changes unit testing to account for this change in eviction strategy (including tests where priority is disabled).
3. Adds a node e2e test which tests the eviction ordering of pods with different priorities.
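A hedged sketch of the three-key ordering from item 1; the types and helper are illustrative, not the kubelet's actual code:
```go
package example

// podStats is an illustrative stand-in for per-pod eviction inputs.
type podStats struct {
	usage    int64 // observed resource usage
	requests int64 // declared resource requests
	priority int32
}

// evictBefore reports whether pod a should be evicted before pod b.
func evictBefore(a, b podStats) bool {
	aOver, bOver := a.usage > a.requests, b.usage > b.requests
	if aOver != bOver {
		return aOver // pods exceeding their requests are evicted first
	}
	if a.priority != b.priority {
		return a.priority < b.priority // then lower priority goes first
	}
	// Finally, the pod consuming the most above its requests goes first.
	return a.usage-a.requests > b.usage-b.requests
}
```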
/assign @dchen1107 @vishh
cc @bsalamat @derekwaynecarr
```release-note
Kubelet evictions take pod priority into account
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use gcloud for enabling/disabling autoscaling in e2e tests
This removes the temporary solution added in #28011, as it's no longer necessary. It should reduce flakes caused by not waiting for master restart after disabling autoscaling.
Automatic merge from submit-queue (batch tested with PRs 47039, 53681, 53303, 53181, 53781). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use bazel to build/push kubemark image
Trying to get a proof of concept; the kubemark image is probably simple enough to be converted to bazel. (I'm a bazel noob, still trying it out locally.)
cc @BenTheElder @ixdy @shyamjvs
/release-note-none
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve e2e tests of audit logging.
The test now includes:
* Verbs: create, list, watch, delete, get, update, patch.
* Resources: pods, deployments, secrets, config maps, custom resource
definition.
* More fields: user, resource, level, stage, presence of request and
response objects.
Fixes #49653
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[GCE kube-up] Allow creating/deleting custom network
**What this PR does / why we need it**:
From https://github.com/kubernetes/test-infra/issues/4472.
This is the first step toward making PR jobs use a custom network instead of an auto network (so that we will be less likely to hit subnetwork quota issues).
The last commit is purely for testing out the changes on PR jobs. It will be removed after review.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE.
**Special notes for your reviewer**:
/assign @bowei @nicksardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53668, 53624, 52639, 53581, 51215). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add extra log and node env metadata support.
This PR:
1) Makes log collection logic extensible via flags, so that we can collect more daemon logs in this PR (e.g. `containerd.log` and `cri-containerd.log`).
2) Adds extra node metadata from specified environment variables (e.g. `PULL_REFS` in prow).
@krzyzacy I'll change the test-infra side soon. Let's discuss whether we should move/copy this code to test infra in your refactoring.
/cc @dchen1107 @yujuhong @abhi @mikebrow
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53668, 53624, 52639, 53581, 51215). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Local e2e test fixes
**What this PR does / why we need it**:
1. Removes tests using TestContainerOutput, because they don't wait for unmount.
2. Fixes the scheduling error test to handle updated event messages.
@kubernetes/sig-storage-pr-reviews
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53597
**Release note**:
NONE
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bump kube-dns version used in e2e
**What this PR does / why we need it**: Updates the version of kube-dns used in the e2e network tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #53153
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53204, 53364, 53559, 53589, 53088). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Mulligan: Remove deprecated and experimental fields from KubeletConfiguration
Revert "Merge pull request #51857 from kubernetes/revert-51307-kc-type-refactor"
This reverts commit 9d27d92420, reversing
changes made to 2e69d4e625.
See original: #51307
We punted this from 1.8 so it could go through an API review. The point
of this PR is that we are trying to stabilize the kubeletconfig API so
that we can move it out of alpha, and unblock features like Dynamic
Kubelet Config, Kubelet loading its initial config from a file instead
of flags, kubeadm and other install tools having a versioned API to rely
on, etc.
We shouldn't rev the version without removing all the deprecated
junk from the KubeletConfiguration struct and without (at least
temporarily) removing all of the fields that have "Experimental" in
their names. It wouldn't make sense to lock in deprecated fields.
"Experimental" fields can be audited on a 1-by-1 basis after this PR,
and if found to be stable (or sufficiently alpha-gated), can be restored
to the KubeletConfiguration without the "Experimental" prefix.
Related issue: https://github.com/kubernetes/kubernetes/issues/53084
**Release note**:
```release-note
NONE
```
/cc @kubernetes/api-reviewers
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Skip podpreset test if the alpha feature settings/v1alpha1 is disabled
**What this PR does / why we need it**: Skip this test if it is not able to find the requested resource, so the test does not consistently fail.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53079
**Special notes for your reviewer**:
**Release note**:
```release-note
Skip podpreset test if the alpha feature settings/v1alpha1 is disabled
```
Revert "Merge pull request #51857 from kubernetes/revert-51307-kc-type-refactor"
This reverts commit 9d27d92420, reversing
changes made to 2e69d4e625.
See original: #51307
We punted this from 1.8 so it could go through an API review. The point
of this PR is that we are trying to stabilize the kubeletconfig API so
that we can move it out of alpha, and unblock features like Dynamic
Kubelet Config, Kubelet loading its initial config from a file instead
of flags, kubeadm and other install tools having a versioned API to rely
on, etc.
We shouldn't rev the version without both removing all the deprecated
junk from the KubeletConfiguration struct, and without (at least
temporarily) removing all of the fields that have "Experimental" in
their names. It wouldn't make sense to lock in to deprecated fields.
"Experimental" fields can be audited on a 1-by-1 basis after this PR,
and if found to be stable (or sufficiently alpha-gated), can be restored
to the KubeletConfiguration without the "Experimental" prefix.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make feature gates loadable from a map[string]bool
Command line flag API remains the same. This allows ComponentConfig
structures (e.g. KubeletConfiguration) to express the map structure
behind feature gates in a natural way when written as JSON or YAML.
For example:
KubeletConfiguration Before:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates: "DynamicKubeletConfig=true,Accelerators=true"
```
KubeletConfiguration After:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates:
  DynamicKubeletConfig: true
  Accelerators: true
```
Fixes: #53024
```release-note
The Kubelet's feature gates are now specified as a map when provided via a JSON or YAML KubeletConfiguration, rather than as a string of key-value pairs.
```
/cc @mikedanese @jlowdermilk @smarterclayton
Automatic merge from submit-queue (batch tested with PRs 50223, 53205). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Create e2e tests for Custom Metrics - Stackdriver Adapter and HPA based on custom metrics from Stackdriver
**What this PR does / why we need it**:
- Add e2e test for Custom Metrics - Stackdriver Adapter
- Add e2e test for HPA based on custom metrics from Stackdriver
- Enable HorizontalPodAutoscalerUseRESTClients option
**Release note**:
```release-note
Horizontal pod autoscaler uses REST clients through the kube-aggregator instead of the legacy client through the API server proxy.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Increase backoffLimit for job that we expect to fail several times
**What this PR does / why we need it**:
Since the introduction of `backoffLimit`, that single job test failed the majority of the time with: `BackoffLimitExceeded: Job has reach the specified backoff limit`.
I'm bumping this to 999, so that it has enough room to fail several times.
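A minimal sketch of the tweak, assuming the `batch/v1` Job types; the helper name is illustrative:
```go
package example

import (
	batchv1 "k8s.io/api/batch/v1"
)

// highBackoffJob gives the test Job ample room to fail and restart
// before the controller reports BackoffLimitExceeded.
func highBackoffJob() *batchv1.Job {
	backoff := int32(999)
	return &batchv1.Job{
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			// ... pod template and other fields elided ...
		},
	}
}
```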
**Which issue this PR fixes**:
Fixes #35507.
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 53678, 53677, 53682, 53673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix to prevent downward api change break on older versions
Signed-off-by: Timothy St. Clair <timothysc@gmail.com>
**What this PR does / why we need it**:
Prevents "should provide pod and host IP as an env var [Conformance]" from running on older versions whose api does not have that field and will break on those clusters.
This is not a upstream tested configuration, but downstream folks do this regularly.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
N/A
**Special notes for your reviewer**:
N/A
**Release note**:
```
Prevent downward API change from breaking on older versions
```
/cc @kubernetes/sig-testing-bugs @jpbetz @marun
Automatic merge from submit-queue (batch tested with PRs 53678, 53677, 53682, 53673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix typo in StatefulSet e2e test
Found it while reviewing #53218
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
wait for pod to be fully deleted
**What this PR does / why we need it**:
Fix flaky glusterfs io-streaming tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49529
**Special notes for your reviewer**:
1) max potential wait for complete pod deletion is ~~15m~~ 5m.
2) ~~removed [Flaky] from HostCleanup, _e2e/node/kubelet.go_ since pod deletion is reliable now.~~
3) ~~added tag [Slow] to HostCleanup due to long max wait for pod deletion.~~
After all CI tests run reliably, we can consider removing the [Flaky] tag (2, above), or do that in a separate PR.
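A hedged sketch of what "fully deleted" means here, assuming client-go's pre-context `Get` signature and `wait.Poll`; the helper name and poll interval are illustrative:
```go
package example

import (
	"time"

	apierrs "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodFullyDeleted polls until the pod GET returns NotFound, i.e.
// the pod is gone rather than merely terminating. The 5m timeout mirrors
// the max wait mentioned above.
func waitForPodFullyDeleted(c kubernetes.Interface, ns, name string) error {
	return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if apierrs.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // still present (err == nil) or a real error
	})
}
```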
```release-note
NONE
```
cc @msau42
Automatic merge from submit-queue (batch tested with PRs 52354, 52949, 53551). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add client and server versions to the e2e.test output.
Fixes #53502.
```release-note
NONE
```
Sample output:
```
Oct 6 15:02:44.001: INFO: Client version: v1.9.0-alpha.1.737+3b1b19a1e2a9a4-dirty
Oct 6 15:02:44.039: INFO: Server version: v1.8.0
```
/assign @timothysc
Automatic merge from submit-queue (batch tested with PRs 52354, 52949, 53551). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable API chunking and promote to beta for 1.9
All list watchers default to using chunking. The server by default fills pages to avoid low cardinality filters from making excessive numbers of requests. Fix an issue with continuation tokens where a `../` could be used if the feature was enabled.
```release-note
API chunking via the `limit` and `continue` request parameters is promoted to beta in this release. Client libraries using the Informer or ListWatch types will automatically opt in to chunking.
```
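A minimal sketch of manual paging with `limit`/`continue`, assuming a pre-context client-go `List` signature; the helper name and page size are illustrative (informer/ListWatch users get this behavior automatically):
```go
package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPodsInPages fetches pods one page at a time until the server
// stops returning a continue token.
func listPodsInPages(c kubernetes.Interface) error {
	opts := metav1.ListOptions{Limit: 500}
	for {
		pods, err := c.CoreV1().Pods("").List(opts)
		if err != nil {
			return err
		}
		fmt.Printf("got a page of %d pods\n", len(pods.Items))
		if pods.Continue == "" {
			return nil // last page
		}
		opts.Continue = pods.Continue // fetch the next page
	}
}
```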
Automatic merge from submit-queue (batch tested with PRs 53525, 53652). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
apimachinery: remove ObjectCopier interface(s)
The big commit is a mechanical, transitive removal of the copier interfaces in all structs and function calls.
The etcd3 storage now attempts to fill partial pages to prevent clients
having to make more round trips (latency from server to etcd is lower
than client to server). The server makes repeated requests to etcd of
the current page size, then uses the filter function to eliminate any
non-matching results. After this change the apiserver will always return full pages,
but we leave the language in place that clients must tolerate it.
Reduces tail latency of large filtered lists, such as viewing pods
assigned to a node.
Automatic merge from submit-queue (batch tested with PRs 53444, 52067, 53571, 53182). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp replicaset integration tests
**What this PR does / why we need it**:
This PR revamps existing replicaset integration tests. Some unit tests have been converted to integration tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51484
**Release note**:
```release-note
NONE
```
**TODO List**:
- [x] add an integration test to verify scale endpoint works
- [ ] convert testReplicaSetConditionCheck() to integration test, and modify the test as replicaset's condition has been removed
- [ ] ~~HPA-related replicaset integration test (may be better suited under HPA integration tests)~~
- [x] verify all tests from "Suggested unit tests to retain" list of the internal doc will not be converted to integration tests, or convert the tests accordingly
- [ ] ~~refactor sync call tree (refer deployment and daemonset PRs)~~
- [x] further improve written integration tests (revise test strategies, remove redundant GET / UPDATE calls, add more relevant sub-tests)
- [x] remove unit tests that have overlapping testing goals with written integration tests
Automatic merge from submit-queue (batch tested with PRs 53621, 52320, 53625). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp replicaset e2e tests
**What this PR does / why we need it**:
This PR removes some replicaset e2e tests as they will be converted to integration tests:
(1) condition check test
(2) pod adoption test
(3) pod release(orphaning) test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52118
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53567, 53197, 52944, 49593). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clean up in `cluster_size_autoscaling.go`
**What this PR does / why we need it**:
Fix `golint` errors.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add launching Cluster Autoscaler in Kubemark
**What this PR does / why we need it**:
Allows launching Cluster Autoscaler in Kubemark.
To do so, set the ENABLE_KUBEMARK_CLUSTER_AUTOSCALER flag to true. This currently only works with one nodegroup, for which you can specify the minimum and maximum number of nodes and the name (KUBEMARK_AUTOSCALER_MIN_NODES, KUBEMARK_AUTOSCALER_MAX_NODES, KUBEMARK_AUTOSCALER_MIG_NAME).
It is important to note that NUM_NODES has a different meaning when launching Cluster Autoscaler - we always start with only one node, but NUM_NODES is used to calculate the size of the Kubemark master and addon components.
There are no changes to the current setup if ENABLE_KUBEMARK_CLUSTER_AUTOSCALER is set to false.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50447, 53308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[e2e] add service session affinity test case
**What this PR does / why we need it**:
**Which issue this PR fixes**:
Adds a service session affinity test case for e2e.
fixes #31712
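A minimal sketch of the spec under test, assuming the `core/v1` types; the helper name and selector are illustrative:
```go
package example

import (
	"k8s.io/api/core/v1"
)

// clientIPService sketches the behavior being tested: with ClientIP
// affinity, the service keeps routing a given client to the same pod.
func clientIPService(app string) *v1.Service {
	return &v1.Service{
		Spec: v1.ServiceSpec{
			SessionAffinity: v1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": app}, // illustrative selector
			Ports:           []v1.ServicePort{{Port: 80}},
		},
	}
}
```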
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51771, 52971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
pass labelSelector to server side opaquely
**What this PR does / why we need it**:
From @smarterclayton
> The server is responsible for handling label selection for the most part. There is some level of client side processing possible, but for the most part `label selector` should be able to be passed opaquely.
xref #50140
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
/assign @smarterclayton @liggitt
**Release note**:
```release-note
None
```
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests would always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
fixes #53568
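A hedged sketch of that selection logic; the helper name and signature are illustrative, not the actual test-framework code:
```go
package example

import "fmt"

// pickNodeIP prefers an external node address but falls back to an
// internal one when the provider exposes no external IPs.
func pickNodeIP(externalAddrs, internalAddrs []string) (string, error) {
	if len(externalAddrs) > 0 {
		return externalAddrs[0], nil
	}
	if len(internalAddrs) > 0 {
		// No external address available (bare metal, VMs, or a provider
		// without external IPs): use an internal address instead.
		return internalAddrs[0], nil
	}
	return "", fmt.Errorf("no node addresses available")
}
```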
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Let local node e2e return error.
Fixes #52665
Let `make test-e2e-node` return an error when it fails. Currently it always returns exit code 0, whether it fails or not.
@yguo0905 Could you help me review this?
Signed-off-by: Lantao Liu <lantaol@google.com>
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Skip e2e check for logs API path if provider is skeleton
There is a networking e2e test with the It() description:
```
"should provide unchanging, static URL paths for kubernetes api services"
```
This test performs GETs from the Kubernetes API using various paths,
including "/logs". This test for a GET using path "/logs" should be
skipped for provider type "skeleton", since this path is unsupported.
This change adds "skeleton" to the list of providers for which
this test case should be skipped.
fixes #53529
**What this PR does / why we need it**:
This change adds "skeleton" to the list of providers for which
the test for an API GET using the "/logs" path should be skipped.
This is needed because, as far as I can tell, the "skeleton" provider
doesn't support the "/logs" api path.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53529
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
PodReady should be replaced with podutil.IsPodReady
**What this PR does / why we need it**:
PodReady should be replaced with podutil.IsPodReady.
Thanks.
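A minimal sketch of the replacement, assuming the shared helper lives at `k8s.io/kubernetes/pkg/api/v1/pod`:
```go
package example

import (
	"k8s.io/api/core/v1"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

// Use the shared helper instead of a locally defined PodReady function.
func isReady(pod *v1.Pod) bool {
	return podutil.IsPodReady(pod)
}
```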
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```