Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Adding e2e test for StatefulSets for the vSphere cloud provider
**What this PR does / why we need it**:
This PR adds a new e2e test for StatefulSets on the vSphere cloud provider.
The test performs the following tasks (a sketch of the storage class it creates follows the list):
- Create a storage class with the `thin` disk format.
- Create an nginx service.
- Create an nginx StatefulSet with 3 replicas.
- Wait until all pods are ready and all PVCs are bound to PVs.
- Verify the volumes are accessible in all StatefulSet pods by creating an empty file on each.
- Scale the StatefulSet down to 2 replicas.
- Scale the StatefulSet up to 3 replicas.
- Scale the StatefulSet down to 0 replicas and delete all pods.
- Delete all PVCs from the test namespace.
- Delete the storage class.
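For reference, a minimal sketch of the kind of storage class the first step provisions, assuming client-go's typed APIs; the object name is illustrative, while `kubernetes.io/vsphere-volume` and the `diskformat` parameter are the provisioner and parameter the vSphere cloud provider supports:
```go
package vspheretest

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// thinStorageClass builds a storage class for dynamically provisioned
// vSphere volumes using the thin disk format.
func thinStorageClass() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "thin-sc"}, // illustrative name
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"diskformat": "thin"},
	}
}
```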
**Which issue this PR fixes**:
fixes https://github.com/vmware/kubernetes/issues/275
**Special notes for your reviewer**:
Test Logs
```
root@k8s-dev-vm-02:~/divyenp/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=vsphere\sstatefulset\stesting'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build247641121/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 19:24:33 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 19:24:33 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 19:24:33 e2e.go:57: The separator is required to use --get or --old flags
2017/10/18 19:24:33 e2e.go:58: The -- flag separator also suppresses this message
2017/10/18 19:24:33 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=vsphere\sstatefulset\stesting...
2017/10/18 19:24:33 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 19:24:34 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 290.682219ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1217+8b041da0f996c1-dirty", GitCommit:"8b041da0f996c185438a7ed8282f92734a2ed0e7", GitTreeState:"dirty", BuildDate:"2017-10-19T00:46:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1293+d462bac7805f53", GitCommit:"d462bac7805f536a43c7d5fb98aca138ba1237eb", GitTreeState:"clean", BuildDate:"2017-10-18T07:07:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 19:24:34 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 305.965323ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting
Conformance test: not doing test setup.
Oct 18 19:24:35.808: INFO: Overriding default scale value of zero to 1
Oct 18 19:24:35.808: INFO: Overriding default milliseconds value of zero to 5000
I1018 19:24:36.073718 7768 e2e.go:383] Starting e2e run "a63561de-b474-11e7-8f6b-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508379875 - Will randomize all specs
Will run 1 of 713 specs
Oct 18 19:24:36.132: INFO: >>> kubeConfig: /root/.kube/config
Oct 18 19:24:36.139: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 19:24:36.177: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 19:24:36.321: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 19:24:36.321: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 19:24:36.326: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 19:24:36.326: INFO: Dumping network health container logs from all nodes...
Oct 18 19:24:36.338: INFO: Client version: v1.9.0-alpha.1.1217+8b041da0f996c1-dirty
Oct 18 19:24:36.340: INFO: Server version: v1.9.0-alpha.1.1293+d462bac7805f53
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vsphere statefulset
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 19:24:36.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:63
[It] vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
STEP: Creating StorageClass for Statefulset
STEP: Creating statefulset
Oct 18 19:24:36.489: INFO: Parsing statefulset from test/e2e/testing-manifests/statefulset/nginx/statefulset.yaml
Oct 18 19:24:36.503: INFO: Parsing service from test/e2e/testing-manifests/statefulset/nginx/service.yaml
Oct 18 19:24:36.514: INFO: creating web service
Oct 18 19:24:36.527: INFO: creating statefulset e2e-tests-vsphere-statefulset-gnfmp/web with 3 replicas and selector &LabelSelector{MatchLabels:map[string]string{app: nginx,},MatchExpressions:[],}
Oct 18 19:24:36.561: INFO: Found 0 stateful pods, waiting for 3
Oct 18 19:24:46.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:24:56.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:06.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:16.566: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:26.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:36.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:25:56.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:46.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:06.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:16.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:26.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:36.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:46.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:16.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:36.574: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:06.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:26.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:46.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:30:06.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:36.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:46.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:56.566: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:36.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:46.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:56.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:16.571: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:32:26.605: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.170: INFO: stderr: ""
Oct 18 19:32:27.170: INFO: stdout of ls -idlh /usr/share/nginx/html on web-0: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:25 /usr/share/nginx/html
Oct 18 19:32:27.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.687: INFO: stderr: ""
Oct 18 19:32:27.688: INFO: stdout of ls -idlh /usr/share/nginx/html on web-1: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:29 /usr/share/nginx/html
Oct 18 19:32:27.688: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:28.177: INFO: stderr: ""
Oct 18 19:32:28.177: INFO: stdout of ls -idlh /usr/share/nginx/html on web-2: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:32 /usr/share/nginx/html
Oct 18 19:32:28.183: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:28.690: INFO: stderr: ""
Oct 18 19:32:28.690: INFO: stdout of find /usr/share/nginx/html on web-0: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:28.690: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.166: INFO: stderr: ""
Oct 18 19:32:29.166: INFO: stdout of find /usr/share/nginx/html on web-1: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.166: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.696: INFO: stderr: ""
Oct 18 19:32:29.696: INFO: stdout of find /usr/share/nginx/html on web-2: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.707: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.171: INFO: stderr: ""
Oct 18 19:32:30.171: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-0:
Oct 18 19:32:30.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.653: INFO: stderr: ""
Oct 18 19:32:30.653: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-1:
Oct 18 19:32:30.654: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:31.149: INFO: stderr: ""
Oct 18 19:32:31.150: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-2:
STEP: Scaling down statefulsets to number of Replica: 2
Oct 18 19:32:31.263: INFO: Scaling statefulset web to 2
Oct 18 19:32:51.314: INFO: Waiting for statefulset status.replicas updated to 2
STEP: Verify Volumes are detached from Nodes after Statefulsets is scaled down
Oct 18 19:32:51.524: INFO: Waiting for Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" to detach from Node: "kubernetes-node2"
Oct 18 19:33:01.657: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Scaling up statefulsets to number of Replica: 3
Oct 18 19:33:01.657: INFO: Scaling statefulset web to 3
Oct 18 19:33:11.731: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:33:11.747: INFO: Waiting for statefulset status.replicas updated to 3
STEP: Verify all volumes are attached to Nodes after Statefulsets is scaled up
Oct 18 19:33:13.823: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-a6cf15ef-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node4"
Oct 18 19:33:15.990: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-cfb65f92-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node3"
Oct 18 19:33:18.154: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node2"
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 19:33:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vsphere-statefulset-gnfmp" for this suite.
Oct 18 19:33:44.960: INFO: namespace: e2e-tests-vsphere-statefulset-gnfmp, resource: bindings, ignored listing per whitelist
Oct 18 19:33:44.960: INFO: namespace e2e-tests-vsphere-statefulset-gnfmp deletion completed in 26.620223678s
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:67
Oct 18 19:33:44.960: INFO: Deleting all statefulset in namespace: e2e-tests-vsphere-statefulset-gnfmp
• [SLOW TEST:548.654 seconds]
[sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 19:33:45.006: INFO: Running AfterSuite actions on all node
Oct 18 19:33:45.006: INFO: Running AfterSuite actions on node 1
Ran 1 of 713 Specs in 548.875 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 712 Skipped PASS
Ginkgo ran 1 suite in 9m9.728218415s
Test Suite Passed
2017/10/18 19:33:45 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting' finished in 9m10.656371481s
2017/10/18 19:33:45 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
refactor pd.go for future tests
**What this PR does / why we need it**:
Refactored _test/e2e/storage/pd.go_ so that it will be easier to add new tests, which I plan to do to address issue #52676.
1. Condenses 8 `It` blocks into 3 table-driven tests (see the sketch after this list).
2. Adds several `By` descriptions and `Logf` messages.
3. Provides more consistent formatting and messages.
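As a rough illustration of the table-driven shape only; the case names and the shared helper are hypothetical, not the actual pd.go symbols:
```go
package storage

import . "github.com/onsi/ginkgo"

// podDiskTest describes one row of a table-driven PD test.
type podDiskTest struct {
	descr    string
	readOnly bool
	numPods  int
}

var _ = Describe("Pod Disks", func() {
	cases := []podDiskTest{
		{descr: "RW PD attached to a single pod", readOnly: false, numPods: 1},
		{descr: "RO PD shared between two pods", readOnly: true, numPods: 2},
	}
	for _, tc := range cases {
		tc := tc // capture the range variable for the closure below
		It(tc.descr, func() {
			runPDTest(tc.readOnly, tc.numPods) // hypothetical shared body
		})
	}
})

// runPDTest stands in for the shared attach/write/verify/detach logic.
func runPDTest(readOnly bool, numPods int) {}
```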
**Special notes for your reviewer**:
The diff is large, but for the most part no test has been altered. The one semantic change is the removal of a call that verified a write to a PD when, in fact, nothing had been written yet. This was essentially a no-op, since the verify code returns immediately if the passed-in map is empty (which it was, because nothing had been written).
```release-note
NONE
```
cc @jingxu97 @copejon
Automatic merge from submit-queue (batch tested with PRs 53507, 53772, 52903, 53543). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Adding e2e tests to verify vSphere volume lifecycle on a clustered datastore
**What this PR does / why we need it**:
This PR introduces tests for volume provisioning on a clustered datastore. It does so in three ways (ways 2 and 3 are sketched after this list):
1. Static provisioning (create a vSphere volume, then create a pod that uses it).
2. Dynamic provisioning (specify the clustered datastore in the storage class parameters).
3. Dynamic provisioning with an SPBM policy (specify the storage policy name in the storage class parameters; this policy is tag based and tagged to a clustered datastore).
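A hedged sketch of the storage classes behind ways 2 and 3, assuming the vSphere provisioner's `datastore` and `storagePolicyName` parameters; the values mirror the env vars shown further below:
```go
package vspheretest

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Way 2: dynamic provisioning directly on the clustered datastore.
func clusterDatastoreSC() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "cluster-ds-sc"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"datastore": "dscl1/sharedVmfs-1"},
	}
}

// Way 3: dynamic provisioning through an SPBM policy that is tag based and
// tagged to the clustered datastore.
func spbmPolicySC() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "spbm-policy-sc"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"storagePolicyName": "gold_cluster"},
	}
}
```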
**Which issue this PR fixes** :
fixes vmware#278
**Special notes for your reviewer**:
Set the environment variables as in the following example, per the need described above:
```
export CLUSTER_DATASTORE="dscl1/sharedVmfs-1"
export VSPHERE_SPBM_POLICY_DS_CLUSTER="gold_cluster"
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53668, 53624, 52639, 53581, 51215). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Local e2e test fixes
**What this PR does / why we need it**:
1. Remove tests using `TestContainerOutput`, because they don't wait for unmount.
2. Fix the scheduling error test to handle updated event messages.
@kubernetes/sig-storage-pr-reviews
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53597
**Release note**:
NONE
This test has been flaking. The current working theory is that volume stats collection didn't run in time to grab the metrics from the newly created pod.
Made the following changes:
- Added more logs to help debug future failures.
- Poll the metrics a few additional times before failing the test (a sketch of the retry follows).
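A minimal sketch of the added retry, assuming the generic `wait.Poll` helper; `grabVolumeStatMetrics` and the intervals are hypothetical stand-ins for the test's actual metrics-grabbing step:
```go
package metricstest

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForVolumeStats polls instead of failing on the first scrape, giving the
// kubelet's volume stats collector time to pick up the newly created pod.
func waitForVolumeStats() error {
	return wait.Poll(10*time.Second, 3*time.Minute, func() (bool, error) {
		found, err := grabVolumeStatMetrics() // hypothetical helper
		if err != nil {
			return false, err // a real scrape error fails fast
		}
		return found, nil // retry until the pod's volume metrics appear
	})
}

func grabVolumeStatMetrics() (bool, error) { return false, nil } // placeholder
```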
Automatic merge from submit-queue (batch tested with PRs 52990, 53064, 52686, 52221, 53069). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Allow kubelet metrics tests to run on gke
**What this PR does / why we need it**:
On GKE, you can still access kubelet metrics, so allow the kubelet metrics test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
NONE
Automatic merge from submit-queue (batch tested with PRs 52469, 52574, 52330, 52689, 52829). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fixing e2e test: after restarting the kubelet, the test expects the node's status to be NotReady
**What this PR does / why we need it**:
This PR fixes the e2e tests that involve restarting the kubelet. The tests previously expected the node to reach the NotReady state after the kubelet was restarted.
Instead, after restarting the kubelet we should wait for some time and then check that the node's status is Ready; the node should not be checked for the NotReady state at all (see the sketch below).
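A sketch of the corrected wait, assuming client-go of this PR's era; the helper name is illustrative, not the actual test symbol:
```go
package restarttest

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls until the node reports Ready=true after the kubelet
// restart, rather than first asserting a transition through NotReady.
func waitForNodeReady(c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady {
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```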
**Which issue this PR fixes**
fixes https://github.com/vmware/kubernetes/issues/285
**Special notes for your reviewer**:
@BaluDontu @rohitjogvmw @tusharnt
Test logs before fix
-----
```
STEP: Restarting kubelet
Sep 15 11:26:32.768: INFO: Attempting sudo systemctl restart kubelet
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: command: sudo systemctl restart kubelet
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: stdout: ""
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: stderr: ""
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: exit code: 0
Sep 15 11:26:33.002: INFO: Waiting up to 1m0s for node kubernetes-node2 condition Ready to be false
Sep 15 11:26:33.012: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:35.023: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:37.032: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:39.041: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:41.051: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:43.061: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:45.070: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:47.080: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:49.093: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:51.105: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:53.117: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:55.128: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:57.140: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:59.151: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:01.158: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:03.167: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:05.180: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:07.188: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:09.210: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:11.221: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:13.231: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:15.240: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:17.249: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:19.263: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:21.272: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:23.283: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:25.309: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:27.317: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:29.327: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:31.342: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:33.343: INFO: Node kubernetes-node2 didn't reach desired Ready condition status (false) within 1m0s
Sep 15 11:27:33.343: INFO: Node kubernetes-node2 failed to enter NotReady state
[AfterEach] [sig-storage] PersistentVolumes:vsphere
```
Test logs after fix
-----
```
STEP: Restarting kubelet
Sep 18 15:40:49.066: INFO: Checking if sudo command is present
Sep 18 15:40:49.342: INFO: Checking if systemctl command is present
Sep 18 15:40:49.573: INFO: Attempting `sudo systemctl status kubelet | grep 'Main PID'`
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: command: sudo systemctl status kubelet | grep 'Main PID'
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: stdout: " Main PID: 19715 (docker)\n"
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:49.733: INFO: Attempting `sudo systemctl restart kubelet`
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: command: sudo systemctl restart kubelet
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: stdout: ""
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:49.988: INFO: Attempting `sudo systemctl status kubelet | grep 'Main PID'`
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: command: sudo systemctl status kubelet | grep 'Main PID'
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: stdout: " Main PID: 25021 (docker)\n"
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:50.158: INFO: Noticed that kubelet PID is changed. Waiting for 30 Seconds for Kubelet to come back
Sep 18 15:41:20.159: INFO: Waiting up to 1m0s for node kubernetes-node4 condition Ready to be true
STEP: Testing that written file is accessible.
Sep 18 15:41:20.191: INFO: Running '/Users/divyenp/github/vmware/kubernetes/_output/dockerized/bin/darwin/amd64/kubectl --server=https://10.162.0.45 --kubeconfig=/Users/divyenp/.kube/config exec --namespace=e2e-tests-pv-9j8j0 pvc-tester-3t9ds -- /bin/sh -c cat /mnt/_SUCCESS'
Sep 18 15:41:20.855: INFO: stderr: ""
Sep 18 15:41:20.855: INFO:
Sep 18 15:41:20.855: INFO: Volume mount detected on pod pvc-tester-3t9ds and written file /mnt/_SUCCESS is readable post-restart.
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48406, 52819). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fixed nil dereference in dynamic provisioning e2e tests
**What this PR does / why we need it**: Fixed nil dereference in dynamic provisioning e2e tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52815
**Release note**:
```release-note
NONE
```
/sig storage
/assign @saad-ali
/cc @wongma7
/release-note-none
Automatic merge from submit-queue (batch tested with PRs 51833, 51936)
Changed volume IO e2e test to verify file hash instead of content.
**What this PR does / why we need it**: The existing way of verifying file content takes too much memory, causing processes to be OOM killed.
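A rough sketch of hash-based verification under these assumptions: the digest of the test data is computed once locally and compared against `md5sum` run inside the pod; `execInPod` is a hypothetical exec wrapper, not the actual test helper:
```go
package volumeio

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// localMD5 digests the locally generated test data once, so the full file
// never has to be read back from the pod.
func localMD5(data []byte) string {
	return fmt.Sprintf("%x", md5.Sum(data))
}

// verifyFileHash compares the pod-side digest against the expected one.
func verifyFileHash(execInPod func(cmd string) (string, error), path, want string) error {
	out, err := execInPod("md5sum " + path)
	if err != nil {
		return err
	}
	got := strings.Fields(out)[0] // md5sum prints "<digest>  <path>"
	if got != want {
		return fmt.Errorf("md5 mismatch for %s: got %s, want %s", path, got, want)
	}
	return nil
}
```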
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/51717
**Release note**:
```release-note
NONE
```
/sig storage
/release-note-none
/assign @jeffvance @rootfs
/cc @msau42
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Flexvolume dynamic plugin discovery: Prober unit tests and basic e2e test.
**What this PR does / why we need it**: Tests for the changes introduced in PR #50031.
As part of the prober unit test, I mocked the filesystem, the filesystem watch, and Flexvolume plugin initialization (a sketch of the watcher seam follows).
Moved the filesystem event goroutine into the watcher implementation.
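As a hedged illustration of that seam only; the interface and names are illustrative, not the actual prober API:
```go
package probertest

// watcher is the seam the prober uses for filesystem notifications; the real
// implementation owns the event goroutine, so a test can inject events directly.
type watcher interface {
	AddWatch(path string) error
}

type mockWatcher struct {
	watched map[string]bool
	handler func(path string) // invoked for each simulated event
}

func newMockWatcher(handler func(string)) *mockWatcher {
	return &mockWatcher{watched: map[string]bool{}, handler: handler}
}

func (w *mockWatcher) AddWatch(path string) error {
	w.watched[path] = true
	return nil
}

// triggerEvent simulates a filesystem event on a watched path, standing in
// for what the event goroutine would deliver in production.
func (w *mockWatcher) triggerEvent(path string) {
	if w.watched[path] {
		w.handler(path)
	}
}
```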
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51147
**Special notes for your reviewer**:
The first commit contains the added functionality of the mock filesystem.
The second commit refactors the mock filesystem into a common util directory.
The third commit adds the unit and e2e tests.
**Release note**:
```release-note
NONE
```
/release-note-none
/sig storage
/assign @saad-ali @liggitt
/cc @mtaufen @chakri-nelluri @wongma7
Automatic merge from submit-queue (batch tested with PRs 50670, 50332). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
e2e test for local storage mount point
**What this PR does / why we need it**:
We discovered that Kubernetes can treat local directories and actual mount points differently; see for example https://github.com/kubernetes/kubernetes/issues/48331. The current local storage e2e tests use plain directories.
This PR introduces a test that creates and mounts a tmpfs, then runs one of the local storage e2e tests against it (setup sketched below).
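A sketch of the node-side setup the test performs over SSH; the directory path and tmpfs size here are illustrative:
```go
package localtest

import "fmt"

// tmpfsSetupCommands returns the shell steps run over SSH to back a local
// volume with a real mount point instead of a plain directory.
func tmpfsSetupCommands(dir string) []string {
	return []string{
		fmt.Sprintf("mkdir -p %s", dir),
		fmt.Sprintf("sudo mount -t tmpfs -o size=1m tmpfs %s", dir),
	}
}

// tmpfsTeardownCommands unmounts the tmpfs and removes the directory.
func tmpfsTeardownCommands(dir string) []string {
	return []string{
		fmt.Sprintf("sudo umount %s", dir),
		fmt.Sprintf("rm -rf %s", dir),
	}
}
```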
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/49126
**Special notes for your reviewer**:
I cherry-picked PR https://github.com/kubernetes/kubernetes/pull/50177, since the local storage e2e tests were broken in master on 2017-08-08 due to a "no such host" error. That PR replaces NodeExec with SSH commands.
You can run the tests using the following commands:
```
$ NUM_NODES=1 KUBE_FEATURE_GATES="PersistentLocalVolumes=true" go run hack/e2e.go -- -v --up
$ go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=\[Feature:LocalPersistentVolumes\]"
```
Here is a summary of the results from my test run:
```
Ran 9 of 651 Specs in 387.905 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 642 Skipped PASS
Ginkgo ran 1 suite in 6m29.369318483s
Test Suite Passed
2017/08/08 11:54:01 util.go:133: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:LocalPersistentVolumes\]' finished in 6m32.077462612s
```
**Release note**:
`NONE`
Added LocalVolumeType tmpfs.
Added checks to ensure that the volume created during setup contains the expected testFileContent.
Refactored the tests to avoid code duplication.
Two different tests are performed with tmpfs:
- serial write and read in two different pods
- write and read in two different pods mounted at the same time
Fixed local storage test failures by integrating https://github.com/kubernetes/kubernetes/pull/50177.
Switched NodeExec to SSH.