58823a75a451b1ac735d218a8bbd68cac7531fd5
9283 Commits
| SHA1 | Message |
|---|---|
| ea66c00522 | Merge pull request #54509 from vmware/node_poweroff_test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>. E2E test to verify pod failover during node power-off **What this PR does / why we need it**: This PR adds test to verify volume status after the node where the pod got provisioned being powered off and failed over to a different node. Test performs following tasks: 1. Create a StorageClass 2. Create a PVC with the StorageClass 3. Create a Deployment with 1 replica, using the PVC 4. Verify the pod got provisioned on a node 5. Verify the volume is attached to the node 6. Power off the node where pod got provisioned 7. Verify the pod got provisioned on a different node 8. Verify the volume is attached to the new node 9. Verify the volume is detached from the previous node 10. Power on the previous node 11. Delete the Deployment 12. Delete the PVC 13. Delete the StorageClass **Which issue this PR fixes**: Fixes https://github.com/vmware/kubernetes/issues/272 **Special notes for your reviewer**: Test logs: ``` # go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff' flag provided but not defined: -check-version-skew Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e: -get go get -u kubetest if old or not installed (default true) -old duration Consider kubetest old if it exceeds this (default 24h0m0s) 2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest 2017/10/24 11:48:28 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS] 2017/10/24 11:48:28 e2e.go:57: The separator is required to use --get or --old flags 2017/10/24 11:48:28 e2e.go:58: The -- flag separator also suppresses this message 2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff... 2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version 2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms 2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh Skeleton Provider: prepare-e2e not implemented Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"} 2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms 2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff Conformance test: not doing test setup. 
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1 Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000 I1024 11:48:30.637436 409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1508870909 - Will randomize all specs Will run 1 of 717 specs Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes... Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] verify volume status after node power off /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149 [BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133 STEP: Creating a kubernetes client Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64 Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable [It] verify volume status after node power off /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149 STEP: Creating a Storage Class STEP: Creating PVC using the Storage Class STEP: Waiting for PVC to be in bound phase Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound. 
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s) STEP: Creating a Deployment I1024 11:48:33.180161 409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:45.197: INFO: 
deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)} STEP: Get pod from the deployement STEP: Verify disk is attached to the node: kubernetes-node5 STEP: Power off the node: kubernetes-node5 Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5" Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5" Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5" Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7" STEP: Waiting for disk to be attached to the new node: kubernetes-node7 Oct 
24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7". STEP: Waiting for disk to be detached from the previous node: kubernetes-node5 Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". 
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". 
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5". Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5". STEP: Power on the previous node: kubernetes-node5 Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56" [AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134 Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite. Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s • [SLOW TEST:446.758 seconds] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22 verify volume status after node power off /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1 Ran 1 of 717 Specs in 446.969 seconds SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS Ginkgo ran 1 suite in 7m27.797177022s Test Suite Passed 2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s 2017/10/24 11:55:57 e2e.go:81: Done ``` VMware Reviewers: @divyenpatel @pshahzeb **Release note**: ```release-note NONE ``` |
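For orientation on steps 4 and 7 of the test above, here is a minimal client-go sketch (not the PR's framework helpers) of finding which node currently hosts the Deployment's single pod, so a caller can assert the pod lands on a different node after the original node is powered off. The label selector, namespace, and a recent client-go (context-taking `List`) are assumptions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeOfPod returns the node that currently hosts the single pod matching
// labelSelector, mirroring the "verify the pod got provisioned on a node"
// steps of the test. The selector value here is illustrative, not the PR's.
func nodeOfPod(cs *kubernetes.Clientset, namespace, labelSelector string) (string, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: labelSelector})
	if err != nil {
		return "", err
	}
	if len(pods.Items) != 1 {
		return "", fmt.Errorf("expected exactly one pod, got %d", len(pods.Items))
	}
	return pods.Items[0].Spec.NodeName, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	before, _ := nodeOfPod(cs, "default", "app=vsphere-e2e")
	// ... power off the `before` node out of band, then poll until the pod lands elsewhere ...
	after, _ := nodeOfPod(cs, "default", "app=vsphere-e2e")
	fmt.Printf("pod moved from %q to %q\n", before, after)
}
```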
| c1cd70ad16 | Merge pull request #55533 from janetkuo/hook-e2e-multi
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Webhook e2e test: PUT and PATCH operations **What this PR does / why we need it**: **Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: Ref: https://github.com/kubernetes/features/issues/492 **Special notes for your reviewer**: ~depends on #55127~ (merged) @kubernetes/sig-api-machinery-api-reviews **Release note**: ```release-note NONE ``` |
| 3479549a62 | Merge pull request #55532 from ianchakeres/validate-greater-than-zero-pv-pvc
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Validate that PV capacity and PVC capacity requests are positive, greater than 0 **What this PR does / why we need it**: Zero (0) capacity PVs cause related pods to fail, and zero (0) capacity PVCs create zero (0) capacity PVs. **Which issue(s) this PR fixes**: Fixes #55553 **Special notes for your reviewer**: **Release note**: ```release-note Validate positive capacity for PVs and PVCs. ``` |
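The check this PR adds boils down to rejecting zero or negative storage quantities on both the PV capacity and the PVC request. A hedged sketch of that condition using apimachinery's `resource.Quantity`; this is not the PR's actual validation code, only the shape of the check.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validateCapacity rejects zero or negative storage sizes, the condition the
// PR adds for both PV capacity and PVC requests.
func validateCapacity(q resource.Quantity) error {
	if q.Sign() <= 0 { // Sign() returns -1, 0, or +1
		return fmt.Errorf("capacity must be greater than zero, got %s", q.String())
	}
	return nil
}

func main() {
	fmt.Println(validateCapacity(resource.MustParse("0")))   // error
	fmt.Println(validateCapacity(resource.MustParse("5Gi"))) // <nil>
}
```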
| 51c8e9294b | Merge pull request #55009 from bradtopol/addhosteventsemptyconform2
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Add empty dir and host related conformance annotations Signed-off-by: Brad Topol <btopol@us.ibm.com> Add empty dir and host related conformance annotations /sig testing /area conformance @sig-testing-pr-reviews This PR adds pod related conformance annotations to the e2e test suite. The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the empty dir and host based e2e conformance tests. Special notes for your reviewer: Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400 for the list of SIG Arch approved test names and descriptions that I am using. **Release note**: ```release-note NONE ``` |
| 3db4f2b843 | E2E test to verify pod failover during node power-off |
| 710523ed7d | Merge pull request #53541 from jiayingz/e2e-stats
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of installer and device plugin containers. To support this, exports certain functions and fields in framework/resource_usage_gatherer.go so that it can be used in any e2e test to track any specified pod resource usage with the specified probe interval and duration. **What this PR does / why we need it**: We need to quantify the resource usage of the device plugin DaemonSet to make sure it can run reliably on nodes with GPUs. We also want to measure gpu driver installer resource usage to track any unexpected resource consumption during driver installation. For the latter part, see a related issue https://github.com/kubernetes/features/issues/368. Example resource summary output: Oct 6 12:35:07.289: INFO: Printing summary: ResourceUsageSummary Oct 6 12:35:07.289: INFO: ResourceUsageSummary JSON { "100": [ { "Name": "nvidia-device-plugin-6kqxp/nvidia-device-plugin", "Cpu": 0.000507167, "Mem": 2134016 }, { "Name": "nvidia-device-plugin-6kqxp/nvidia-driver-installer", "Cpu": 1.915508718, "Mem": 663330816 }, { "Name": "nvidia-device-plugin-l28zc/nvidia-device-plugin", "Cpu": 0.000836256, "Mem": 2211840 }, { "Name": "nvidia-device-plugin-l28zc/nvidia-driver-installer", "Cpu": 1.916886293, "Mem": 691449856 }, { "Name": "nvidia-device-plugin-xb4vh/nvidia-device-plugin", "Cpu": 0.000515103, "Mem": 2265088 }, { "Name": "nvidia-device-plugin-xb4vh/nvidia-driver-installer", "Cpu": 1.909435982, "Mem": 832430080 } ], "50": [ { ... **Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # **Special notes for your reviewer**: **Release note**: ```release-note ``` |
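The gatherer change described above centres on sampling pod resource usage at a caller-chosen probe interval for a caller-chosen duration. Below is a small pure-Go sketch of that polling pattern, with an illustrative `sample` type rather than the framework's real summary structures.

```go
package main

import (
	"fmt"
	"time"
)

// sample is whatever a probe returns; the CPU/memory fields are illustrative.
type sample struct {
	cpuCores float64
	memBytes int64
}

// gather calls probe once per interval until duration has elapsed and returns
// every collected sample, mirroring the "specified probe interval and
// duration" behaviour the commit message describes.
func gather(interval, duration time.Duration, probe func() sample) []sample {
	var out []sample
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	deadline := time.After(duration)
	for {
		select {
		case <-ticker.C:
			out = append(out, probe())
		case <-deadline:
			return out
		}
	}
}

func main() {
	samples := gather(100*time.Millisecond, time.Second, func() sample {
		return sample{cpuCores: 0.0005, memBytes: 2 << 20}
	})
	fmt.Printf("collected %d samples\n", len(samples))
}
```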
| 7ffaa06ab3 | Webhook e2e test: PUT and PATCH operations |
| cba5aa0590 | Merge pull request #55127 from caesarxuchao/webhook-do-conversion
Automatic merge from submit-queue (batch tested with PRs 54005, 55127, 53850, 55486, 53440). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Validation webhook plugin converts objects to the external version before sending to webhooks **What this PR does / why we need it**: **Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: https://github.com/kubernetes/features/issues/492 **Special notes for your reviewer**: **Release note**: ```release-note The apiserver sends external versioned object to the admission webhooks now. Please update the webhooks to expect admissionReview.spec.object.raw to be serialized external versions of objects. ``` |
| ae36f8ee95 | Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of installer and device plugin containers. To support this, exports certain functions and fields in framework/resource_usage_gatherer.go so that it can be used in any e2e test to track any specified pod resource usage with the specified probe interval and duration. |
| beefab8a8e | Merge pull request #54825 from bradtopol/adddownwarddockerconf
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Add downward api and docker container conformance annotations Signed-off-by: Brad Topol <btopol@us.ibm.com> Add downward api and docker container conformance annotations /sig testing /area conformance @sig-testing-pr-reviews This PR adds downward api and docker container related conformance annotations to the e2e test suite. The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the downward api and docker container based e2e conformance tests. Special notes for your reviewer: Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400 for the list of SIG Arch approved test names and descriptions that I am using. **Release note**: ```release-note NONE ``` |
| 6e2e5bac40 | Merge pull request #54946 from bradtopol/adddnscrdcmprobeconform
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Add dns, configmap, and custom resource definition conformance annotations. Signed-off-by: Brad Topol <btopol@us.ibm.com> Add dns, configmap, and custom resource definition related conformance annotations /sig testing /area conformance @sig-testing-pr-reviews This PR adds pod related conformance annotations to the e2e test suite. The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the dns, configmap, and custom resource definition based e2e conformance tests. Special notes for your reviewer: Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400 for the list of SIG Arch approved test names and descriptions that I am using. **Release note**: ```release-note NONE ``` |
| ab053a224d | let validation webhook convert objects to the external version before sending them |
| 74ec8d0fe8 | Merge pull request #55288 from Random-Liu/e2e-log-for-alternative-runtime
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Support collecting log for alternative container runtime in e2e test. Fixes https://github.com/kubernetes/kubernetes/issues/55629. Add support to collect logs for alternative container runtime in e2e. Example for `cri-containerd`: ``` $ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation" ``` ```release-note none ``` /cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews |
| 98e2c8cdee | Validate that PV capacity and PVC capacity requests are greater than zero |
| 3431411e79 | Regional support in CA tests. When calling GKE API and gcloud, take into account that clusters can be regional. This currently uses MultiZonal as an indicator that the cluster is regional, which is suboptimal, but considering that our tests do not work with multizonal clusters at the moment, there is no regression. This should be changed once there is an indicator available that the cluster is regional. |
| 52e712913d | Merge pull request #55478 from kawych/e2e
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Use HPA permissions to read custom metrics in Custom Metrics e2e test **What this PR does / why we need it**: This PR fixes the e2e test for Stackdriver Custom Metrics on GKE. With PR https://github.com/kubernetes/kubernetes/pull/55387 it will also be necessary for the analogous test on GCE. **Release note**: ```release-note NONE ``` |
| fd3de96be6 | Merge pull request #55594 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Fix typo in e2e test name. |
| 41fe3ed5bc | Merge pull request #54405 from resouer/clean-docker-dep
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). [Part 1] Remove docker dep in kubelet startup **What this PR does / why we need it**: Remove the dependency on docker during kubelet start-up. **Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: Part 1 of #54090 **Special notes for your reviewer**: Changes include: 1. Move docker client initialization into the dockershim pkg. 2. Pass a docker `ClientConfig` from kubelet to dockershim. 3. Pass parameters needed by `FakeDockerClient` through `ClientConfig` to dockershim (TODO, the second part). Make dockershim tolerate dockerd being down, otherwise it will still fail kubelet. Please note that after this PR, kubelet will still fail if dockerd is down; this will be fixed in the subsequent PR by making dockershim tolerate dockerd failure (initializing the docker client in a separate goroutine) and refactoring cgroup and log driver detection. **Release note**: ```release-note Remove docker dependency during kubelet start up ``` |
| ee5e6d85de | Fix typo in e2e test name. |
| 91615e4fd9 | Merge pull request #49258 from xiangpengzhao/fix-dup-port-panic
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Check duplicate NodePorts with protocols when updating services **What this PR does / why we need it**: As the title says. **Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48579 fixes: #54898 fixes: #55327 **Special notes for your reviewer**: /assign @freehan /cc @cblecker **Release note**: ```release-note NONE ``` |
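The fix above hinges on treating a NodePort as a duplicate only when both the port number and the protocol collide. An illustrative stand-alone sketch of that keying (not the actual service registry code in kube-apiserver):

```go
package main

import "fmt"

type protocol string

type servicePort struct {
	nodePort int32
	protocol protocol
}

// hasDuplicateNodePorts reports whether two ports collide; the key includes
// the protocol, so TCP/30080 and UDP/30080 can legally coexist.
func hasDuplicateNodePorts(ports []servicePort) bool {
	type key struct {
		port  int32
		proto protocol
	}
	seen := map[key]bool{}
	for _, p := range ports {
		k := key{p.nodePort, p.protocol}
		if seen[k] {
			return true
		}
		seen[k] = true
	}
	return false
}

func main() {
	ports := []servicePort{{30080, "TCP"}, {30080, "UDP"}}
	fmt.Println(hasDuplicateNodePorts(ports)) // false: same port, different protocol
}
```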
| e93819049d | Merge pull request #54889 from lavalamp/wh-api
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Fix webhook API to also support URLs ref: https://github.com/kubernetes/features/issues/492 ```release-note The dynamic admission webhook now supports a URL in addition to a service reference, to accommodate out-of-cluster webhooks. ``` |
| a0cb2ce697 | Add URL beside service |
| 858f3cbf59 | Merge pull request #55503 from mml/conformance
Automatic merge from submit-queue (batch tested with PRs 52461, 55503). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). A few improvements to the conformance regtest - Set OWNERS files to disallow parent approvers (doesn't work yet, but should be live next week.) - Document how to fix a failing test. - Add a better error message. ```release-note NONE ``` |
| dbcab6d744 | Merge pull request #55510 from yguo0905/use-whitelisted-test-image
Automatic merge from submit-queue (batch tested with PRs 54460, 55258, 54858, 55506, 55510). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Use whitelisted test image in Docker live-restore node e2e test **What this PR does / why we need it**: This PR fixes this test: `[k8s.io] Docker features [Feature:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts` https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2enode-cosbeta-k8sdev-serial/1199#k8sio-docker-features-featuredocker-when-live-restore-is-enabled-serial-slow-disruptive-containers-should-not-be-disrupted-when-the-daemon-shuts-down-and-restarts **Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: Fixes # **Special notes for your reviewer**: **Release note**: ``` None ``` /assign @yujuhong |
| e52e79342c | Merge pull request #54727 from caesarxuchao/namespaceSelector
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Add namespace selector to admission webhook Implementing the [design](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission-webhook-bootstrapping.md). * Added the NamespaceSelector field to the webhook configuration API * Let the webhook plugin respect the NamespaceSelector * Added unit test and e2e test cc @kubernetes/sig-api-machinery-api-reviews ```release-note Added namespaceSelector to externalAdmissionWebhook configuration to allow applying webhooks only to objects in the namespaces that have matching labels. ``` |
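For readers unfamiliar with the feature, this is roughly what a webhook configuration carrying a namespaceSelector looks like, written against today's admissionregistration.k8s.io/v1 Go types rather than the v1beta1/externalAdmissionWebhook API this PR targeted; all names and label values are placeholders.

```go
package main

import (
	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleWebhookConfig builds a validating webhook that only receives objects
// from namespaces labelled webhook=enabled, which is the behaviour the
// namespaceSelector field adds.
func exampleWebhookConfig() *admissionv1.ValidatingWebhookConfiguration {
	failurePolicy := admissionv1.Fail
	sideEffects := admissionv1.SideEffectClassNone
	return &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "pods.example.com",
			// A URL can be used here instead of a Service reference for
			// out-of-cluster webhooks (see the wh-api merge above).
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-ns",
					Name:      "webhook-svc",
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			NamespaceSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"webhook": "enabled"},
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

func main() { _ = exampleWebhookConfig() }
```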
| fe599c7dcf | Merge pull request #54992 from porridge/perf-timing
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Add performance test phase timing export. **What this PR does / why we need it**: First step towards allowing us to get a quick overview of test length via perf-dash.k8s.io. **Release note**: ```release-note NONE ``` @kubernetes/sig-scalability-feature-requests |
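The export added above amounts to timing each test phase and emitting the result as JSON for perf-dash.k8s.io to pick up. A simplified sketch of that idea; the struct fields and phase names are illustrative, not the real test-phase schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// phaseTiming is an illustrative record of how long one test phase took.
type phaseTiming struct {
	Phase   string  `json:"phase"`
	Seconds float64 `json:"seconds"`
}

// timePhase runs fn, measures its wall-clock duration, and appends the result.
func timePhase(name string, timings *[]phaseTiming, fn func()) {
	start := time.Now()
	fn()
	*timings = append(*timings, phaseTiming{Phase: name, Seconds: time.Since(start).Seconds()})
}

func main() {
	var timings []phaseTiming
	timePhase("create pods", &timings, func() { time.Sleep(10 * time.Millisecond) })
	timePhase("delete pods", &timings, func() { time.Sleep(5 * time.Millisecond) })

	out, _ := json.MarshalIndent(timings, "", "  ")
	fmt.Println(string(out)) // JSON suitable for a dashboard to ingest
}
```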
| fdea39d158 | Merge pull request #54386 from yanxuean/testfmt
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). missing the format string Signed-off-by: yanxuean <yan.xuean@zte.com.cn> **What this PR does / why we need it**: missing the format string **Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # **Special notes for your reviewer**: **Release note**: ``` NONE ``` |
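The class of bug fixed above is a printf-style call whose format directives and arguments do not match, which `go vet` flags; the exact call sites in the PR differ, but the pattern looks like this:

```go
package main

import "fmt"

func main() {
	name := "pod-a"

	// Broken: the %s directive has no matching argument, so the output ends
	// with the literal "%!s(MISSING)". go vet reports this as a printf mistake.
	fmt.Printf("failed to delete %s\n")

	// Fixed: the argument for the format string is supplied.
	fmt.Printf("failed to delete %s\n", name)
}
```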
| 9c38abd482 | Expose accelerator metrics in the summary API. |
| ed8cd396dd | Use whitelisted test image |
| 7006d224be | add NamespaceSelector to the api; business logic in webhook plugin and unit test; add an e2e test for namespace selector |
| 3483447ebc | Refer to instructions when the test fails. |
| 13f3844ef5 | Add README.md to test/conformance. |
| 97e669abdf | Disallow parent approvals. |
| 32c4295bcf | Support collecting log for alternative container runtime in e2e test. |
| 66965daf56 | bump base images to debian stretch |
| ae2edc439e | Merge pull request #55413 from liggitt/internal-autoscaling
Automatic merge from submit-queue (batch tested with PRs 53047, 54861, 55413, 55395, 55308). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Switch internal scale type to autoscaling, enable apps/v1 scale subresources xref #49504 * Switch workload internal scale type to autoscaling.Scale (internal-only change) * Enable scale subresources for apps/v1 deployments, replicasets, statefulsets ```release-note NONE ``` |
| 770dacde45 | Use HPA permissions to read custom metrics in Custom Metrics e2e test |
| def49db058 | Support multizone clusters in GCE and GKE e2e tests |
| 7c04a684ae | Merge pull request #55412 from loburm/fix-infludb-e2e
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Fix influxdb e2e test failure. In scalability testing influxdb was recently disabled, but we are still trying to execute the corresponding test, so it fails all the time. Skip the test if influxdb is disabled. Fixes #54636 ```release-note NONE ``` |
| c0e111a21c | Merge pull request #55394 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler This PR adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler: - shouldn't scale up when expendable pod is created - should scale up when non expendable pod is created - shouldn't scale up when expendable pod is preempted - should scale down when expendable pod is running - shouldn't scale down when non expendable pod is running |
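"Expendable" pods in the scenarios above are pods whose priority falls below Cluster Autoscaler's cutoff, so they neither trigger scale-up nor block scale-down. A hedged client-go sketch of creating such a PriorityClass and a pod that uses it; the class name, the -10 value, the pause image, and a recent client-go API are assumptions, and the cutoff flag itself belongs to Cluster Autoscaler, not to this snippet.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createExpendableClass registers a negative-priority class; pods using it can
// be preempted and, below the autoscaler's priority cutoff, are ignored for
// scale-up and scale-down decisions.
func createExpendableClass(cs kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "expendable"},
		Value:      -10,
	}
	_, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	return err
}

// expendablePod returns a pod spec that opts into that class.
func expendablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "expendable-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "expendable",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9",
			}},
		},
	}
}

func main() { _ = expendablePod() }
```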
| 1d5dff0e05 | Merge pull request #55426 from shyamjvs/disable-service-e2e-for-large-cluster
Automatic merge from submit-queue (batch tested with PRs 46581, 55426, 54849). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). Disable service e2e test related to LB for huge clusters Based on https://github.com/kubernetes/kubernetes/issues/52495#issuecomment-343263564 /cc @MrHohn |
| c7644dd104 | Merge pull request #46581 from m1093782566/fix-net-perf
Automatic merge from submit-queue (batch tested with PRs 46581, 55426, 54849). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). fix newline in raw string in e2e net perf case **Which issue this PR fixes** fixes #46083 |
| 0a33cec59a | Merge pull request #54092 from vmware/volume_perf_test
Automatic merge from submit-queue (batch tested with PRs 55265, 54092, 55353, 53733, 55385). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>. E2E Performance test to print latency numbers for vsphere volume lifecycle operations **What this PR does / why we need it**: This PR introduces test that prints latency numbers for volume lifecycle operations. The operations that are evaluated are: 1. Create n number of PVCs 2. Create pods with these PVCs and ensure pods are in ready state 3. Delete pods 4. Delete the PVCs **Which issue this PR fixes** : fixes vmware#292 **Special notes for your reviewer**: 1. This PR has some duplicate code changes from existing open PRs to add e2e tests. If those PRs are merged before, I ll rebase this PR to avoid redundant changes. 2. Following are the test logs with total number of volumes as 12, volumes per pod as 4 and total iterations of test to be 3. <details> Test logs: ``` pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance' flag provided but not defined: -check-version-skew Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build041717622/command-line-arguments/_obj/exe/e2e: -get go get -u kubetest if old or not installed (default true) -old duration Consider kubetest old if it exceeds this (default 24h0m0s) 2017/10/16 15:11:29 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest 2017/10/16 15:11:29 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS] 2017/10/16 15:11:29 e2e.go:57: The separator is required to use --get or --old flags 2017/10/16 15:11:29 e2e.go:58: The -- flag separator also suppresses this message 2017/10/16 15:11:29 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance... 2017/10/16 15:11:29 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version 2017/10/16 15:11:29 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 280.313212ms 2017/10/16 15:11:29 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh Skeleton Provider: prepare-e2e not implemented Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} 2017/10/16 15:11:30 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 156.135002ms 2017/10/16 15:11:30 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance Conformance test: not doing test setup. 
Oct 16 15:11:30.867: INFO: Overriding default scale value of zero to 1 Oct 16 15:11:30.867: INFO: Overriding default milliseconds value of zero to 5000 I1016 15:11:30.981146 6068 e2e.go:383] Starting e2e run "f687717b-b2be-11e7-b207-784f435ee632" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1508191890 - Will randomize all specs Will run 1 of 706 specs Oct 16 15:11:31.007: INFO: >>> kubeConfig: /tmp/kube199.json Oct 16 15:11:31.018: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Oct 16 15:11:31.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 16 15:11:31.155: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 16 15:11:31.155: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Oct 16 15:11:31.163: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Oct 16 15:11:31.163: INFO: Dumping network health container logs from all nodes... Oct 16 15:11:31.177: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty Oct 16 15:11:31.181: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] vcp-performance vcp performance tests /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99 [BeforeEach] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133 STEP: Creating a kubernetes client Oct 16 15:11:31.183: INFO: >>> kubeConfig: /tmp/kube199.json STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68 [It] vcp performance tests /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99 STEP: Creating Storage Class : sc-default STEP: Creating Storage Class : sc-vsan STEP: Creating Storage Class : sc-spbm STEP: Creating Storage Class : sc-user-specified-ds STEP: Creating 12 PVCs Oct 16 15:11:31.708: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5rrtp to have phase Bound Oct 16 15:11:31.718: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:33.730: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:35.737: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:37.747: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:39.753: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:41.763: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:43.774: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:45.814: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:47.839: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. 
Oct 16 15:11:49.852: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:51.869: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:53.877: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:55.888: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:57.896: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:11:59.904: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:01.916: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:03.941: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:05.947: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:07.957: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:09.985: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:12.002: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:14.009: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:16.017: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:18.026: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:20.034: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:22.096: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:24.116: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:26.124: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:28.134: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:30.147: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:32.153: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:34.162: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:36.177: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:38.185: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:40.193: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:42.203: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:44.210: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:46.217: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:48.227: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:50.236: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:52.242: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:54.258: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:12:56.268: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. 
Oct 16 15:12:58.290: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:00.304: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:02.321: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:04.330: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:06.338: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:08.345: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:10.351: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:12.367: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:14.384: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:16.394: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:18.410: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:20.421: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:22.430: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:24.439: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:26.448: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:28.465: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:30.473: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:32.482: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:34.490: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:36.500: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:38.510: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:40.517: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. Oct 16 15:13:42.527: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound. 
^C2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal 2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal 2017/10/16 15:13:43 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 2m13.976765704s 2017/10/16 15:13:43 main.go:260: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance: signal: killed] 2017/10/16 15:13:43 e2e.go:79: err: exit status 1 exit status 1 pshahzeb-m01:kubernetes_2 pshahzeb$ pshahzeb-m01:kubernetes_2 pshahzeb$ pshahzeb-m01:kubernetes_2 pshahzeb$ make +++ [1016 15:14:25] Building the toolchain targets: k8s.io/kubernetes/hack/cmd/teststale k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata +++ [1016 15:14:25] Generating bindata: test/e2e/generated/gobindata_util.go ~/k8s/kubernetes_2 ~/k8s/kubernetes_2/test/e2e/generated ~/k8s/kubernetes_2/test/e2e/generated +++ [1016 15:14:26] Building go targets for darwin/amd64: cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/cloud-controller-manager cmd/kubelet cmd/kubeadm cmd/hyperkube vendor/k8s.io/kube-aggregator vendor/k8s.io/apiextensions-apiserver plugin/cmd/kube-scheduler cmd/kubectl federation/cmd/kubefed cmd/gendocs cmd/genkubedocs cmd/genman cmd/genyaml cmd/genswaggertypedocs cmd/linkcheck federation/cmd/genfeddocs vendor/github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test cmd/kubemark vendor/github.com/onsi/ginkgo/ginkgo cmd/gke-certificates-controller pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance' flag provided but not defined: -check-version-skew Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build763038738/command-line-arguments/_obj/exe/e2e: -get go get -u kubetest if old or not installed (default true) -old duration Consider kubetest old if it exceeds this (default 24h0m0s) 2017/10/16 15:16:03 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest 2017/10/16 15:16:03 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS] 2017/10/16 15:16:03 e2e.go:57: The separator is required to use --get or --old flags 2017/10/16 15:16:03 e2e.go:58: The -- flag separator also suppresses this message 2017/10/16 15:16:03 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance... 
2017/10/16 15:16:03 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version 2017/10/16 15:16:03 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 163.149145ms 2017/10/16 15:16:03 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh Skeleton Provider: prepare-e2e not implemented Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} 2017/10/16 15:16:03 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 168.158343ms 2017/10/16 15:16:03 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance Conformance test: not doing test setup. Oct 16 15:16:04.325: INFO: Overriding default scale value of zero to 1 Oct 16 15:16:04.325: INFO: Overriding default milliseconds value of zero to 5000 I1016 15:16:04.425919 8714 e2e.go:383] Starting e2e run "9984ec93-b2bf-11e7-810d-784f435ee632" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1508192163 - Will randomize all specs Will run 1 of 706 specs Oct 16 15:16:04.443: INFO: >>> kubeConfig: /tmp/kube199.json Oct 16 15:16:04.453: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable Oct 16 15:16:04.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 16 15:16:04.598: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 16 15:16:04.598: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Oct 16 15:16:04.607: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller] Oct 16 15:16:04.607: INFO: Dumping network health container logs from all nodes... 
Oct 16 15:16:04.626: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty Oct 16 15:16:04.631: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] vcp-performance vcp performance tests /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99 [BeforeEach] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133 STEP: Creating a kubernetes client Oct 16 15:16:04.632: INFO: >>> kubeConfig: /tmp/kube199.json STEP: Building a namespace api object STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68 [It] vcp performance tests /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99 STEP: Creating Storage Class : sc-default STEP: Creating Storage Class : sc-vsan STEP: Creating Storage Class : sc-spbm STEP: Creating Storage Class : sc-user-specified-ds STEP: Creating 12 PVCs Oct 16 15:16:05.313: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-l9tg4 to have phase Bound Oct 16 15:16:05.359: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound. Oct 16 15:16:07.381: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound. Oct 16 15:16:09.389: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound. Oct 16 15:16:11.404: INFO: PersistentVolumeClaim pvc-l9tg4 found and phase=Bound (6.090428509s) Oct 16 15:16:11.462: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-j9m85 to have phase Bound Oct 16 15:16:11.476: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound. Oct 16 15:16:13.489: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound. Oct 16 15:16:15.502: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound. Oct 16 15:16:17.509: INFO: PersistentVolumeClaim pvc-j9m85 found and phase=Bound (6.046381507s) Oct 16 15:16:17.543: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mc77p to have phase Bound Oct 16 15:16:17.558: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. Oct 16 15:16:19.592: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. Oct 16 15:16:21.598: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. Oct 16 15:16:23.609: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. Oct 16 15:16:25.618: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. Oct 16 15:16:27.655: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound. 
Oct 16 15:16:29.699: INFO: PersistentVolumeClaim pvc-mc77p found and phase=Bound (12.155659079s) Oct 16 15:16:29.801: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2j86v to have phase Bound Oct 16 15:16:29.815: INFO: PersistentVolumeClaim pvc-2j86v found and phase=Bound (14.767532ms) Oct 16 15:16:29.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-q7rsq to have phase Bound Oct 16 15:16:29.882: INFO: PersistentVolumeClaim pvc-q7rsq found but phase is Pending instead of Bound. Oct 16 15:16:31.896: INFO: PersistentVolumeClaim pvc-q7rsq found and phase=Bound (2.048751822s) Oct 16 15:16:31.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qsh8l to have phase Bound Oct 16 15:16:31.943: INFO: PersistentVolumeClaim pvc-qsh8l found and phase=Bound (14.944175ms) Oct 16 15:16:31.975: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-52pcj to have phase Bound Oct 16 15:16:31.993: INFO: PersistentVolumeClaim pvc-52pcj found and phase=Bound (17.704673ms) Oct 16 15:16:32.021: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-v5x89 to have phase Bound Oct 16 15:16:32.043: INFO: PersistentVolumeClaim pvc-v5x89 found and phase=Bound (21.44398ms) Oct 16 15:16:32.073: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-f9pnm to have phase Bound Oct 16 15:16:32.096: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound. Oct 16 15:16:34.163: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound. Oct 16 15:16:36.174: INFO: PersistentVolumeClaim pvc-f9pnm found and phase=Bound (4.100911147s) Oct 16 15:16:36.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-m5fqt to have phase Bound Oct 16 15:16:36.239: INFO: PersistentVolumeClaim pvc-m5fqt found and phase=Bound (14.819033ms) Oct 16 15:16:36.284: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mbsvx to have phase Bound Oct 16 15:16:36.302: INFO: PersistentVolumeClaim pvc-mbsvx found and phase=Bound (18.02845ms) Oct 16 15:16:36.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-s4sr2 to have phase Bound Oct 16 15:16:36.352: INFO: PersistentVolumeClaim pvc-s4sr2 found and phase=Bound (17.921955ms) STEP: Creating pod to attach PVs to the node Oct 16 15:17:57.069: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:17:57.397: INFO: stderr: "" Oct 16 15:17:57.397: INFO: stdout: "" Oct 16 15:17:57.527: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:17:57.836: INFO: stderr: "" Oct 16 15:17:57.836: INFO: stdout: "" Oct 16 15:17:57.981: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:17:58.290: INFO: stderr: "" Oct 16 15:17:58.290: INFO: stdout: "" Oct 16 15:17:58.421: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:17:58.755: INFO: stderr: "" Oct 16 15:17:58.755: INFO: 
stdout: "" Oct 16 15:17:58.884: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:17:59.188: INFO: stderr: "" Oct 16 15:17:59.188: INFO: stdout: "" Oct 16 15:17:59.287: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:17:59.602: INFO: stderr: "" Oct 16 15:17:59.602: INFO: stdout: "" Oct 16 15:17:59.721: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:18:00.101: INFO: stderr: "" Oct 16 15:18:00.101: INFO: stdout: "" Oct 16 15:18:00.265: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:18:00.611: INFO: stderr: "" Oct 16 15:18:00.611: INFO: stdout: "" Oct 16 15:18:00.720: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:18:01.092: INFO: stderr: "" Oct 16 15:18:01.092: INFO: stdout: "" Oct 16 15:18:01.212: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:18:01.589: INFO: stderr: "" Oct 16 15:18:01.589: INFO: stdout: "" Oct 16 15:18:01.694: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:18:02.023: INFO: stderr: "" Oct 16 15:18:02.023: INFO: stdout: "" Oct 16 15:18:02.502: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:18:02.805: INFO: stderr: "" Oct 16 15:18:02.805: INFO: stdout: "" STEP: Deleting pods Oct 16 15:18:02.807: INFO: Deleting pod "pvc-tester-hrfpv" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:18:02.842: INFO: Wait up to 5m0s for pod "pvc-tester-hrfpv" to be fully deleted Oct 16 15:18:42.875: INFO: Deleting pod "pvc-tester-vkgvj" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:18:42.913: INFO: Wait up to 5m0s for pod "pvc-tester-vkgvj" to be fully deleted Oct 16 15:19:24.937: INFO: Deleting pod "pvc-tester-wvnrg" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:19:24.971: INFO: Wait up to 5m0s for pod "pvc-tester-wvnrg" to be fully deleted Oct 16 15:19:56.990: INFO: Deleting pod "pvc-tester-vdb6s" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:19:57.025: INFO: Wait up to 5m0s for pod "pvc-tester-vdb6s" to be fully deleted Oct 16 15:20:41.866: INFO: Volume are successfully detached from 
all the nodes: map[kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a1d277f-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a21e539-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a287a26-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99f9f244-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fe7a20-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fff232-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a033865-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0813e3-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0a963e-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0f575d-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a12e997-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a17cfa2-b2bf-11e7-aeb5-0050569c38f9.vmdk]] STEP: Deleting the PVCs Oct 16 15:20:41.872: INFO: Deleting PersistentVolumeClaim "pvc-l9tg4" Oct 16 15:20:41.919: INFO: Deleting PersistentVolumeClaim "pvc-j9m85" Oct 16 15:20:41.975: INFO: Deleting PersistentVolumeClaim "pvc-mc77p" Oct 16 15:20:42.027: INFO: Deleting PersistentVolumeClaim "pvc-2j86v" Oct 16 15:20:42.082: INFO: Deleting PersistentVolumeClaim "pvc-q7rsq" Oct 16 15:20:42.147: INFO: Deleting PersistentVolumeClaim "pvc-qsh8l" Oct 16 15:20:42.224: INFO: Deleting PersistentVolumeClaim "pvc-52pcj" Oct 16 15:20:42.259: INFO: Deleting PersistentVolumeClaim "pvc-v5x89" Oct 16 15:20:42.316: INFO: Deleting PersistentVolumeClaim "pvc-f9pnm" Oct 16 15:20:42.369: INFO: Deleting PersistentVolumeClaim "pvc-m5fqt" Oct 16 15:20:42.409: INFO: Deleting PersistentVolumeClaim "pvc-mbsvx" Oct 16 15:20:42.448: INFO: Deleting PersistentVolumeClaim "pvc-s4sr2" STEP: Creating 12 PVCs Oct 16 15:20:42.807: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85px8 to have phase Bound Oct 16 15:20:42.832: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound. Oct 16 15:20:44.845: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound. Oct 16 15:20:46.943: INFO: PersistentVolumeClaim pvc-85px8 found and phase=Bound (4.13527333s) Oct 16 15:20:47.032: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-npbn8 to have phase Bound Oct 16 15:20:47.048: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:20:49.086: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:20:51.097: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:20:53.108: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:20:55.128: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:20:57.148: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. 
Oct 16 15:20:59.160: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:01.172: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:03.185: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:05.194: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:07.223: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:09.239: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound. Oct 16 15:21:11.261: INFO: PersistentVolumeClaim pvc-npbn8 found and phase=Bound (24.228554172s) Oct 16 15:21:11.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ts6b8 to have phase Bound Oct 16 15:21:11.298: INFO: PersistentVolumeClaim pvc-ts6b8 found and phase=Bound (12.795195ms) Oct 16 15:21:11.325: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-hqb5d to have phase Bound Oct 16 15:21:11.336: INFO: PersistentVolumeClaim pvc-hqb5d found and phase=Bound (11.085933ms) Oct 16 15:21:11.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pzlmw to have phase Bound Oct 16 15:21:11.374: INFO: PersistentVolumeClaim pvc-pzlmw found and phase=Bound (14.757981ms) Oct 16 15:21:11.400: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4mljw to have phase Bound Oct 16 15:21:11.426: INFO: PersistentVolumeClaim pvc-4mljw found and phase=Bound (25.6641ms) Oct 16 15:21:11.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mz5br to have phase Bound Oct 16 15:21:11.462: INFO: PersistentVolumeClaim pvc-mz5br found and phase=Bound (11.515099ms) Oct 16 15:21:11.492: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-7fk8x to have phase Bound Oct 16 15:21:11.505: INFO: PersistentVolumeClaim pvc-7fk8x found and phase=Bound (13.387584ms) Oct 16 15:21:11.530: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-cb2dp to have phase Bound Oct 16 15:21:11.550: INFO: PersistentVolumeClaim pvc-cb2dp found and phase=Bound (19.152805ms) Oct 16 15:21:11.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85sqf to have phase Bound Oct 16 15:21:11.599: INFO: PersistentVolumeClaim pvc-85sqf found and phase=Bound (14.406407ms) Oct 16 15:21:11.632: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8zdmg to have phase Bound Oct 16 15:21:11.651: INFO: PersistentVolumeClaim pvc-8zdmg found and phase=Bound (18.063182ms) Oct 16 15:21:11.683: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-nntqr to have phase Bound Oct 16 15:21:11.694: INFO: PersistentVolumeClaim pvc-nntqr found and phase=Bound (10.97945ms) STEP: Creating pod to attach PVs to the node Oct 16 15:23:16.187: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:23:16.646: INFO: stderr: "" Oct 16 15:23:16.646: INFO: stdout: "" Oct 16 15:23:16.755: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:23:17.090: INFO: stderr: "" Oct 16 15:23:17.090: INFO: stdout: "" Oct 16 15:23:17.184: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json 
exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:23:17.509: INFO: stderr: "" Oct 16 15:23:17.510: INFO: stdout: "" Oct 16 15:23:17.606: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:23:17.910: INFO: stderr: "" Oct 16 15:23:17.910: INFO: stdout: "" Oct 16 15:23:18.007: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:23:18.324: INFO: stderr: "" Oct 16 15:23:18.324: INFO: stdout: "" Oct 16 15:23:18.417: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:23:18.718: INFO: stderr: "" Oct 16 15:23:18.719: INFO: stdout: "" Oct 16 15:23:18.818: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:23:19.137: INFO: stderr: "" Oct 16 15:23:19.137: INFO: stdout: "" Oct 16 15:23:19.244: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:23:19.556: INFO: stderr: "" Oct 16 15:23:19.556: INFO: stdout: "" Oct 16 15:23:19.638: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:23:19.961: INFO: stderr: "" Oct 16 15:23:19.961: INFO: stdout: "" Oct 16 15:23:20.060: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:23:20.365: INFO: stderr: "" Oct 16 15:23:20.365: INFO: stdout: "" Oct 16 15:23:20.464: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:23:20.837: INFO: stderr: "" Oct 16 15:23:20.838: INFO: stdout: "" Oct 16 15:23:20.948: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:23:21.258: INFO: stderr: "" Oct 16 15:23:21.258: INFO: stdout: "" STEP: Deleting pods Oct 16 15:23:21.258: INFO: Deleting pod "pvc-tester-dpsht" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:23:21.299: INFO: Wait up to 5m0s for pod "pvc-tester-dpsht" to be fully deleted Oct 16 15:24:03.361: INFO: Deleting pod "pvc-tester-kt8wp" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:24:03.397: INFO: Wait up to 5m0s 
for pod "pvc-tester-kt8wp" to be fully deleted Oct 16 15:24:45.415: INFO: Deleting pod "pvc-tester-lckz2" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:24:45.452: INFO: Wait up to 5m0s for pod "pvc-tester-lckz2" to be fully deleted Oct 16 15:25:23.476: INFO: Deleting pod "pvc-tester-vrjxc" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:25:23.510: INFO: Wait up to 5m0s for pod "pvc-tester-vrjxc" to be fully deleted Oct 16 15:26:07.784: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f7e96b8-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f825cec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8627c5-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f89ca32-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8cd95e-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f900995-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6a76ec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6d2d17-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6f2a1a-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f72bfae-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f760aab-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f791671-b2c0-11e7-aeb5-0050569c38f9.vmdk]] STEP: Deleting the PVCs Oct 16 15:26:07.784: INFO: Deleting PersistentVolumeClaim "pvc-85px8" Oct 16 15:26:07.854: INFO: Deleting PersistentVolumeClaim "pvc-npbn8" Oct 16 15:26:07.900: INFO: Deleting PersistentVolumeClaim "pvc-ts6b8" Oct 16 15:26:07.954: INFO: Deleting PersistentVolumeClaim "pvc-hqb5d" Oct 16 15:26:08.003: INFO: Deleting PersistentVolumeClaim "pvc-pzlmw" Oct 16 15:26:08.044: INFO: Deleting PersistentVolumeClaim "pvc-4mljw" Oct 16 15:26:08.090: INFO: Deleting PersistentVolumeClaim "pvc-mz5br" Oct 16 15:26:08.130: INFO: Deleting PersistentVolumeClaim "pvc-7fk8x" Oct 16 15:26:08.183: INFO: Deleting PersistentVolumeClaim "pvc-cb2dp" Oct 16 15:26:08.230: INFO: Deleting PersistentVolumeClaim "pvc-85sqf" Oct 16 15:26:08.282: INFO: Deleting PersistentVolumeClaim "pvc-8zdmg" Oct 16 15:26:08.337: INFO: Deleting PersistentVolumeClaim "pvc-nntqr" STEP: Creating 12 PVCs Oct 16 15:26:08.691: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwmql to have phase Bound Oct 16 15:26:08.716: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound. Oct 16 15:26:10.732: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound. Oct 16 15:26:12.754: INFO: PersistentVolumeClaim pvc-jwmql found and phase=Bound (4.062803231s) Oct 16 15:26:12.789: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jhrg7 to have phase Bound Oct 16 15:26:12.801: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. 
Oct 16 15:26:14.817: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:16.834: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:18.854: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:20.871: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:22.888: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:24.901: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:26.918: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:28.929: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:30.941: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:32.958: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:34.976: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound. Oct 16 15:26:37.013: INFO: PersistentVolumeClaim pvc-jhrg7 found and phase=Bound (24.222741938s) Oct 16 15:26:37.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-lvvkl to have phase Bound Oct 16 15:26:37.055: INFO: PersistentVolumeClaim pvc-lvvkl found and phase=Bound (12.935683ms) Oct 16 15:26:37.078: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-bgkkc to have phase Bound Oct 16 15:26:37.088: INFO: PersistentVolumeClaim pvc-bgkkc found and phase=Bound (9.861689ms) Oct 16 15:26:37.109: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qt2lv to have phase Bound Oct 16 15:26:37.126: INFO: PersistentVolumeClaim pvc-qt2lv found and phase=Bound (17.393667ms) Oct 16 15:26:37.147: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pgs9s to have phase Bound Oct 16 15:26:37.158: INFO: PersistentVolumeClaim pvc-pgs9s found but phase is Pending instead of Bound. 
Oct 16 15:26:39.171: INFO: PersistentVolumeClaim pvc-pgs9s found and phase=Bound (2.023756794s) Oct 16 15:26:39.217: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8h942 to have phase Bound Oct 16 15:26:39.249: INFO: PersistentVolumeClaim pvc-8h942 found and phase=Bound (32.347782ms) Oct 16 15:26:39.282: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-phtvg to have phase Bound Oct 16 15:26:39.296: INFO: PersistentVolumeClaim pvc-phtvg found and phase=Bound (13.940285ms) Oct 16 15:26:39.321: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ldv2f to have phase Bound Oct 16 15:26:39.333: INFO: PersistentVolumeClaim pvc-ldv2f found and phase=Bound (11.888903ms) Oct 16 15:26:39.360: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4v9hf to have phase Bound Oct 16 15:26:39.375: INFO: PersistentVolumeClaim pvc-4v9hf found and phase=Bound (14.230796ms) Oct 16 15:26:39.403: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jkfg5 to have phase Bound Oct 16 15:26:39.419: INFO: PersistentVolumeClaim pvc-jkfg5 found and phase=Bound (15.47811ms) Oct 16 15:26:39.449: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-87dwp to have phase Bound Oct 16 15:26:39.463: INFO: PersistentVolumeClaim pvc-87dwp found and phase=Bound (13.680898ms) STEP: Creating pod to attach PVs to the node Oct 16 15:28:08.033: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:28:08.507: INFO: stderr: "" Oct 16 15:28:08.507: INFO: stdout: "" Oct 16 15:28:08.609: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:28:08.917: INFO: stderr: "" Oct 16 15:28:08.917: INFO: stdout: "" Oct 16 15:28:09.019: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:28:09.342: INFO: stderr: "" Oct 16 15:28:09.342: INFO: stdout: "" Oct 16 15:28:09.432: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:28:09.760: INFO: stderr: "" Oct 16 15:28:09.760: INFO: stdout: "" Oct 16 15:28:09.847: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:28:10.164: INFO: stderr: "" Oct 16 15:28:10.164: INFO: stdout: "" Oct 16 15:28:10.259: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:28:10.576: INFO: stderr: "" Oct 16 15:28:10.576: INFO: stdout: "" Oct 16 15:28:10.681: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- 
/bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:28:11.000: INFO: stderr: "" Oct 16 15:28:11.000: INFO: stdout: "" Oct 16 15:28:11.086: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:28:11.383: INFO: stderr: "" Oct 16 15:28:11.383: INFO: stdout: "" Oct 16 15:28:11.486: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:28:11.782: INFO: stderr: "" Oct 16 15:28:11.782: INFO: stdout: "" Oct 16 15:28:11.888: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt' Oct 16 15:28:12.207: INFO: stderr: "" Oct 16 15:28:12.207: INFO: stdout: "" Oct 16 15:28:12.315: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt' Oct 16 15:28:12.634: INFO: stderr: "" Oct 16 15:28:12.634: INFO: stdout: "" Oct 16 15:28:12.778: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt' Oct 16 15:28:13.113: INFO: stderr: "" Oct 16 15:28:13.113: INFO: stdout: "" STEP: Deleting pods Oct 16 15:28:13.113: INFO: Deleting pod "pvc-tester-n68rp" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:28:13.157: INFO: Wait up to 5m0s for pod "pvc-tester-n68rp" to be fully deleted Oct 16 15:28:53.195: INFO: Deleting pod "pvc-tester-qm7w8" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:28:53.224: INFO: Wait up to 5m0s for pod "pvc-tester-qm7w8" to be fully deleted Oct 16 15:29:35.246: INFO: Deleting pod "pvc-tester-jslwg" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:29:35.279: INFO: Wait up to 5m0s for pod "pvc-tester-jslwg" to be fully deleted Oct 16 15:30:07.312: INFO: Deleting pod "pvc-tester-mcqqq" in namespace "e2e-tests-vcp-performance-lfrbk" Oct 16 15:30:07.357: INFO: Wait up to 5m0s for pod "pvc-tester-mcqqq" to be fully deleted Oct 16 15:31:03.595: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01aaa147-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ae1953-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b03dec-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b2ea3b-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b76412-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b8de3d-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01bd6a83-b2c1-11e7-aeb5-0050569c38f9.vmdk 
[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c1b249-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c53dd9-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c941ba-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01caec5e-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ce2be9-b2c1-11e7-aeb5-0050569c38f9.vmdk]] STEP: Deleting the PVCs Oct 16 15:31:03.595: INFO: Deleting PersistentVolumeClaim "pvc-jwmql" Oct 16 15:31:03.641: INFO: Deleting PersistentVolumeClaim "pvc-jhrg7" Oct 16 15:31:03.681: INFO: Deleting PersistentVolumeClaim "pvc-lvvkl" Oct 16 15:31:03.724: INFO: Deleting PersistentVolumeClaim "pvc-bgkkc" Oct 16 15:31:03.771: INFO: Deleting PersistentVolumeClaim "pvc-qt2lv" Oct 16 15:31:03.833: INFO: Deleting PersistentVolumeClaim "pvc-pgs9s" Oct 16 15:31:03.887: INFO: Deleting PersistentVolumeClaim "pvc-8h942" Oct 16 15:31:04.047: INFO: Deleting PersistentVolumeClaim "pvc-phtvg" Oct 16 15:31:04.089: INFO: Deleting PersistentVolumeClaim "pvc-ldv2f" Oct 16 15:31:04.153: INFO: Deleting PersistentVolumeClaim "pvc-4v9hf" Oct 16 15:31:04.211: INFO: Deleting PersistentVolumeClaim "pvc-jkfg5" Oct 16 15:31:04.263: INFO: Deleting PersistentVolumeClaim "pvc-87dwp" Oct 16 15:31:04.317: INFO: Average latency for below operations Oct 16 15:31:04.317: INFO: Creating 12 PVCs and waiting for bound phase: 30576919 microseconds Oct 16 15:31:04.317: INFO: Creating 4 Pod: 97668230 microseconds Oct 16 15:31:04.317: INFO: Deleting 4 Pod and waiting for disk to be detached: 154930158 microseconds Oct 16 15:31:04.317: INFO: Deleting 12 PVCs: 660074 microseconds [AfterEach] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134 Oct 16 15:31:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-vcp-performance-lfrbk" for this suite. Oct 16 15:31:19.156: INFO: namespace: e2e-tests-vcp-performance-lfrbk, resource: bindings, ignored listing per whitelist Oct 16 15:31:19.297: INFO: namespace e2e-tests-vcp-performance-lfrbk deletion completed in 14.690943637s • [SLOW TEST:914.654 seconds] [sig-storage] vcp-performance /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22 vcp performance tests /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:31:19.305: INFO: Running AfterSuite actions on all node Oct 16 15:31:19.305: INFO: Running AfterSuite actions on node 1 Ran 1 of 706 Specs in 914.851 seconds SUCCESS! 
-- 1 Passed | 0 Failed | 0 Pending | 705 Skipped
PASS

Ginkgo ran 1 suite in 15m15.380170791s
Test Suite Passed
2017/10/16 15:31:19 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 15m15.901302911s
2017/10/16 15:31:19 e2e.go:81: Done
```

```
None
```
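The repeated "Waiting up to 5m0s for PersistentVolumeClaim ... to have phase Bound" lines in the log above come from a poll loop on the claim's status. The sketch below is only an illustration of that kind of wait using plain client-go; the `waitForPVCBound` name and the modern client-go signatures are assumptions, not the e2e framework's actual helper.

```go
// Package storageutil: a minimal, hypothetical sketch of the poll loop behind
// the "Waiting up to 5m0s for PersistentVolumeClaim ... to have phase Bound"
// log lines above. Not the e2e framework's own implementation.
package storageutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until it reports phase Bound or the timeout
// expires, logging the intermediate phase much like the e2e output above.
func waitForPVCBound(c kubernetes.Interface, ns, name string, poll, timeout time.Duration) error {
	return wait.PollImmediate(poll, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase == v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found and phase=Bound\n", name)
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}
```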
20e5b896e9 |
Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler (an illustrative sketch follows the list):
- shouldn't scale up when expendable pod is created
- should scale up when non expendable pod is created
- shouldn't scale up when expendable pod is preempted
- should scale down when expendable pod is running
- shouldn't scale down when non expendable pod is running
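In these cases, "expendable" refers to pods whose priority falls below the Cluster Autoscaler's configured cutoff, so the autoscaler neither scales up for them nor keeps nodes around solely to host them. The sketch below is hypothetical (not the test code from this commit): the PriorityClass name, the -100 value, the pause image, and the use of the scheduling.k8s.io/v1 API (which postdates this commit) are all assumptions for illustration.

```go
// Hypothetical illustration of an "expendable" pod: it references a
// PriorityClass whose value sits below the Cluster Autoscaler's priority
// cutoff, so the autoscaler should ignore it when making scaling decisions.
package caexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createExpendablePod(ctx context.Context, c kubernetes.Interface, ns string) error {
	// A negative priority keeps the pod below the autoscaler's cutoff.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "expendable"},
		Value:      -100,
	}
	if _, err := c.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "expendable-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			PriorityClassName: "expendable",
			Containers: []corev1.Container{
				{Name: "pause", Image: "registry.k8s.io/pause:3.9"},
			},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```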
ba313796f1 |
Fix influxdb e2e test failure.
In scalability testing, influxdb was recently disabled, but we still try to execute the corresponding test, so it fails every time. Skip the test if influxdb is disabled.
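A common way to implement this kind of fix is to probe for the optional component in a BeforeEach and skip the spec when it is absent. The sketch below is only an illustration under that assumption, using the current Ginkgo v2 import path and a made-up `hasInfluxdb` helper; it is not the actual change in this commit.

```go
// Illustrative only: skip a monitoring spec when influxdb is not deployed,
// so the suite does not fail on clusters where the addon is disabled.
package monitoringexample

import (
	"github.com/onsi/ginkgo/v2"
)

var _ = ginkgo.Describe("Monitoring", func() {
	ginkgo.BeforeEach(func() {
		if !hasInfluxdb() {
			// Skip marks the spec as skipped instead of failing it.
			ginkgo.Skip("influxdb is disabled in this cluster; skipping")
		}
	})

	ginkgo.It("should be able to query metrics from influxdb", func() {
		// test body elided for brevity
	})
})

// hasInfluxdb is a made-up stand-in for a real check (e.g. looking up the
// influxdb service in kube-system); hard-coded here to keep the sketch small.
func hasInfluxdb() bool { return false }
```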
954c97fe6d | add e2e test on the hostport predicates
36d16e0dbd | Add sig storage label to multizone static PV test
fc5a613c17 | Add MutatingWebhookConfiguration type
913721ebee | Disable service e2e on type and port change for huge clusters
9ddea83a2c | Rename ExternalAdmissionHookConfiguration to ValidatingWebhookConfiguration |