Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E test to verify pod failover during node power-off
**What this PR does / why we need it**:
This PR adds a test to verify the volume status after the node where the pod was provisioned is powered off and the pod fails over to a different node.
The test performs the following tasks (a minimal sketch of the failover wait is included after the list):
1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where the pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass
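For reference, here is a minimal sketch of how the failover wait (step 7) can be implemented with client-go, assuming 2017-era method signatures (no context argument on `List`); this is not the PR's actual test code:
```go
// Hedged sketch of the "wait for pod failover" step; not the PR's actual code.
// Assumes a 2017-era client-go where List takes no context argument.
package poweroff

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodFailover polls the pods selected by labelSelector until one is
// scheduled on a node other than poweredOffNode, or the timeout expires.
func waitForPodFailover(c kubernetes.Interface, ns, labelSelector, poweredOffNode string) (string, error) {
	newNode := ""
	err := wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return false, err
		}
		for _, pod := range pods.Items {
			if pod.Spec.NodeName != "" && pod.Spec.NodeName != poweredOffNode {
				newNode = pod.Spec.NodeName
				return true, nil
			}
		}
		fmt.Printf("Waiting for pod to be failed over from %q\n", poweredOffNode)
		return false, nil
	})
	return newNode, err
}
```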
**Which issue this PR fixes**:
Fixes https://github.com/vmware/kubernetes/issues/272
**Special notes for your reviewer**:
Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57: The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58: The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436 409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161 409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Webhook e2e test: PUT and PATCH operations
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Ref: https://github.com/kubernetes/features/issues/492
**Special notes for your reviewer**: ~depends on #55127~ (merged)
@kubernetes/sig-api-machinery-api-reviews
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Validate that PV capacity and PVC capacity requests are positive, greater than 0
**What this PR does / why we need it**: Zero (0) capacity PVs cause related pods to fail, and zero (0) capacity PVCs create zero (0) capacity PVs.
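As a hedged illustration of the kind of check this adds (not the PR's exact validation code), a capacity can be rejected when its `resource.Quantity` sign is not positive:
```go
// Sketch of a positive-capacity check using resource.Quantity; illustrative
// only, not the validation code added by this PR.
package validation

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validatePositiveCapacity rejects zero or negative storage capacities.
func validatePositiveCapacity(field string, q resource.Quantity) error {
	if q.Sign() <= 0 { // Sign() returns -1, 0, or +1
		return fmt.Errorf("%s: must be greater than zero, got %s", field, q.String())
	}
	return nil
}
```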
**Which issue(s) this PR fixes** :
Fixes #55553
**Special notes for your reviewer**:
**Release note**:
```release-note
Validate positive capacity for PVs and PVCs.
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add empty dir and host related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add empty dir and host related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds empty dir and host related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the empty dir and host based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, export certain functions and fields in
framework/resource_usage_gatherer.go so that it can be used in any
e2e test to track any specified pod resource usage with the specified
probe interval and duration.
**What this PR does / why we need it**:
We need to quantify the resource usage of the device plugin DaemonSet to make sure it can run reliably on nodes with GPUs.
We also want to measure GPU driver installer resource usage to track any unexpected resource consumption during driver installation.
For the latter part, see the related issue https://github.com/kubernetes/features/issues/368.
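A rough, hypothetical sketch of how such a gatherer could be driven from a test; the type, function, and field names below are illustrative, not the actual exported API of framework/resource_usage_gatherer.go:
```go
// Hypothetical usage sketch; StartGatherer/StopAndSummarize and the option
// names are illustrative, not the framework's real exported identifiers.
package example

import "time"

// GathererOptions describes what to probe and how often.
type GathererOptions struct {
	Pods          []string      // pods to probe, e.g. device-plugin pods
	ProbeInterval time.Duration // how often to sample CPU/memory
	Duration      time.Duration // how long to gather before summarizing
}

// ContainerUsage is one sampled data point per container.
type ContainerUsage struct {
	Name string
	Cpu  float64 // cores
	Mem  int64   // bytes
}

// Gatherer samples container CPU/memory usage at ProbeInterval and produces
// a percentile summary (e.g. the "50"/"100" buckets in the output below).
type Gatherer struct{ opts GathererOptions }

func StartGatherer(opts GathererOptions) *Gatherer { return &Gatherer{opts: opts} }

func (g *Gatherer) StopAndSummarize() map[string][]ContainerUsage {
	// ...collect samples and compute percentiles...
	return nil
}
```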
Example resource summary output:
Oct 6 12:35:07.289: INFO: Printing summary: ResourceUsageSummary
Oct 6 12:35:07.289: INFO: ResourceUsageSummary JSON
{
"100": [
{
"Name": "nvidia-device-plugin-6kqxp/nvidia-device-plugin",
"Cpu": 0.000507167,
"Mem": 2134016
},
{
"Name": "nvidia-device-plugin-6kqxp/nvidia-driver-installer",
"Cpu": 1.915508718,
"Mem": 663330816
},
{
"Name": "nvidia-device-plugin-l28zc/nvidia-device-plugin",
"Cpu": 0.000836256,
"Mem": 2211840
},
{
"Name": "nvidia-device-plugin-l28zc/nvidia-driver-installer",
"Cpu": 1.916886293,
"Mem": 691449856
},
{
"Name": "nvidia-device-plugin-xb4vh/nvidia-device-plugin",
"Cpu": 0.000515103,
"Mem": 2265088
},
{
"Name": "nvidia-device-plugin-xb4vh/nvidia-driver-installer",
"Cpu": 1.909435982,
"Mem": 832430080
}
],
"50": [
{
...
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 54005, 55127, 53850, 55486, 53440). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Validation webhook plugin converts objects to the external version before sending to webhooks
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
https://github.com/kubernetes/features/issues/492
**Special notes for your reviewer**:
**Release note**:
```release-note
The apiserver now sends externally versioned objects to the admission webhooks. Please update your webhooks to expect `admissionReview.spec.object.raw` to contain the serialized external version of the object.
```
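For webhook authors, a minimal decoding sketch under the assumption that the raw payload is a core/v1 object (AdmissionReview plumbing omitted; not code from this PR):
```go
// Minimal sketch: decode the serialized external object a webhook receives.
// Only the raw-bytes decoding step is shown; AdmissionReview handling is omitted.
package webhook

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
)

// decodePod unmarshals the externally-versioned raw object (e.g.
// admissionReview.spec.object.raw) into a v1 Pod.
func decodePod(raw []byte) (*corev1.Pod, error) {
	pod := &corev1.Pod{}
	if err := json.Unmarshal(raw, pod); err != nil {
		return nil, err
	}
	return pod, nil
}
```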
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add downward api and docker container conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add downward api and docker container conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds downward api and docker container related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the downward api and docker container based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add dns, configmap, and custom resource definition conformance
annotations.
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add dns, configmap, and custom resource definition related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds dns, configmap, and custom resource definition related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the dns, configmap, and custom resource definition based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support collecting logs for alternative container runtimes in e2e tests.
Fixes https://github.com/kubernetes/kubernetes/issues/55629.
Add support to collect logs for alternative container runtimes in e2e.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```
```release-note
none
```
/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
When calling the GKE API and gcloud, take into account
that clusters can be regional.
This currently uses MultiZonal as an indicator that the
cluster is regional, which is suboptimal, but considering
that our tests do not work with multizonal clusters
at the moment, there is no regression.
This should be changed once there is an indicator available
that the cluster is regional.
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use HPA permissions to read custom metrics in Custom Metrics e2e test
**What this PR does / why we need it**:
This PR fixes the e2e test for Stackdriver Custom Metrics on GKE. With PR https://github.com/kubernetes/kubernetes/pull/55387 it will also be necessary for the analogous test on GCE.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[Part 1] Remove docker dep in kubelet startup
**What this PR does / why we need it**:
Remove the dependency on docker during kubelet startup.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part 1 of #54090
**Special notes for your reviewer**:
Changes include:
1. Move docker client initialization into dockershim pkg.
2. Pass a docker `ClientConfig` from kubelet to dockershim.
3. Pass parameters needed by `FakeDockerClient` through `ClientConfig` to dockershim.
(TODO, the second part) Make dockershim tolerate dockerd being down; otherwise kubelet will still fail.
Please note that after this PR kubelet will still fail if dockerd is down; this will be fixed in the subsequent PR by making dockershim tolerate dockerd failure (initializing the docker client in a separate goroutine) and refactoring cgroup and log driver detection.
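A hedged sketch of the shape of such a configuration hand-off; the field names are illustrative and not necessarily the PR's actual `ClientConfig`:
```go
// Illustrative only: hypothetical field names showing the kind of configuration
// the kubelet would hand to dockershim instead of a ready-made docker client.
package dockershim

import "time"

type ClientConfig struct {
	DockerEndpoint            string        // e.g. unix:///var/run/docker.sock
	RequestTimeout            time.Duration // per-request timeout for docker calls
	ImagePullProgressDeadline time.Duration // cancel stalled image pulls
	// Fake-client knobs used by tests would also travel through here.
}
```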
**Release note**:
```release-note
Remove docker dependency during kubelet start up
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check duplicate NodePorts with protocols when updating services
**What this PR does / why we need it**:
As the title says.
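A hedged sketch of the idea (not the actual service validation code touched by this PR): a NodePort only counts as a duplicate if both the port number and the protocol collide.
```go
// Sketch of a duplicate-NodePort check keyed by (port, protocol); illustrative
// only, not the registry/service code changed by this PR.
package service

import corev1 "k8s.io/api/core/v1"

type portProtocol struct {
	port     int32
	protocol corev1.Protocol
}

// hasDuplicateNodePorts reports whether two ports in the spec reuse the same
// NodePort with the same protocol (TCP and UDP on the same port are allowed).
func hasDuplicateNodePorts(ports []corev1.ServicePort) bool {
	seen := map[portProtocol]bool{}
	for _, p := range ports {
		if p.NodePort == 0 {
			continue
		}
		key := portProtocol{port: p.NodePort, protocol: p.Protocol}
		if seen[key] {
			return true
		}
		seen[key] = true
	}
	return false
}
```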
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48579, fixes #54898, fixes #55327
**Special notes for your reviewer**:
/assign @freehan
/cc @cblecker
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52461, 55503). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
A few improvements to the conformance regtest
- Set OWNERS files to disallow parent approvers (doesn't work yet, but should be live next week).
- Document how to fix a failing test.
- Add a better error message.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add performance test phase timing export.
**What this PR does / why we need it**:
First step towards allowing us to get a quick overview of test length
via perf-dash.k8s.io.
**Release note**:
```release-note
NONE
```
@kubernetes/sig-scalability-feature-requests
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
missing the format string
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
missing the format string
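Illustrative only (the actual call site is not reproduced here): the general class of fix is supplying the format string together with its arguments.
```go
// Hypothetical illustration of a "missing format string" fix; not the actual
// call site changed by this PR.
package errfmt

import "fmt"

func reportError(podName string) error {
	// before (buggy): fmt.Errorf("failed to delete pod " + podName + ": %v")
	//   -> the %v verb has no corresponding argument.
	// after: pass the format string and its arguments together.
	return fmt.Errorf("failed to delete pod %q", podName)
}
```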
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53047, 54861, 55413, 55395, 55308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Switch internal scale type to autoscaling, enable apps/v1 scale subresources
xref #49504
* Switch workload internal scale type to autoscaling.Scale (internal-only change)
* Enable scale subresources for apps/v1 deployments, replicasets, statefulsets
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix influxdb e2e test failure.
In scalability testing influxdb was recently disabled, but we are still
trying to execute the corresponding test; as a result it fails all the time.
Skip the test if influxdb is disabled.
Fixes #54636
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler
This PR adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler:
- shouldn't scale up when expendable pod is created
- should scale up when non expendable pod is created
- shouldn't scale up when expendable pod is preempted
- should scale down when expendable pod is running
- shouldn't scale down when non expendable pod is running
Automatic merge from submit-queue (batch tested with PRs 55265, 54092, 55353, 53733, 55385). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E Performance test to print latency numbers for vsphere volume lifecycle operations
**What this PR does / why we need it**:
This PR introduces a test that prints latency numbers for volume lifecycle operations.
The operations that are evaluated are (a timing sketch follows the list):
1. Create n number of PVCs
2. Create pods with these PVCs and ensure pods are in ready state
3. Delete pods
4. Delete the PVCs
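A hedged sketch of how per-phase latency can be captured (this mirrors the idea, not the test's exact implementation):
```go
// Hedged sketch of per-phase latency measurement for the volume lifecycle
// phases listed above; illustrative only.
package vcpperf

import (
	"fmt"
	"time"
)

// timePhase runs one lifecycle phase (e.g. "create PVCs", "delete pods")
// and reports how long it took.
func timePhase(name string, phase func() error) (time.Duration, error) {
	start := time.Now()
	err := phase()
	elapsed := time.Since(start)
	fmt.Printf("%s took %v\n", name, elapsed)
	return elapsed, err
}
```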
**Which issue this PR fixes** : fixes vmware#292
**Special notes for your reviewer**:
1. This PR has some duplicate code changes from existing open PRs that add e2e tests. If those PRs are merged first, I'll rebase this PR to avoid redundant changes.
2. Following are the test logs with the total number of volumes as 12, volumes per pod as 4, and total test iterations as 3.
<details>
Test logs:
```
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build041717622/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:11:29 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:11:29 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:11:29 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:11:29 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:11:29 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:11:29 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:11:29 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 280.313212ms
2017/10/16 15:11:29 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:11:30 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 156.135002ms
2017/10/16 15:11:30 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:11:30.867: INFO: Overriding default scale value of zero to 1
Oct 16 15:11:30.867: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:11:30.981146 6068 e2e.go:383] Starting e2e run "f687717b-b2be-11e7-b207-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508191890 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:11:31.007: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:11:31.018: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:11:31.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:11:31.155: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:11:31.155: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:11:31.163: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:11:31.163: INFO: Dumping network health container logs from all nodes...
Oct 16 15:11:31.177: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:11:31.181: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:11:31.183: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:11:31.708: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5rrtp to have phase Bound
Oct 16 15:11:31.718: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:33.730: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:35.737: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:37.747: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:39.753: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:41.763: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:43.774: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:45.814: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:47.839: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:49.852: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:51.869: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:53.877: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:55.888: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:57.896: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:59.904: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:01.916: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:03.941: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:05.947: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:07.957: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:09.985: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:12.002: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:14.009: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:16.017: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:18.026: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:20.034: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:22.096: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:24.116: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:26.124: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:28.134: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:30.147: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:32.153: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:34.162: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:36.177: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:38.185: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:40.193: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:42.203: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:44.210: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:46.217: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:48.227: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:50.236: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:52.242: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:54.258: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:56.268: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:58.290: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:00.304: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:02.321: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:04.330: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:06.338: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:08.345: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:10.351: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:12.367: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:14.384: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:16.394: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:18.410: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:20.421: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:22.430: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:24.439: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:26.448: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:28.465: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:30.473: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:32.482: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:34.490: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:36.500: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:38.510: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:40.517: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:42.527: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
^C2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 2m13.976765704s
2017/10/16 15:13:43 main.go:260: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance: signal: killed]
2017/10/16 15:13:43 e2e.go:79: err: exit status 1
exit status 1
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$ make
+++ [1016 15:14:25] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [1016 15:14:25] Generating bindata:
test/e2e/generated/gobindata_util.go
~/k8s/kubernetes_2 ~/k8s/kubernetes_2/test/e2e/generated
~/k8s/kubernetes_2/test/e2e/generated
+++ [1016 15:14:26] Building go targets for darwin/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/cloud-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/hyperkube
vendor/k8s.io/kube-aggregator
vendor/k8s.io/apiextensions-apiserver
plugin/cmd/kube-scheduler
cmd/kubectl
federation/cmd/kubefed
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
federation/cmd/genfeddocs
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
cmd/gke-certificates-controller
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build763038738/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:16:03 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:16:03 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:16:03 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:16:03 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:16:03 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:16:03 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:16:03 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 163.149145ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:16:03 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 168.158343ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:16:04.325: INFO: Overriding default scale value of zero to 1
Oct 16 15:16:04.325: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:16:04.425919 8714 e2e.go:383] Starting e2e run "9984ec93-b2bf-11e7-810d-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508192163 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:16:04.443: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:16:04.453: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:16:04.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:16:04.598: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:16:04.598: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:16:04.607: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:16:04.607: INFO: Dumping network health container logs from all nodes...
Oct 16 15:16:04.626: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:16:04.631: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:16:04.632: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:16:05.313: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-l9tg4 to have phase Bound
Oct 16 15:16:05.359: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:07.381: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:09.389: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:11.404: INFO: PersistentVolumeClaim pvc-l9tg4 found and phase=Bound (6.090428509s)
Oct 16 15:16:11.462: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-j9m85 to have phase Bound
Oct 16 15:16:11.476: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:13.489: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:15.502: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:17.509: INFO: PersistentVolumeClaim pvc-j9m85 found and phase=Bound (6.046381507s)
Oct 16 15:16:17.543: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mc77p to have phase Bound
Oct 16 15:16:17.558: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:19.592: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:21.598: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:23.609: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:25.618: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:27.655: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:29.699: INFO: PersistentVolumeClaim pvc-mc77p found and phase=Bound (12.155659079s)
Oct 16 15:16:29.801: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2j86v to have phase Bound
Oct 16 15:16:29.815: INFO: PersistentVolumeClaim pvc-2j86v found and phase=Bound (14.767532ms)
Oct 16 15:16:29.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-q7rsq to have phase Bound
Oct 16 15:16:29.882: INFO: PersistentVolumeClaim pvc-q7rsq found but phase is Pending instead of Bound.
Oct 16 15:16:31.896: INFO: PersistentVolumeClaim pvc-q7rsq found and phase=Bound (2.048751822s)
Oct 16 15:16:31.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qsh8l to have phase Bound
Oct 16 15:16:31.943: INFO: PersistentVolumeClaim pvc-qsh8l found and phase=Bound (14.944175ms)
Oct 16 15:16:31.975: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-52pcj to have phase Bound
Oct 16 15:16:31.993: INFO: PersistentVolumeClaim pvc-52pcj found and phase=Bound (17.704673ms)
Oct 16 15:16:32.021: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-v5x89 to have phase Bound
Oct 16 15:16:32.043: INFO: PersistentVolumeClaim pvc-v5x89 found and phase=Bound (21.44398ms)
Oct 16 15:16:32.073: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-f9pnm to have phase Bound
Oct 16 15:16:32.096: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:34.163: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:36.174: INFO: PersistentVolumeClaim pvc-f9pnm found and phase=Bound (4.100911147s)
Oct 16 15:16:36.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-m5fqt to have phase Bound
Oct 16 15:16:36.239: INFO: PersistentVolumeClaim pvc-m5fqt found and phase=Bound (14.819033ms)
Oct 16 15:16:36.284: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mbsvx to have phase Bound
Oct 16 15:16:36.302: INFO: PersistentVolumeClaim pvc-mbsvx found and phase=Bound (18.02845ms)
Oct 16 15:16:36.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-s4sr2 to have phase Bound
Oct 16 15:16:36.352: INFO: PersistentVolumeClaim pvc-s4sr2 found and phase=Bound (17.921955ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:17:57.069: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:57.397: INFO: stderr: ""
Oct 16 15:17:57.397: INFO: stdout: ""
Oct 16 15:17:57.527: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:57.836: INFO: stderr: ""
Oct 16 15:17:57.836: INFO: stdout: ""
Oct 16 15:17:57.981: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:58.290: INFO: stderr: ""
Oct 16 15:17:58.290: INFO: stdout: ""
Oct 16 15:17:58.421: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:58.755: INFO: stderr: ""
Oct 16 15:17:58.755: INFO: stdout: ""
Oct 16 15:17:58.884: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:59.188: INFO: stderr: ""
Oct 16 15:17:59.188: INFO: stdout: ""
Oct 16 15:17:59.287: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:59.602: INFO: stderr: ""
Oct 16 15:17:59.602: INFO: stdout: ""
Oct 16 15:17:59.721: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:00.101: INFO: stderr: ""
Oct 16 15:18:00.101: INFO: stdout: ""
Oct 16 15:18:00.265: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:00.611: INFO: stderr: ""
Oct 16 15:18:00.611: INFO: stdout: ""
Oct 16 15:18:00.720: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:01.092: INFO: stderr: ""
Oct 16 15:18:01.092: INFO: stdout: ""
Oct 16 15:18:01.212: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:01.589: INFO: stderr: ""
Oct 16 15:18:01.589: INFO: stdout: ""
Oct 16 15:18:01.694: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:02.023: INFO: stderr: ""
Oct 16 15:18:02.023: INFO: stdout: ""
Oct 16 15:18:02.502: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:02.805: INFO: stderr: ""
Oct 16 15:18:02.805: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:18:02.807: INFO: Deleting pod "pvc-tester-hrfpv" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:02.842: INFO: Wait up to 5m0s for pod "pvc-tester-hrfpv" to be fully deleted
Oct 16 15:18:42.875: INFO: Deleting pod "pvc-tester-vkgvj" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:42.913: INFO: Wait up to 5m0s for pod "pvc-tester-vkgvj" to be fully deleted
Oct 16 15:19:24.937: INFO: Deleting pod "pvc-tester-wvnrg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:24.971: INFO: Wait up to 5m0s for pod "pvc-tester-wvnrg" to be fully deleted
Oct 16 15:19:56.990: INFO: Deleting pod "pvc-tester-vdb6s" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:57.025: INFO: Wait up to 5m0s for pod "pvc-tester-vdb6s" to be fully deleted
Oct 16 15:20:41.866: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a1d277f-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a21e539-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a287a26-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99f9f244-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fe7a20-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fff232-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a033865-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0813e3-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0a963e-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0f575d-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a12e997-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a17cfa2-b2bf-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:20:41.872: INFO: Deleting PersistentVolumeClaim "pvc-l9tg4"
Oct 16 15:20:41.919: INFO: Deleting PersistentVolumeClaim "pvc-j9m85"
Oct 16 15:20:41.975: INFO: Deleting PersistentVolumeClaim "pvc-mc77p"
Oct 16 15:20:42.027: INFO: Deleting PersistentVolumeClaim "pvc-2j86v"
Oct 16 15:20:42.082: INFO: Deleting PersistentVolumeClaim "pvc-q7rsq"
Oct 16 15:20:42.147: INFO: Deleting PersistentVolumeClaim "pvc-qsh8l"
Oct 16 15:20:42.224: INFO: Deleting PersistentVolumeClaim "pvc-52pcj"
Oct 16 15:20:42.259: INFO: Deleting PersistentVolumeClaim "pvc-v5x89"
Oct 16 15:20:42.316: INFO: Deleting PersistentVolumeClaim "pvc-f9pnm"
Oct 16 15:20:42.369: INFO: Deleting PersistentVolumeClaim "pvc-m5fqt"
Oct 16 15:20:42.409: INFO: Deleting PersistentVolumeClaim "pvc-mbsvx"
Oct 16 15:20:42.448: INFO: Deleting PersistentVolumeClaim "pvc-s4sr2"
STEP: Creating 12 PVCs
Oct 16 15:20:42.807: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85px8 to have phase Bound
Oct 16 15:20:42.832: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:44.845: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:46.943: INFO: PersistentVolumeClaim pvc-85px8 found and phase=Bound (4.13527333s)
Oct 16 15:20:47.032: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-npbn8 to have phase Bound
Oct 16 15:20:47.048: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:49.086: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:51.097: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:53.108: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:55.128: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:57.148: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:59.160: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:01.172: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:03.185: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:05.194: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:07.223: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:09.239: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:11.261: INFO: PersistentVolumeClaim pvc-npbn8 found and phase=Bound (24.228554172s)
Oct 16 15:21:11.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ts6b8 to have phase Bound
Oct 16 15:21:11.298: INFO: PersistentVolumeClaim pvc-ts6b8 found and phase=Bound (12.795195ms)
Oct 16 15:21:11.325: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-hqb5d to have phase Bound
Oct 16 15:21:11.336: INFO: PersistentVolumeClaim pvc-hqb5d found and phase=Bound (11.085933ms)
Oct 16 15:21:11.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pzlmw to have phase Bound
Oct 16 15:21:11.374: INFO: PersistentVolumeClaim pvc-pzlmw found and phase=Bound (14.757981ms)
Oct 16 15:21:11.400: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4mljw to have phase Bound
Oct 16 15:21:11.426: INFO: PersistentVolumeClaim pvc-4mljw found and phase=Bound (25.6641ms)
Oct 16 15:21:11.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mz5br to have phase Bound
Oct 16 15:21:11.462: INFO: PersistentVolumeClaim pvc-mz5br found and phase=Bound (11.515099ms)
Oct 16 15:21:11.492: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-7fk8x to have phase Bound
Oct 16 15:21:11.505: INFO: PersistentVolumeClaim pvc-7fk8x found and phase=Bound (13.387584ms)
Oct 16 15:21:11.530: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-cb2dp to have phase Bound
Oct 16 15:21:11.550: INFO: PersistentVolumeClaim pvc-cb2dp found and phase=Bound (19.152805ms)
Oct 16 15:21:11.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85sqf to have phase Bound
Oct 16 15:21:11.599: INFO: PersistentVolumeClaim pvc-85sqf found and phase=Bound (14.406407ms)
Oct 16 15:21:11.632: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8zdmg to have phase Bound
Oct 16 15:21:11.651: INFO: PersistentVolumeClaim pvc-8zdmg found and phase=Bound (18.063182ms)
Oct 16 15:21:11.683: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-nntqr to have phase Bound
Oct 16 15:21:11.694: INFO: PersistentVolumeClaim pvc-nntqr found and phase=Bound (10.97945ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:23:16.187: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:16.646: INFO: stderr: ""
Oct 16 15:23:16.646: INFO: stdout: ""
Oct 16 15:23:16.755: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:17.090: INFO: stderr: ""
Oct 16 15:23:17.090: INFO: stdout: ""
Oct 16 15:23:17.184: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:17.509: INFO: stderr: ""
Oct 16 15:23:17.510: INFO: stdout: ""
Oct 16 15:23:17.606: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:17.910: INFO: stderr: ""
Oct 16 15:23:17.910: INFO: stdout: ""
Oct 16 15:23:18.007: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:18.324: INFO: stderr: ""
Oct 16 15:23:18.324: INFO: stdout: ""
Oct 16 15:23:18.417: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:18.718: INFO: stderr: ""
Oct 16 15:23:18.719: INFO: stdout: ""
Oct 16 15:23:18.818: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:19.137: INFO: stderr: ""
Oct 16 15:23:19.137: INFO: stdout: ""
Oct 16 15:23:19.244: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:19.556: INFO: stderr: ""
Oct 16 15:23:19.556: INFO: stdout: ""
Oct 16 15:23:19.638: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:19.961: INFO: stderr: ""
Oct 16 15:23:19.961: INFO: stdout: ""
Oct 16 15:23:20.060: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:20.365: INFO: stderr: ""
Oct 16 15:23:20.365: INFO: stdout: ""
Oct 16 15:23:20.464: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:20.837: INFO: stderr: ""
Oct 16 15:23:20.838: INFO: stdout: ""
Oct 16 15:23:20.948: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:21.258: INFO: stderr: ""
Oct 16 15:23:21.258: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:23:21.258: INFO: Deleting pod "pvc-tester-dpsht" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:23:21.299: INFO: Wait up to 5m0s for pod "pvc-tester-dpsht" to be fully deleted
Oct 16 15:24:03.361: INFO: Deleting pod "pvc-tester-kt8wp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:03.397: INFO: Wait up to 5m0s for pod "pvc-tester-kt8wp" to be fully deleted
Oct 16 15:24:45.415: INFO: Deleting pod "pvc-tester-lckz2" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:45.452: INFO: Wait up to 5m0s for pod "pvc-tester-lckz2" to be fully deleted
Oct 16 15:25:23.476: INFO: Deleting pod "pvc-tester-vrjxc" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:25:23.510: INFO: Wait up to 5m0s for pod "pvc-tester-vrjxc" to be fully deleted
Oct 16 15:26:07.784: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f7e96b8-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f825cec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8627c5-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f89ca32-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8cd95e-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f900995-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6a76ec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6d2d17-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6f2a1a-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f72bfae-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f760aab-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f791671-b2c0-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:26:07.784: INFO: Deleting PersistentVolumeClaim "pvc-85px8"
Oct 16 15:26:07.854: INFO: Deleting PersistentVolumeClaim "pvc-npbn8"
Oct 16 15:26:07.900: INFO: Deleting PersistentVolumeClaim "pvc-ts6b8"
Oct 16 15:26:07.954: INFO: Deleting PersistentVolumeClaim "pvc-hqb5d"
Oct 16 15:26:08.003: INFO: Deleting PersistentVolumeClaim "pvc-pzlmw"
Oct 16 15:26:08.044: INFO: Deleting PersistentVolumeClaim "pvc-4mljw"
Oct 16 15:26:08.090: INFO: Deleting PersistentVolumeClaim "pvc-mz5br"
Oct 16 15:26:08.130: INFO: Deleting PersistentVolumeClaim "pvc-7fk8x"
Oct 16 15:26:08.183: INFO: Deleting PersistentVolumeClaim "pvc-cb2dp"
Oct 16 15:26:08.230: INFO: Deleting PersistentVolumeClaim "pvc-85sqf"
Oct 16 15:26:08.282: INFO: Deleting PersistentVolumeClaim "pvc-8zdmg"
Oct 16 15:26:08.337: INFO: Deleting PersistentVolumeClaim "pvc-nntqr"
STEP: Creating 12 PVCs
Oct 16 15:26:08.691: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwmql to have phase Bound
Oct 16 15:26:08.716: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:10.732: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:12.754: INFO: PersistentVolumeClaim pvc-jwmql found and phase=Bound (4.062803231s)
Oct 16 15:26:12.789: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jhrg7 to have phase Bound
Oct 16 15:26:12.801: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:14.817: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:16.834: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:18.854: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:20.871: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:22.888: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:24.901: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:26.918: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:28.929: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:30.941: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:32.958: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:34.976: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:37.013: INFO: PersistentVolumeClaim pvc-jhrg7 found and phase=Bound (24.222741938s)
Oct 16 15:26:37.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-lvvkl to have phase Bound
Oct 16 15:26:37.055: INFO: PersistentVolumeClaim pvc-lvvkl found and phase=Bound (12.935683ms)
Oct 16 15:26:37.078: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-bgkkc to have phase Bound
Oct 16 15:26:37.088: INFO: PersistentVolumeClaim pvc-bgkkc found and phase=Bound (9.861689ms)
Oct 16 15:26:37.109: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qt2lv to have phase Bound
Oct 16 15:26:37.126: INFO: PersistentVolumeClaim pvc-qt2lv found and phase=Bound (17.393667ms)
Oct 16 15:26:37.147: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pgs9s to have phase Bound
Oct 16 15:26:37.158: INFO: PersistentVolumeClaim pvc-pgs9s found but phase is Pending instead of Bound.
Oct 16 15:26:39.171: INFO: PersistentVolumeClaim pvc-pgs9s found and phase=Bound (2.023756794s)
Oct 16 15:26:39.217: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8h942 to have phase Bound
Oct 16 15:26:39.249: INFO: PersistentVolumeClaim pvc-8h942 found and phase=Bound (32.347782ms)
Oct 16 15:26:39.282: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-phtvg to have phase Bound
Oct 16 15:26:39.296: INFO: PersistentVolumeClaim pvc-phtvg found and phase=Bound (13.940285ms)
Oct 16 15:26:39.321: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ldv2f to have phase Bound
Oct 16 15:26:39.333: INFO: PersistentVolumeClaim pvc-ldv2f found and phase=Bound (11.888903ms)
Oct 16 15:26:39.360: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4v9hf to have phase Bound
Oct 16 15:26:39.375: INFO: PersistentVolumeClaim pvc-4v9hf found and phase=Bound (14.230796ms)
Oct 16 15:26:39.403: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jkfg5 to have phase Bound
Oct 16 15:26:39.419: INFO: PersistentVolumeClaim pvc-jkfg5 found and phase=Bound (15.47811ms)
Oct 16 15:26:39.449: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-87dwp to have phase Bound
Oct 16 15:26:39.463: INFO: PersistentVolumeClaim pvc-87dwp found and phase=Bound (13.680898ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:28:08.033: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:08.507: INFO: stderr: ""
Oct 16 15:28:08.507: INFO: stdout: ""
Oct 16 15:28:08.609: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:08.917: INFO: stderr: ""
Oct 16 15:28:08.917: INFO: stdout: ""
Oct 16 15:28:09.019: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:09.342: INFO: stderr: ""
Oct 16 15:28:09.342: INFO: stdout: ""
Oct 16 15:28:09.432: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:09.760: INFO: stderr: ""
Oct 16 15:28:09.760: INFO: stdout: ""
Oct 16 15:28:09.847: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:10.164: INFO: stderr: ""
Oct 16 15:28:10.164: INFO: stdout: ""
Oct 16 15:28:10.259: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:10.576: INFO: stderr: ""
Oct 16 15:28:10.576: INFO: stdout: ""
Oct 16 15:28:10.681: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:11.000: INFO: stderr: ""
Oct 16 15:28:11.000: INFO: stdout: ""
Oct 16 15:28:11.086: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:11.383: INFO: stderr: ""
Oct 16 15:28:11.383: INFO: stdout: ""
Oct 16 15:28:11.486: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:11.782: INFO: stderr: ""
Oct 16 15:28:11.782: INFO: stdout: ""
Oct 16 15:28:11.888: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:12.207: INFO: stderr: ""
Oct 16 15:28:12.207: INFO: stdout: ""
Oct 16 15:28:12.315: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:12.634: INFO: stderr: ""
Oct 16 15:28:12.634: INFO: stdout: ""
Oct 16 15:28:12.778: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:13.113: INFO: stderr: ""
Oct 16 15:28:13.113: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:28:13.113: INFO: Deleting pod "pvc-tester-n68rp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:13.157: INFO: Wait up to 5m0s for pod "pvc-tester-n68rp" to be fully deleted
Oct 16 15:28:53.195: INFO: Deleting pod "pvc-tester-qm7w8" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:53.224: INFO: Wait up to 5m0s for pod "pvc-tester-qm7w8" to be fully deleted
Oct 16 15:29:35.246: INFO: Deleting pod "pvc-tester-jslwg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:29:35.279: INFO: Wait up to 5m0s for pod "pvc-tester-jslwg" to be fully deleted
Oct 16 15:30:07.312: INFO: Deleting pod "pvc-tester-mcqqq" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:30:07.357: INFO: Wait up to 5m0s for pod "pvc-tester-mcqqq" to be fully deleted
Oct 16 15:31:03.595: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01aaa147-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ae1953-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b03dec-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b2ea3b-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b76412-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b8de3d-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01bd6a83-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c1b249-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c53dd9-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c941ba-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01caec5e-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ce2be9-b2c1-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:31:03.595: INFO: Deleting PersistentVolumeClaim "pvc-jwmql"
Oct 16 15:31:03.641: INFO: Deleting PersistentVolumeClaim "pvc-jhrg7"
Oct 16 15:31:03.681: INFO: Deleting PersistentVolumeClaim "pvc-lvvkl"
Oct 16 15:31:03.724: INFO: Deleting PersistentVolumeClaim "pvc-bgkkc"
Oct 16 15:31:03.771: INFO: Deleting PersistentVolumeClaim "pvc-qt2lv"
Oct 16 15:31:03.833: INFO: Deleting PersistentVolumeClaim "pvc-pgs9s"
Oct 16 15:31:03.887: INFO: Deleting PersistentVolumeClaim "pvc-8h942"
Oct 16 15:31:04.047: INFO: Deleting PersistentVolumeClaim "pvc-phtvg"
Oct 16 15:31:04.089: INFO: Deleting PersistentVolumeClaim "pvc-ldv2f"
Oct 16 15:31:04.153: INFO: Deleting PersistentVolumeClaim "pvc-4v9hf"
Oct 16 15:31:04.211: INFO: Deleting PersistentVolumeClaim "pvc-jkfg5"
Oct 16 15:31:04.263: INFO: Deleting PersistentVolumeClaim "pvc-87dwp"
Oct 16 15:31:04.317: INFO: Average latency for below operations
Oct 16 15:31:04.317: INFO: Creating 12 PVCs and waiting for bound phase: 30576919 microseconds
Oct 16 15:31:04.317: INFO: Creating 4 Pod: 97668230 microseconds
Oct 16 15:31:04.317: INFO: Deleting 4 Pod and waiting for disk to be detached: 154930158 microseconds
Oct 16 15:31:04.317: INFO: Deleting 12 PVCs: 660074 microseconds
[AfterEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:31:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vcp-performance-lfrbk" for this suite.
Oct 16 15:31:19.156: INFO: namespace: e2e-tests-vcp-performance-lfrbk, resource: bindings, ignored listing per whitelist
Oct 16 15:31:19.297: INFO: namespace e2e-tests-vcp-performance-lfrbk deletion completed in 14.690943637s
• [SLOW TEST:914.654 seconds]
[sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:31:19.305: INFO: Running AfterSuite actions on all node
Oct 16 15:31:19.305: INFO: Running AfterSuite actions on node 1
Ran 1 of 706 Specs in 914.851 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS
Ginkgo ran 1 suite in 15m15.380170791s
Test Suite Passed
2017/10/16 15:31:19 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 15m15.901302911s
2017/10/16 15:31:19 e2e.go:81: Done
```
</details>
```
None
```
- shouldn't scale up when expendable pod is created
- should scale up when non expendable pod is created
- shouldn't scale up when expendable pod is preempted
- should scale down when expendable pod is running
- shouldn't scale down when non expendable pod is running
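The expendable / non-expendable split in the cases above comes from pod priority: the cluster autoscaler treats pods whose priority falls below a configured cutoff as expendable. A minimal, self-contained sketch of that classification (the helper name and the cutoff value are illustrative, not the autoscaler's actual code):
```go
package main

import "fmt"

// pod is a stand-in for the fields of a Kubernetes pod that matter here.
type pod struct {
	Name     string
	Priority int32
}

// isExpendable mirrors the idea behind the autoscaler's cutoff: pods whose
// priority is below the threshold may be ignored on scale-up and preempted
// or evicted to allow scale-down.
func isExpendable(p pod, cutoff int32) bool {
	return p.Priority < cutoff
}

func main() {
	cutoff := int32(-10) // assumed expendable-pods-priority-cutoff value
	for _, p := range []pod{{"low-prio-batch", -100}, {"web-frontend", 0}} {
		fmt.Printf("%s expendable=%v\n", p.Name, isExpendable(p, cutoff))
	}
}
```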
In scalability testing, influxdb was recently disabled, but we are still
trying to execute the corresponding test, so it fails every time.
Skip the test if influxdb is disabled.
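A minimal sketch of the intended guard, with a hypothetical probe standing in for the suite's real check: the spec is reported as skipped rather than failed when influxdb is not deployed.
```go
package main

import "fmt"

// influxdbDeployed is a hypothetical probe; in the real suite this would ask
// the cluster whether the monitoring influxdb service exists.
func influxdbDeployed() bool { return false }

// runInfluxdbTest reports "skipped" instead of failing when the dependency is
// absent, which is the behavior change described above.
func runInfluxdbTest() (skipped bool, err error) {
	if !influxdbDeployed() {
		return true, nil
	}
	// ... the actual monitoring assertions would go here ...
	return false, nil
}

func main() {
	if skipped, _ := runInfluxdbTest(); skipped {
		fmt.Println("SKIP: influxdb is disabled in this cluster")
	}
}
```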
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Deduplicate RC/RS controller code.
The code was already 99% similar between RC and RS. This is a wild idea to try to deduplicate the two controllers in a type-safe manner without adding tons of boilerplate, and without using code generation.
They are still separate resources and separate worker pools. This is a refactor that isn't intended to change any behavior.
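One way to picture a type-safe deduplication without code generation is a single sync routine written against a small interface that both resource types satisfy. The sketch below only illustrates that shape; it is not the PR's actual code.
```go
package main

import "fmt"

// replicaSpec is the common surface the shared loop needs from either a
// ReplicationController or a ReplicaSet (illustrative, not the real types).
type replicaSpec interface {
	Kind() string
	Desired() int
}

type rc struct{ replicas int }
type rs struct{ replicas int }

func (r rc) Kind() string { return "ReplicationController" }
func (r rc) Desired() int { return r.replicas }
func (r rs) Kind() string { return "ReplicaSet" }
func (r rs) Desired() int { return r.replicas }

// sync is written once and reused by both controllers; each resource keeps
// its own worker pool, only the reconciliation logic is shared.
func sync(obj replicaSpec, running int) {
	switch {
	case running < obj.Desired():
		fmt.Printf("%s: create %d pods\n", obj.Kind(), obj.Desired()-running)
	case running > obj.Desired():
		fmt.Printf("%s: delete %d pods\n", obj.Kind(), running-obj.Desired())
	default:
		fmt.Printf("%s: in sync\n", obj.Kind())
	}
}

func main() {
	sync(rc{replicas: 3}, 1)
	sync(rs{replicas: 2}, 4)
}
```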
```release-note
ReplicationController now shares its underlying controller implementation with ReplicaSet to reduce the maintenance burden going forward. However, they are still separate resources and there should be no externally visible effects from this change.
```
ref #49429
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
don't check in mounter binary
```release-note
GCI mounter is moved from the manifests tarball to the server tarball.
```
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add ephemeral storage e2e tests
Add e2e tests of limitrange/quota/downward_api for local ephemeral storage
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463
**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
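For context, these tests exercise `ephemeral-storage` in the same objects where `cpu` and `memory` already appear. A hedged client-go sketch of a ResourceQuota that caps local ephemeral storage (the namespace and sizes are made up for illustration):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A ResourceQuota object capping local ephemeral storage alongside the
	// pod count; it would be created in the test namespace by the e2e suite.
	quota := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "ephemeral-quota", Namespace: "e2e-ephemeral"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{
				v1.ResourcePods:                               resource.MustParse("4"),
				v1.ResourceName("requests.ephemeral-storage"): resource.MustParse("10Gi"),
				v1.ResourceName("limits.ephemeral-storage"):   resource.MustParse("20Gi"),
			},
		},
	}
	fmt.Printf("hard limits: %v\n", quota.Spec.Hard)
}
```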
**Release note**:
```release-note
Add limitrange/resourcequota/downward_api e2e tests for local ephemeral storage
```
/assign @jingxu97
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E scale test for vSphere Cloud Provider Volume lifecycle operations
This PR adds an E2E test for vSphere Cloud Provider which creates/attaches/detaches/deletes volumes at scale with multiple threads, based on user-configurable values for the number of volumes, volumes per pod, and number of threads. (Since this is a scale test, the number of threads will be low; it is only used to speed up the operation.)
The test performs the following tasks (see the sketch after this list):
1. Create Storage Classes of 4 categories (default, SC with non-default datastore, SC with SPBM policy, SC with VSAN storage capabilities).
2. Read VCP_SCALE_VOLUME_COUNT from the system environment.
3. Launch VCP_SCALE_INSTANCES goroutines for creating VCP_SCALE_VOLUME_COUNT volumes. Each goroutine is responsible for creating/attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes.
4. Read VCP_SCALE_VOLUMES_PER_POD from the system environment. Each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the goroutines have completed, delete all the pods and volumes.
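A rough, self-contained sketch of the fan-out described above: read the environment knobs and split the volume count across worker goroutines. The provisioning helper is a stand-in, not the test's real vSphere code.
```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

// envInt reads an integer knob such as VCP_SCALE_VOLUME_COUNT, falling back
// to a default when unset (defaults here are illustrative).
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil {
		return v
	}
	return def
}

// provisionAndAttach is a stand-in for the real create/attach work.
func provisionAndAttach(worker, volume int) {
	fmt.Printf("worker %d: volume %d created and attached\n", worker, volume)
}

func main() {
	volumes := envInt("VCP_SCALE_VOLUME_COUNT", 12)
	instances := envInt("VCP_SCALE_INSTANCES", 4)
	perWorker := volumes / instances // each goroutine handles its share (remainder ignored for brevity)

	var wg sync.WaitGroup
	for w := 0; w < instances; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				provisionAndAttach(w, w*perWorker+i)
			}
		}(w)
	}
	wg.Wait() // once every goroutine is done, pods and volumes get cleaned up
}
```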
Which issue this PR fixes
fixes vmware#291
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add shyamjvs to test/OWNERS
I've been reviewing quite a few PRs recently and have reviewed many in the past. In addition, I have >80 commits in this code path (git log test | grep "shyamjvs@google.com") touching various parts including e2e/framework, utils, perftype, kubemark, and e2e fixes from other SIGs (mostly related to scalability).
/cc @gmarek @spiffxp @krzyzacy @kubernetes/sig-testing-misc
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test to verify volume attach status after master kubelet restart
**What this PR does / why we need it**:
This PR adds a test to verify that the volume remains attached after the kubelet is restarted on the master node.
**Which issue this PR fixes**:
fixes vmware#274
**Special notes for your reviewer**:
This test does not run as part of the existing sig-storage test grid. It has been tested internally at VMware.
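Condensed into a hedged sketch, the flow is: verify attachments, restart the kubelet on the master over SSH, then verify the same attachments are still present. The helpers below are stand-ins, not the framework's real SSH or vSphere utilities.
```go
package main

import "fmt"

// restartKubeletOnMaster and isAttached are stand-ins for the suite's SSH and
// vSphere helpers; they only illustrate the order of operations.
func restartKubeletOnMaster()             { fmt.Println("ssh master: systemctl restart kubelet") }
func isAttached(volume, node string) bool { return true }

func main() {
	volumesByNode := map[string]string{
		"kubernetes-node1": "e2e-vmdk-0.vmdk",
		"kubernetes-node2": "e2e-vmdk-1.vmdk",
	}
	// 1. With the pods already running, every volume must be attached.
	for node, vol := range volumesByNode {
		fmt.Printf("before restart: %s attached to %s: %v\n", vol, node, isAttached(vol, node))
	}
	// 2. Restart the kubelet on the master node.
	restartKubeletOnMaster()
	// 3. The same attachments must still be present afterwards.
	for node, vol := range volumesByNode {
		fmt.Printf("after restart: %s attached to %s: %v\n", vol, node, isAttached(vol, node))
	}
}
```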
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57: The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58: The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022 29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs
Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command: curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout: ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s
• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1
Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **deprecated** group clients are still used; this PR replaces them with versioned group clients.
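As a concrete illustration of the swap (a sketch, not this PR's diff): typed, versioned group clients such as `CoreV1()` are used instead of the deprecated un-versioned group accessors. The kubeconfig path is made up, and newer client-go releases additionally take a `context.Context` in list calls.
```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Versioned group client (CoreV1) rather than a deprecated group accessor.
	pods, err := client.CoreV1().Pods("kube-system").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pods in kube-system\n", len(pods.Items))
}
```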
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54493, 52501, 55172, 54780, 54819). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add integration test for deployment rolling update, rollback, rollover
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
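This maps onto how client-go discovery reports partial failures: the call can return a usable partial result together with an aggregated error. A hedged sketch of tolerating that, assuming `IsGroupDiscoveryFailedError` from the discovery package and a local kubeconfig:
```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerPreferredResources can hand back a partial result alongside an
	// error; on a group discovery failure we log it and keep what we got
	// instead of aborting, which is the tolerance described above.
	resources, err := client.Discovery().ServerPreferredResources()
	if err != nil {
		if discovery.IsGroupDiscoveryFailedError(err) {
			fmt.Println("partial discovery failure, continuing:", err)
		} else {
			panic(err)
		}
	}
	fmt.Printf("discovered %d resource lists\n", len(resources))
}
```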
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Removes 'rwx' permissions for global users
- the tests make an assumption that the permissions on the /tmp dir have not
been altered
Signed-off-by: Brenda Chan <brchan@pivotal.io>
**What this PR does / why we need it**:
This PR modifies a conformance test that checks the file permissions when the `/tmp` dir is mounted.
The current tests assume that the permissions on the `/tmp` dir on the host system have not been altered. We removed the check that global users need `rwx`, so the tests now only check for `dtrwxrwx`.
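In Go's file-mode string terms, the relaxed check amounts to requiring the `dtrwxrwx` prefix, i.e. a sticky directory with `rwx` for owner and group, while leaving the world bits up to the host. A small stdlib sketch of that idea:
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	fi, err := os.Stat("/tmp")
	if err != nil {
		panic(err)
	}
	// Go renders a sticky, world-writable directory as "dtrwxrwxrwx"; checking
	// only the "dtrwxrwx" prefix asserts the sticky bit plus owner/group rwx
	// and deliberately says nothing about the world bits.
	mode := fi.Mode().String()
	fmt.Printf("mode=%s acceptable=%v\n", mode, strings.HasPrefix(mode, "dtrwxrwx"))
}
```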
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: N/A
**Special notes for your reviewer**: N/A
**Release note**:
```release-note
NONE
```
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor kube-scheduler config API, command, and server setup
Refactor the kube-scheduler configuration API, command setup, and server setup according to the guidelines established in #32215 and using the kube-proxy refactor (#34727) as a model of a well factored component adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing modality derived from bools and string emptiness checks with an explicit AlgorithmSource type hierarchy (see the sketch after this list).
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
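A sketch of the explicit AlgorithmSource idea from the first Config API bullet: the algorithm source becomes a small union-like struct rather than being inferred from bools and non-empty strings. Field names here are illustrative, not the exact componentconfig types.
```go
package main

import "fmt"

// PolicySource says where a scheduling policy comes from (illustrative).
type PolicySource struct {
	File      *string // path to a policy file
	ConfigMap *string // "namespace/name" of a policy ConfigMap
}

// AlgorithmSource makes the choice explicit: either a policy or a named
// algorithm provider, never a guess based on empty strings.
type AlgorithmSource struct {
	Policy   *PolicySource
	Provider *string
}

func describe(s AlgorithmSource) string {
	switch {
	case s.Provider != nil:
		return "provider: " + *s.Provider
	case s.Policy != nil && s.Policy.File != nil:
		return "policy file: " + *s.Policy.File
	case s.Policy != nil && s.Policy.ConfigMap != nil:
		return "policy configmap: " + *s.Policy.ConfigMap
	default:
		return "default provider"
	}
}

func main() {
	p := "DefaultProvider"
	fmt.Println(describe(AlgorithmSource{Provider: &p}))
}
```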
Fixes https://github.com/kubernetes/kubernetes/issues/52428.
@kubernetes/api-reviewers
@kubernetes/sig-cluster-lifecycle-pr-reviews
@kubernetes/sig-scheduling-pr-reviews
/cc @ncdc @timothysc @bsalamat
```release-note
The kube-scheduler command now supports a `--config` flag which is the location of a file containing a serialized scheduler configuration. Most other kube-scheduler flags are now deprecated.
```
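To make the algorithm-source idea concrete, a hedged sketch of an explicit source type hierarchy; the type and field names below are illustrative only, not the actual componentconfig API.
```go
package config

// AlgorithmSource says where the scheduler gets its algorithm from: exactly
// one of Policy or Provider is set, replacing the old pattern of inferring
// the mode from empty strings and boolean flags.
type AlgorithmSource struct {
	Policy   *PolicySource // load a scheduler policy from a file or ConfigMap
	Provider *string       // use a named, built-in algorithm provider
}

// PolicySource in turn is either a file or a ConfigMap reference.
type PolicySource struct {
	File      *PolicyFileSource
	ConfigMap *PolicyConfigMapSource
}

type PolicyFileSource struct {
	Path string
}

type PolicyConfigMapSource struct {
	Namespace string
	Name      string
}
```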
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
This PR adds e2e tests for Node Autoprovisioning: …
- should create new node if there is no node for node selector
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
delete if-else branch
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The if-else branch is redundant.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate pod relevant e2e tests to sig node
**What this PR does / why we need it**:
Migrate pod relevant e2e tests to sig-node.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Refactor the kube-scheduler configuration API, command setup, and server
setup according to the guidelines established in #32215 and using the
kube-proxy refactor (#34727) as a model of a well factored component
adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing
modality derived from bools and string emptiness checks with an explicit
AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
Fixes #52428.
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove unused constant
**What this PR does / why we need it**:
This constant is never used, so we can remove it safely.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
Adds e2e tests for Node Autoprovisioning:
- shouldn't add new node group if not needed
- shouldn't scale up if cores limit too low, should scale up after limit is changed
Automatic merge from submit-queue (batch tested with PRs 55114, 52976, 54871, 55122, 55140). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't share nodePort service in session affinity tests
**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/54524, https://github.com/kubernetes/kubernetes/issues/54571.
I spent some time digging into it today and found this test is flaky mostly because it sends out service requests before kube-proxy reacts to the service session affinity update, so multiple endpoints respond instead of one. It is more flaky in alpha CIs, probably due to different test sequences.
This PR creates a separate service with `sessionAffinity=ClientIP` so there is no race between the test starting and kube-proxy reacting. On the other hand, it also seems inappropriate to tweak the `config.NodePortService`, which is shared by other networking tests.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # (will mark them fixed later).
**Special notes for your reviewer**:
/assign @m1093782566 @bowei
cc @spiffxp
**Release note**:
```release-note
NONE
```
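For illustration only, a minimal sketch of what a dedicated NodePort service with ClientIP session affinity could look like; the helper and names are assumptions, not the actual test code.
```go
package svcsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// sessionAffinityService returns a NodePort service of its own with
// sessionAffinity=ClientIP, so the affinity test does not race with the
// other networking tests that share config.NodePortService.
func sessionAffinityService(name string, selector map[string]string) *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,
			SessionAffinity: v1.ServiceAffinityClientIP,
			Selector:        selector,
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}
```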
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
e2e-node:the value of bestEffortCgroup is wrong
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The value of bestEffortCgroup is wrong in e2e-node, so the test case is actually invalid.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs
The HPA e2es failed to actually set `apiVersion` on the created HPAs, which previously was ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).
Additionally, the `apps` resources have their own version of scale. Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.
Together, these broke some of the HPA e2es.
Fixes #54574
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
move KubeProxyConfiguration out of componentconfig API group
**What this PR does / why we need it**:
move KubeProxyConfiguration out of componentconfig API group
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53577
**Special notes for your reviewer**:
/cc @thockin @ncdc
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55034, 55068). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clarify what each "version" means.
Some folks were getting confused by this output.
Fixes #54821
```release-note
NONE
```
/area conformance
/sig architecture
/assign @timothysc @WilliamDenniss
This commit fixes an issue where in clusters which have FQDN as the node names,
one of the scheduling predicates tests will fail because it will try and run a
pod with a name that violates DNS-1123 rules. As an example, one such pod name
could look like "filler-pod-kube-node-0.kubelet.mesos".
Signed-off-by: Paulo Pires <pjpires@gmail.com>
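As an illustration of the constraint, a sketch that derives a DNS-1123-safe pod name from an FQDN node name; the trimming approach is illustrative, not necessarily the exact fix.
```go
package predicatesketch

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/validation"
)

// fillerPodName derives a pod name from a node name. An FQDN node name such
// as "kube-node-0.kubelet.mesos" contains dots, which a DNS-1123 label does
// not allow, so only the first label of the hostname is used here.
func fillerPodName(nodeName string) (string, error) {
	short := strings.Split(nodeName, ".")[0]
	name := "filler-pod-" + short
	if errs := validation.IsDNS1123Label(name); len(errs) != 0 {
		return "", fmt.Errorf("invalid pod name %q: %v", name, errs)
	}
	return name, nil
}
```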
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Workloads V1
**What this PR does / why we need it**: This PR promotes the Deployment, ReplicaSet, DaemonSet, StatefulSet, and ControllerRevision kinds to the apps/v1 group version.
https://github.com/kubernetes/features/issues/353
**Special notes for your reviewer**:
There will be at least two followups to this PR. The first to add a scale sub-resource when the correct location is resolved, and the second to deal with Conditions in the workloads API.
While it would have been preferable to move the kinds individually providing a lesser burden on reviewers, this proved impracticable due to the intricacies of version resolution in kubectl for objects of the different kinds in the same group.
```release-note
DaemonSet, Deployment, ReplicaSet, and StatefulSet have been promoted to GA and are available in the apps/v1 group version.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add prometheus-to-sd-exporter to metadata-proxy addon; bump to v0.1.4
**What this PR does / why we need it**: Add metrics exporters to the metadata-proxy addon for GCE. Work toward #8867.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove version check in kubectl e2e test.
**What this PR does / why we need it**:
We don't need to check these versions for kubectl e2e tests in the current cycle.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55053
**Special notes for your reviewer**:
/cc @liggitt
since you're also from sig-cli-maintainers :)
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51401, 54056, 54977, 55017, 55052). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
extensions: remove TPR remnants
The extensions group still had the TPR types + generated client. Having this in the codebase doesn't create any problems but would be good to clean up, especially since TPR access has been removed in 1.8.
**Release note**:
```release-note
NONE
```
/assign @sttts @deads2k
Automatic merge from submit-queue (batch tested with PRs 55063, 54523, 55053). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't need to check version for auth e2e test
**What this PR does / why we need it**:
In the 1.9 cycle, some e2e tests no longer need to run against such old versions.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55050
**Special notes for your reviewer**:
/cc @tallclair @liggitt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Node autoprovisioning e2e test.
This PR adds test scenario for cluster-autoscaler in GKE for node autoprovisioning.
apps/v1betaX inadvertently contains its own variant of Scale. In
order to support scaling Deployments, ReplicaSets, etc., we need to support
these versions of Scale as well.
Previously, the HPA controller ignored APIVersion when resolving the
scale subresource for a kind, meaning if it was set incorrectly in the
HPA's scaleTargetRef, it would not matter. This was the case for
several of the HPA e2e tests.
Since the polymorphic scale client merged, APIVersion now matters. This
updates the HPA e2es to care about APIVersion, by passing kind as a full
GroupVersionKind, and not just a string.
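As a hedged illustration of why APIVersion now matters, a sketch of an HPA whose scaleTargetRef carries the full group/version, using the autoscaling/v1 types; the helper name is made up for the example.
```go
package hpasketch

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newHPA builds an HPA whose scaleTargetRef carries the target's APIVersion
// explicitly; with the polymorphic scale client an empty or wrong APIVersion
// is no longer silently ignored.
func newHPA(name, deployment string, minReplicas, maxReplicas int32) *autoscalingv1.HorizontalPodAutoscaler {
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1", // the full group/version, not just the kind
				Kind:       "Deployment",
				Name:       deployment,
			},
			MinReplicas: &minReplicas,
			MaxReplicas: maxReplicas,
		},
	}
}
```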
Automatic merge from submit-queue (batch tested with PRs 52367, 53363, 54989, 54872, 54643). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Basic GCE PodSecurityPolicy Config
**What this PR does / why we need it**:
This PR lays the foundation for enabling PodSecurityPolicy in GCE and other default deployments. The 3 commits are:
1. Add policies, roles & bindings for the default addons on GCE.
2. Enable the PSP admission controller & load the addon policies when the `ENABLE_POD_SECURITY_POLICY=true` environment variable is set.
3. Support the PodSecurityPolicy in the E2E environment & add PSP tests.
NOTES:
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/52301 for privileged capabilities~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/52849 for sane mutations~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/53479 for aggregator tests to pass~~
- ~~Depends on https://github.com/kubernetes/kubernetes/pull/54175 for dedicated fluentd service account~~
- This PR is a fork of https://github.com/kubernetes/kubernetes/pull/46064, credit to @Q-Lee
**Which issue this PR fixes**: #43538
**Release note**:
```release-note
Add support for PodSecurityPolicy on GCE: `ENABLE_POD_SECURITY_POLICY=true` enables the admission controller, and installs policies for default addons.
```
Automatic merge from submit-queue (batch tested with PRs 54787, 51940). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate network partition test to sig apps
**What this PR does / why we need it**:
Migrate network partition relevant e2e tests to sig-apps.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 54895, 54449). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix a bug checking DaemonSet pods are updated in e2e test
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #50586
**Special notes for your reviewer**: @kubernetes/sig-apps-bugs
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add sig-storage prefix for common e2e tests
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54894, 54630, 54828, 54926, 54865). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
DaemonSet e2e should wait for history creation
**What this PR does / why we need it**:
Found a potential test flake while debugging #54575. ControllerRevisions are created separately from DaemonSet pods by the controller, so we should wait for their creation in e2e.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**: @kubernetes/sig-apps-bugs
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update volume OWNERS to reflect active sig-storage reviewers
**What this PR does / why we need it**:
Update sig-storage reviewers to add new members and remove those that don't have as much time to review storage PRs. Approvers are unchanged.
**Special notes for your reviewer**:
For all those that have been removed, please approve. If you want to remain as a reviewer, let me know and I will add you back.
**Release note**:
NONE
Automatic merge from submit-queue (batch tested with PRs 53190, 54790, 54445, 52607, 54801). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
remove created-by annotation
**What this PR does / why we need it**:
This PR removes `CreatedByAnnotation`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50720
**Release note**:
```release-note
The `kubernetes.io/created-by` annotation is no longer added to controller-created objects. Use the `metadata.ownerReferences` item that has `controller` set to `true` to determine which controller, if any, owns an object.
```
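As a hedged illustration of the release note's guidance, a small sketch using `metav1.GetControllerOf` instead of the removed annotation.
```go
package ownersketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// controllerName reports which controller, if any, owns the object, using
// the ownerReference with controller=true instead of the removed
// kubernetes.io/created-by annotation.
func controllerName(obj metav1.Object) string {
	if ref := metav1.GetControllerOf(obj); ref != nil {
		return fmt.Sprintf("%s/%s", ref.Kind, ref.Name)
	}
	return "no controller owner"
}
```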
Automatic merge from submit-queue (batch tested with PRs 54326, 54046). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
convert testOverlappingDeployment e2e test to integration test
**What this PR does / why we need it**:
This PR convert a deployment e2e test named "testOverlappingDeployment" to integration test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54774, 54820, 52192, 54827). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Initial integration test setup for DaemonSet controller
**What this PR does / why we need it**:
This PR sets up and adds some initial integration tests for the DaemonSet controller. All tests included were ported from their unit test counterparts that currently use the fake client, informers, reactor, etc. Future PRs will port more tests over once this PR is approved and merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52191.
**Special notes for your reviewer**:
@kow3ns
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54774, 54820, 52192, 54827). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added a test for proper `%s` handling when display last applied confi…
**What this PR does / why we need it**:
Added missing tests which check proper handling of `%s` in arguments for the command `kubectl apply view-last-applied`.
**Which issue this PR fixes**
#54645
**Special notes for your reviewer**:
Added a test case to cover specific issue described.
It fails on versions `v1.7.3` through `v1.7.9` and passes since `v1.8.0`.
P.S. Not sure if there is already a lower-level test which covers this case in the k8s test suite. I would recommend adding this test so the issue does not reoccur.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54572, 54686). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix service session affinity e2e failure cases
**What this PR does / why we need it**:
Fix service session affinity e2e failure cases - debugging...
**Which issue this PR fixes**:
xref #54571, #54524
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig network
Automatic merge from submit-queue (batch tested with PRs 54728, 54818). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Metadata concealment e2e
**What this PR does / why we need it**: Add e2e for metadata concealment. Ref #8867.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54761, 54748, 53991, 54485, 46951). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
admission: unify plugin constructors
It's common in Go to return the actual object from constructors, not **one interface**
it implements. This allows us to implement multiple interfaces but only have
one constructor. Since having private types in constructor signatures is undesirable, we export all plugin structs, of course with private fields.
Note: super interfaces do not work if there are overlapping methods.
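A minimal, generic sketch of the constructor convention described above; the plugin name is illustrative, not an actual admission plugin.
```go
package exampleplugin

import "k8s.io/apiserver/pkg/admission"

// Plugin is exported so callers see the concrete type, while its fields stay
// private; the constructor returns *Plugin rather than one of the admission
// interfaces it happens to implement.
type Plugin struct {
	*admission.Handler
	// private fields would live here
}

// NewPlugin returns the concrete type, so a single constructor works no
// matter how many admission interfaces the plugin implements.
func NewPlugin() *Plugin {
	return &Plugin{Handler: admission.NewHandler(admission.Create, admission.Update)}
}
```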
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix subresource discovery and versioning
Fixes https://github.com/kubernetes/kubernetes/issues/54684
Related to https://github.com/kubernetes/kubernetes/pull/54586
Allows distinct subresource group/version/kind to be used for each version (gives us a path to move to autoscaling/v1 for apps, or policy/v1 for eviction, etc)
Added tests to ensure scale subresources have expected discovery info, and that the object returned matches discovery, and that the endpoint accepts the advertised version
```release-note
Fixes discovery information for scale subresources in the apps API group
```
Automatic merge from submit-queue (batch tested with PRs 49762, 52256). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add node e2e tests for pulling images from credential providers
**What this PR does / why we need it**:
Add node e2e tests for pulling images from credential providers.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Refer https://github.com/kubernetes/kubernetes/pull/51870#issuecomment-328234010
**Special notes for your reviewer**:
/assign @yujuhong @Random-Liu
1. We still need to add ResetDefaultDockerProviderExpiration for facilitating tests
2. Do we need a separate image for pulling private image from credential provider?
3. Any suggestion of also adding this for sandbox images? the pause image is a global config of kubelet, but we only need to set a private one for just one test case.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54165, 53909). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance test test.
Add new `test/conformance` subdir, add code to generate a list of conformance tests, and add a test that verifies the list of tests.
The intent is to move management of the definition of conformance to sig-architecture.
```release-note
NONE
```
ref. #54726
Automatic merge from submit-queue (batch tested with PRs 54331, 54655, 54320, 54639, 54288). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Ability to do object count quota for all namespaced resources
**What this PR does / why we need it**:
- Defines syntax for generic object count quota `count/<resource>.<group>`
- Migrates existing objects to support the new syntax alongside the old syntax
- Adds support to quota all standard namespace resources
- Updates the controller to do discovery and replenishment on those resources
- Updates unit tests
- Tweaks admission configuration around quota
- Add e2e test for replicasets (demonstrate dynamic generic counting)
```
$ kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
resourcequota "test" created
$ kubectl run nginx --image=nginx --replicas=2
$ kubectl describe quota
Name: test
Namespace: default
Resource Used Hard
-------- ---- ----
count/deployments.extensions 1 2
count/pods 2 3
count/replicasets.extensions 1 4
count/secrets 1 4
```
**Special notes for your reviewer**:
- simple object count quotas no longer require writing code
- deferring support for custom resources pending investigation of how to share caches with the garbage collector. In addition, I would like to see how this integrates with downstream quota usage in OpenShift.
**Release note**:
```release-note
Object count quotas supported on all standard resources using `count/<resource>.<group>` syntax
```
This test creates a golden list of existing conformance tests. Efforts
to add or remove conformance tests will require you to rebuild the
golden list, and changes to the golden list will be reviewed by
sig-architecture.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
convert testFailedDeployment e2e test to integration test
**What this PR does / why we need it**:
This PR convert a deployment e2e test named "testFailedDeployment" to integration test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Release note**:
```release-note
NONE
```
/assign
Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add probe, pre_stop, and networking related container annotations.
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add probe, pre_stop, and networking related container annotations.
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds probe, pre_stop, and networking related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for projected volume tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add projected volume related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds projected volume related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add Windows support to the system verification check
**What this PR does / why we need it**: This PR (in conjunction with https://github.com/kubernetes/kubernetes/pull/53553 ) adds initial support for adding a Windows worker node to a Kubernetes cluster using
kubeadm. It was suggested on that PR to open a separate PR for the changes in test/e2e_node for review by sig-node devs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #364 in conjunction with #53553
**Special notes for your reviewer**:
**Release note**:
```release-note
Add Windows support to the system verification check
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add service latency and secret related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds service latency and secret related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for expansion and service tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds expansion and service test conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove federation
This PR removes the federation codebase and associated tooling from the tree.
The first commit just removes the `federation` path and should be uncontroversial. The second commit removes references and associated tooling and warrants careful review.
Requirements for merge:
- [x] Bazel jobs no longer hard-code federation as a target ([test infra #4983](https://github.com/kubernetes/test-infra/pull/4983))
- [x] `federation-e2e` jobs are not run by default for k/k
**Release note**:
```release-note
Development of Kubernetes Federation has moved to github.com/kubernetes/federation. This move out of tree also means that Federation will begin releasing separately from Kubernetes. The impact of this is Federation-specific behavior will no longer be included in kubectl, kubefed will no longer be released as part of Kubernetes, and the Federation servers will no longer be included in the hyperkube binary and image.
```
cc: @kubernetes/sig-multicluster-pr-reviews @kubernetes/sig-testing-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 54455, 54431). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add conformance annotations for proxy and scheduler predicate tests
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds proxy and scheduler predicate related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54455, 54431). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate cluster dns test to sig network
**What this PR does / why we need it**:
Just migrate dns relevant e2e test files to sig network.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 53000, 52870, 53569). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fallback to internal addrs in e2e tests when no external addrs available
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests will always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
fixes #53568
**What this PR does / why we need it**:
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests will always
panic when no cloud provider is being used, or the cloud provider does
not have external addresses configured.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53568
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
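A minimal sketch of the fallback described above, preferring an external node address and falling back to an internal one; this is illustrative, not the exact e2e helper.
```go
package netsketch

import v1 "k8s.io/api/core/v1"

// pickNodeIP prefers an ExternalIP but falls back to an InternalIP when the
// provider exposes none (e.g. bare metal or plain VMs), so the tests no
// longer index into an empty external-address slice and panic.
func pickNodeIP(node *v1.Node) string {
	var internal string
	for _, addr := range node.Status.Addresses {
		switch addr.Type {
		case v1.NodeExternalIP:
			if addr.Address != "" {
				return addr.Address
			}
		case v1.NodeInternalIP:
			if internal == "" {
				internal = addr.Address
			}
		}
	}
	return internal
}
```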
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix issue (#52994): kubectl set resource cannot update multiple resources locally
**What this PR does / why we need it**:
Fixes #52994
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54593, 54607, 54539, 54105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment e2e test for hash label adoption to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113
**Special notes for your reviewer**: depends on #53918
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53760, 48996, 51267, 54414). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix CI error for service session affinity
**What this PR does / why we need it**:
Fix CI error for service session affinity -- debug
**Which issue this PR fixes**:
fixes #53741
**Special notes for your reviewer**:
I removed the [slow] tag so that these test cases can be run in the PR jobs. We may need to add the [slow] tag back when this PR is ready to go in.
**Release note**:
```release-note
NONE
```
Pulled SysSpecs out of types.go and created two OS-specific implementations with build tags.
Similarly created conditionally compiled implementations of KernelValidationHelper to get the kernel version in an OS-specific manner, as well as OS-specific Docker endpoints (socket vs. named pipes).
Automatic merge from submit-queue (batch tested with PRs 52868, 53196, 54207). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
eviction/detach test
**What this PR does / why we need it**:
e2e test for detach after a pod is evicted.
**Which issue this PR fixes**: fixes #52676
**Release note**:
```release-note
NONE
```
cc @jingxu97 @copejon
Automatic merge from submit-queue (batch tested with PRs 53051, 52489, 53920). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix todo
**What this PR does / why we need it**:
fix todo
thanks
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53474, 54258, 54356). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Directly using std{in,out} for test helper subproc
**What this PR does / why we need it**
This fixes one TODO in the code of a test helper and is an extremely minor improvement
**Which issue this PR fixes**
Fixes issue #54258
**Special notes for your reviewer**
I'm using this to familiarize myself with the Kubernetes contribution process while being helpful in the process.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add e2e tests for NEG
This PR includes tests:
- ingress conformance test
- scaling up and down backends
- switching backend between IG and NEG
- rolling update backend should not cause service disruption
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix detach metric flake by not using exact equals
Also poll for detach value increase.
Fixes https://github.com/kubernetes/kubernetes/issues/52871
I have run these tests for more than 3 hours in a tight loop and did not see them flake. The changes here include dropping the exact equality test and making sure we poll for an increase in the detach metric count.
```release-note
None
```
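A hedged sketch of the poll-for-increase idea using `wait.Poll`; the metric-reading helper passed in is hypothetical.
```go
package storagesketch

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForDetachIncrease polls until the detach count rises above its initial
// value instead of asserting an exact final number, which is what made the
// original check flaky. readDetachCount is a hypothetical helper that
// scrapes the controller-manager metrics endpoint.
func waitForDetachIncrease(initial int, readDetachCount func() (int, error)) error {
	err := wait.Poll(5*time.Second, 2*time.Minute, func() (bool, error) {
		current, err := readDetachCount()
		if err != nil {
			return false, nil // keep polling through transient scrape errors
		}
		return current > initial, nil
	})
	if err != nil {
		return fmt.Errorf("detach metric did not increase: %v", err)
	}
	return nil
}
```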
Automatic merge from submit-queue (batch tested with PRs 54270, 54479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete maxNumPVs and maxNumPVCs const of persistent_volume.go
**What this PR does / why we need it**:
The two constants maxNumPVs and maxNumPVCs are never used, so delete them!
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Speed up volume tests by reducing pod grace period
busybox-based pods don't react nicely to docker stop. By reducing the pod grace period we can save ~29 seconds per volume test.
```release-note
NONE
```
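To make the grace-period point concrete, a sketch of a test pod spec with a short termination grace period; the one-second value and names are illustrative.
```go
package volumesketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// volumeTestPod returns a busybox-style client pod with a short grace
// period, so tearing it down does not burn the default 30 seconds waiting
// for a shell that never handles SIGTERM.
func volumeTestPod(name, image string) *v1.Pod {
	grace := int64(1) // illustrative; the default is 30s
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []v1.Container{{
				Name:    "volume-tester",
				Image:   image,
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}
```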
Automatic merge from submit-queue (batch tested with PRs 52479, 53956). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update scheduler to use schedulerName selector
**What this PR does / why we need it**:
Update the scheduler to use a schedulerName selector when selecting pods in the podInformer
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52254
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
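For illustration, a hedged sketch of selecting only this scheduler's pods; it assumes `spec.schedulerName` is a supported pod field selector, which is how the change is described.
```go
package schedsketch

import "k8s.io/apimachinery/pkg/fields"

// podSelectorFor builds a field selector so the pod informer only receives
// pods addressed to this scheduler, instead of watching every pod and
// filtering client-side. Assumes spec.schedulerName is a supported pod
// field selector.
func podSelectorFor(schedulerName string) fields.Selector {
	return fields.OneTermEqualSelector("spec.schedulerName", schedulerName)
}
```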
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment e2e test for rollback with no revision to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
NetworkPolicy e2e: use ClientSet and update to CoreV1 and NetworkingV1 apis.
**What this PR does / why we need it**:
Update NetworkPolicy e2e test: use the public ClientSet and update to CoreV1 and NetworkingV1 apis.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52556, 52897, 54342). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete unused yaml files
**What this PR does / why we need it**:
This PR removes some unused yaml files that were copied earlier from the doc dir.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #54447
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Aggregator test uses framework namespace
**What this PR does / why we need it**:
Remove a duplicate namespace in the aggregator test, using the test framework's namespace instead of creating a new one.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53478
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig api-machinery
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add a notice for node e2e config files
ref https://github.com/kubernetes/kubernetes/pull/53542 and patched up with https://github.com/kubernetes/test-infra/pull/5107
So while migrating the jobs to prow, I haven't killed the `*.properties` files yet because some lingering jobs, and possibly local tests, are still using them. We have a copy of image-config.yaml in test-infra, and all `*.properties` files have been merged into the job configs.
Add a notice to remind people to also update the job configs in test-infra. Also add myself as a reviewer here so I get notified. I'll remove them once I've cleaned up all the legacy files here.
/assign @yguo0905 @dashpole @yujuhong
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Updating E2E test for deleting PVC when PVC is in use
**What this PR does / why we need it**:
This test updates an existing e2e test and adds extra verification.
The updated workflow of the test is as below:
1. Create PVC, wait until PV is provisioned. Create POD using PVC.
2. Verify POD is running and PV is attached to the node.
3. Delete PVC.
4. Verify Volume remains attached to the pod after deleting claim.
5. Verify Volume is accessible in the pod after deleting claim.
6. Verify associated PV is present and its status is Failed.
7. Delete Pod and wait until PV is unmounted and detached from the Node.
8. Wait and verify PV is deleted after POD is deleted.
**Which issue this PR fixes**
fixes https://github.com/vmware/kubernetes/issues/279
**Special notes for your reviewer**:
Test logs
```
# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build371606839/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:42:40 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:42:40 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:42:40 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:42:40 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:42:40 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod...
2017/10/16 15:42:40 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:42:40 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 293.775296ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.913+297ab03890a6a7-dirty", GitCommit:"297ab03890a6a76f268eb5415e0fb16f20b2309e", GitTreeState:"dirty", BuildDate:"2017-10-16T20:50:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:42:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 317.940582ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod
Conformance test: not doing test setup.
Oct 16 15:42:42.327: INFO: Overriding default scale value of zero to 1
Oct 16 15:42:42.327: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:42:42.577720 8325 e2e.go:369] Starting e2e run "51f11717-b2c3-11e7-bd54-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508193761 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:42:42.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 16 15:42:42.686: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:42:42.724: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:42:42.883: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:42:42.883: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:42:42.891: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:42:42.891: INFO: Dumping network health container logs from all nodes...
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere
should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:42:42.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:48
Oct 16 15:42:42.994: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[BeforeEach] [sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:56
[It] should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
STEP: running testSetupVSpherePersistentVolumeReclaim
STEP: creating vmdk
STEP: creating the pv
STEP: creating the pvc
Oct 16 15:42:44.595: INFO: Waiting for PV vspherepv-ksccp to bind to PVC pvc-n4rq7
Oct 16 15:42:44.595: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-n4rq7 to have phase Bound
Oct 16 15:42:44.606: INFO: PersistentVolumeClaim pvc-n4rq7 found but phase is Pending instead of Bound.
Oct 16 15:42:47.625: INFO: PersistentVolumeClaim pvc-n4rq7 found and phase=Bound (3.029926391s)
Oct 16 15:42:47.625: INFO: Waiting up to 5m0s for PersistentVolume vspherepv-ksccp to have phase Bound
Oct 16 15:42:47.632: INFO: PersistentVolume vspherepv-ksccp found and phase=Bound (6.598243ms)
STEP: Creating the Pod
STEP: Deleting the Claim
Oct 16 15:42:59.709: INFO: Deleting PersistentVolumeClaim "pvc-n4rq7"
STEP: Verify the volume is attached to the node
STEP: Verify the volume is accessible and available in the pod
Oct 16 15:43:00.076: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/root/.kube/config exec pvc-tester-r9ww9 --namespace=e2e-tests-persistentvolumereclaim-6pfpf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:43:00.604: INFO: stderr: ""
Oct 16 15:43:00.604: INFO: stdout: ""
Oct 16 15:43:00.604: INFO: Verified that Volume is accessible in the POD after deleting PV claim
Oct 16 15:43:00.610: INFO: Waiting up to 1m0s for PersistentVolume vspherepv-ksccp to have phase Failed
Oct 16 15:43:00.619: INFO: PersistentVolume vspherepv-ksccp found and phase=Failed (9.016306ms)
STEP: Deleting the Pod
Oct 16 15:43:00.619: INFO: Deleting pod pvc-tester-r9ww9
Oct 16 15:43:00.650: INFO: Waiting up to 5m0s for pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" to be "terminated due to deadline exceeded"
Oct 16 15:43:00.668: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.507993ms
Oct 16 15:43:02.675: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 2.024854663s
Oct 16 15:43:04.682: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 4.03197856s
Oct 16 15:43:06.688: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 6.037718623s
Oct 16 15:43:08.697: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 8.047192574s
Oct 16 15:43:10.703: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 10.052754761s
Oct 16 15:43:12.708: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 12.057876018s
Oct 16 15:43:14.714: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 14.063962712s
Oct 16 15:43:16.719: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 16.068826626s
Oct 16 15:43:18.725: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.074735397s
Oct 16 15:43:20.730: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 20.080498293s
Oct 16 15:43:22.736: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 22.086586123s
Oct 16 15:43:24.742: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 24.092219324s
Oct 16 15:43:26.747: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 26.097385301s
Oct 16 15:43:28.753: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 28.103127591s
Oct 16 15:43:30.758: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 30.108014823s
Oct 16 15:43:32.764: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.113847674s
Oct 16 15:43:34.772: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.122010171s
Oct 16 15:43:36.787: INFO: Pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" not found. Error: pods "pvc-tester-r9ww9" not found
Oct 16 15:43:36.787: INFO: Ignore "not found" error above. Pod "pvc-tester-r9ww9" successfully deleted
STEP: Verify PV is detached from the node after Pod is deleted
Oct 16 15:43:46.913: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:43:56.918: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:44:06.905: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Verify PV should be deleted automatically
Oct 16 15:44:06.905: INFO: Waiting up to 30s for PersistentVolume vspherepv-ksccp to get deleted
Oct 16 15:44:06.909: INFO: PersistentVolume vspherepv-ksccp was removed
[AfterEach] [sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:62
STEP: running testCleanupVSpherePersistentVolumeReclaim
Oct 16 15:44:06.962: INFO: Deleting PersistentVolume "vspherepv-ksccp"
[AfterEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:44:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-persistentvolumereclaim-6pfpf" for this suite.
Oct 16 15:44:15.325: INFO: namespace: e2e-tests-persistentvolumereclaim-6pfpf, resource: bindings, ignored listing per whitelist
Oct 16 15:44:15.638: INFO: namespace e2e-tests-persistentvolumereclaim-6pfpf deletion completed in 8.651759385s
• [SLOW TEST:92.734 seconds]
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
[sig-storage] persistentvolumereclaim:vsphere
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:44:15.651: INFO: Running AfterSuite actions on all node
Oct 16 15:44:15.651: INFO: Running AfterSuite actions on node 1
Ran 1 of 706 Specs in 92.974 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS
Ginkgo ran 1 suite in 1m33.830856163s
Test Suite Passed
2017/10/16 15:44:15 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod' finished in 1m34.75838192s
2017/10/16 15:44:15 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate resource relevant e2e test files to sig scheduling
**What this PR does / why we need it**:
Migrate resource-relevant e2e test files to sig scheduling. Not fully sure whether these e2e files belong to sig-node or sig-scheduling; feel free to contact me if you have a better solution.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add PodDisruptionBudget to scheduler cache.
**What this PR does / why we need it**:
This is the first step to add support for PodDisruptionBudget during preemption. This PR adds PDB to scheduler cache.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: None
**Release note**:
```release-note
Add PodDisruptionBudget to scheduler cache.
```
ref/ #53913
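As a rough illustration of what caching PDBs for later preemption checks can look like, here is a minimal, hypothetical sketch; the type and method names are assumptions, not the actual `schedulercache` API:
```go
// Hypothetical sketch only: a tiny thread-safe cache of PodDisruptionBudgets,
// keyed by namespace/name, that preemption logic could consult.
package main

import (
	"fmt"
	"sync"

	policy "k8s.io/api/policy/v1beta1"
)

type pdbCache struct {
	mu   sync.RWMutex
	pdbs map[string]*policy.PodDisruptionBudget
}

func newPDBCache() *pdbCache {
	return &pdbCache{pdbs: map[string]*policy.PodDisruptionBudget{}}
}

// AddOrUpdatePDB stores a deep copy so informer-owned objects are never shared.
func (c *pdbCache) AddOrUpdatePDB(pdb *policy.PodDisruptionBudget) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.pdbs[pdb.Namespace+"/"+pdb.Name] = pdb.DeepCopy()
}

func (c *pdbCache) RemovePDB(pdb *policy.PodDisruptionBudget) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.pdbs, pdb.Namespace+"/"+pdb.Name)
}

// ListPDBs returns every cached PDB so a caller can check whether evicting a
// candidate victim would violate one of them.
func (c *pdbCache) ListPDBs() []*policy.PodDisruptionBudget {
	c.mu.RLock()
	defer c.mu.RUnlock()
	out := make([]*policy.PodDisruptionBudget, 0, len(c.pdbs))
	for _, p := range c.pdbs {
		out = append(out, p)
	}
	return out
}

func main() {
	c := newPDBCache()
	c.AddOrUpdatePDB(&policy.PodDisruptionBudget{})
	fmt.Println("cached PDBs:", len(c.ListPDBs()))
}
```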
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add timothysc to test approvers
I've avoided this responsibility and leveraged super-powers, but I should own up and make it more legit. I've been working on the testing jiggery since epoch.
/cc @spiffxp @ixdy @fejta
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Try in-cluster config before using localhost:8080
**What this PR does / why we need it**:
When starting an e2e test in a pod in a cluster, if the host is
not specified on the command line, we currently default to
'http://127.0.0.1:8080'. We should instead discover the
host/port from the in-cluster config and use that when possible.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #53894
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
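A minimal sketch of the described fallback; `rest.InClusterConfig` is the real client-go call, while the helper and constant names are illustrative, not the e2e framework's own code:
```go
// Sketch of the host-discovery fallback described above: prefer the
// in-cluster config, and only fall back to the historical localhost default
// when it is unavailable.
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

const defaultHost = "http://127.0.0.1:8080"

func loadConfig(hostFlag string) (*rest.Config, error) {
	// An explicit --host flag always wins.
	if hostFlag != "" {
		return &rest.Config{Host: hostFlag}, nil
	}
	// Running inside a pod: the service account token and KUBERNETES_SERVICE_*
	// env vars let us discover the apiserver without any flags.
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	// Last resort: the old default, which only works on the master itself.
	return &rest.Config{Host: defaultHost}, nil
}

func main() {
	cfg, _ := loadConfig("")
	fmt.Println("using host:", cfg.Host)
}
```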
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make scheduler integration test faster
Avoid waiting 30 seconds for every negative test case. This commit
also reorganizes the test code to make it more readable.
It cuts the test time from 450s to 125s.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53302
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
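As an illustration of the approach (this is not the actual integration-test code), a negative case can poll with a short timeout rather than sleeping a fixed 30 seconds per case:
```go
// Hypothetical sketch: for a negative test case we only need to show that a
// pod stays unschedulable for a short window, so poll with a small timeout
// instead of sleeping 30s.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPodUnschedulable returns nil if scheduled never reports true within
// the timeout, i.e. the pod stayed pending as the negative case expects.
func waitForPodUnschedulable(scheduled func() (bool, error), timeout time.Duration) error {
	err := wait.Poll(100*time.Millisecond, timeout, scheduled)
	if err == wait.ErrWaitTimeout {
		return nil // never scheduled: exactly what a negative case wants
	}
	if err != nil {
		return err
	}
	return fmt.Errorf("pod was unexpectedly scheduled")
}

func main() {
	// Fake condition that never succeeds, standing in for "pod got scheduled".
	err := waitForPodUnschedulable(func() (bool, error) { return false, nil }, 2*time.Second)
	fmt.Println("negative case result:", err)
}
```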
Automatic merge from submit-queue (batch tested with PRs 52794, 54243, 54248, 53491, 53841). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp deployment upgrade test
**What this PR does / why we need it**:
This PR revamps the existing deployment upgrade test, removing redundant steps that are covered by the replicaset upgrade test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Special notes for your reviewer**:
The replicaset upgrade test PR is here: #52449
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test for statefulsets for vsphere cloud provider
**What this PR does / why we need it**:
This PR adds a new e2e test for statefulsets for the vSphere cloud provider.
The test performs the following tasks (a minimal StorageClass sketch follows the list):
- Create a storage class with thin diskformat.
- Create nginx service.
- Create nginx statefulsets with 3 replicas.
- Wait until all Pods are ready and PVCs are bounded with PV.
- Verify volumes are accessible in all statefulsets pods with creating empty file.
- Scale down statefulsets to 2 replicas.
- Scale up statefulsets to 3 replicas.
- Scale down statefulsets to 0 replicas and delete all pods.
- Delete all PVCs from the test namespace.
- Delete the storage class.
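A minimal sketch of the StorageClass used in the first step above, with illustrative object names; `kubernetes.io/vsphere-volume` is the in-tree provisioner and `diskformat: thin` is the parameter the step refers to:
```go
// Sketch of a StorageClass using the in-tree vSphere provisioner with a thin
// disk format. This is not the e2e test's own helper code.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func thinStorageClass() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{Name: "thin-sc"},
		// In-tree vSphere volume provisioner.
		Provisioner: "kubernetes.io/vsphere-volume",
		// "thin" avoids eagerly allocating the full VMDK size.
		Parameters: map[string]string{"diskformat": "thin"},
	}
}

func main() {
	sc := thinStorageClass()
	fmt.Printf("StorageClass %q with provisioner %s\n", sc.Name, sc.Provisioner)
}
```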
**Which issue this PR fixes**
fixes # https://github.com/vmware/kubernetes/issues/275
**Special notes for your reviewer**:
Test Logs
```
root@k8s-dev-vm-02:~/divyenp/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=vsphere\sstatefulset\stesting'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build247641121/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 19:24:33 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 19:24:33 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 19:24:33 e2e.go:57: The separator is required to use --get or --old flags
2017/10/18 19:24:33 e2e.go:58: The -- flag separator also suppresses this message
2017/10/18 19:24:33 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=vsphere\sstatefulset\stesting...
2017/10/18 19:24:33 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 19:24:34 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 290.682219ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1217+8b041da0f996c1-dirty", GitCommit:"8b041da0f996c185438a7ed8282f92734a2ed0e7", GitTreeState:"dirty", BuildDate:"2017-10-19T00:46:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1293+d462bac7805f53", GitCommit:"d462bac7805f536a43c7d5fb98aca138ba1237eb", GitTreeState:"clean", BuildDate:"2017-10-18T07:07:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 19:24:34 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 305.965323ms
2017/10/18 19:24:34 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting
Conformance test: not doing test setup.
Oct 18 19:24:35.808: INFO: Overriding default scale value of zero to 1
Oct 18 19:24:35.808: INFO: Overriding default milliseconds value of zero to 5000
I1018 19:24:36.073718 7768 e2e.go:383] Starting e2e run "a63561de-b474-11e7-8f6b-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508379875 - Will randomize all specs
Will run 1 of 713 specs
Oct 18 19:24:36.132: INFO: >>> kubeConfig: /root/.kube/config
Oct 18 19:24:36.139: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 19:24:36.177: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 19:24:36.321: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 19:24:36.321: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 19:24:36.326: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 19:24:36.326: INFO: Dumping network health container logs from all nodes...
Oct 18 19:24:36.338: INFO: Client version: v1.9.0-alpha.1.1217+8b041da0f996c1-dirty
Oct 18 19:24:36.340: INFO: Server version: v1.9.0-alpha.1.1293+d462bac7805f53
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vsphere statefulset
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 19:24:36.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:63
[It] vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
STEP: Creating StorageClass for Statefulset
STEP: Creating statefulset
Oct 18 19:24:36.489: INFO: Parsing statefulset from test/e2e/testing-manifests/statefulset/nginx/statefulset.yaml
Oct 18 19:24:36.503: INFO: Parsing service from test/e2e/testing-manifests/statefulset/nginx/service.yaml
Oct 18 19:24:36.514: INFO: creating web service
Oct 18 19:24:36.527: INFO: creating statefulset e2e-tests-vsphere-statefulset-gnfmp/web with 3 replicas and selector &LabelSelector{MatchLabels:map[string]string{app: nginx,},MatchExpressions:[],}
Oct 18 19:24:36.561: INFO: Found 0 stateful pods, waiting for 3
Oct 18 19:24:46.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:24:56.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:06.568: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:16.566: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:26.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:36.567: INFO: Found 1 stateful pods, waiting for 3
Oct 18 19:25:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:25:56.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:46.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:26:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:06.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:16.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:26.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:36.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:46.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:27:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:06.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:16.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:26.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:36.574: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:46.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:28:56.571: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:06.569: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:16.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:26.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:36.568: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:46.566: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:29:56.567: INFO: Found 2 stateful pods, waiting for 3
Oct 18 19:30:06.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:36.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:36.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:46.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:46.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:30:56.566: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:30:56.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:06.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:16.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:16.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:26.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:36.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:36.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:46.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:46.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:31:56.568: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:31:56.568: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:06.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:06.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:16.571: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:16.571: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 18 19:32:26.567: INFO: Waiting for pod web-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for pod web-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 18 19:32:26.567: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:32:26.605: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.170: INFO: stderr: ""
Oct 18 19:32:27.170: INFO: stdout of ls -idlh /usr/share/nginx/html on web-0: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:25 /usr/share/nginx/html
Oct 18 19:32:27.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:27.687: INFO: stderr: ""
Oct 18 19:32:27.688: INFO: stdout of ls -idlh /usr/share/nginx/html on web-1: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:29 /usr/share/nginx/html
Oct 18 19:32:27.688: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c ls -idlh /usr/share/nginx/html'
Oct 18 19:32:28.177: INFO: stderr: ""
Oct 18 19:32:28.177: INFO: stdout of ls -idlh /usr/share/nginx/html on web-2: 2 drwxr-xr-x 3 root root 4.0K Oct 19 02:32 /usr/share/nginx/html
Oct 18 19:32:28.183: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:28.690: INFO: stderr: ""
Oct 18 19:32:28.690: INFO: stdout of find /usr/share/nginx/html on web-0: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:28.690: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.166: INFO: stderr: ""
Oct 18 19:32:29.166: INFO: stdout of find /usr/share/nginx/html on web-1: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.166: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c find /usr/share/nginx/html'
Oct 18 19:32:29.696: INFO: stderr: ""
Oct 18 19:32:29.696: INFO: stdout of find /usr/share/nginx/html on web-2: /usr/share/nginx/html
/usr/share/nginx/html/lost+found
Oct 18 19:32:29.707: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-0 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.171: INFO: stderr: ""
Oct 18 19:32:30.171: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-0:
Oct 18 19:32:30.171: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-1 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:30.653: INFO: stderr: ""
Oct 18 19:32:30.653: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-1:
Oct 18 19:32:30.654: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.38.85 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-vsphere-statefulset-gnfmp web-2 -- /bin/sh -c touch /usr/share/nginx/html/1508380346587629054'
Oct 18 19:32:31.149: INFO: stderr: ""
Oct 18 19:32:31.150: INFO: stdout of touch /usr/share/nginx/html/1508380346587629054 on web-2:
STEP: Scaling down statefulsets to number of Replica: 2
Oct 18 19:32:31.263: INFO: Scaling statefulset web to 2
Oct 18 19:32:51.314: INFO: Waiting for statefulset status.replicas updated to 2
STEP: Verify Volumes are detached from Nodes after Statefulsets is scaled down
Oct 18 19:32:51.524: INFO: Waiting for Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" to detach from Node: "kubernetes-node2"
Oct 18 19:33:01.657: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Scaling up statefulsets to number of Replica: 3
Oct 18 19:33:01.657: INFO: Scaling statefulset web to 3
Oct 18 19:33:11.731: INFO: Waiting for statefulset status.replicas updated to 3
Oct 18 19:33:11.747: INFO: Waiting for statefulset status.replicas updated to 3
STEP: Verify all volumes are attached to Nodes after Statefulsets is scaled up
Oct 18 19:33:13.823: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-a6cf15ef-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node4"
Oct 18 19:33:15.990: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-cfb65f92-b474-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node3"
Oct 18 19:33:18.154: INFO: Verify Volume: "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-67b7e88c-b475-11e7-a38c-0050569c555f.vmdk" is attached to the Node: "kubernetes-node2"
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 19:33:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vsphere-statefulset-gnfmp" for this suite.
Oct 18 19:33:44.960: INFO: namespace: e2e-tests-vsphere-statefulset-gnfmp, resource: bindings, ignored listing per whitelist
Oct 18 19:33:44.960: INFO: namespace e2e-tests-vsphere-statefulset-gnfmp deletion completed in 26.620223678s
[AfterEach] [sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:67
Oct 18 19:33:44.960: INFO: Deleting all statefulset in namespace: e2e-tests-vsphere-statefulset-gnfmp
• [SLOW TEST:548.654 seconds]
[sig-storage] vsphere statefulset
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vsphere statefulset testing
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_statefulsets.go:155
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 19:33:45.006: INFO: Running AfterSuite actions on all node
Oct 18 19:33:45.006: INFO: Running AfterSuite actions on node 1
Ran 1 of 713 Specs in 548.875 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 712 Skipped PASS
Ginkgo ran 1 suite in 9m9.728218415s
Test Suite Passed
2017/10/18 19:33:45 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstatefulset\stesting' finished in 9m10.656371481s
2017/10/18 19:33:45 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
The Ceph server needs to create our "foo" volume on startup. This keeps the image
small; however, it makes the server container slow to start.
Add a sleep before the server is usable. Without this PR, all pods that use Ceph
fail to start for a couple of seconds with a cryptic "image foo not found" error,
which clutters the test and pod logs and makes it harder to spot real errors.
Automatic merge from submit-queue (batch tested with PRs 52753, 54034, 53982, 54209). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use multi-arch busybox for e2e
**What this PR does / why we need it**:
Since [multi-arch is supported already for Official images on Dockerhub](https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/), we can use `busybox` directly instead of having our own `GetBusyBoxImage` for multi-arch.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
xref #53958
**Special notes for your reviewer**:
/assign @mkumatag @ixdy
**Release note**:
```release-note
Use multi-arch busybox image for e2e
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Replace storage-class annotations with field in examples
**What this PR does / why we need it**:
Storage class is already GA. Replace the annotations with the `StorageClassName` field in the examples.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51435 (update: thanks @gyliu513 for the issue)
ref: https://github.com/kubernetes/kubernetes/pull/50654#discussion_r134954171
**Special notes for your reviewer**:
We may also want to remove the beta annotations in 1.8 since the field will have already been in two releases. If @kubernetes/sig-storage-api-reviews confirm this, I'd like to help remove it.
/cc @liggitt @jsafrane @msau42
**Release note**:
```release-note
NONE
```
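For illustration, a minimal sketch of the change applied to the examples, expressed with client-go types (the claim name and class name are illustrative): the beta annotation gives way to the first-class `StorageClassName` field.
```go
// Sketch contrasting the old beta annotation with the GA StorageClassName
// field on a PVC.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func pvcWithClass(className string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name: "data",
			// Old style (beta annotation, being phased out of the examples):
			// Annotations: map[string]string{"volume.beta.kubernetes.io/storage-class": className},
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			// New style: the first-class field.
			StorageClassName: &className,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		},
	}
}

func main() {
	pvc := pvcWithClass("fast")
	fmt.Println("PVC", pvc.Name, "requests StorageClass", *pvc.Spec.StorageClassName)
}
```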
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
pkg/api: extract Scheme/Registry/Codecs into pkg/api/legacyscheme
This serves as
- a preparation for the pkg/api->pkg/apis/core move
- and makes the dependency on the scheme explicit when visualizing
the remaining dependencies.
The latter helps with our efforts to split up the monolithic repo
into self-contained sub-repos, e.g. for kubectl, controller-manager
and kube-apiserver in the future.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add pod related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds pod related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
**Special notes for your reviewer**:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54045, 51375). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Upgrade to go1.9
**What this PR does / why we need it**:
Upgrade to go1.9. Upgrading is good. It's "the best golang release ever"!
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49484
**Special notes for your reviewer**:
**Release note**:
```release-note
Upgrade to go1.9
```
/assign @luxas @ixdy @wojtek-t
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
refactor pd.go for future tests
**What this PR does / why we need it**:
Refactored _test/e2e/storage/pd.go_ so that it will be easier to add new tests, which I plan on doing to address issue 52676
1. Condenses 8 `It` blocks into 3 table driven tests.
2. Adds several `By` descriptions and `Logf` messages.
3. provides more consistent formatting and messages.
**Special notes for your reviewer**:
The diff is large but mostly I've not altered any test. The one semantic change I made was to remove the call to verify a write to a PD when, in fact, nothing had been written yet. This was essentially a no-op since the verify code returned immediately if the passed-in map was empty (which it was since nothing had been written).
```release-note
NONE
```
cc @jingxu97 @copejon
Automatic merge from submit-queue (batch tested with PRs 54036, 53739). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove Sprintf when there are no placeholders in the formatting.
**What this PR does / why we need it**:
Minor cleanup.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53978, 54008, 53037). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix expected result in Custom Metrics - Stackdriver Adapter e2e test
**What this PR does / why we need it**:
This PR fixes a bug in e2e tests for Custom Metrics - Stackdriver Adapter
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix kubemark, juju, and libvirt-coreos README (from minions to nodes)
**What this PR does / why we need it**:
This PR fixes the old name (minion) to the new name (node) in the kubemark README.md.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53575, 53794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test case for downward API exposing pod UID
**What this PR does / why we need it**:
Pod UID is added as a downward API env var in #48125 for 1.8. This PR adds an e2e test case for it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: #48125
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
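A minimal sketch of the downward API usage the new test case exercises (the env var name is illustrative):
```go
// Sketch of exposing the pod UID through the downward API as an env var.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func podUIDEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				// metadata.uid became available to fieldRef in 1.8 (#48125).
				FieldPath: "metadata.uid",
			},
		},
	}
}

func main() {
	env := podUIDEnvVar()
	fmt.Printf("env %s <- fieldRef %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}
```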
Automatic merge from submit-queue (batch tested with PRs 48665, 52849, 54006, 53755). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add named-port ingress test
**What this PR does / why we need it**:
Validate correct behavior when a `NetworkPolicyIngressRule` refers to a named port rather than a numerical port, e.g. `serve-80` rather than `80`.
**Release note**:
```release-note
NONE
```
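A minimal sketch of the rule under test, assuming a pod whose container declares a port named `serve-80`:
```go
// Sketch of an ingress rule that selects a named container port ("serve-80")
// instead of the numeric port 80, which is the behavior the test validates.
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func namedPortIngressRule() networkingv1.NetworkPolicyIngressRule {
	port := intstr.FromString("serve-80") // resolved against the pod's container port names
	return networkingv1.NetworkPolicyIngressRule{
		Ports: []networkingv1.NetworkPolicyPort{{Port: &port}},
	}
}

func main() {
	rule := namedPortIngressRule()
	fmt.Println("allow ingress on port:", rule.Ports[0].Port.String())
}
```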
Automatic merge from submit-queue (batch tested with PRs 53106, 52193, 51250, 52449, 53861). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add replicaset upgrade test
**What this PR does / why we need it**:
This PR adds an upgrade test for existing replicasets.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52118
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53106, 52193, 51250, 52449, 53861). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
bump CNI to v0.6.0
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49480
**Special notes for your reviewer**:
/assign @luxas @bboreham @feiskyer
**Release note**:
```release-note
bump CNI to v0.6.0
```
When starting an e2e test in a pod in a cluster, if the host is
not specified in the command line, we default to using
'http://127.0.0.1:8080' currently. We should try the in-cluster
config, save it to a temporary file, and use that with kubectl.
Automatic merge from submit-queue (batch tested with PRs 53507, 53772, 52903, 53543). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Split downward API e2e test case for pod/host IP into two
**What this PR does / why we need it**:
Split the test case so that the version gate does not block the pod IP e2e test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: https://github.com/kubernetes/kubernetes/pull/42717#discussion_r144026427
**Special notes for your reviewer**:
/cc @timothysc @andrewsykim
Automatic merge from submit-queue (batch tested with PRs 53507, 53772, 52903, 53543). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e tests to verify vsphere volume lifecycle on a clustered datastore
**What this PR does / why we need it**:
This PR introduces tests for volume provisioning on a clustered datastore. It does so in three ways
1. Static provisioning (create vsphere volume and then create a pod with it)
2. Dynamic provisioning (specify clustered datastore in storage class parameters)
3. Dynamic provisioning with spbm policy (specify storage policy name in storage class parameters. This policy is a tag based policy and tagged to a clustered datastore)
**Which issue this PR fixes** :
fixes vmware#278
**Special notes for your reviewer**:
Set the environment variables as in the following example, due to the need mentioned in the description:
```
export CLUSTER_DATASTORE="dscl1/sharedVmfs-1"
export VSPHERE_SPBM_POLICY_DS_CLUSTER="gold_cluster"
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 51840, 53542, 53857, 53831, 53702). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Kubelet Evictions take Priority into account
Issue: https://github.com/kubernetes/kubernetes/issues/22212
This implements the eviction strategy documented here: https://github.com/kubernetes/community/pull/1162, and discussed here: https://github.com/kubernetes/community/pull/846.
When priority is not enabled, all pods are treated as equal priority.
This PR makes the following changes:
1. Changes the eviction ordering strategy to (usage < requests, priority, usage - requests); a minimal ordering sketch appears below
2. Changes unit testing to account for this change in eviction strategy (including tests where priority is disabled).
3. Adds a node e2e test which tests the eviction ordering of pods with different priorities.
/assign @dchen1107 @vishh
cc @bsalamat @derekwaynecarr
```release-note
Kubelet evictions take pod priority into account
```
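A minimal, illustrative sketch of the ordering in point 1 above (this is a simplified stand-in, not the kubelet's eviction code):
```go
// Illustrative sketch of the eviction ordering: pods whose usage is below
// requests are evicted last, then lower priority first, then larger overage
// (usage - requests) first.
package main

import (
	"fmt"
	"sort"
)

type podStats struct {
	name     string
	priority int32
	usage    int64 // e.g. bytes of memory in use
	requests int64 // e.g. bytes of memory requested
}

// evictBefore reports whether a should be evicted before b.
func evictBefore(a, b podStats) bool {
	aUnder, bUnder := a.usage < a.requests, b.usage < b.requests
	if aUnder != bUnder {
		return !aUnder // pods exceeding their requests are evicted first
	}
	if a.priority != b.priority {
		return a.priority < b.priority // lower priority is evicted first
	}
	return a.usage-a.requests > b.usage-b.requests // larger overage is evicted first
}

func main() {
	pods := []podStats{
		{"burst-low", 0, 600, 500},
		{"burst-high", 1000, 900, 500},
		{"under-request", 0, 100, 500},
	}
	sort.Slice(pods, func(i, j int) bool { return evictBefore(pods[i], pods[j]) })
	for _, p := range pods {
		fmt.Println("evict order:", p.name)
	}
}
```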
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use gcloud for enabling/disabling autoscaling in e2e tests
This removes the temporary solution added in #28011 as it's no longer necessary. It should reduce flakes caused by not waiting for master restart after disabling autoscaling.
Automatic merge from submit-queue (batch tested with PRs 47039, 53681, 53303, 53181, 53781). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use bazel to build/push kubemark image
Trying to get some proof of concept; the kubemark image is probably simple enough to get converted to bazel. (me bazel noob still trying it out locally)
cc @BenTheElder @ixdy @shyamjvs
/release-note-none
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve e2e tests of audit logging.
The test now includes:
* Verbs: create, list, watch, delete, get, update, patch.
* Resources: pods, deployments, secrets, config maps, custom resource
definition.
* More fields: user, resource, level, stage, presence of request and
response objects.
Fixes #49653
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[GCE kube-up] Allow creating/deleting custom network
**What this PR does / why we need it**:
From https://github.com/kubernetes/test-infra/issues/4472.
This is the first step to make PR jobs use a custom network instead of an auto network (so that we will be less likely to hit the subnetwork quota issue).
The last commit is purely for testing out the changes on PR jobs. It will be removed after review.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE.
**Special notes for your reviewer**:
/assign @bowei @nicksardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53668, 53624, 52639, 53581, 51215). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add extra log and node env metadata support.
This PR:
1) Make log collection logic extensible via flags, so that we could collect more daemon logs in this PR. (e.g. `containerd.log` and `cri-containerd.log`)
2) Add extra node metadata from specified environment variable. (e.g. `PULL_REFS` in prow).
@krzyzacy I'll change the test-infra side soon. Let's discuss whether we should move/copy this code to test infra in your refactoring.
/cc @dchen1107 @yujuhong @abhi @mikebrow
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53668, 53624, 52639, 53581, 51215). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Local e2e test fixes
**What this PR does / why we need it**:
1. Remove tests using TestContainerOutput because they don't wait for unmount
2. Fix scheduling error test to handle updated event msgs.
@kubernetes/sig-storage-pr-reviews
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53597
**Release note**:
NONE
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bump kube-dns version used in e2e
**What this PR does / why we need it**: Updates the version of kube-dns used in the e2e network tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #53153
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53204, 53364, 53559, 53589, 53088). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Mulligan: Remove deprecated and experimental fields from KubeletConfiguration
Revert "Merge pull request #51857 from kubernetes/revert-51307-kc-type-refactor"
This reverts commit 9d27d92420, reversing
changes made to 2e69d4e625.
See original: #51307
We punted this from 1.8 so it could go through an API review. The point
of this PR is that we are trying to stabilize the kubeletconfig API so
that we can move it out of alpha, and unblock features like Dynamic
Kubelet Config, Kubelet loading its initial config from a file instead
of flags, kubeadm and other install tools having a versioned API to rely
on, etc.
We shouldn't rev the version without both removing all the deprecated
junk from the KubeletConfiguration struct, and without (at least
temporarily) removing all of the fields that have "Experimental" in
their names. It wouldn't make sense to lock in to deprecated fields.
"Experimental" fields can be audited on a 1-by-1 basis after this PR,
and if found to be stable (or sufficiently alpha-gated), can be restored
to the KubeletConfiguration without the "Experimental" prefix.
Related issue: https://github.com/kubernetes/kubernetes/issues/53084
**Release note**:
```release-note
NONE
```
/cc @kubernetes/api-reviewers
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Skip podpreset test if the alpha feature settings/v1alpha1 is disabled
**What this PR does / why we need it**: Skip this test if it is not able to find the requested resource, so the test does not consistently fail.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53079
**Special notes for your reviewer**:
**Release note**:
```release-note
Skip podpreset test if the alpha feature settings/v1alpha1 is disabled
```
Revert "Merge pull request #51857 from kubernetes/revert-51307-kc-type-refactor"
This reverts commit 9d27d92420, reversing
changes made to 2e69d4e625.
See original: #51307
We punted this from 1.8 so it could go through an API review. The point
of this PR is that we are trying to stabilize the kubeletconfig API so
that we can move it out of alpha, and unblock features like Dynamic
Kubelet Config, Kubelet loading its initial config from a file instead
of flags, kubeadm and other install tools having a versioned API to rely
on, etc.
We shouldn't rev the version without both removing all the deprecated
junk from the KubeletConfiguration struct, and without (at least
temporarily) removing all of the fields that have "Experimental" in
their names. It wouldn't make sense to lock in to deprecated fields.
"Experimental" fields can be audited on a 1-by-1 basis after this PR,
and if found to be stable (or sufficiently alpha-gated), can be restored
to the KubeletConfiguration without the "Experimental" prefix.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make feature gates loadable from a map[string]bool
Command line flag API remains the same. This allows ComponentConfig
structures (e.g. KubeletConfiguration) to express the map structure
behind feature gates in a natural way when written as JSON or YAML.
For example:
KubeletConfiguration Before:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates: "DynamicKubeletConfig=true,Accelerators=true"
```
KubeletConfiguration After:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates:
  DynamicKubeletConfig: true
  Accelerators: true
```
Fixes: #53024
```release-note
The Kubelet's feature gates are now specified as a map when provided via a JSON or YAML KubeletConfiguration, rather than as a string of key-value pairs.
```
/cc @mikedanese @jlowdermilk @smarterclayton
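To show why the map form is the natural fit for a versioned config file, here is a small sketch that decodes the "after" YAML into a trimmed stand-in struct (not the real `KubeletConfiguration` type):
```go
// Sketch: the map form of featureGates round-trips through standard
// (un)marshalling, whereas the old comma-separated string needed custom parsing.
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type miniKubeletConfig struct {
	Kind         string          `json:"kind"`
	FeatureGates map[string]bool `json:"featureGates"`
}

const doc = `
kind: KubeletConfiguration
featureGates:
  DynamicKubeletConfig: true
  Accelerators: true
`

func main() {
	var cfg miniKubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Println("DynamicKubeletConfig enabled:", cfg.FeatureGates["DynamicKubeletConfig"])
}
```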
Automatic merge from submit-queue (batch tested with PRs 50223, 53205). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Create e2e tests for Custom Metrics - Stackdriver Adapter and HPA based on custom metrics from Stackdriver
**What this PR does / why we need it**:
- Add e2e test for Custom Metrics - Stackdriver Adapter
- Add e2e test for HPA based on custom metrics from Stackdriver
- Enable HorizontalPodAutoscalerUseRESTClients option
**Release note**:
```release-note
Horizontal pod autoscaler uses REST clients through the kube-aggregator instead of the legacy client through the API server proxy.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Increase backoffLimit for job that we expect to fail several times
**What this PR does / why we need it**:
Since the introduction of `backoffLimit` for a job, that single test failed the majority of times with: `BackoffLimitExceeded: Job has reach the specified backoff limit`.
I'm bumping this to 999, so that it has enough room to fail several times.
**Which issue this PR fixes**:
Fixes #35507.
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
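For reference, a minimal sketch of the knob being raised; the Job name and image are illustrative, not the e2e test's actual manifest:
```go
// Sketch of a Job spec whose backoffLimit is large enough that the test's
// expected failures don't terminate the Job early.
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func jobWithBackoff(limit int32) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "flaky-job"},
		Spec: batchv1.JobSpec{
			BackoffLimit: &limit, // room for many retries before BackoffLimitExceeded
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{
						{Name: "work", Image: "busybox", Command: []string{"sh", "-c", "exit 1"}},
					},
				},
			},
		},
	}
}

func main() {
	job := jobWithBackoff(999)
	fmt.Println(job.Name, "backoffLimit:", *job.Spec.BackoffLimit)
}
```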
Automatic merge from submit-queue (batch tested with PRs 53678, 53677, 53682, 53673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix to prevent downward api change break on older versions
Signed-off-by: Timothy St. Clair <timothysc@gmail.com>
**What this PR does / why we need it**:
Prevents "should provide pod and host IP as an env var [Conformance]" from running on older versions whose api does not have that field and will break on those clusters.
This is not a upstream tested configuration, but downstream folks do this regularly.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
N/A
**Special notes for your reviewer**:
N/A
**Release note**:
```
Prevent downward api-change from breaking on older version
```
/cc @kubernetes/sig-testing-bugs @jpbetz @marun
Automatic merge from submit-queue (batch tested with PRs 53678, 53677, 53682, 53673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix typo in StatefulSet e2e test
Found it while reviewing #53218
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
wait for pod to be fully deleted
**What this PR does / why we need it**:
Fix flaky glusterfs io-streaming tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49529
**Special notes for your reviewer**:
1) max potential wait for complete pod deletion is ~~15m~~ 5m.
2) ~~removed [Flaky] from HostCleanup, _e2e/node/kubelet.go_ since pod deletion is reliable now.~~
3) ~~added tag [Slow] to HostCleanup due to long max wait for pod deletion.~~
After all CI tests run reliably we can consider removing the [Flaky] tag (2, above), or do that in a separate pr.
```release-note
NONE
```
cc @msau42
Automatic merge from submit-queue (batch tested with PRs 52354, 52949, 53551). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add client and server versions to the e2e.test output.
Fixes #53502.
```release-note
NONE
```
Sample output:
```
Oct 6 15:02:44.001: INFO: Client version: v1.9.0-alpha.1.737+3b1b19a1e2a9a4-dirty
Oct 6 15:02:44.039: INFO: Server version: v1.8.0
```
/assign @timothysc
Automatic merge from submit-queue (batch tested with PRs 52354, 52949, 53551). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable API chunking and promote to beta for 1.9
All list watchers default to using chunking. The server by default fills pages to prevent low-cardinality filters from making excessive numbers of requests. Fixes an issue with continuation tokens where a `../` could be used if the feature was enabled.
```release-note
API chunking via the `limit` and `continue` request parameters is promoted to beta in this release. Client libraries using the Informer or ListWatch types will automatically opt in to chunking.
```
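As an illustration of the client-side view (a sketch, not code from this PR; the pre-context List signature shown matches client-go of this era), a consumer pages through results with `limit` and `continue`:
```go
package paging

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAllPods walks every page the server returns, 500 items at a time.
func listAllPods(client kubernetes.Interface) error {
	opts := metav1.ListOptions{Limit: 500}
	for {
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(opts)
		if err != nil {
			return err
		}
		_ = pods.Items // process this page
		if pods.Continue == "" {
			return nil // last page reached
		}
		opts.Continue = pods.Continue // ask for the next page
	}
}
```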
The command-line flag API remains the same. This allows ComponentConfig
structures (e.g. KubeletConfiguration) to express the map structure
behind feature gates in a natural way when written as JSON or YAML.
For example:
KubeletConfiguration Before:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates: "DynamicKubeletConfig=true,Accelerators=true"
```
KubeletConfiguration After:
```
apiVersion: kubeletconfig/v1alpha1
kind: KubeletConfiguration
featureGates:
DynamicKubeletConfig: true
Accelerators: true
```
Automatic merge from submit-queue (batch tested with PRs 53525, 53652). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
apimachinery: remove ObjectCopier interface(s)
The big commit is a mechanical, transitive removal of the copier interfaces in all structs and function calls.
The etcd3 storage now attempts to fill partial pages to prevent clients
from having to make more round trips (latency from server to etcd is lower
than from client to server). The server makes repeated requests to etcd of
the current page size, then uses the filter function to drop non-matching
results. After this change the apiserver will always return full pages,
but we leave the language in place that clients must tolerate partially filled pages.
Reduces tail latency of large filtered lists, such as viewing pods
assigned to a node.
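To make the page-filling behaviour concrete, here is an illustrative sketch only (not the apiserver's actual code): the storage layer keeps fetching chunks from the backing store and drops non-matching items until the page is full or the keyspace is exhausted.
```go
package storage

// fillPage asks the backing store for successive chunks and keeps only items
// the filter accepts, so the caller gets a page that is as full as possible.
func fillPage(fetch func(cont string, limit int) (items []string, next string, err error),
	match func(string) bool, limit int) ([]string, string, error) {

	var page []string
	cont := ""
	for {
		items, next, err := fetch(cont, limit)
		if err != nil {
			return nil, "", err
		}
		for _, it := range items {
			if match(it) {
				page = append(page, it)
			}
		}
		if len(page) >= limit || next == "" {
			return page, next, nil // page full, or nothing left to read
		}
		cont = next // keep filling from where the last chunk ended
	}
}
```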
Automatic merge from submit-queue (batch tested with PRs 53444, 52067, 53571, 53182). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp replicaset integration tests
**What this PR does / why we need it**:
This PR revamps existing replicaset integration tests. Some unit tests have been converted to integration tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51484
**Release note**:
```release-note
NONE
```
**TODO List**:
- [x] add an integration test to verify scale endpoint works
- [ ] convert testReplicaSetConditionCheck() to integration test, and modify the test as replicaset's condition has been removed
- [ ] ~~HPA-related replicaset integration test (may be better suited under HPA integration tests)~~
- [x] verify all tests from "Suggested unit tests to retain" list of the internal doc will not be converted to integration tests, or convert the tests accordingly
- [ ] ~~refactor sync call tree (refer deployment and daemonset PRs)~~
- [x] further improve written integration tests (revise test strategies, remove redundant GET / UPDATE calls, add more relevant sub-tests)
- [x] remove unit tests that have overlapping testing goals with written integration tests
Automatic merge from submit-queue (batch tested with PRs 53621, 52320, 53625). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
revamp replicaset e2e tests
**What this PR does / why we need it**:
This PR removes some replicaset e2e tests as they will be converted to integration tests:
(1) condition check test
(2) pod adoption test
(3) pod release(orphaning) test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52118
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53567, 53197, 52944, 49593). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clean up in `cluster_size_autoscaling.go`
**What this PR does / why we need it**:
Fix `golint` errors.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add launching Cluster Autoscaler in Kubemark
**What this PR does / why we need it**:
Allows launching Cluster Autoscaler in Kubemark.
To do it, set ENABLE_KUBEMARK_CLUSTER_AUTOSCALER flag to true. This currently only works with one nodegroup, for which you can specify minimum and maximum number of nodes and name. (KUBEMARK_AUTOSCALER_MIN_NODES, KUBEMARK_AUTOSCALER_MAX_NODES, KUBEMARK_AUTOSCALER_MIG_NAME).
It is important to note that NUM_NODES has a different meaning when launching Cluster Autoscaler - we always start with only one node, but NUM_NODES is used to calculate the size of the Kubemark master and addon components.
There are no changes to the current setup if ENABLE_KUBEMARK_CLUSTER_AUTOSCALER is set to false.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50447, 53308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[e2e] add service session affinity test case
**What this PR does / why we need it**:
**Which issue this PR fixes**:
Add service session affinity test case for e2e
fixes #31712
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51771, 52971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
pass labelSelector to server side opaquely
**What this PR does / why we need it**:
From @smarterclayton
> The server is responsible for handling label selection for the most part. There is some level of client side processing possible, but for the most part `label selector` should be able to be passed opaquely.
xref #50140
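For illustration (a sketch using standard client-go names of this era, not this PR's diff): the raw selector string is placed in `ListOptions.LabelSelector` and evaluated by the server, rather than being parsed and applied client-side.
```go
package selectors

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listBySelector passes the selector through opaquely; the server does the filtering.
func listBySelector(client kubernetes.Interface, ns, selector string) (*v1.PodList, error) {
	return client.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector})
}
```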
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
/assign @smarterclayton @liggitt
**Release note**:
```release-note
None
```
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.
Without this change, the e2e service-related Networking tests would always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.
This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.
fixes #53568
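A rough sketch of the fallback described above (a hypothetical helper, not the test framework's actual function): prefer an external node address and fall back to an internal one when none is reported.
```go
package netutil

import v1 "k8s.io/api/core/v1"

// pickNodeIP returns an ExternalIP when the cloud provider reports one,
// otherwise the first InternalIP; it may return "" if neither exists.
func pickNodeIP(node *v1.Node) string {
	internal := ""
	for _, addr := range node.Status.Addresses {
		switch addr.Type {
		case v1.NodeExternalIP:
			if addr.Address != "" {
				return addr.Address // external address available: use it
			}
		case v1.NodeInternalIP:
			if internal == "" {
				internal = addr.Address
			}
		}
	}
	return internal // fallback for bare-metal/VM setups without external IPs
}
```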
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Let local node e2e return error.
Fixes #52665
Let `make test-e2e-node` return an error when it fails. Currently it always returns exit code 0, whether it fails or not.
@yguo0905 Could you help me review this?
Signed-off-by: Lantao Liu <lantaol@google.com>
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Skip e2e check for logs API path if provider is skeleton
There is a networking e2e test with the It() description:
```
"should provide unchanging, static URL paths for kubernetes api services"
```
This test performs GETs from the Kubernetes API using various paths,
including "/logs". This test for a GET using path "/logs" should be
skipped for provider type "skeleton", since this path is unsupported.
This change adds "skeleton" to the list of providers for which
this test case should be skipped.
fixes #53529
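As a rough illustration of the skip (a sketch, not the PR's diff; the list contents are assumptions for the example), the `/logs` check becomes conditional on the configured provider:
```go
package apipaths

// providersWithoutLogsPath lists providers for which the GET on "/logs"
// should be skipped; "skeleton" is what this change adds.
var providersWithoutLogsPath = map[string]bool{
	"skeleton": true,
}

// shouldTestLogsPath reports whether the "/logs" API path check should run.
func shouldTestLogsPath(provider string) bool {
	return !providersWithoutLogsPath[provider]
}
```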
**What this PR does / why we need it**:
This change adds "skeleton" to the list of providers for which
the test for an API GET using the "/logs" path should be skipped.
This is needed because, as far as I can tell, the "skeleton" provider
doesn't support the "/logs" API path.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53529
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
PodReady should be replaced with podutil.IsPodReady
**What this PR does / why we need it**:
PodReady should be replaced with podutil.IsPodReady.
Thanks.
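For reference, a minimal usage sketch of the shared helper (import path as it existed in-tree around this release; illustrative only):
```go
package readiness

import (
	v1 "k8s.io/api/core/v1"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

// podIsReady delegates to the shared helper instead of re-checking pod conditions by hand.
func podIsReady(pod *v1.Pod) bool {
	return podutil.IsPodReady(pod)
}
```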
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52768, 51898, 53510, 53097, 53058). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
migration of federation test
**What this PR does / why we need it**:
Migrate federation(multicluster) e2e test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
Ref issue https://github.com/kubernetes/kubernetes/issues/50735
- Move `ubernetes_lite.go` to a newly created directory named **multicluster**.
**Special notes for your reviewer**:
**Release note**:
none
/cc @quinton-hoole
There is a networking e2e test with the It() description:
```
"should provide unchanging, static URL paths for kubernetes api services"
```
This test performs GETs from the Kubernetes API using various paths,
including "/logs". This test for a GET using path "/logs" should be
skipped for provider type "skeleton", since this path is unsupported.
This change adds "skeleton" to the list of providers for which
this test case should be skipped.
fixes #53529
Automatic merge from submit-queue (batch tested with PRs 53278, 53184). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added integration test for TaintNodeByCondition.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #42001
**Release note**:
```release-note
Added integration test for TaintNodeByCondition.
```
Automatic merge from submit-queue (batch tested with PRs 53278, 53184). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add API version apps/v1, and bump DaemonSet to apps/v1
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: kubernetes/features#484
**Special notes for your reviewer**: This PR targets `master`, as a backup if #53223 (targeting features branch) falls through
@kubernetes/sig-apps-api-reviews
**Release note**:
```release-note
Add API version apps/v1, and bump DaemonSet to apps/v1
```
Automatic merge from submit-queue (batch tested with PRs 53227, 53120). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E test to verify clean up of stale dummy VM for vSphere dynamic provisioning
Verify that the stale dummy VMs created during dynamic provisioning are deleted by the cleanup routine in the vSphere Cloud Provider.
**Testing Done:**
- Create a storage class with an invalid policy on a VSAN datastore.
- Create a PVC using the above storage class.
- Verify that the PVC is not bound.
- Delete the PVC.
- Sleep for 6 minutes so that the vSphere Cloud Provider cleanup routine can delete the stale dummy VMs.
- Verify that the VM is not present; otherwise fail the test.
@rohitjogvmw @divyenpatel
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 51750, 53195, 53384, 53410). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add ping6 option for e2e ext connectivity test for IPv6-only clusters
e2e tests provide only an (IPv4) ping test for external connectivity.
We need a way to conditionally run a ping6 external connectivity check,
and disable the (IPv4) ping-based external connectivity check,
for end-to-end testing on IPv6-only clusters.
This feature will be needed for creating gating IPv6 CI tests.
fixes #53383
**What this PR does / why we need it**:
This adds an IPv6 (ping6) version of the external connectivity ping check to the e2e test suite,
and adds "Feature:" flags for selecting whether the IPv4 or IPv6 (or both) versions
of the connectivity test should be run. We need this change to be able to use the
e2e test suite in upstream gating IPv6 CI tests on IPv6-only clusters (at least until
dual-stack operation is fully supported in Kubernetes).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53383
**Special notes for your reviewer**:
Please let me know if there are better tags to use for selecting IPv4 vs IPv6 testing.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53345, 53389). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add IPv6 option for e2e iPerf test
The e2e iPerf test case currently only runs in IPv4 mode.
This change adds an option to run an iPerf test in IPv6 mode (i.e. by running
iPerf with a "-V" command line flag), so that the test can be run on
IPv6-only clusters.
**What this PR does / why we need it**:
This change adds an option to run an iPerf test in IPv6 mode (i.e. by running
iPerf with a "-V" command line flag), so that the test can be run on
IPv6-only clusters. It also adds a Feature tag to the current IPv4 iPerf test
so that it can be disabled when running e2e tests on an IPv6-only cluster.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53388
**Special notes for your reviewer**:
Please let me know if there are better "Feature:" tags to use for selecting whether to run the IPv4 vs IPv6 test case.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53228, 53232, 53353). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixes a regression introduced by PR 52290 where extended resource
capacity may temporarily drop to zero after a kubelet restart, and Pods restarted during
that time window could fail to be scheduled.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/53342
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixes test/e2e_node/gpu_device_plugin.go test failure.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes https://github.com/kubernetes/kubernetes/issues/53354
**Special notes for your reviewer**:
**Release note**:
```release-note
```
e2e tests provide only an (IPv4) ping test for external connectivity.
We need a way to conditionally run a ping6 external connectivity check,
and disable the (IPv4) ping-based external connectivity check,
for end-to-end testing on IPv6-only clusters.
This feature will be needed for creating gating IPv6 CI tests.
fixes #53383
Automatic merge from submit-queue (batch tested with PRs 49826, 53404). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix typo in health check url
fixes the url used to scrape bootstrapping details in a test failure case
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove conformance tag for internet connectivity
**What this PR does / why we need it**:
ICMP ping is not available in many environments, so we should avoid
using an e2e test based on that premise as a conformance test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Part of Fix for #52098, Please see #51287 for more discussion
**Release note**:
```release-note
NONE
```
The e2e iPerf test case currently only runs in IPv4 mode.
This change adds an option to run an iPerf test in IPv6 mode (i.e. by running
iPerf with a "-V" command line flag), so that the test can be run on
IPv6-only clusters.
Automatic merge from submit-queue (batch tested with PRs 52723, 53271). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update file location in e2e test comment
**What this PR does / why we need it**: The location provided, "docs/design/expansion.md", leads to a page saying the file has moved, with a link that returns a 404 error. The file was moved out of tree to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/expansion.md, so the comment here should be updated.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53270
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Disable autoscaling before removing autoscaled node pool
This is to prevent flakes due to API calls failing in AfterEach during master restart, which is triggered by deleting an autoscaled node pool. Adding disable call before deleting node pool should prevent this as we'll wait for master restart in disableAutoscaler function.
While it may be faster to wait after deletion of autoscaled node pools, this is less complex and will be easier to remove in the future when changing autoscaling settings no longer triggers a master restart.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix skip condition for autoscaling test of scale to zero
This fixes the test running in the wrong setup (on a single MIG vs. multiple MIGs, as intended).
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Change RBAC storage version to v1 for 1.9
v1 was introduced in 1.8, but the storage version remained at v1beta1 to accommodate HA rolling upgrades. In 1.9, we can change the persisted and preferred version to v1.
```release-note
RBAC objects are now stored in etcd in v1 format. After completing an upgrade to 1.9, RBAC objects (Roles, RoleBindings, ClusterRoles, ClusterRoleBindings) should be migrated to ensure all persisted objects are written in `v1` format, prior to `v1alpha1` support being removed in a future release.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remake cluster size autoscaling scale to zero test
This PR affects only cluster size autoscaling test suite. Changes:
* check whether autoscaling is enabled by looking for a node group with a given max number of nodes instead of min, as the field is omitted if the value is 0
* split scale to zero test into GKE & GCE versions, add GKE-specific setup and verification
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable specifying an unconfined AppArmor profile
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52370
**Special notes for your reviewer**:
/assign @tallclair @liggitt
**Release note**:
```release-note
Enable specifying an unconfined AppArmor profile
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
openapi: Validate unregistered types, if they can be found
**What this PR does / why we need it**:
Types that are not registered/hard-coded in kubectl won't be validated, even if they could be because they are defined in openapi. If they are neither registered nor in openapi, then skip validation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes nothing
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate sig-ui e2e test
**What this PR does / why we need it**:
Migrate sig-ui e2e tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
none
Automatic merge from submit-queue (batch tested with PRs 53234, 53252, 53267, 53276, 53107). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Prepull images after disk eviction tests
Example failure: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-flaky/2855
Disk eviction tests trigger image garbage collection. It can remove images required for subsequent tests.
This results in the error during pod creation:
`timed out waiting for the condition`
You can see in the events after the test:
`I0929 15:47:05.884] I0929 15:17:09.376591 2309 util.go:4734] Event(v1.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-localstorage-eviction-test-mn5v4", Name:"container-disk-hog-pod", UID:"8dba851c-a528-11e7-a9a6-42010a800fd7", APIVersion:"v1", ResourceVersion:"116", FieldPath:"spec.containers{container-disk-hog-container}"}): type: 'Warning' reason: 'ErrImageNeverPull' Container image "busybox" is not present with pull policy of Never`
/assign @Random-Liu
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixes a flakiness in GPUDevicePlugin e2e test.
Waits until the nvidia GPU resource disappears from all nodes after deleting the
device plugin DaemonSet, to make sure its pods are deleted from all nodes.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/53281
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53263, 52967, 53262, 52654, 53187). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix sed command to not try shell redirection
RunCmd uses Go's os/exec library to run commands directly. Since these
are not run through a shell, we can't use shell syntax for piping or
file redirection. The proper way to do that is to create a Command
object and set the Std{in,out,err} pipes appropriately. Luckily sed
can handle the behavior we need without having to manually set this up.
fixes https://github.com/kubernetes/kubeadm/issues/472
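To illustrate the pattern being described (a generic sketch, not the kubeadm test's code): with os/exec there is no shell, so output redirection has to be wired up on the Cmd itself.
```go
package runner

import (
	"os"
	"os/exec"
)

// runWithRedirect runs a command without a shell and writes its stdout to a
// file, the explicit equivalent of `name args... > outPath`.
func runWithRedirect(outPath, name string, args ...string) error {
	out, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer out.Close()

	cmd := exec.Command(name, args...) // not run through a shell, so ">" would be a literal arg
	cmd.Stdout = out                   // wire stdout to the file explicitly
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```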
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
In cluster autoscaling tests, improve error logging on enable autoscaling failure
This adds logging command and request output in addition to error.
Automatic merge from submit-queue (batch tested with PRs 51311, 52575, 53169). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix a scheduler flaky e2e test
**What this PR does / why we need it**:
Makes a scheduler e2e test that verifies the resource limit predicate more robust.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53066
**Release note**:
```release-note
NONE
```
@kubernetes/sig-scheduling-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 50280, 52529, 53093, 53108, 53168). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve PVC ref volume metric test robustness
This test has been flaking. The current working theory is that
volume stats collection didn't run in time to grab the metrics
from the newly created pod.
Made the following changes:
- Added more logs to help debug future failures
- Poll metrics a few additional times before failing the test
fixes #53150
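A minimal sketch of the extra polling (placeholder names, not the test's real helpers), using the apimachinery wait package:
```go
package metricspoll

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollForMetric retries the metrics check instead of failing on the first
// attempt, giving the volume stats collector time to run.
func pollForMetric(check func() (bool, error)) error {
	return wait.Poll(10*time.Second, 2*time.Minute, check)
}
```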
Automatic merge from submit-queue (batch tested with PRs 52630, 53110, 53136, 53075). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix host network flake tests
**What this PR does / why we need it**:
Fix flaky test "Security Context when creating a pod in the host network namespace should listen on same port in the host network containers".
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53091
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
federation: simplify deepcopy calls
We have static DeepCopy now without the possibility of errors. Makes the code much simpler.
Automatic merge from submit-queue (batch tested with PRs 50988, 50509, 52660, 52663, 52250). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added device plugin e2e kubelet failure test
Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>
**What this PR does / why we need it**:
This is part of issue #52859 (fixes #52859).
This PR adds an e2e_node test for the device plugin.
Specifically it implements testing of failure handling by the device plugin components in case Kubelet restart / crashes.
I might try to refactor the GPU tests in a later PR.
**Special notes for your reviewer**:
@jiayingz @vishh
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52990, 53064, 52686, 52221, 53069). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Allow kubelet metrics tests to run on gke
**What this PR does / why we need it**:
On GKE, you can still access kubelet metrics, so allow the kubelet metrics test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
NONE
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve HT detection
**What this PR does / why we need it**:
Fix Cpu Manager e2e node tests that fail due to hard-coded `Thread(s) per core` char position.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52988
Automatic merge from submit-queue (batch tested with PRs 52721, 53057, 52493, 52998, 52896). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment collision avoidance e2e test to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
In autoscaling tests, add waiting for new pool to become ready
This adds missing timeout when adding a node pool in GKE scale to 0 test and improves logging error when enabling autoscaling.
Automatic merge from submit-queue (batch tested with PRs 51648, 53030, 53009). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Fixed intermittent e2e aggregator test on GKE.
**What this PR does / why we need it**: Issue was caused by another test cleaning up its namespace.
This caused the namespace controller to try to clean up that namespace.
This involves deleting all flunders under that namespace.
However the sample-apiserver was not honoring the namespace filter.
So the flunders for the test would randomly disappear.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50945
**Special notes for your reviewer**: Requires fixing the container image to contain this fix in order to work.
**Release note**:
```release-note
NONE
```
Fixes issue #50945.
Issue was caused by another test cleaning up its namespace.
This caused the namespace controller to try to clean up that namespace.
This involves deleting all flunders under that namespace.
However the sample-apiserver was not honoring the namespace filter.
So the flunders for the test would randomly disappear.
Fixed image path to pick up newly built fixes from this PR.
Automatic merge from submit-queue (batch tested with PRs 51759, 53001, 52806). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Fix broken statefulset e2e test
**What this PR does / why we need it**:
Fixes the CockroachDB statefulset e2e test.
This was broken back in #43637 when the logic in
`(*StatefulSetTester).CreateStatefulSet` switched from using
`generated.ReadOrDie` to read the entire service.yaml file and pass it
to kubectl to using `manifest.SvcFromManifest`, which assumes that the
file contains only a single service.
To fix the test, just remove the second service, which isn't needed to test the Statefulset functionality.
**Which issue this PR fixes**:
Fixes#52750
**Special notes for your reviewer**:
N/A
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51067, 52319, 52803, 52961, 51972). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Add support for skeleton in GetSigner
Adding support for skeleton to GetSigner to be able to run
e2e tests against a bare metal multinode cluster.
Closes #35613
Automatic merge from submit-queue (batch tested with PRs 51067, 52319, 52803, 52961, 51972). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Move prometheus metrics for docker operations into dockershim
Automatic merge from submit-queue (batch tested with PRs 52960, 52373). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Refactor eviction tests
fixes: #52203
We have a bunch of eviction tests, which each break independently, and take a large amount of time to fix.
This refactors these tests to share the core eviction testing logic. Each test needs only to set kubelet flags and specify which pods to run.
I decided to omit the memory eviction tests because they work. Best not to disturb them.
A large portion of the code changes is the renaming of inode_eviction_test.go -> eviction_test.go.
This should probably wait until after https://github.com/kubernetes/kubernetes/pull/50392
/assign @mtaufen @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 52905, 52766). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Refactor parsing cluster autoscaler status, add logging error
Minor improvements to autoscaling test suite and e2e framework.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Fix GCE LB resource cleanup for service e2e tests.
**What this PR does / why we need it**: Fix GCE LB resource cleanup logic.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52347
**Special notes for your reviewer**:
/assign @shyamjvs @nicksardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52880, 52855, 52761, 52885, 52929). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Don't need to check useAnnotation in dns e2e test
**What this PR does / why we need it**:
hostname/subdomain annotations were removed in #44137. This PR removes the check.
Also, `var dnsServiceLabelSelector` is not used anymore.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: https://github.com/kubernetes/kubernetes/pull/44137
**Special notes for your reviewer**:
/cc @bowei @MrHohn
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52880, 52855, 52761, 52885, 52929). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Remove cloud provider rackspace
**What this PR does / why we need it**:
For now, we have to implement functions in both the `rackspace` and `openstack` packages if we want to add a function for cinder, for example [resize for cinder](https://github.com/kubernetes/kubernetes/pull/51498). Since openstack has implemented all the functions rackspace has, and rackspace has been considered deprecated for a long time ([rackspace deprecated](https://github.com/rackspace/gophercloud/issues/592)),
after talking with @mikedanese and @jamiehannaford offline, I sent this PR to remove `rackspace` in favor of `openstack`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52854
**Special notes for your reviewer**:
**Release note**:
```release-note
The Rackspace cloud provider has been removed after a long deprecation period. It was deprecated because it duplicates a lot of the OpenStack logic and can no longer be maintained. Please use the OpenStack cloud provider instead.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Allow dns e2e test case for ExternalName to run on aws
**What this PR does / why we need it**:
#52840 uses the allocated clusterIP instead of a hard-coded one, so we don't need to care about the clusterIP range of the CI job config. Let it run on pull-kubernetes-e2e-kops-aws.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47224
**Special notes for your reviewer**:
ref: https://github.com/kubernetes/test-infra/pull/4462
/cc @bowei @MrHohn @justinsb
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
bazel: build/test almost everything
**What this PR does / why we need it**: Miscellaneous cleanups and bug fixes. The main motivating idea here was to make `bazel build //...` and `bazel test //...` mostly work. (There's a few reasons these still don't work, but we're a lot closer.)
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @BenTheElder @mikedanese @spxtr
Automatic merge from submit-queue (batch tested with PRs 52469, 52574, 52330, 52689, 52829). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Fixing E2E test - after restarting the kubelet, the test expects the node's status to be NotReady
**What this PR does / why we need it**:
This PR fixes the e2e tests that involve restarting the kubelets. Currently, after the kubelet is restarted, the test expects the node's state to be NotReady.
Instead, after restarting the kubelet we should wait for some time and then check that the node's status is Ready.
The node should not be checked for the NotReady state after restarting the kubelet.
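A rough sketch of that wait using plain client-go polling (the real test uses the e2e framework's node helpers; the pre-context Get signature shown matches this release era):
```go
package nodewait

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls until the node reports the Ready condition as True.
func waitForNodeReady(c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.Poll(5*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors while the kubelet restarts
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady {
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```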
**Which issue this PR fixes**
fixes # https://github.com/vmware/kubernetes/issues/285
**Special notes for your reviewer**:
@BaluDontu @rohitjogvmw @tusharnt
Test logs before fix
-----
STEP: Restarting kubelet
Sep 15 11:26:32.768: INFO: Attempting sudo systemctl restart kubelet
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: command: sudo systemctl restart kubelet
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: stdout: ""
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: stderr: ""
Sep 15 11:26:33.001: INFO: ssh root@10.162.22.205:22: exit code: 0
Sep 15 11:26:33.002: INFO: Waiting up to 1m0s for node kubernetes-node2 condition Ready to be false
Sep 15 11:26:33.012: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:35.023: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:37.032: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:39.041: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:41.051: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:43.061: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:45.070: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:47.080: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:49.093: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:51.105: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:53.117: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:55.128: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:57.140: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:26:59.151: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:01.158: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:03.167: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:05.180: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:07.188: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:09.210: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:11.221: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:13.231: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:15.240: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:17.249: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:19.263: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:21.272: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:23.283: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:25.309: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:27.317: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:29.327: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:31.342: INFO: Condition Ready of node kubernetes-node2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
Sep 15 11:27:33.343: INFO: Node kubernetes-node2 didn't reach desired Ready condition status (false) within 1m0s
Sep 15 11:27:33.343: INFO: Node kubernetes-node2 failed to enter NotReady state
[AfterEach] [sig-storage] PersistentVolumes:vsphere
Test logs after fix
-----
STEP: Restarting kubelet
Sep 18 15:40:49.066: INFO: Checking if sudo command is present
Sep 18 15:40:49.342: INFO: Checking if systemctl command is present
Sep 18 15:40:49.573: INFO: Attempting `sudo systemctl status kubelet | grep 'Main PID'`
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: command: sudo systemctl status kubelet | grep 'Main PID'
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: stdout: " Main PID: 19715 (docker)\n"
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:49.733: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:49.733: INFO: Attempting `sudo systemctl restart kubelet`
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: command: sudo systemctl restart kubelet
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: stdout: ""
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:49.986: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:49.988: INFO: Attempting `sudo systemctl status kubelet | grep 'Main PID'`
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: command: sudo systemctl status kubelet | grep 'Main PID'
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: stdout: " Main PID: 25021 (docker)\n"
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: stderr: ""
Sep 18 15:40:50.158: INFO: ssh root@10.162.16.97:22: exit code: 0
Sep 18 15:40:50.158: INFO: Noticed that kubelet PID is changed. Waiting for 30 Seconds for Kubelet to come back
Sep 18 15:41:20.159: INFO: Waiting up to 1m0s for node kubernetes-node4 condition Ready to be true
STEP: Testing that written file is accessible.
Sep 18 15:41:20.191: INFO: Running '/Users/divyenp/github/vmware/kubernetes/_output/dockerized/bin/darwin/amd64/kubectl --server=https://10.162.0.45 --kubeconfig=/Users/divyenp/.kube/config exec --namespace=e2e-tests-pv-9j8j0 pvc-tester-3t9ds -- /bin/sh -c cat /mnt/_SUCCESS'
Sep 18 15:41:20.855: INFO: stderr: ""
Sep 18 15:41:20.855: INFO:
Sep 18 15:41:20.855: INFO: Volume mount detected on pod pvc-tester-3t9ds and written file /mnt/_SUCCESS is readable post-restart.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51902, 52718, 52687, 52137, 52697). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Multi-arch allowPrivilegeEscalation tests
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52698
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52485, 52443, 52597, 52450, 51971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Removing PrometheusPushGateway --prom-push-gateway flag from e2e tests.
**What this PR does / why we need it**: Removing obsolete PrometheusPushGateway --prom-push-gateway flag from e2e tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45947
**Special notes for your reviewer**:
**Release note**:
```release-note
Removing `--prom-push-gateway` flag from e2e tests
```
Automatic merge from submit-queue (batch tested with PRs 50392, 52108, 52083, 52134, 51526). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
e2e: minor changes to network/service testing utils
Add more logging to help debug. Also refactor several functions to improve
reusability.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
inode eviction tests fill a constant number of inodes
Issue: #52203
inode eviction tests pass often on some OS distributions, and almost never on others. See [these testgrid tests](https://k8s-testgrid.appspot.com/sig-node#kubelet-flaky-gce-e2e&include-filter-by-regex=Inode)
These differences are most likely because different images have smaller or larger inode capacity, and thus percentage-based rules (e.g. inodesFree<50%) make the test more stressful for some OS distributions than others.
This changes the test to require that a constant number of inodes are consumed, regardless of the number of inodes in the filesystem, by setting the new threshold to:
nodefs.inodesFree<(current_inodes_free - 200k)
so that after pods consume 200k inodes, they will be evicted. It requires querying the summary API until we successfully determine the current number of free Inodes.
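For concreteness, a hedged sketch of how such a threshold string could be derived from the summary API reading (names are placeholders, not the test's code):
```go
package eviction

import "fmt"

// inodeEvictionThreshold builds a hard-eviction value that fires once roughly
// 200k inodes have been consumed beyond the node's current free count.
func inodeEvictionThreshold(currentInodesFree uint64) string {
	const inodesToConsume = 200000
	return fmt.Sprintf("nodefs.inodesFree<%d", currentInodesFree-inodesToConsume)
}
```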
Automatic merge from submit-queue (batch tested with PRs 51929, 52015, 51906, 52069, 51542). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
move specialDefaultResourcePrefixes out of vendor/k8s.io/apiserver
Just a clean-up; fixes the TODO "move out of this package, it is not generic".
@sttts PTAL
/assign @sttts
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
should use time.Since instead of time.Now().Sub
**What this PR does / why we need it**:
should use time.Since instead of time.Now().Sub
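The two forms are equivalent; `time.Since` is just the idiomatic spelling, as in this small illustration:
```go
package timing

import "time"

// measure shows the replacement: time.Since(start) is equivalent to
// time.Now().Sub(start) and reads better.
func measure(start time.Time) time.Duration {
	_ = time.Now().Sub(start) // before
	return time.Since(start)  // after
}
```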
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Debug for issues #50945
The Aggregator e2e test is intermittently failing on GKE but not GCE.
Adding the following debugging to help trace the issue.
Make sure we always use the same rest client.
Randomly generate the flunder resource name to detect parallel tests.
Print endpoints for sample-system in case multiple instances.
Print original and new pods in case the pod has been restarted.
**What this PR does / why we need it**: Adds debugging for aggregator e2e test to track down GKE flakiness.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50945
**Special notes for your reviewer**: This is primarily additional debugging information.
**Release note**:
```release-note
NONE
```
Fixed import list.
Remove rand seed.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>..
Don't specify clusterIP in dns e2e test
**What this PR does / why we need it**:
Different upgrade tests may configure different service clusterIP ranges. If we specify the clusterIP in dns e2e test, it will succeed in one upgrade test but fail in another. This PR doesn't specify clusterIP. It just uses the allocated clusterIP.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50274
**Special notes for your reviewer**:
Hopefully this really fixes that issue.
/cc @thockin @MrHohn
**Release note**:
```release-note
NONE
```
This was broken back in #43637 when the logic in
`(*StatefulSetTester).CreateStatefulSet` switched from using
`generated.ReadOrDie` to read the entire service.yaml file and pass it
to kubectl to using `manifest.SvcFromManifest`, which assumes that the
file contains only a single service.
Fixes #52750
Automatic merge from submit-queue (batch tested with PRs 52843, 52710, 52821, 52844). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
improve retrying logic when checking CA status
This should reduce the flake rate in cluster size autoscaling test suite.
Automatic merge from submit-queue (batch tested with PRs 48406, 52819). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixed nil dereference in dynamic provisioning e2e tests
**What this PR does / why we need it**: Fixed nil dereference in dynamic provisioning e2e tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52815
**Release note**:
```release-note-none
NONE
```
/sig storage
/assign @saad-ali
/cc @wongma7
/release-note-none
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Retry if possible while creating latency pods in density test
Saw the [last run](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/37) of density test on 5k-node fail due to it:
```
Expected error:
<*errors.StatusError | 0xc44f2fd7a0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
Status: "Failure",
Message: "timeout",
Reason: "",
Details: nil,
Code: 500,
},
}
timeout
not to have occurred
```
cc @kubernetes/sig-scalability-misc
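A hedged sketch of the kind of retry this adds: wrap the create call and retry a bounded number of times on transient apiserver errors (function names and limits below are illustrative, not the framework's actual helpers):
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// createPod stands in for the client call that intermittently fails with a
// transient 500 "timeout" from the apiserver.
func createPod(attempt int) error {
	if attempt < 2 {
		return errors.New("timeout")
	}
	return nil
}

// createWithRetry retries a bounded number of times so a single transient
// apiserver timeout no longer fails the whole density test run.
func createWithRetry(retries int, delay time.Duration) error {
	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		if err = createPod(attempt); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d retries: %v", retries, err)
}

func main() {
	fmt.Println(createWithRetry(3, 100*time.Millisecond))
}
```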
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support kubernetes-anywhere provider
**What this PR does / why we need it**:
Implements a new `kubernetes-anywhere` provider to allow upgrade testing in the e2e binary. This is the final step to allow https://github.com/kubernetes/test-infra/pull/4495 and https://github.com/kubernetes/kubernetes-anywhere/pull/450.
**Which issue this PR fixes**:
https://github.com/kubernetes/kubeadm/issues/311
**Special notes for your reviewer**:
Some questions I had
- Does the `--provider` flag specified [here](dbbf6261e0/jobs/config.json (L8587)) get sent to the flag defined [here](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/test_context.go#L219)? Or should I add another `--provider` flag inside `--upgrade_args` like this: `--upgrade_args=... --provider=kubernetes-anywhere`?
- Is it necessary to add waiting logic after the `make` command, or will it implicitly handle that by itself?
Some other points:
- I chose `sed` to manipulate the current kubernetes-anywhere `.config` rather than duplicating another [`anywhere.go`](https://github.com/kubernetes/test-infra/blob/master/kubetest/anywhere.go). One suggestion was to use `jq` but since the config on disk is not serialized to JSON yet, I'm not sure how that'd work.
- Since I don't have a GCE/GKE account or vCenter, I can't actually verify the e2e binary works. I've managed to build it, but if somebody could quickly run a smoke test, I'd appreciate it. This is my first poke around test-infra and e2e, so there might be some plumbing missing
/cc @jessicaochen @luxas @pipejakob @roberthbailey
Automatic merge from submit-queue (batch tested with PRs 52500, 52533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add mount options e2e test
**What this PR does / why we need it**: A test for newly added StorageClass.mountOptions and PV.mountOptions: provision a pv using a class with its storageclass.mountoptions set, and the end result should be that the mount options can be seen from the mounter.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: Fixes #52138
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
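For reference, a sketch of the kind of object under test: a StorageClass with `mountOptions` set, which dynamic provisioning should carry through to the PV and, ultimately, to the mount on the node (the provisioner and option values below are illustrative assumptions, not the test's fixture):
```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A class whose dynamically provisioned volumes should be mounted with
	// these options; the provisioner and options are example values only.
	sc := &storagev1.StorageClass{
		ObjectMeta:   metav1.ObjectMeta{Name: "sc-with-mount-options"},
		Provisioner:  "kubernetes.io/gce-pd",
		MountOptions: []string{"debug"},
	}

	// The e2e test provisions a PVC against such a class and then asserts the
	// options show up in the mount table on the node running the pod.
	fmt.Printf("%s: mountOptions=%v\n", sc.Name, sc.MountOptions)
}
```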
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
In autoscaling integration test, use allocatable instead of capacity for node memory
This makes the remaining cluster autoscaling test (integration test of HPA and CA working together to scale up the cluster) use node allocatable resources when computing how much memory we need to consume in order to trigger scale up/prevent scale down. Follow up to #52650 as that one is already merging.
cc @wasylkowski
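Roughly, the change amounts to sizing the memory-consuming pods from `status.allocatable` rather than `status.capacity`; a minimal sketch using the core v1 types (imports and values are assumptions, not the test's actual code):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryToConsume sizes the memory request used to trigger (or avoid) a scale
// event. Reading Allocatable instead of Capacity accounts for memory reserved
// for the system and the kubelet, which is what the scheduler actually sees.
func memoryToConsume(node *v1.Node, fraction float64) int64 {
	allocatable := node.Status.Allocatable[v1.ResourceMemory]
	return int64(float64(allocatable.Value()) * fraction)
}

func main() {
	node := &v1.Node{}
	node.Status.Allocatable = v1.ResourceList{
		// e.g. 8Gi of capacity minus system/kubelet reservations
		v1.ResourceMemory: resource.MustParse("7Gi"),
	}
	fmt.Println("bytes to consume:", memoryToConsume(node, 0.7))
}
```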
Automatic merge from submit-queue (batch tested with PRs 51337, 47080, 52646, 52635, 52666). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix: update system spec to support Docker 17.03
Docker 17.03 is 1.13 with bug fixes so they are of the same minor version release. We've validated them both in https://github.com/kubernetes/kubernetes/issues/42926. This PR changes the system spec to support Docker 17.03.
**This should be in 1.8.**
**Release note**:
```
Kubernetes 1.8 supports docker version 17.03.x.
```
/assign @Random-Liu
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
In cluster size autoscaling tests, use allocatable instead of capacity for node memory
This makes cluster size autoscaling e2e tests use node allocatable resources when computing how much memory we need to consume in order to trigger scale up/prevent scale down. It should fix failing tests in GKE.
Automatic merge from submit-queue (batch tested with PRs 52350, 52659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test for storageclass.reclaimpolicy
**What this PR does / why we need it**: Adds another dynamic provisioning test where the storageclass.reclaimpolicy == retain. Have to manually delete the PV at the end of the test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: https://github.com/kubernetes/kubernetes/issues/52138
**Special notes for your reviewer**: I have not tested it but it's ready for review; I will comment and edit this when I've verified it actually works.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51796, 52223)
Add bsalamat to sig-scheduling-maintainers
**What this PR does / why we need it**:
Adds bsalamat to sig-scheduling-maintainers.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note
NONE
```
@kubernetes/sig-scheduling-pr-reviews @davidopp @timothysc @k82cn @wojtek-t
Automatic merge from submit-queue
Bugfix: Fix e2e Flaky Apps/Job BackoffLimit test
This fix is linked to the PR #51153 that introduced the `JobSpec.BackoffLimit`.
Previously the timeout used in the test was too aggressive and generated flaky test executions. Now it uses the default `framework.JobTimeout` used in other tests.
**What this PR does / why we need it**:
This PR should fix flaky "[sig-apps] Job should exceed backoffLimit" test, due to a too short timeout duration.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #51153
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)
Improve apiserver metrics reporting
Normalize "WATCHLIST" to "WATCH", add "scope" to the other metrics (listing 50k pods is != listing pods in a namespace), and add a new scope "resource" to cover individual resource calls.
This roughly aligns metrics with our ACL model (technically resource scope is GET, but POST to a subresource and POST to a namespace are not the same thing).
```release-note
WATCHLIST calls are now reported as WATCH verbs in prometheus for the apiserver_request_* series. A new "scope" label is added to all apiserver_request_* values that is either 'cluster', 'resource', or 'namespace' depending on which level the query is performed at.
```
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Add more tests for pod preemption
**What this PR does / why we need it**:
Adds more e2e and integration tests for pod preemption.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
This PR is based on #50949. Only the last commit is new.
**Release note**:
```release-note
NONE
```
ref/ #47604
@kubernetes/sig-scheduling-pr-reviews @davidopp
Automatic merge from submit-queue
[fluentd-gcp addon] Remove some e2e tests out of blocking suites
Fixes https://github.com/kubernetes/kubernetes/issues/52433
Some Stackdriver Logging e2e tests are broken in release-blocking suites:
- Due to the change in Docker 1.13, on some systems logs are automatically split by 16K chunks. This PR removes an e2e test that assumes otherwise
- In large clusters, it's not possible to ingest system logs from all nodes
Since it's not a Kubernetes problem per se, mitigating this by removing these tests from blocking suites.
Automatic merge from submit-queue
Fix failing autoscaling test in GKE
This should fix `[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]` by getting a list of nodes from GKE nodepool in a different way (filtering nodes by labels.) Currently, gcloud command used for it is failing, as we only have GKE node pool name in the test and not the actual MIG name.
Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)
Remove the conversion of client config
It was needed because the clientset code in client-go was a copy of the clientset code in Kubernetes. client-go is authoritative now, so we can remove the nasty copy.
This fix is linked to the PR #51153 that introduced the
JobSpec.BackoffLimit.
Previously the timeout used in the test was too aggressive and generated
flaky test executions. Now it uses the default framework.JobTimeout used
in other tests.
Automatic merge from submit-queue
Make CPU constraint for l7-lb-controller in density test scale with #nodes
Just noticed that we changed the memory last time, but didn't change cpu. From the last run:
```
Sep 13 04:25:03.360: INFO: Unexpected error occurred: Container l7-lb-controller-v0.9.6-gce-scale-cluster-master/l7-lb-controller is using 0.642709233/0.15 CPU
```
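Purely as an illustration of the idea, a constraint that scales with cluster size rather than staying fixed might look like this (the base and per-node numbers are made up, not the test's values):
```go
package main

import "fmt"

// cpuConstraint returns an allowed CPU usage for the l7-lb-controller that
// grows with cluster size instead of being a fixed constant. The base and
// per-node increments are made-up example values, not the test's.
func cpuConstraint(numNodes int) float64 {
	const base = 0.1       // cores allowed on a tiny cluster
	const perNode = 0.0002 // extra cores allowed per node
	return base + perNode*float64(numNodes)
}

func main() {
	for _, n := range []int{100, 1000, 5000} {
		fmt.Printf("%d nodes -> %.3f cores\n", n, cpuConstraint(n))
	}
}
```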
Automatic merge from submit-queue (batch tested with PRs 52316, 52289, 52375)
Extends GPUDevicePlugin e2e test to exercise device plugin restarts.
**What this PR does / why we need it**:
This is part of issue #52189 but does not fix it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52316, 52289, 52375)
[fluentd-gcp addon] Trim too long log entries due to Stackdriver limitations
Stackdriver doesn't support log entries bigger than 100KB, so by default fluentd plugin just drops such entries. To avoid that and increase the visibility of this problem it's suggested to trim long lines instead.
/cc @igorpeshansky
```release-note
[fluentd-gcp addon] Fluentd will trim lines exceeding 100KB instead of dropping them.
```
Automatic merge from submit-queue
Version gates the ephemeral storage e2e test
Version gates the ephemeral storage e2e test.
**Release note**:
```
NONE
```
@kubernetes/sig-testing-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
StatefulSet: Deflake e2e RunHostCmd more.
It turns out that at some points while the Node is recovering from a reboot, we get a different kind of error ("unable to upgrade connection"). Since we can't distinguish these transient errors from an error encountered after successfully executing the remote command, let's just retry all errors for 5min. If this doesn't work, I'm gonna blame it on sig-node.
ref #48031
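In effect the check becomes a poll that swallows every error for up to five minutes; a minimal sketch using `wait.PollImmediate` (the RunHostCmd call here is a stand-in, not the framework's actual helper):
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runHostCmd stands in for the framework's remote-exec helper; while a node is
// rebooting it can fail with transient errors such as "connection refused" or
// "unable to upgrade connection".
func runHostCmd() (string, error) {
	return "ok", nil
}

func main() {
	var out string
	// Retry *all* errors for up to 5 minutes, since transient errors cannot be
	// told apart from real failures of the remote command.
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		o, e := runHostCmd()
		if e != nil {
			fmt.Println("retrying after error:", e)
			return false, nil
		}
		out = o
		return true, nil
	})
	fmt.Println(out, err)
}
```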
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
Port Guestbook tests to multiarch
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52232
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Node e2e tests for the CPU Manager.
**What this PR does / why we need it**:
- Adds node e2e tests for the CPU Manager implementation in https://github.com/kubernetes/kubernetes/pull/49186.
**Special notes for your reviewer**:
- Previous PR in this series: #51180
- Only `test/e2e_node/cpu_manager_test.go` must be reviewed as a part of this PR (i.e., the last commit). Rest of the comments belong in #51357 and #51180.
- The tests have been run on `n1-standard-n4` and `n1-standard-n2` instances on GCE.
To run this node e2e test, use the following command:
```sh
make test-e2e-node TEST_ARGS='--feature-gates=DynamicKubeletConfig=true' FOCUS="CPU Manager" SKIP="" PARALLELISM=1
```
CC @ConnorDoyle @sjenning
It turns out that at some points while the Node is recovering from a
reboot, we get a different kind of error ("unable to upgrade
connection"). Since we can't distinguish these transient errors from an
error encountered after successfully executing the remote command,
let's just retry all errors for 5min. If this doesn't work, I'm gonna
blame it on sig-node.
Automatic merge from submit-queue (batch tested with PRs 52007, 52196, 52169, 52263, 52291)
Summary tests should expect rss usage now
**What this PR does / why we need it**:
Fixes summary test to expect rss usage now.
Previously, cAdvisor reported rss and not total_rss, but that has now been fixed in the most recent version of cAdvisor now in the project.
See: https://github.com/kubernetes/kubernetes/pull/43399#issuecomment-287858599
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50289, 52106)
Fix AppArmor test at scale
**What this PR does / why we need it**:
The AppArmor test only runs on a single node, but previously was loading the necessary profiles to every node. This caused unnecessary churn in very large clusters, so this PR updates the test to only load the profiles to a single node, and ensure the test pod is run on that node (using pod affinity).
**Which issue this PR fixes**: fixes #51791
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52227, 52120)
Fix discovery restmapper finding resources in non-preferred versions
Fixes: #52219
Also reverts behavioral changes to tests that version-qualified cronjobs to work around this issue.
The discovery rest mapper was only populating the priority rest mapper's search list with preferred groupversions.
That meant that if a resource existed in multiple non-preferred versions, AND did not exist in the preferred version (like cronjob, which only exists in v1beta2.batch and v2alpha1.batch, but not v1.batch), the priority restmapper would not find it in its group/version priority list, and would return an error.
```release-note
Fixed an issue looking up cronjobs when they existed in more than one API version
```
Automatic merge from submit-queue
Extend nvidia-gpus e2e test to include a device plugin based test
**What this PR does / why we need it**:
This is needed to verify device plugin feature.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/features/issues/368
**Special notes for your reviewer**:
Related test_infra PR: https://github.com/kubernetes/test-infra/pull/4265
**Release note**:
Add an e2e test for nvidia gpu device plugin
Automatic merge from submit-queue (batch tested with PRs 52047, 52063, 51528)
Improve dynamic kubelet config e2e node test and fix bugs
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
I had been sitting on this until the cAdvisor fix merged in #51751, as these tests fail without that fix.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add pod preemption to the scheduler
**What this PR does / why we need it**:
This is the last of a series of PRs to add priority-based preemption to the scheduler. This PR connects the preemption logic to the scheduler workflow.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48646
**Special notes for your reviewer**:
This PR includes other PRs which are under review (#50805, #50405, #50190). All the new code is located in 43627afdf9.
**Release note**:
```release-note
Add priority-based preemption to the scheduler.
```
ref/ #47604
/assign @davidopp
@kubernetes/sig-scheduling-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 51900, 51782, 52030)
apiservers: stratify versioned informer construction
The versioned shared informer factory has been part of the GenericApiServer config,
but its construction depended on other fields of that config (e.g. the loopback
client config). Hence, the order of changes to the config mattered.
This PR stratifies this by moving the SharedInformerFactory from the generic Config
to the CompleteConfig struct. Hence, it is only filled during completion when it is
guaranteed that the loopback client config is set.
While doing this, the CompletedConfig construction is made more type-safe again,
i.e. the use of SkipCompletion() is considerably reduced. This is achieved by
splitting the derived apiserver Configs into the GenericConfig and the ExtraConfig
part. Then the completion is structural again because CompleteConfig is again
of the same structure: generic CompletedConfig and local completed ExtraConfig.
Fixes #50661.
Automatic merge from submit-queue
fix format of forbidden messages
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51813
**Special notes for your reviewer**:
/assign @deads2k @liggitt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
Multiarch support for pets images
**What this PR does / why we need it**:
This PR is for multiarch support for pets image
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52133
**Special notes for your reviewer**:
Copied over the `contrib/pets/peer-finder` as this one is heavily used in many docker images under `test/images`. After this PR I'll submit the PR in contrib project to remove it.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Pipe in upgrade image target for kube-proxy migration tests
**What this PR does / why we need it**:
https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-upgrade-kube-proxy-ds&width=20
and
https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-downgrade-kube-proxy-ds&width=20
are still failing.
Reproduced it locally and found the node image is being defaulted to debian during upgrade (it was gci before upgrade) because we don't pass in `gci` via `--upgrade-target`. And for some reason (haven't figured out why yet), the upgraded node uses the debian image with gci startup scripts...
This PR pipes in `--upgrade-target` for kube-proxy migration tests, hopefully in conjunction with https://github.com/kubernetes/test-infra/pull/4447 it will bring the tests back to normal.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE
**Special notes for your reviewer**:
Sorry for bothering again.
/assign @krousey
**Release note**:
```release-note
NONE
```
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
Automatic merge from submit-queue (batch tested with PRs 52097, 52054)
Move paused deployment e2e tests to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
StatefulSet: Deflake e2e RunHostCmd.
The initial retry up to 20s was giving up too soon. I'm seeing this test flake because the Node rebooted and it takes ~2min to recover. Now StatefulSet RunHostCmd calls will use the same 5min timeout as with other Pod state checks.
ref #48031
Automatic merge from submit-queue (batch tested with PRs 51728, 49202)
Enable CRI-O stats from cAdvisor
**What this PR does / why we need it**:
cAdvisor may support multiple container runtimes (docker, rkt, cri-o, systemd, etc.)
As long as the kubelet continues to run cAdvisor, runtimes with native cAdvisor support may not want to run multiple monitoring agents to avoid performance regression in production. Pending the kubelet running a more lightweight monitoring solution, this PR allows remote runtimes to have their stats pulled from cAdvisor when cAdvisor is the registered stats provider, by introspection of the runtime endpoint.
See issue https://github.com/kubernetes/kubernetes/issues/51798
**Special notes for your reviewer**:
cAdvisor will be bumped to pick up https://github.com/google/cadvisor/pull/1741
At that time, CRI-O will support fetching stats from cAdvisor.
**Release note**:
```release-note
NONE
```
The initial retry up to 20s was giving up too soon.
I'm seeing this test flake because the Node rebooted and it takes ~2min
to recover.
Now StatefulSet RunHostCmd calls will use the same 5min timeout as with
other Pod state checks.
Automatic merge from submit-queue (batch tested with PRs 51956, 50708)
Move autoscaling/v2 from alpha1 to beta1
This graduates autoscaling/v2alpha1 to autoscaling/v2beta1. The move is more-or-less just a straightforward rename.
Part of kubernetes/features#117
```release-note
v2 of the autoscaling API group, including improvements to the HorizontalPodAutoscaler, has moved from alpha1 to beta1.
```
Automatic merge from submit-queue (batch tested with PRs 51839, 51987)
Disable rbac/v1alpha1, settings/v1alpha1, and scheduling/v1alpha1 by default
**What this PR does / why we need it**: Disables alpha features which were previously enabled by default. Also changes tests which relied on these alpha features being enabled by default.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47691
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixed a bug where some alpha features were enabled by default.
```
The feature is still Alpha and at times, the IP address previously used
by the load balancer in the test will not be completely freed even after
the load balancer is long gone. In this case, the test URL with the IP
would return a 404 response. Tolerate this error and retry until the new
load balancer is fully established.
Charge object count when an object is created, whether or not the object is
initialized.
Charge the remaining quota when the object is initialized.
Also, check initializer.Pending and initializer.Result when
determining if an object is initialized. We didn't need to check them
before, because prior to 51082, having 0 pending initializers and a nil
initializers.Result was invalid.
Automatic merge from submit-queue (batch tested with PRs 51733, 51838)
Decouple kube-proxy upgrade/downgrade tests from upgradeTests
**What this PR does / why we need it**:
Fixes the failing kube-proxy migration CI jobs:
- https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-upgrade-kube-proxy-ds
- https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-downgrade-kube-proxy-ds
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51729
**Special notes for your reviewer**:
/assign @krousey @nicksardo
Could you please take a look post code-freeze (I believe it is fixing things)? Thanks!
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51733, 51838)
Relax update validation of uninitialized pod
Split from https://github.com/kubernetes/kubernetes/pull/50344
Fix https://github.com/kubernetes/kubernetes/issues/47837
* Let the podStrategy only call `validation.ValidatePod()` if the old pod is not initialized, so fields are mutable.
* Let the podStatusStrategy refuse updates if the old pod is not initialized.
cc @smarterclayton
```release-note
Pod spec is mutable when the pod is uninitialized. The apiserver requires the pod spec to be valid even if it's uninitialized. Updating the status field of uninitialized pods is invalid.
```
Automatic merge from submit-queue (batch tested with PRs 51984, 51351, 51873, 51795, 51634)
Revert to using isolated PID namespaces in Docker
**What this PR does / why we need it**: Reverts to the previous docker default of using isolated PID namespaces for containers in a pod. There exist container images that expect always to be PID 1 which we want to support unmodified in 1.8.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48937
**Special notes for your reviewer**:
**Release note**:
```release-note
Sharing a PID namespace between containers in a pod is disabled by default in 1.8. To enable for a node, use the --docker-disable-shared-pid=false kubelet flag. Note that PID namespace sharing requires docker >= 1.13.1.
```
Automatic merge from submit-queue (batch tested with PRs 51186, 50350, 51751, 51645, 51837)
Enabling aggregator functionality on kubemark, gce
Enabling full aggregator functionality in kubemark tests.
This includes configuring it to work in gce (we seem to assume gce in our kubemark tests)
It also includes setting up the relevant security and auth config.
**What this PR does / why we need it**: Configure aggregator properly on kubemark tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48428
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Enabling full aggregator functionality in kubemark tests.
This includes configuring it to work in gce (we seem to assume gce in our kubemark tests)
It also includes setting up the relevant security and auth config.
Removing unneeded reference to CA key for MHBauer.
Fixed to pull the "parsed" values for the certs.
Fix from shyamjvs.
Automatic merge from submit-queue (batch tested with PRs 51915, 51294, 51562, 51911)
Remove OutOfDisk from controllers
This is one of the working items for #48843 for 1.8.
This changes the scheduler and daemonset controllers to no longer respect the OutOfDisk condition. The kubelet has not published OutOfDisk=True since 1.5.
This still preserves the Toleration for the OutOfDisk condition, as (I think?) this is required for backwards compatibility. I added TODOs to remove this in 1.10.
Automatic merge from submit-queue (batch tested with PRs 51833, 51936)
Changed volume IO e2e test to verify file hash instead of content.
**What this PR does / why we need it**: The existing way of verifying file content takes too much memory, causing processes to be OOM killed.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/51717
**Release note**:
```release-note
NONE
```
/sig storage
/release-note-none
/assign @jeffvance @rootfs
/cc @msau42
Automatic merge from submit-queue (batch tested with PRs 51845, 51868, 51864)
Update sys spec to support docker 1.11-1.13 and overlay2.
Fixes https://github.com/kubernetes/kubernetes/issues/32536.
Update docker spec to:
1) Support overlay2;
2) Support docker version 1.11-1.13.
@dchen1107 @yguo0905 @luxas
/cc @kubernetes/sig-node-pr-reviews
```release-note
Kubernetes 1.8 supports docker version 1.11.x, 1.12.x and 1.13.x. And also supports overlay2.
```
Automatic merge from submit-queue
Enable batch/v1beta1.CronJobs by default
This PR re-applies the cronjobs->beta back (https://github.com/kubernetes/kubernetes/pull/51720) with the fix from @shyamjvs.
Fixes #51692
@apelisse @dchen1107 @smarterclayton ptal
@janetkuo @erictune fyi
Automatic merge from submit-queue
Remove deprecated init-container in annotations
fixes #50655, fixes #51816, closes #41004
Builds on #50654 and drops the initContainer annotations on conversion to prevent bypassing API server validation/security and targeting version-skewed kubelets that still honor the annotations
```release-note
The deprecated alpha and beta initContainer annotations are no longer supported. Init containers must be specified using the initContainers field in the pod spec.
```
Automatic merge from submit-queue
Job failure policy controller support
**What this PR does / why we need it**:
Start implementing support for the "Backoff policy and failed pod limit" in the ```JobController```, as defined in https://github.com/kubernetes/community/pull/583.
This PR depends on a previous PR #48075 that updates the K8s API types.
TODO:
* [X] Implement ```JobSpec.BackoffLimit``` support
* [x] Rebase when #48075 has been merged.
* [X] Implement end2end tests
implements https://github.com/kubernetes/community/pull/583
**Special notes for your reviewer**:
**Release note**:
```release-note
Add backoff policy and failed pod limit for a job
```
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)
Flexvolume dynamic plugin discovery: Prober unit tests and basic e2e test.
**What this PR does / why we need it**: Tests for changes introduced in PR #50031 .
As part of the prober unit test, I mocked filesystem, filesystem watch, and Flexvolume plugin initialization.
Moved the filesystem event goroutine to watcher implementation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51147
**Special notes for your reviewer**:
First commit contains added functionality of the mock filesystem.
Second commit is the refactor for moving mock filesystem into a common util directory.
Third commit is the unit and e2e tests.
**Release note**:
```release-note
NONE
```
/release-note-none
/sig storage
/assign @saad-ali @liggitt
/cc @mtaufen @chakri-nelluri @wongma7
Automatic merge from submit-queue
bump QEMU version to v2.9.1
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
xref #38067
**Special notes for your reviewer**:
/assign @luxas
**Release note**:
```release-note
update QEMU version to v2.9.1
```
Job failure policy integration in JobController. From the
JobSpec.BackoffLimit the JobController will define the backoff
duration between Job retries.
It uses the ```workqueue.RateLimitingInterface``` to store the number of
"retries" as "requeues", and the default Job backoff initial duration is set
during the initialization of the ```workqueue.RateLimiter```.
Since the number of retries for each job is stored in a local structure
"JobController.queue", if the JobController restarts the number of retries
will be lost and the backoff duration will be reset to 0.
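A rough illustration of this mechanism using the client-go workqueue rate limiter (the key, delays, and loop below are made-up example values, not the controller's actual wiring):
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Per-key exponential backoff: the number of "requeues" recorded for a Job
	// key is what drives the growing delay between retries.
	limiter := workqueue.NewItemExponentialFailureRateLimiter(10*time.Second, 6*time.Minute)
	queue := workqueue.NewRateLimitingQueue(limiter)
	defer queue.ShutDown()

	key := "default/my-job" // example Job key, namespace/name

	for i := 0; i < 4; i++ {
		// When() both reports the next delay and bumps the failure count.
		fmt.Printf("requeues=%d, next backoff=%v\n", queue.NumRequeues(key), limiter.When(key))
	}

	// Because this state lives only in the in-memory queue, a controller
	// restart loses the count and the backoff starts again from zero.
	queue.Forget(key)
}
```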
Add e2e test for Job backoff failure policy
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)
Use arm32v7|arm64v8 images instead of the deprecated armhf|aarch64 image organizations
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50601
**Special notes for your reviewer**:
/assign @ixdy @jbeda @zmerlynn
**Release note**:
```release-note
Use arm32v7|arm64v8 images instead of the deprecated armhf|aarch64 image organizations
```
Automatic merge from submit-queue (batch tested with PRs 51583, 51283, 51374, 51690, 51716)
Unify initializer name validation
Unify the validation rules on initializer names. Fix https://github.com/kubernetes/kubernetes/issues/51843.
```release-note
Action required: the validation rule on metadata.initializers.pending[x].name is tightened. The initializer name needs to contain at least three segments separated by dots. If you create objects with pending initializers (i.e., not relying on the apiserver adding pending initializers according to initializerconfiguration), you need to update the initializer name in existing objects and in configuration files to comply with the new validation rule.
```
Automatic merge from submit-queue (batch tested with PRs 50832, 51119, 51636, 48921, 51712)
add reconcile command to kubectl auth
This pull exposes the RBAC reconcile commands through `kubectl auth reconcile -f FILE`. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, it will compute covers and add the missing rules.
The logic required to properly "apply" rbac permissions is more complicated than a json merge since you have to compute logical cover operations between rule sets. This means that we cannot use `kubectl apply` to update rbac roles without risking breaking old clients (like controllers).
To solve this problem, RBAC created reconcile functions to use during startup for "stock" roles. We want to offer this power to users who are running their own controllers and extension servers.
This is an intersection between @kubernetes/sig-auth-misc and @kubernetes/sig-cli-misc
Automatic merge from submit-queue (batch tested with PRs 51335, 51364, 51130, 48075, 50920)
[API] Feature/job failure policy
**What this PR does / why we need it**: Implements the Backoff policy and failed pod limit defined in https://github.com/kubernetes/community/pull/583
**Which issue this PR fixes**:
fixes #27997, fixes #30243
**Special notes for your reviewer**:
This is a WIP PR, I updated the api batchv1.JobSpec in order to prepare the backoff policy implementation in the JobController.
**Release note**:
```release-note
Add backoff policy and failed pod limit for a job
```
Automatic merge from submit-queue
Use the right image for the right platform in the e2e tests
**What this PR does / why we need it**:
This PR is for enabling kubernetes tests for multi architecture platform
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #38067
**Special notes for your reviewer**:
This will enable conformance tests for all the supported architectures.
**Release note**:
```release-note
Make all e2e tests lookup image to use from a centralized place. In that centralized place, add support for multiple platforms.
```
x-ref #38067
Automatic merge from submit-queue (batch tested with PRs 45724, 48051, 46444, 51056, 51605)
Add selfsubjectrulesreview in authorization
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #47834, #31292
**Special notes for your reviewer**:
**Release note**:
```release-note
Add selfsubjectrulesreview API for allowing users to query which permissions they have in a given namespace.
```
/cc @deads2k @liggitt
Automatic merge from submit-queue (batch tested with PRs 50381, 51307, 49645, 50995, 51523)
Address TestEtcdStoragePath flakes
- Wait for the master to be healthy
- Wait longer for the master to start
- Fail gracefully if starting the master panics
Signed-off-by: Monis Khan <mkhan@redhat.com>
```release-note
NONE
```
Fixes#49423
@kubernetes/sig-api-machinery-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 50381, 51307, 49645, 50995, 51523)
Remove deprecated and experimental fields from KubeletConfiguration
As we work towards providing a stable (v1) kubeletconfig API,
we cannot afford to have deprecated or "experimental" (alpha) fields
living in the KubeletConfiguration struct. This removes all existing
experimental or deprecated fields, and places them in KubeletFlags
instead.
I'm going to send another PR after this one that organizes the remaining
fields into substructures for readability. Then, we should try to move
to v1 ASAP (maybe not v1 in 1.8, given how close we are, but definitely in 1.9).
It makes far more sense to focus on a clean API in kubeletconfig v2,
than to try and further clean up the existing "API" that everyone
already depends on.
fixes: #51657
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51632, 51055, 51676, 51560, 50007)
test/e2e/auth: fix audit log test format parsing
Fixes https://github.com/kubernetes/kubernetes/issues/51556
```release-note
NONE
```
cc @CaoShuFeng
Still need to figure out how to run this test locally.
Automatic merge from submit-queue (batch tested with PRs 51628, 51637, 51490, 51279, 51302)
Fix pod local ephemeral storage usage calculation
We use podDiskUsage to calculate pod local ephemeral storage, which is not correct because podDiskUsage also includes HostPath volumes, which are considered persistent storage.
This PR fixes it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51489
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @jingxu97 @vishh
cc @ddysher
Automatic merge from submit-queue (batch tested with PRs 51513, 51515, 50570, 51482, 51448)
Add PVCRef to VolumeStats
**What this PR does / why we need it**:
For pod volumes that reference a PVC, add a PVCRef to the corresponding
volume stat. This allows metrics to be indexed/queried by PVC name,
which is more user-friendly than a Pod reference.
**Which issue this PR fixes** : [#363](https://github.com/kubernetes/features/issues/363)
**Special notes for your reviewer**:
**Release note**:
```
`VolumeStats` reported by the kubelet stats summary API
(http://<node>:10255/stats/summary) now include a PVCRef
field describing the PVC referenced by the volume (if any).
```
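The addition is essentially a PVC reference attached to each volume's stats in the summary API. A simplified stand-in for the shape of the type (local structs for illustration only; the real types live in the kubelet stats API package):
```go
package main

import "fmt"

// PVCReference identifies the PersistentVolumeClaim a pod volume is backed by,
// mirroring the PVCRef field this PR adds to the summary API's VolumeStats.
type PVCReference struct {
	Name      string
	Namespace string
}

// VolumeStats is a trimmed-down stand-in for the summary API type: the usage
// fields are omitted here and only the new reference is shown.
type VolumeStats struct {
	Name   string        // volume name within the pod spec
	PVCRef *PVCReference // nil for volumes that are not PVC-backed
}

func main() {
	stat := VolumeStats{
		Name:   "data",
		PVCRef: &PVCReference{Name: "db-data", Namespace: "default"},
	}
	// Metrics consumers can now key usage by PVC instead of by pod reference.
	fmt.Printf("volume %q -> pvc %s/%s\n", stat.Name, stat.PVCRef.Namespace, stat.PVCRef.Name)
}
```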
Automatic merge from submit-queue (batch tested with PRs 51513, 51515, 50570, 51482, 51448)
Removes redundant prefix in cluster-lifecycle e2e test names
**What this PR does / why we need it**:
Removes redundant prefix in cluster-lifecycle e2e test names
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Umbrella issue #49161
xref: #50054
**Special notes for your reviewer**:
/cc @jbeda
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50719, 51216, 50212, 51408, 51381)
Make selector immutable for v1beta2 deployment, replicaset and daemonset prior update
**What this PR does / why we need it**:
This PR ensures the controller selector is immutable for deployment and replicaset prior to update by ignoring any change to `Spec`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50808
**Special notes for your reviewer**:
This will be a breaking change.
**Release note**:
```release-note
For Deployment, ReplicaSet, and DaemonSet, selectors are now immutable when updating via the new `apps/v1beta2` API. For backward compatibility, selectors can still be changed when updating via `apps/v1beta1` or `extensions/v1beta1`.
```
Automatic merge from submit-queue (batch tested with PRs 51707, 51662, 51723, 50163, 51633)
update GC controller to wait until controllers have been initialized …
fixes #51013
Alternative to https://github.com/kubernetes/kubernetes/pull/51492 which keeps those few controllers (only one) from starting the informers early.
Automatic merge from submit-queue (batch tested with PRs 51707, 51662, 51723, 50163, 51633)
Change SizeLimit to a pointer
This PR fixes issue #50121
```release-note
The `emptyDir.sizeLimit` field is now correctly omitted from API requests and responses when unset.
```
Automatic merge from submit-queue (batch tested with PRs 51707, 51662, 51723, 50163, 51633)
Adding vishh to test/ reviewers and approvers
Rationale: Reviewing/Shepherding lots of features/PRs around node and resource management.
Automatic merge from submit-queue (batch tested with PRs 50775, 51397, 51168, 51465, 51536)
Enable batch/v1beta1.CronJobs by default
This PR moves to CronJobs beta entirely, enabling `batch/v1beta1` by default.
Related issue: #41039
@erictune @janetkuo ptal
```release-note
Promote CronJobs to batch/v1beta1.
```
Automatic merge from submit-queue (batch tested with PRs 50775, 51397, 51168, 51465, 51536)
Allow bearer requests to be proxied by kubectl proxy
Use a fake transport to capture changes to the request and then surface
them back to the end user.
Fixes #50466
@liggitt no tests yet, but works locally
As we work towards providing a stable (v1) kubeletconfig API,
we cannot afford to have deprecated or "experimental" (alpha) fields
living in the KubeletConfiguration struct. This removes all existing
experimental or deprecated fields, and places them in KubeletFlags
instead.
I'm going to send another PR after this one that organizes the remaining
fields into substructures for readability. Then, we should try to move
to v1 ASAP.
It makes far more sense to focus on a clean API in kubeletconfig v2,
than to try and further clean up the existing "API" that everyone
already depends on.
Automatic merge from submit-queue
e2e: Add tests for network tiers in GCE
This test depends on #51301, which adds the new feature. Only the `e2e: Add tests for network tiers in GCE` commit is new.
#51301 should pass this new test.
Automatic merge from submit-queue (batch tested with PRs 51439, 51361, 51140, 51539, 51585)
Enable alpha GCE disk API
This PR builds on top of #50467 to allow the GCE disk API to use either the alpha or stable APIs.
CC @freehan
Automatic merge from submit-queue (batch tested with PRs 47054, 50398, 51541, 51535, 51545)
Switch away from gcloud deprecated flags in compute resource listings
**What is fixed**
Remove deprecated `gcloud compute` flags, see linked issue.
**Which issue this PR fixes**:
fixes #49673
**Special notes for your reviewer**:
The change in `gcloudComputeResourceList` in `test/e2e/framework/ingress_utils.go` isn't strictly needed as currently no affected resources are called on within that file, however the function has the _potential_ to access affected resources so I covered it as well. Happy to change if deemed unnecessary.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51228, 50185, 50940, 51544, 51543)
Add upgrades tests for kube-proxy daemonset migration path
**What this PR does / why we need it**:
From #23225, this is a part of setting up CIs to validate the kube-proxy migration path (static pods -> daemonset and reverse).
The other part of the works (adding real CIs that run these tests) will be in a separate PR against [kubernetes/test-infra](https://github.com/kubernetes/test-infra).
Though this is currently blocked by #50705.
**Special notes for your reviewer**:
cc @roberthbailey @pwittrock
**Release note**:
```release-note
NONE
```
For pod volumes that reference a PVC, add a PVCRef to the corresponding
volume stat. This allows metrics to be indexed/queried by PVC name,
which is more user-friendly than a Pod reference.
Automatic merge from submit-queue (batch tested with PRs 51377, 46580, 50998, 51466, 49749)
Adding e2e SELinux test for local storage
Adding e2e test for SELinux enabled local storage
/sig storage
Closes #45054
Automatic merge from submit-queue (batch tested with PRs 51377, 46580, 50998, 51466, 49749)
Use the pre-built docker binaries on Ubuntu for benchmark tests
- Tested manually.
- The `ubuntu-init-docker.yaml` is copied from `cos-init-docker.yaml` with the following changes needed by Ubuntu. This change is temporary -- we will remove the script and the tests once we know the performance of using the pre-built Docker 1.12 on Ubuntu.
```
71,72c71,72
< mount --bind "${install_location}"/docker-containerd /usr/bin/docker-containerd
< mount --bind "${install_location}"/docker-containerd-shim /usr/bin/docker-containerd-shim
---
> mount --bind "${install_location}"/docker-containerd /usr/bin/containerd
> mount --bind "${install_location}"/docker-containerd-shim /usr/bin/containerd-shim
75c75
< mount --bind "${install_location}"/docker-runc /usr/bin/docker-runc
---
> mount --bind "${install_location}"/docker-runc /usr/sbin/runc
88c88
< local requested_version="$(get_metadata "gci-docker-version")"
---
> local requested_version="$(get_metadata "ubuntu-docker-version")"
93,98d92
< # Check if we have the requested version installed.
< if check_installed /usr/bin/docker "${requested_version}"; then
< echo "Requested version already installed. Exiting."
< exit 0
< fi
<
100c94
< /usr/bin/systemctl stop docker
---
> systemctl stop docker
106c100
< /usr/bin/systemctl start docker && exit $rc
---
> systemctl start docker && exit $rc
```
- Updated all tests to use the latest Ubuntu image.
**Release note**:
```
None
```
/assign @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 49961, 50005, 50738, 51045, 49927)
Add cluster e2es to verify scheduler local storage support
Add cluster e2es to verify scheduler local storage support and remove some unused private functions
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
part of #50818
**Release note**:
```release-note
Add cluster e2es to verify scheduler local ephemeral storage support
```
/assign @jingxu97
/cc @ddysher
Automatic merge from submit-queue (batch tested with PRs 44719, 48454)
check job ActiveDeadlineSeconds
**What this PR does / why we need it**:
enqueue a sync task after ActiveDeadlineSeconds
**Which issue this PR fixes** *:
fixes#32149
**Special notes for your reviewer**:
**Release note**:
```release-note
enqueue a sync task to wake up jobcontroller to check job ActiveDeadlineSeconds in time
```
Automatic merge from submit-queue
Added an end-to-end test ensuring that Cluster Autoscaler does not scale up when all pending pods are unschedulable
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50919, 51410, 50099, 51300, 50296)
GCE: Read networkProjectID param
Fixes #48515
/assign bowei
The first commit is the original PR cherrypicked. The master's kubelet isn't provided a cloud config path, so the project is retrieved via instance metadata. In the GKE case, this project cannot be retrieved by the master and caused an error.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51471, 50561, 50435, 51473, 51436)
Feature gate initializers field
The metadata.initializers field should be feature gated and disabled by default while in alpha, especially since enforcement of initializer permission that keeps users from submitting objects with their own initializers specified is done via an admission plugin most clusters do not enable yet.
Not gating the field and tests caused tests added in https://github.com/kubernetes/kubernetes/issues/51429 to fail on clusters that don't enable the admission plugin.
This PR:
* adds an `Initializers` feature gate, auto-enables the feature gate if the admission plugin is enabled
* clears the `metadata.initializers` field of objects on create/update if the feature gate is not set
* marks the e2e tests as feature-dependent (will follow up with PR to test-infra to enable the feature and opt in for GCE e2e tests)
```release-note
Use of the alpha initializers feature now requires enabling the `Initializers` feature gate. This feature gate is auto-enabled if the `Initializers` admission plugin is enabled.
```
Automatic merge from submit-queue
Moved node condition filter into a predicates.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50360
**Release note**:
```release-note
A new predicate, named 'CheckNodeCondition', was added to replace the node condition filter. 'NetworkUnavailable', 'OutOfDisk' and 'NotReady' may be reported as a reason when pods fail to schedule.
```
Automatic merge from submit-queue (batch tested with PRs 50953, 51082)
Fix mergekey of initializers; Repair invalid update of initializers
Fix https://github.com/kubernetes/kubernetes/issues/51131
The PR did two things to make parallel patching `metadata.initializers.pending` possible:
* Add mergekey to initializers.pending
* Let the initializer admission plugin set the `metadata.initializers` to nil if an update makes the `pending` and the `result` both nil, instead of returning a validation error. Otherwise, if multiple initializer controllers send patches removing themselves from `pending` at the same time, one of them will get a validation error.
```release-note
The patch to remove the last initializer from metadata.initializer.pending will result in metadata.initializer being set to nil (assuming metadata.initializer.result is also nil), instead of resulting in a validation error.
```
Automatic merge from submit-queue
Fix forbidden message format
Before this change:
$ kubectl get pods --as=tom
Error from server (Forbidden): pods "" is forbidden: User "tom" cannot list pods in the namespace "default".
After this change:
$ kubectl get pods --as=tom
Error from server (Forbidden): pods is forbidden: User "tom" cannot list pods in the namespace "default".
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
Fix forbidden message format, remove extra ""
```
Automatic merge from submit-queue
Let the quota evaluator handle mutating specs of pod & pvc
### Background
The final goal is to address https://github.com/kubernetes/kubernetes/issues/47837, which aims to allow more mutation for uninitialized objects.
To do that, we [decided](https://github.com/kubernetes/kubernetes/issues/47837#issuecomment-321462433) to let the admission controllers handle mutation of uninitialized objects.
### Issue
#50399 attempted to fix all admission controllers so they can handle mutating uninitialized objects. It was incomplete. I didn't realize that although the resourcequota admission plugin handles the update operation, the underlying evaluator didn't. This PR updated the evaluators to handle updates of uninitialized pods/pvc.
### TODO
We still miss another piece. The [quota replenish controller](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/resourcequota/replenishment_controller.go) uses the sharedinformer, which doesn't observe the deletion of uninitialized pods at the moment. So there is a quota leak if a pod is deleted before it's initialized. It will be addressed with https://github.com/kubernetes/kubernetes/issues/48893.
Automatic merge from submit-queue
Make coreos test images sshd not allow password login.
This will prevent security scanners from triggering.
Configuration is verbatim from:
https://coreos.com/os/docs/latest/customizing-sshd.html
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)
Dynamic Flexvolume plugin discovery, probing with filesystem watch.
**What this PR does / why we need it**: Enables dynamic Flexvolume plugin discovery. This model uses a filesystem watch (fsnotify library), which notifies the system that a probe is necessary only if something changes in the Flexvolume plugin directory.
This PR uses the dependency injection model in https://github.com/kubernetes/kubernetes/pull/49668.
**Release Note**:
```release-note
Dynamic Flexvolume plugin discovery. Flexvolume plugins can now be discovered on the fly rather than only at system initialization time.
```
/sig-storage
/assign @jsafrane @saad-ali
/cc @bassam @chakri-nelluri @kokhang @liggitt @thockin
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)
Change eviction manager to manage one single local storage resource
**What this PR does / why we need it**:
We decided to manage one single local storage resource name, so the eviction policy should be modified too.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #50818
**Special notes for your reviewer**:
**Release note**:
```release-note
Change eviction manager to manage one single local ephemeral storage resource
```
/assign @jingxu97
Before this change:
# kubectl get pods --as=tom
Error from server (Forbidden): pods "" is forbidden: User "tom" cannot list pods in the namespace "default".
After this change:
# kubectl get pods --as=tom
Error from server (Forbidden): pods is forbidden: User "tom" cannot list pods in the namespace "default".
Automatic merge from submit-queue
Fixed gke auth update wait condition.
Look up whoami on GKE using `gcloud auth list`.
Make sure we do not run the test on any cluster older than 1.7.
**What this PR does / why we need it**: Fixes issue with aggregator e2e test on GKE
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50945
**Special notes for your reviewer**: There is a TODO, follow up will be provided when the immediate problem is resolved.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
Made the tests ensure that Cluster Autoscaler is on before running.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Configuration is based on:
https://coreos.com/os/docs/latest/customizing-sshd.html
The specific SSHD config is:
# Use most defaults for sshd configuration.
UsePrivilegeSeparation sandbox
Subsystem sftp internal-sftp
ClientAliveInterval 180
UseDNS no
UsePAM yes
PrintLastLog no # handled by PAM
PrintMotd no # handled by PAM
AuthenticationMethods publickey
This will prevent security scanners from triggering.
Automatic merge from submit-queue
AllowedNotReadyNodes allowed to be not ready for absolutely *any* reason
It's as good as allowing that many nodes to not be part of the cluster at all, ever.
Btw - currently our 5k-node correctness test fails if "kubelet stopped posting node status" or "route not created", etc (ref: https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/3/build-log.txt)
cc @kubernetes/sig-scalability-misc
Automatic merge from submit-queue (batch tested with PRs 51244, 50559, 49770, 51194, 50901)
Distribute pods efficiently in CA scalability tests
**What this PR does / why we need it**:
Instead of using the runReplicatedPodOnEachNode method,
which is suited to a small number of nodes,
distribute pods on the nodes with the desired load
using RCs that eat up all the space we want to be
left empty after distribution.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50213, 50707, 49502, 51230, 50848)
StatefulSet: Deflake e2e `kubectl exec` commands.
This may help with another source of flakiness found while investigating #48031.
We seem to get a lot of flakes due to "connection refused" while running `kubectl exec`. I can't find any reason this would be caused by the test flow, so I'm adding retries to see if that helps.
Automatic merge from submit-queue (batch tested with PRs 51224, 51191, 51158, 50669, 51222)
Enable overlay2 on cos-m60 in node e2e tests
Ref: https://github.com/kubernetes/kubernetes/issues/42926
- Restart docker with `-s overlay2` in cloud-init before running all node e2e tests. I have to copy the systemd unit file to `/etc/systemd/system` because `/usr/lib/systemd/system/` is read-only.
- Updated node e2e tests to use the new cos-m60 image.
- The name of the cloud init file (`cos-init-live-restore.yaml`) does not indicate overlay2 will be enabled, but I can't just change the name in this PR, since it's referenced in test-infra.
**Release note**:
```
None
```
/assign @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 51224, 51191, 51158, 50669, 51222)
StatefulSet: Deflake e2e "restart" phase.
This addresses another source of flakiness found while investigating #48031.
The test used to scale the StatefulSet down to 0, wait for ListPods to return 0 matching Pods, and then scale the StatefulSet back up.
This was prone to a race in which StatefulSet was told to scale back up before it had observed its own deletion of the last Pod, as evidenced by logs showing the creation of Pod ss-1 prior to the creation of the replacement Pod ss-0.
Instead, we now wait for the controller to observe all deletions before scaling it back up. This should fix flakes of the form:
```
Too many pods scheduled, expected 1 got 2
```
We seem to get a lot of flakes due to "connection refused" while running
`kubectl exec`. I can't find any reason this would be caused by the test
flow, so I'm adding retries to see if that helps.
Instead of using the runReplicatedPodOnEachNode method,
which is suited to a small number of nodes,
distribute pods on the nodes with the desired load
using RCs that eat up all the space we want to be
left empty after distribution.
Automatic merge from submit-queue (batch tested with PRs 51193, 51154, 42689, 51189, 51200)
Re-enable OIR e2e tests.
Re-enabling test skeleton for opaque integer resources originally submitted as part of #41870. The e2e was disabled since it was flaky. This is the first step toward re-enabling them. Currently all cases are skipped, so this exercises only the BeforeEach behavior and the deferred removal of OIRs from a node.
cc @timothysc
Automatic merge from submit-queue (batch tested with PRs 51108, 51035, 50539, 51160, 50947)
Auto-calculate CLUSTER_IP_RANGE based on cluster size
In preparation for eliminating CLUSTER_IP_RANGE env var from job configs, making it less error prone while folks try to start their own large cluster tests (https://github.com/kubernetes/kubernetes/issues/50907).
/cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 51113, 46597, 50397, 51052, 51166)
Add statefulset upgrade tests to cluster_upgrade
**What this PR does / why we need it**:
Adds already created statefulset upgrade tests to cluster_upgrade.go. With further test infra changes, this will allow them to be continuously run, giving better signals.
Detect and prevent issues like https://github.com/kubernetes/kubernetes/issues/48327
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51113, 46597, 50397, 51052, 51166)
implement proposal 34058: hostPath volume type
**What this PR does / why we need it**:
implement proposal #34058
**Which issue this PR fixes** : fixes#46549
**Special notes for your reviewer**:
cc @thockin @luxas @euank PTAL
Automatic merge from submit-queue (batch tested with PRs 50489, 51070, 51011, 51022, 51141)
update to rbac v1 in yaml file
**What this PR does / why we need it**:
ref to https://github.com/kubernetes/kubernetes/pull/49642
ref https://github.com/kubernetes/features/issues/2
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
cc @liggitt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Skip "Simple pod should support exec through kubectl proxy" test
As reported in https://github.com/kubernetes/kubernetes/issues/50466,
this test doesn't work in GKE because it uses a bearer token and the feature only works with client certs.
As the feature that is broken in GKE is new and didn't work before, it
is safe to just ignore the test and consider the feature as "still not
working" in GKE.
**What this PR does / why we need it**: Fixes the broken test in https://k8s-testgrid.appspot.com/release-master-blocking#gke
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: works-around #50466
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
The test used to scale the StatefulSet down to 0, wait for ListPods to
return 0 matching Pods, and then scale the StatefulSet back up.
This was prone to a race in which StatefulSet was told to scale back up
before it had observed its own deletion of the last Pod, as evidenced by
logs showing the creation of Pod ss-1 prior to the creation of the
replacement Pod ss-0.
We now wait for the controller to observe all deletions before
scaling it back up. This should fix flakes of the form:
```
Too many pods scheduled, expected 1 got 2
```
Automatic merge from submit-queue (batch tested with PRs 50257, 50247, 50665, 50554, 51077)
Replace hard-code "cpu" and "memory" to consts
**What this PR does / why we need it**:
There are many places using hard coded "cpu" and "memory" as resource name. This PR replace them to consts.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
/kind cleanup
**Release note**:
```release-note
NONE
```
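For illustration, assuming the constants in `k8s.io/api/core/v1`, a call site changes roughly like this (a sketch, not one of the actual edits in this PR):
```go
package example

import (
	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// newLimits builds a resource list using the typed constants v1.ResourceCPU
// and v1.ResourceMemory instead of the raw strings "cpu" and "memory".
func newLimits() v1.ResourceList {
	return v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("500m"),
		v1.ResourceMemory: resource.MustParse("128Mi"),
	}
}
```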
Automatic merge from submit-queue (batch tested with PRs 50980, 46902, 51051, 51062, 51020)
Remove seemingly obsolete binaries
It's hard to tell if these are safe to remove. Let CI tell me.
Automatic merge from submit-queue (batch tested with PRs 51039, 50512, 50546, 50965, 50467)
Kubectl: Plumb openapi validation (disabled by default)
**What this PR does / why we need it**: Creates a new flag '--openapi' and plumbs in the validation code so that it can be used to validate objects against the openapi schema.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: partially https://github.com/kubernetes/kubectl/issues/49
**Special notes for your reviewer**:
This is not complete; the name of the variable must change, for example.
**Release note**:
```release-note
Kubectl uses openapi for validation. If OpenAPI is not available on the server, it falls back to the old Swagger.
```
Automatic merge from submit-queue
StatefulSet: Deflake e2e "Saturate" phase.
This should reduce one source of flakiness found while investigating #48031.
The "Saturate" phase of StatefulSet e2e tests verifies orderly startup by controlling when each Pod is allowed to report Ready. If a Pod unexepectedly goes down during the test, the replacement Pod
created by the controller will forget if it was already allowed to report Ready.
After this change, the signal that allows each Pod to report Ready is persisted in the Pod's PVC. Thus, the replacement Pod will remember that it was already told to proceed to a Ready state.
Automatic merge from submit-queue (batch tested with PRs 51102, 50712, 51037, 51044, 51059)
[sig-network-e2e] Remove redundant sig prefix from tests
**What this PR does / why we need it**:
Remove redundant sig prefix from:
```
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for endpoint-Service: http
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for endpoint-Service: udp
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for node-Service: http
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for node-Service: udp
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for pod-Service: http
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should function for pod-Service: udp
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should update endpoints: http
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should update endpoints: udp
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should update nodePort: http [Slow]
[sig-network] Networking [sig-network] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
[sig-network] Loadbalancing: L7 [sig-network] GCE [Slow] [Feature:Ingress] should conform to Ingress spec
[sig-network] Loadbalancing: L7 [sig-network] GCE [Slow] [Feature:Ingress] should create ingress with given static-ip
```
Umbrella issue #49161
**Special notes for your reviewer**:
cc @xiangpengzhao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50967, 50505, 50706, 51033, 51028)
Fix GC integration test race
During TestCreateWithNonExistentOwner, when creating a pod with a
non-existent owner, assume it's possible the pod will be deleted before
we start checking for the pod's existence. Assuming that the pod still
exists immediately after Create returns is flaky if the GC reacts very
quickly.
```release-note
NONE
```
Might fix https://github.com/kubernetes/kubernetes/issues/50943; without the additional test context provided by this PR, it's not really possible to assess the root cause of the reported failure (as we don't know whether the original assertion failure was due to there being 0 or >1 pods).
/cc @caesarxuchao
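A rough sketch of the tolerant check (the helper name, namespace, pod name, and timeout are all illustrative; the client calls assume the pre-context client-go signatures of this era):
```go
package example

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPodObservedOrCollected tolerates the GC deleting the pod before the
// test ever sees it: both "pod exists" and "pod already gone" end the wait.
func waitForPodObservedOrCollected(c clientset.Interface, ns, name string) error {
	return wait.Poll(time.Second, 30*time.Second, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err == nil || apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}
```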
Automatic merge from submit-queue (batch tested with PRs 50967, 50505, 50706, 51033, 51028)
Revert "Merge pull request #51008 from kubernetes/revert-50789-fix-scheme"
I'm spinning up a cluster right now to test this fix, but I'm pretty sure this was the problem.
There doesn't seem to be a way to confirm from logs, because AFAICT the logs from the hollow kubelet containers are not collected as part of the kubemark test.
**What this PR does / why we need it**:
This reverts commit f4afdecef8, reversing
changes made to e633a1604f.
This also fixes a bug where Kubemark was still using the core api scheme
to manipulate the Kubelet's types, which was the cause of the initial
revert.
**Which issue this PR fixes**: fixes#51007
**Release note**:
```release-note
NONE
```
/cc @shyamjvs @wojtek-t
As reported in https://github.com/kubernetes/kubernetes/issues/50466,
this test doesn't work in GKE because the transport layer doesn't work
with dialing.
As the feature that is broken in GKE is new and didn't work before, it
is safe to just ignore the test and consider the feature as "still not
working" in GKE.
Automatic merge from submit-queue
Should generate files before scheduler perf
**What this PR does / why we need it**:
For a newly cloned project, generated files are not included, so scheduler_perf will fail:
```
undefined: openapi.GetOpenAPIDefinitions
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
fixes: #51090
**Special notes for your reviewer**:
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)
Increase latency threshold for list api calls
This is only a short-term solution to make our density test green. In the long-term, we should measure as per our new SLIs.
From @wojtek-t's [doc](https://docs.google.com/document/d/1Q5qxdeBPgTTIXZxdsFILg7kgqWhvOwY8uROEf0j5YBw) on the new SLIs/SLOs, we have the following SLO for list calls:
```
SLO1: In default Kubernetes installation, 99th percentile of SLI2 per cluster-day:
<= 1s if total number of objects of the same type as resource in the system <= X
<= 5s if total number of objects of the same type as resource in the system <= Y
<= 30s if total number of objects of the same types as resource in the system <= Z
```
I would guess that 170,000 pods would fall into the 2nd bracket (at least) and hence the new value of 5s. WDYT?
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue
Revert #50362.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #50884
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 50693, 50831, 47506, 49119, 50871)
Add instance metadata from flag even when using image config.
Also add instance metadata from the flag even when we are using an image config.
* Sometimes we need to generate instance metadata dynamically; it's troublesome to put it into the image config.
* Sometimes we want to apply instance metadata to all images; it's duplicative to add it to each image in the image config.
/assign @yguo0905 Could you help me review this?
The "Saturate" phase of StatefulSet e2e tests verifies orderly startup
by controlling when each Pod is allowed to report Ready.
If a Pod unexpectedly goes down during the test, the replacement Pod
created by the controller will forget whether it was already allowed to
report Ready.
After this change, the signal that allows each Pod to report Ready is
persisted in the Pod's PVC. Thus, the replacement Pod will remember that
it was already told to proceed to a Ready state.
This reverts commit f4afdecef8, reversing
changes made to e633a1604f.
This also fixes a bug where Kubemark was still using the core api scheme
to manipulate the Kubelet's types, which was the cause of the initial
revert.
Automatic merge from submit-queue (batch tested with PRs 47896, 50678, 50620, 50631, 51005)
Made the difference between scale-up timeout and cluster set-up timeout explicit.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
During TestCreateWithNonExistentOwner, when creating a pod with a
non-existent owner, assume it's possible the pod will be deleted before
we start checking for the pod's existence. Assuming that the pod still
exists immediately after Create returns is flaky if the GC reacts very
quickly.
- Wait for the master to be healthy
- Wait longer for the master to start
- Fail gracefully if starting the master panics
Signed-off-by: Monis Khan <mkhan@redhat.com>
Automatic merge from submit-queue (batch tested with PRs 46512, 50146)
Make metav1.(Micro)?Time functions take pointers
Is there any reason for those functions not to be on pointers?
Automatic merge from submit-queue (batch tested with PRs 50904, 50691)
Stackdriver Logging e2e: Explicitly check for docker and kubelet logs presence
Check for kubelet and docker logs explicitly in the Stackdriver Logging e2e tests
Automatic merge from submit-queue
Add enj to OWNERS for test/integration/etcd/etcd_storage_path_test.go
@deads2k is the bot smart enough to not spam me with every test change? Perhaps I should create an `OWNERS` file in `test/integration/etcd`?
**Release note**:
```release-note
NONE
```
@kubernetes/sig-api-machinery-pr-reviews
Automatic merge from submit-queue
CollisionCount should have type int32 across controllers that use it for collision avoidance
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50530
**Special notes for your reviewer**:
/cc @liyinan926
/assign @kow3ns @thockin @janetkuo
**Release note**:
```release-note
Change CollisionCount from int64 to int32 across controllers
```
Automatic merge from submit-queue (batch tested with PRs 50277, 50823, 50376, 50867)
Move e2e taints test file to sig-scheduling
**What this PR does / why we need it**:
Move taint test file to e2e scheduling and add sig-scheduling prefix.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
none
Automatic merge from submit-queue
Change API version of statefulset scale subresource e2e test to v1beta2
**What this PR does / why we need it**:
This PR changes API version of statefulset scale subresource e2e test from `v1beta1` to `v1beta2`.
`apps/v1beta2` has been enabled.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #50109
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50281, 50747, 50347, 50834, 50852)
Add e2e aggregator test.
What this PR does / why we need it:
This adds an e2e test for aggregation based on the sample-apiserver.
Currently it uses a sample-apiserver built as of 1.7.
This should ensure that the aggregation system works end-to-end.
It will also help detect if we break "old" extension api servers.
Which issue this PR fixes: fixes #43714
**Special notes for your reviewer**:
**Release note**: NONE
Automatic merge from submit-queue (batch tested with PRs 50563, 50698, 50796)
Add ControllerRevision to apps/v1beta2
**What this PR does / why we need it**:
This PR adds `ControllerRevision`, currently in `apps/v1beta1`, to `apps/v1beta2`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50696.
**Special notes for your reviewer**:
@kow3ns @janetkuo
**Release note**:
```release-note
Add ControllerRevision to apps/v1beta2
```
What this PR does / why we need it:
This adds an e2e test for aggregation based on the sample-apiserver.
Currently it uses a sample-apiserver built as of 1.7.
This should ensure that the aggregation system works end-to-end.
It will also help detect if we break "old" extension api servers.
Which issue this PR fixes: fixes #43714
Fixed bazel for the change.
Fixed # of args issue from govet.
Added code to test dynamic.Client.
Automatic merge from submit-queue
Migrate sig-apimachinery and sig-servicecatalog e2e tests
**What this PR does / why we need it**:
Migrate sig-apimachinery and sig-servicecatalog e2e tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
1. Move generated_clientset.go to sig-apimachinery
2. Move podpreset.go to sig-servicecatalog by creating new directory.
**Special notes for your reviewer**:
**Release note**:
none
/cc @liggitt
Automatic merge from submit-queue (batch tested with PRs 50550, 50768)
Don't SSH to master for metrics in case of GKE
cc @kubernetes/sig-scalability-misc @crassirostris
Automatic merge from submit-queue
add some e2e for node authz
**What this PR does / why we need it**:
fix#47174
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#47174
**Special notes for your reviewer**:
**Release note**:
```
None
```
Automatic merge from submit-queue
Promote CronJobs to batch/v1beta1 - just the API
This PR promotes CronJobs to beta.
@erictune @kubernetes/sig-apps-api-reviews @kubernetes/api-approvers ptal
This builds on top of #41890 and needs #40932 as well
```release-note
Promote CronJobs to batch/v1beta1.
```
Automatic merge from submit-queue (batch tested with PRs 50711, 50742, 50204)
fix panic in e2e
**What this PR does / why we need it**:
fix#50660
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
no
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 50670, 50332)
e2e test for local storage mount point
**What this PR does / why we need it**:
We discovered that kubernetes can treat local directories and actual mountpoints differently. For example, https://github.com/kubernetes/kubernetes/issues/48331. The current local storage e2e tests use directories.
This PR introduces a test that creates a tmpfs and mounts it, and runs one of the local storage e2e tests.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/49126
**Special notes for your reviewer**:
I cherry-picked PR https://github.com/kubernetes/kubernetes/pull/50177, since local storage e2e tests are broken in master on 2017-08-08 due to a "no such host" error. That PR replaces NodeExec with SSH commands.
You can run the tests using the following commands:
```
$ NUM_NODES=1 KUBE_FEATURE_GATES="PersistentLocalVolumes=true" go run hack/e2e.go -- -v --up
$ go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=\[Feature:LocalPersistentVolumes\]"
```
Here is a summary of the results from my test run:
```
Ran 9 of 651 Specs in 387.905 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 642 Skipped PASS
Ginkgo ran 1 suite in 6m29.369318483s
Test Suite Passed
2017/08/08 11:54:01 util.go:133: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:LocalPersistentVolumes\]' finished in 6m32.077462612s
```
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 50198, 49051, 48432)
move KubeletConfiguration out of componentconfig API group
I'm splitting #44252 into more manageable steps. This step moves the types and updates references.
To reviewers: the most important changes are the removals from pkg/apis/componentconfig and additions to pkg/kubelet/apis/kubeletconfig. Almost everything else is an import or name update.
I have one unanswered question: Should I create a whole new api scheme for Kubelet APIs rather than register e.g. a kubeletconfig group with the default runtime.Scheme instance? This feels like the right thing, as the Kubelet should be exposing its own API, but there's a big fat warning not to do this in `pkg/api/register.go`. Can anyone answer this?
Automatic merge from submit-queue (batch tested with PRs 50198, 49051, 48432)
Add prefix to common networking e2e tests
**What this PR does / why we need it**:
Common networking e2e tests shared by node and cluster suites should also have prefix `[sig-network]`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Umbrella issue #49161
**Special notes for your reviewer**:
/cc @bowei
**Release note**:
```release-note
NONE
```
LocalVolumeType tmpfs added.
Added checks to ensure that the volume created during setup contains the expected testFileContent.
Refactored tests to avoid code duplication.
Two different tests are performed with tmpfs:
- serial write and read in two different pods
- write and read in two different pods mounted at the same time
Fixed local storage test failures by integrating https://github.com/kubernetes/kubernetes/pull/50177
Switched NodeExec to SSH
Automatic merge from submit-queue (batch tested with PRs 49904, 50484, 50214)
Adding support for internal IP for e2e tests
Currently IssueSSHCommand in util.go only checks for an external IP address
to SSH to; this PR adds a check for the internal IP too.
Closes#50630
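A sketch of the intended fallback order (the helper name is made up and is not the util.go function itself):
```go
package example

import "k8s.io/api/core/v1"

// sshAddress returns the node's external IP if one is published, otherwise
// its first internal IP, otherwise the empty string.
func sshAddress(node *v1.Node) string {
	internal := ""
	for _, addr := range node.Status.Addresses {
		switch addr.Type {
		case v1.NodeExternalIP:
			if addr.Address != "" {
				return addr.Address
			}
		case v1.NodeInternalIP:
			if internal == "" {
				internal = addr.Address
			}
		}
	}
	return internal
}
```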
Automatic merge from submit-queue (batch tested with PRs 50094, 48966, 49478, 50593, 49140)
Migrate sig-auth e2e tests.
**What this PR does / why we need it:** This PR adds the [sig-auth] prefix to
workload e2e tests in accordance with the requirements for adding a SIG dashboard
to testgrid. Refer to PR #48781 for guidelines.
**Release note**:
```release-note
```
**What this PR does / why we need it:** This PR adds the [sig-auth] prefix to
workload e2e tests in accordance with the requirements for adding a SIG dashboard
to testgrid. Refer to PR #48781 for guidelines.
Automatic merge from submit-queue
Moved the node condition filter into a predicate.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50360
**Release note**:
```release-note
A new predicate named 'CheckNodeCondition' was added to replace the node condition filter. 'NetworkUnavailable', 'OutOfDisk' and 'NotReady' may be reported as reasons when pods fail to schedule.
```
Automatic merge from submit-queue (batch tested with PRs 49847, 49743, 49853, 50225, 50479)
Add node benchmark tests for cos-m60 with docker 1.12.6
Ref: https://github.com/kubernetes/kubernetes/issues/42926
This PR adds benchmark tests against cos-m60 with docker 1.12.6 on http://node-perf-dash.k8s.io. These tests are useful for docker validation -- we can compare the performance of different docker versions on the same OS.
cos-m60 comes with docker 1.13.1 by default, so we need to use cloud-init to downgrade the version to 1.12.6.
**Release note**:
```
None
```
/assign @dchen1107
Automatic merge from submit-queue (batch tested with PRs 50485, 49951, 50508, 50511, 50506)
Pass config to external Kubemark cluster in e2e tests
When cluster autoscaler is used in kubemark tests,
pass default kubeconfig as external cluster config.
@shyamjvs @gmarek
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50485, 49951, 50508, 50511, 50506)
fix a typo
**What this PR does / why we need it**:
fix a small typo
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
verions->versions
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50485, 49951, 50508, 50511, 50506)
Multiarch nonewprivs test image
**What this PR does / why we need it**:
This PR converts the nonewprivs image, which was pushed very recently as part of https://github.com/kubernetes/kubernetes/pull/47019.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes#50498
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50537, 49699, 50160, 49025, 50205)
When not using a CloudProvider, set both InternalIP and ExternalIP on Nodes
#36095 changed all of the cloudproviders to set both InternalIP and ExternalIP on Nodes, but the non-cloudprovider fallback code now only sets InternalIP.
This causes the test "should be able to create a functioning NodePort service" in test/e2e/service.go to fail on cloud-provider-less clusters, because (with LegacyHostIP gone), it now will only try to work with ExternalIPs, and will fail if the node has only an InternalIP.
There isn't much other code that assumes that ExternalIP will always be set (there's something in pkg/master/master.go, but I don't know what it's doing, so maybe it's only useful in the case where InternalIP != ExternalIP anyway). But given that several of the cloudproviders (mesos, ovirt, rackspace) now explicitly set both InternalIP and ExternalIP to the same value always, it seemed right to do that in the fallback case too.
@deads2k FYI
**Release note**:
```release-note
NONE
```
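A sketch of the fallback described above (not the kubelet's actual code path; the helper and its arguments are illustrative):
```go
package example

import "k8s.io/api/core/v1"

// fallbackAddresses mirrors the cloud providers that report the same IP for
// both address types, so callers that only look at ExternalIP keep working.
func fallbackAddresses(ip, hostname string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
		{Type: v1.NodeHostName, Address: hostname},
	}
}
```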
Automatic merge from submit-queue
code format in master_utils.go
**What this PR does / why we need it**:
code format
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #N/A
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
move logs to kubectl/util
Move `pkg/util/logs` to `pkg/kubectl/util/logs` per https://github.com/kubernetes/kubernetes/issues/48209#issuecomment-311730681
This will make kubeadm, kubefed, gke-certificates-controller and e2e have a dependency on kubectl, which should be fine.
partially addresses: kubernetes/community#598
```release-note
NONE
```
/assign @apelisse @monopole
Automatic merge from submit-queue
Remove deprecated ESIPP beta annotations
**What this PR does / why we need it**:
Remove deprecated ESIPP beta annotations.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50187
**Special notes for your reviewer**:
/assign @MrHohn
/sig network
**Release note**:
```release-note
Beta annotations `service.beta.kubernetes.io/external-traffic` and `service.beta.kubernetes.io/healthcheck-nodeport` have been removed. Please use fields `service.spec.externalTrafficPolicy` and `service.spec.healthCheckNodePort` instead.
```
Automatic merge from submit-queue
Migrate to controller references helpers in meta/v1
**What this PR does / why we need it**:
This is a follow up for #48319 that migrates all method usages to new methods in meta/v1.
**Special notes for your reviewer**:
Looking at each commit individually might be easier.
**Release note**:
```release-note
NONE
```
/sig api-machinery
/kind cleanup
Automatic merge from submit-queue
Add Cluster Autoscaler scalability test suite
This suite is intended for manually testing Cluster Autoscaler on large clusters. It isn't supposed to be run automatically (at least for now).
It can be run on Kubemark (with #50440) with the following setup:
- start Kubemark with NUM_NODES=1 (as we require there to be exactly 1 replica per hollow-node replication controller in this setup)
- set kubemark-master machine type manually to appropriate type for the Kubemark cluster size. Maximum Kubemark cluster size reached in test run is defined by maxNodes constant, so for maxNodes=1000, please upgrade to n1-standard-32. Adjust if modifying maxNodes.
- start Cluster Autoscaler pod in the external cluster using image built from version with Kubemark cloud provider (release pending)
- for grabbing metrics from ClusterAutoscaler (with #50382), add "--include-cluster-autoscaler=true" parameter in addition to regular flags for gathering components' metrics/resource usage during e2e tests
cc @bskiba
Automatic merge from submit-queue (batch tested with PRs 45186, 50440)
Add functionality needed by Cluster Autoscaler to Kubemark Provider.
Make adding nodes asynchronous. Add method for getting target
size of node group. Add method for getting node group for node.
Factor out some common code.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45186, 50440)
Retry fed-svc creation on diff NodePort during e2e tests
**What this PR does / why we need it**:
Currently in federated end2end tests, the creation of services is
done with a randomized NodePort selection, which causes e2e test
flakes when the creation of a federated service fails because the port is
not available.
Now the util.CreateService(...) function retries creating the
service on a different nodePort in case of error. The method retries until
success or until all possible NodePorts have been tested and have failed.
**Which issue this PR fixes**
fixes#44018
Make adding nodes asynchronous. Add method for getting target
size of node group. Add method for getting node group for node.
Factor out some common code.
Automatic merge from submit-queue (batch tested with PRs 50386, 50374, 50444, 50382)
Add grabbing Cluster Autoscaler metrics in e2e tests
This adds:
- collecting metrics from Cluster Autoscaler before & after e2e test run
- --include-cluster-autoscaler opt-in flag
- passing external cluster client to MetricsGrabber (required for Kubemark setup, as Cluster Autoscaler doesn't run on master in this case)
Most types now have valid rest mappings because
NewDefaultRESTMapperFromScheme no longer ignores certain import
paths. Thus we can no longer use the lack of a valid REST mapping
as an indicator for when to use kindWhiteList. Thus kindWhiteList
now serves as a whitelist for all kinds and not just those that
formerly had no mapping. This does mean that we could whitelist
kinds due to a name conflict, but that is unlikely as names such as
GetOptions are not appropriate for new objects.
Signed-off-by: Monis Khan <mkhan@redhat.com>
Automatic merge from submit-queue (batch tested with PRs 49725, 50367, 50391, 48857, 50181)
Add e2e test for privileged containers
**What this PR does / why we need it**:
This PR adds node e2e test for privileged containers.
**Which issue this PR fixes**
Part of #44118.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 49642, 50335, 50390, 49283, 46582)
Improve GC discovery sync performance
Improve GC discovery sync performance by only syncing when discovered
resource diffs are detected. Before, the GC worker pool was shut down
and monitors resynced unconditionally every sync period, leading to
significant processing delays causing test flakes where otherwise
reasonable GC timeouts were being exceeded.
Related to https://github.com/kubernetes/kubernetes/issues/49966.
/cc @kubernetes/sig-api-machinery-bugs
```release-note
NONE
```
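The core of the change can be pictured as a diff check before resyncing (a minimal sketch; the function and the map type are illustrative, not the garbage collector's real helper):
```go
package example

import (
	"reflect"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// needsResync reports whether the discovered set of deletable resources has
// changed since the last sync; if not, the worker pool and monitors can be
// left running untouched.
func needsResync(old, cur map[schema.GroupVersionResource]struct{}) bool {
	return !reflect.DeepEqual(old, cur)
}
```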
Automatic merge from submit-queue (batch tested with PRs 49642, 50335, 50390, 49283, 46582)
Add rbac.authorization.k8s.io/v1
xref https://github.com/kubernetes/features/issues/2
Promotes the rbac.authorization.k8s.io/v1beta1 API to v1 with no changes
```release-note
The `rbac.authorization.k8s.io/v1beta1` API has been promoted to `rbac.authorization.k8s.io/v1` with no changes.
The `rbac.authorization.k8s.io/v1alpha1` version is deprecated and will be removed in a future release.
```
Automatic merge from submit-queue (batch tested with PRs 50300, 50328, 50368, 50370, 50372)
Reduce hollow-kubelet cpu request
Fixes https://github.com/kubernetes/kubernetes/issues/50366
This should make kubemark-500 fit in 6 nodes again. Checked that it should be enough.
cc @kubernetes/sig-scalability-misc
Automatic merge from submit-queue (batch tested with PRs 50418, 49830, 49206, 49061, 49912)
add LocalZone into gce.conf and refactor gce cloud provider configura…
The main goal of this PR is to make the gce cloud provider able to run locally.
1. added a LocalZone parameter into gce.conf.
2. refactored `newGCECloud` to avoid contacting the metadata server if the configuration is already available.
```release-note
None
```
Automatic merge from submit-queue
remove apps/v1beta2 defaulting code for obj.Spec.Selector and obj.Labels
**What this PR does / why we need it**:
This PR removes the defaulting code for `obj.Spec.Selector`. Currently, `obj.Spec.Selector.MatchLabels` is set to `obj.Spec.Template.Labels` if `obj.Spec.Template.Labels != nil && obj.Spec.Selector == nil`. We should not perform this defaulting operation as controller selectors are immutable.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50339
**Special notes for your reviewer**:
This PR removes the defaulting code for `apps/v1beta2` only. The defaulting code for validation will be removed in another PR.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
VSphere cloud provider code refactoring
The current PR tracks the vSphere Cloud Provider code refactoring which includes the following changes.
- VCLib Package - A framework used by vSphere cloud provider for managing the vSphere entities. VCLib package mainly does the following:
- Volume management on datastore (Create/Delete)
- Volume management on Virtual Machines (Attach/Detach)
- Storage Policy Management
- vSphere Cloud Provider changes to implement the cloud provider interfaces by calling into VCLib package.
- Modifications to e2e tests to accommodate the latest design changes.
@divyenpatel @rohitjogvmw @luomiao
```release-note
vSphere cloud provider: vSphere cloud provider code refactoring
```
Automatic merge from submit-queue (batch tested with PRs 50016, 49583, 49930, 46254, 50337)
Alpha Dynamic Kubelet Configuration
Feature: https://github.com/kubernetes/features/issues/281
This proposal contains the alpha implementation of the Dynamic Kubelet Configuration feature proposed in ~#29459~ [community/contributors/design-proposals/dynamic-kubelet-configuration.md](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/dynamic-kubelet-configuration.md).
Please note:
- ~The proposal doc is not yet up to date with this implementation, there are some subtle differences and some more significant ones. I will update the proposal doc to match by tomorrow afternoon.~
- ~This obviously needs more tests. I plan to write several O(soon). Since it's alpha and feature-gated, I'm decoupling this review from the review of the tests.~ I've beefed up the unit tests, though there is still plenty of testing to be done.
- ~I'm temporarily holding off on updating the generated docs, api specs, etc, for the sake of my reviewers 😄~ these files now live in a separate commit; the first commit is the one to review.
/cc @dchen1107 @vishh @bgrant0607 @thockin @derekwaynecarr
```release-note
Adds (alpha feature) the ability to dynamically configure Kubelets by enabling the DynamicKubeletConfig feature gate, posting a ConfigMap to the API server, and setting the spec.configSource field on Node objects. See the proposal at https://github.com/kubernetes/community/blob/master/contributors/design-proposals/dynamic-kubelet-configuration.md for details.
```
Automatic merge from submit-queue (batch tested with PRs 50016, 49583, 49930, 46254, 50337)
Remove scheduledjobs
This is a prerequisite for promoting CronJobs to beta.
**Release note**:
```release-note
Remove deprecated ScheduledJobs endpoints, use CronJobs instead.
```
Automatic merge from submit-queue (batch tested with PRs 50016, 49583, 49930, 46254, 50337)
[Federation] Make the hpa scale time window configurable
This PR is on top of open pr https://github.com/kubernetes/kubernetes/pull/45993.
Please review only the last commit in this PR.
This adds a config param to controller manager, the value of which gets passed to hpa adapter via sync controller.
This is needed to reduce the overall time limit of the hpa scaling window to much less than the default 2 minutes, to get e2e tests to run faster. Please see the comment on the newly added parameter.
**Special notes for your reviewer**:
@kubernetes/sig-federation-pr-reviews
@quinton-hoole
@marun to please validate the mechanism used to pass a parameter from cmd line to adapter.
**Release note**:
```
federation-controller-manager gets a new flag --hpa-scale-forbidden-window.
This flag is used to configure the duration used by the federation hpa controller to determine whether it can move max and/or min replicas
around (or not) of a cluster-local hpa object, by comparing the current time with the last scaled time of that cluster-local hpa.
A lower value will result in a faster response to scalability conditions achieved by cluster-local hpas on local replicas, but too low
a value can result in thrashing. Higher values will result in a slower response to scalability conditions on local replicas.
```
Pods associated with the test JobTemplate should use a zero
TerminationGracePeriodSeconds to ensure they're deleted immediately.
This should improve test timing assumption consistency.
Automatic merge from submit-queue
Support exec/attach/portforward in `kubectl proxy`
Use the UpgradeAwareProxy shared code in kubectl proxy. Provide a separate transport for those requests that does not have HTTP/2 enabled. Refactor the code to be a bit cleaner in places and to better separate changes.
Fixes#32026
```release-note
`kubectl proxy` will now correctly handle the `exec`, `attach`, and `portforward` commands. You must pass `--disable-filter` to the command in order to allow these endpoints.
```
Improve GC discovery sync performance by only syncing when discovered
resource diffs are detected. Before, the GC worker pool was shut down
and monitors resynced unconditionally every sync period, leading to
significant processing delays causing test flakes where otherwise
reasonable GC timeouts were being exceeded.
Related to https://github.com/kubernetes/kubernetes/issues/49966.
Automatic merge from submit-queue (batch tested with PRs 50173, 50324, 50288, 50263, 50333)
Add blank import for node tests
The node tests weren't being run because they weren't imported in the test/e2e/e2e_test.go file.
Thanks to @abgworrall for sounding the alarm (he noticed [sig-node] wasn't in the test results)!
/assign @yujuhong
/cc @abgworrall
Automatic merge from submit-queue
Fix local storage test failures
**What this PR does / why we need it**:
Fixed a few issues:
- CI environment on GCE cannot resolve node names, need to use IPs. Use a different SSH wrapper that will get the IPs from the node object.
- Use hostdir instead of containerdir now that commands are executed directly on the host, instead of through a container.
- Get the PVC object again after it is bound so that it has the PV name.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50128
**Release note**:
NONE
/release-note-none
/sig storage
Automatic merge from submit-queue
Add waitForFailure for e2e test framework
**What this PR does / why we need it**:
Add waitForFailure to the e2e test framework; this could reduce the reliance on logs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part of #44118. Refer https://github.com/kubernetes/kubernetes/pull/48858#discussion_r128331726
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Deprecate Deployment .spec.rollbackTo field
~Depends on #48746~ (merged)
xref: #46934, #49135
1. Deprecate Deployment field `.spec.rollbackTo` in `extensions/v1beta1` and `apps/v1beta1`, and remove the same field and `/rollback` endpoint from `apps/v1beta2` Deployment.
1. Add an annotation `deprecated.deployment.rollback.to` in `apps/v1beta2` for conversion to/from other versions.
Note: `apps/v1beta2` is new in 1.8 (and WIP), so it is okay to make breaking changes to it.
```release-note
Deprecate Deployment .spec.rollbackTo field
```
Currently, in federated end2end tests, the creation of services is
done with a randomized NodePort selection. This causes e2e test
flakes when the creation of a federated service fails because the port is
not available.
Now the util.CreateService(...) function retries creating the
service on a different nodePort in case of error. The method retries until
success or until 10 creation attempts with other random NodePorts have failed.
If the service has still not been created properly on one of the
federated clusters, a Service shards cleanup is executed before retrying
the federated service creation again.
fixes#44018
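A rough sketch of that retry loop (the helper, its signature, and the use of a plain clientset are illustrative; the federation util's actual API differs):
```go
package example

import (
	"fmt"
	"math/rand"

	"k8s.io/api/core/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createServiceWithRetry retries the create on a fresh random NodePort when
// the previous attempt fails, running the supplied cleanup between attempts.
func createServiceWithRetry(c clientset.Interface, ns string, svc *v1.Service, maxRetries int, cleanup func()) error {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		// Pick a port in the default NodePort range (30000-32767).
		svc.Spec.Ports[0].NodePort = int32(30000 + rand.Intn(2768))
		if _, err := c.CoreV1().Services(ns).Create(svc); err == nil {
			return nil
		} else {
			lastErr = err
		}
		cleanup() // e.g. delete service shards left behind on individual clusters
	}
	return fmt.Errorf("service %q not created after %d attempts: %v", svc.Name, maxRetries, lastErr)
}
```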
Automatic merge from submit-queue
Add a simple cloud provider for e2e tests on kubemark
**What this PR does / why we need it**:
Adds a simplified cloud provider for kubemark. This enables us to add and
remove nodes and operate on nodegroups while running tests on kubemark.
This is needed to run scalability tests for cluster autoscaler on kubemark.
See https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/kubemark_integration.md
**Release note**:
```
NONE
```
Automatic merge from submit-queue
Add e2e test for cronjob chained removal
This is a test proving https://github.com/kubernetes/kubernetes/pull/44058 works with cronjobs. It will fail until the aforementioned PR merges.
@caesarxuchao ptal
Automatic merge from submit-queue (batch tested with PRs 50208, 50259, 49702, 50267, 48986)
Move ownership of proxy test to sig-network directory
```release-note
None
```
1. Deprecate `.spec.rollbackTo` field in extensions/v1beta1 and
apps/v1beta1 Deployments
2. Remove the same field from apps/v1beta2 Deployment, and remove
its rollback subresource and endpoint
Automatic merge from submit-queue
StatefulSet scale subresource
**What this PR does / why we need it**: This PR implements scale subresource for StatefulSet.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#46005
**Special notes for your reviewer**:
**Release note**:
```release-note
StatefulSet uses the scale subresource when scaling, in accordance with the ReplicationController, ReplicaSet, and Deployment implementations.
```
**Feature Checklist**:
- [x] Introduce Registry interface for storage purpose
- [x] Introduce `ScaleREST New(), Get() and Update()` utility functions
- [x] Create a `ScaleREST` object at `NewREST()` and return it
- [x] Enable scale subresource by adding `/scale` field to the storage map
**Testing Checklist**:
- Unit testing
- [x] Modify `newStorage()` to call `NewStorage()`, and change all unit tests accordingly
- [x] Add unit tests for `ScaleREST Get() and Update()` utility functions
- [x] Add missing unit test for `ShortNames`
- Manual testing
- [x] Verify existence of the subresource using `kubectl proxy` command
- [x] Modify the subresource using `curl` via `POST`
- e2e testing
- [x] Add e2e tests using `RESTClient`
Automatic merge from submit-queue
Federated Job controller implementation
Note that job rebalancing is not there yet, as it's difficult to honor the job deadline
requires #35945 and 35943
fixes#34261
@quinton-hoole @nikhiljindal @deepak-vij
**Release note**:
```release-note
Federated Job feature. It is now possible to create a Federated Job
that is automatically deployed to one or more federated clusters
(as Jobs in those clusters). Job parallelism and completions are
spread across clusters according to cluster selection and weighting
preferences. Federated Job status reflects the aggregate status
across all underlying cluster Jobs.
```
Automatic merge from submit-queue (batch tested with PRs 49524, 46760, 50206, 50166, 49603)
Handled taints on node in batch.
**What this PR does / why we need it**:
Enhanced helpers to handle taints on nodes in batch.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#49522
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 49885, 49751, 49441, 49952, 49945)
Rename e2e sig framework files
**What this PR does / why we need it**:
Make file names consistent across all sig e2e test directories.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50087, 39587, 50042, 50241, 49914)
Add node e2e test for Docker's shared PID namespace
Ref: https://github.com/kubernetes/kubernetes/issues/42926
This PR adds a simple test for the shared PID namespace that's enabled when Docker is 1.13.1+.
/sig node
/area node-e2e
/assign @yujuhong
**Release note**:
```
None
```
This is needed for cluster autoscaler e2e test to
run on kubemark. We need the ability to add and
remove nodes and operate on nodegroups. Kubemark
does not provide this at the moment.
Automatic merge from submit-queue (batch tested with PRs 50091, 50231, 50238, 50236, 50243)
Fix storage tests for multizone test configuration.
**What this PR does / why we need it**:
This PR modifies the "[sig-storage] Volumes PD should be mountable with (ext3|ext4)" tests to schedule pods in the zone where the PD is created.
This is to make the tests work in a multizone environment.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 50091, 50231, 50238, 50236, 50243)
Move the sig-instrumentation test to a dedicated folder
Move the last remaining test to sig-instrumentation folder, also move "metrics" package to the "framework" folder
Related issue: https://github.com/kubernetes/kubernetes/issues/49161
/cc @xiangpengzhao @piosz
Automatic merge from submit-queue (batch tested with PRs 50091, 50231, 50238, 50236, 50243)
Modify e2e.go to arbitrarily pick one of zones we have nodes in for multizone tests.
**What this PR does / why we need it**:
When e2e runs in a multizone configuration, the zone config property can be empty.
In that case, this PR overrides the empty value with an arbitrarily chosen zone that we have nodes in.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 49855, 49915)
Let controllers ignore initialization timeout when creating pods
Partially address https://github.com/kubernetes/kubernetes/issues/48893#issuecomment-318540129.
This only updates the controllers that create pods with `GenerateName`.
The controllers ignore the timeout error when creating the pods, depending on how the initialization progresses:
* If the initialization is successful in less than 5 minutes, the controller will observe the creation via the informer. All is good.
* If the initialization fails, the server will delete the pod, but the controller won't receive any event. The controller will not create a new pod until the Creation expectation expires in 5 minutes.
* If the initialization takes too long (> 5 minutes), the Creation expectation expires and the controller will create extra pods.
I'll send follow-up PRs to fix the latter two cases, e.g., by refactoring the sharedInformer.
Automatic merge from submit-queue
Fix typo in test/images/port-forward-tester/Makefile
**What this PR does / why we need it**: the image build fails due to this typo:
```console
$ make WHAT=port-forward-tester
./image-util.sh build port-forward-tester
Building image for port-forward-tester ARCH: ppc64le...
make[1]: Entering directory '[home]/src/k8s.io/kubernetes/test/images/port-forward-tester'
../image-util.sh bin
../image-util.sh: line 22: $2: unbound variable
```
Images already pushed.
**Release note**:
```release-note
NONE
```
/approve no-issue
/assign @mkumatag
Automatic merge from submit-queue (batch tested with PRs 48532, 50054, 50082)
Remove [k8s.io] tag and redundant [sig-storage] tags from volume tests
**What this PR does / why we need it**:
Removes redundant tags from storage e2e test names
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#50178
**Release note**:
/release-note-none
Automatic merge from submit-queue (batch tested with PRs 48532, 50054, 50082)
Correcting two spelling mistakes
**What this PR does / why we need it**:
Correcting two spelling mistakes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Update OWNERS to correct members' handles
**What this PR does / why we need it**:
Fix some typos of members' handles as per https://github.com/kubernetes/kubernetes/issues/50048#issuecomment-319831957.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Associated with: #50048
**Special notes for your reviewer**:
/cc @madhusudancs @sebgoa @liggitt @saad-ali
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50119, 48366, 47181, 41611, 49547)
Add basic install and mount flexvolumes e2e tests
fixes https://github.com/kubernetes/kubernetes/issues/47010
These two tests install a skeleton "dummy" flex driver, attachable and non-attachable respectively, then test that a pod can successfully use the flex driver. They are labeled disruptive because kubelet and controller-manager get restarted as part of the flex install. IMO it's important to keep this install procedure as part of the test to isolate any bugs with the startup plugin probe code.
There is a bit of an ugly dependency on cluster/gce/config-test.sh because --flex-volume-plugin-dir must be set to a dir that's readable from controller-manager container and writable by the flex e2e test. The default path is not writable on GCE masters with read-only root so I picked a location that looks okay.
In the "dummy" drivers I trick kubelet into thinking there is a mount point by doing "mount -t tmpfs none ${MNTPATH} >/dev/null 2>&1", hope that is okay.
I have only tested on GCE and theoretically they may work on AWS but I don't think there is a need to test on multiple cloudproviders.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50119, 48366, 47181, 41611, 49547)
increase the GC e2e test timeout
Fix https://github.com/kubernetes/kubernetes/issues/50047.
The root cause is #50046. See log analysis in #50047. For now, we just increase the timeout.
Automatic merge from submit-queue
Fix pointer bug in local volume e2e test
**What this PR does / why we need it**:
Fix pointer bug in local volume e2e test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/50043
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48237, 50084, 50019, 50069, 50090)
Allow for some pods not to get scheduled in CA tests.
This will allow us to ignore long tail node creation or failure
to create some nodes when running scalability tests on kubemark.
**Release note**:
```
NONE
```
Automatic merge from submit-queue
Moves left networking e2e tests to test/e2e/network
**What this PR does / why we need it**:
#48784 forgot to move some networking e2e tests. This PR moves them.
It also moves the networking tests from within `test/e2e/common/networking.go` to `test/e2e/network/networking.go`
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Associated PR #48784
Umbrella issue #49161
**Special notes for your reviewer**:
/assign @wojtek-t @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49916, 50050)
Update images used in the node e2e benchmark tests
Ref: https://github.com/kubernetes/kubernetes/issues/42926
- Update the cosbeta image since the new version contains a 'du' command fix that affects Docker performance.
- Add the coreos and ubuntu images that run Docker 1.12.6 so that we will have more data to compare.
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 50000, 49954, 49943, 50018, 49607)
Add [sig-autoscaling] prefix to autoscaling e2e tests
**What this PR does / why we need it**:
Add [sig-autoscaling] prefix to autoscaling e2e tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50000, 49954, 49943, 50018, 49607)
Update the DeleteReplicaSet in rs_util.go to use server side reaper
**What this PR does / why we need it**:
fixes #47832
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47832
**Special notes for your reviewer**:
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 50000, 49954, 49943, 50018, 49607)
Add [sig-scalability] prefix to scalability e2e tests
**What this PR does / why we need it**:
Add [sig-scalability] prefix to scalability e2e tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add ubuntu to gluster and nfs tests
**What this PR does / why we need it**:
Enable gluster and nfs tests for ubuntu distro
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50039
**Special notes for your reviewer**:
**Release note**:
/release-note-none
/sig storage
Automatic merge from submit-queue (batch tested with PRs 49990, 49997, 44278, 49936, 49891)
Allow mode in e2e-framework to gather metrics only from master
This should enable getting metrics for our 5k-node clusters.
cc @kubernetes/sig-scalability-misc @gmarek
Automatic merge from submit-queue (batch tested with PRs 49992, 48861, 49267, 49356, 49886)
remove unused function
**What this PR does / why we need it**:
Remove an unused function that has not been used for months.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49871, 49422, 49092, 49858, 48999)
Refactor logging e2e tests, add new checks
I split the existing code into smaller files to simplify review and future changes.
Also, there are new tests for Stackdriver logging:
- ingesting system logs from all nodes
- ingesting logs in json format
- ingesting logs in glog format
Automatic merge from submit-queue (batch tested with PRs 49898, 49897, 49919, 48860, 49491)
Fix usage of make(struct, len()) followed by append()
In a couple of places in the code we allocate with make() but then use
append(), instead of copy() or direct assignment. This results in a
slice with len() zero-valued elements at the front followed by the expected
data. The correct form for such usage is `make(struct, 0, len())`.
I found these by running:
```
$ git grep -EI -A7 'make\([^,]*, len\(' | grep 'append(' -B7 | grep -v vendor
```
And then manually looking through the results. I'm sure something better
could exist.
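For illustration only (not code from this PR), a small Go program showing the bug class and the suggested fix:
```go
package main

import "fmt"

func main() {
	src := []string{"a", "b", "c"}

	// Buggy pattern: make() with a length, then append() -> zero values up front.
	buggy := make([]string, len(src))
	for _, s := range src {
		buggy = append(buggy, s)
	}
	fmt.Println(buggy) // [   a b c] -- three empty strings, then the data

	// Correct pattern: zero length plus capacity, so append() fills from index 0.
	fixed := make([]string, 0, len(src))
	for _, s := range src {
		fixed = append(fixed, s)
	}
	fmt.Println(fixed) // [a b c]
}
```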
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49898, 49897, 49919, 48860, 49491)
Add basic local volume provisioner e2e tests
**What this PR does / why we need it**:
Adds e2e tests to test local volume provisioner.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/48832
**Special notes for your reviewer**:
- bring up the local volume provisioner using the bootstrapper
- have the provisioner create a volume by creating a directory under the discovery path
- check that a persistent volume is created
- make a claim on the PV, write some data, then delete the claim. Verify the volume is cleaned up.
**Release note**:
```release-note
```
@ianchakeres @msau42
Automatic merge from submit-queue (batch tested with PRs 49651, 49707, 49662, 47019, 49747)
Add support for `no_new_privs` via AllowPrivilegeEscalation
**What this PR does / why we need it**:
Implements kubernetes/community#639
Fixes #38417
Adds `AllowPrivilegeEscalation` and `DefaultAllowPrivilegeEscalation` to `PodSecurityPolicy`.
Adds `AllowPrivilegeEscalation` to container `SecurityContext`.
Adds the proposed behavior to `kuberuntime`, `dockershim`, and `rkt`. Adds a bunch of unit tests to ensure the desired default behavior and the behavior when `DefaultAllowPrivilegeEscalation` is explicitly set.
Tests pass locally with docker and rkt runtimes. There are also a few integration tests with a `setuid` binary for sanity.
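A minimal sketch, assuming the current `k8s.io/api/core/v1` types, of setting the new container-level field (the name and image are placeholders):
```go
package example

import corev1 "k8s.io/api/core/v1"

// secureContainer returns a container that may not gain more privileges than
// its parent process (no_new_privs on the runtime side).
func secureContainer() corev1.Container {
	allow := false
	return corev1.Container{
		Name:  "app",                // placeholder
		Image: "example/app:latest", // placeholder
		SecurityContext: &corev1.SecurityContext{
			AllowPrivilegeEscalation: &allow,
		},
	}
}
```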
**Release note**:
```release-note
Adds AllowPrivilegeEscalation to control whether a process can gain more privileges than its parent process
```
Automatic merge from submit-queue (batch tested with PRs 49651, 49707, 49662, 47019, 49747)
improve detectability of deleted pods
**What this PR does / why we need it**:
Adds a comment to `waitForPodTerminatedInNamespace` to better explain how it's implemented.
~~It improves pod deletion detection in the e2e framework as follows:~~
~~1. the `waitForPodTerminatedInNamespace` func looks for pod.Status.Phase == _PodFailed_ or _PodSucceeded_ since both values imply that all containers have terminated.~~
~~2. the `waitForPodTerminatedInNamespace` func also ignores the pod's Reason if the passed-in `reason` parm is "". Reason is not really relevant to the pod being deleted or not, but if the caller passes a non-blank `reason` then it will be lower-cased, de-blanked and compared to the pod's Reason (also lower-cased and de-blanked). The idea is to make Reason checking more flexible and to prevent a pod from being considered running when all of its containers have terminated just because of a Reason mis-match.~~
Related to PR [49597](https://github.com/kubernetes/kubernetes/pull/49597) and issue [49529](https://github.com/kubernetes/kubernetes/issues/49529).
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49538, 49708, 47665, 49750, 49528)
Add ext4 and xfs tests to GCE PD basic mount tests
**What this PR does / why we need it**:
Add ext4 and xfs to basic GCE PD mount tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49511
**Special notes for your reviewer**:
**Release note**:
/release-note-none
/sig storage
Automatic merge from submit-queue (batch tested with PRs 49538, 49708, 47665, 49750, 49528)
Enable garbage collection of custom resources
Enhance the garbage collector to periodically refresh the resources it monitors (via discovery) to enable custom resource definition GC (addressing #44507 and reverting #47432).
This is a replacement for #46000.
/cc @lavalamp @deads2k @sttts @caesarxuchao
/ref https://github.com/kubernetes/kubernetes/pull/48065
```release-note
The garbage collector now supports custom APIs added via CustomResourceDefinition or aggregated apiservers. Note that the garbage collector controller refreshes periodically, so there is a latency between when the API is added and when the garbage collector starts to manage it.
```
Automatic merge from submit-queue (batch tested with PRs 49538, 49708, 47665, 49750, 49528)
Add support for GKE regional clusters in e2e tests.
**What this PR does / why we need it**:
Add support for GKE regional clusters in e2e tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49538, 49708, 47665, 49750, 49528)
Use the core client with version
**What this PR does / why we need it**:
Replace the **deprecated** `clientSet.Core()` with `clientSet.CoreV1()`.
**Which issue this PR fixes**: fixes #49535
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49581, 49652, 49681, 49688, 44655)
Re-enable federated ingress test that was disabled due to a federated service deletion bug.
The details of the bug are described in PR #44626. We believe this bug fix addresses the flakiness in this test, and hence we are re-enabling the test to get some mileage on it. If it turns out to be a problem again, we will revert this change.
**Release note**:
```release-note
NONE
```
/assign @csbell
cc @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 49581, 49652, 49681, 49688, 44655)
Move the audit e2e test out of the node SIG
It was mistakenly moved to sig-node in https://github.com/kubernetes/kubernetes/pull/48910, but this is an apiserver feature, not a node feature.
/cc @crassirostris
Automatic merge from submit-queue (batch tested with PRs 49581, 49652, 49681, 49688, 44655)
Add sig-testing OWNERS_ALIASES
/sig testing
**What this PR does / why we need it**:
follow the sig-foo-{reviewers,approvers} convention
- rename test-infra-maintainers to sig-testing-approvers
- copy sig-testing-approvers to sig-testing-reviewers
- remove individuals in test/OWNERS in favor of new aliases
as a result
- rmmh gets test/ approver privileges
- spiffxp gets hack/jenkins/ approver privileges
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49580
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49712, 49694, 49714, 49670, 49717)
Reduce GC e2e test flakiness
Increase GC wait timeout in a flaky e2e test. The test expects a GC
operation to be performed within 30s, while in practice the operation
often takes longer due to a delay between the enqueueing of the owner's
delete operation and the GC's actual processing of that event. Doubling
the time seems to stabilize the test. The test's assumptions can be
revisited, and the processing delay under load can be investigated in
the future.
Extracted from https://github.com/kubernetes/kubernetes/pull/47665 per https://github.com/kubernetes/kubernetes/pull/47665#issuecomment-318219099.
/cc @sttts @caesarxuchao @deads2k @kubernetes/sig-api-machinery-bugs
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45813, 49594, 49443, 49167, 47539)
Add node e2e tests for GKE environment
Ref: https://github.com/kubernetes/kubernetes/issues/46891
This PR adds node e2e tests for validating images used on GKE.
- We pass the `SYSTEM_SPEC_NAME` to the node e2e test process via the flag `--system-spec-name` so that we can skip the environment specific tests using `RunIfSystemSpecNameIs()`.
- Also added `SkipIfContainerRuntimeIs()` as the opposite of `RunIfContainerRuntimeIs()`.
**Release note**:
```
None
```
Enhance the garbage collector to periodically refresh the resources it
monitors (via discovery) to enable custom resource definition GC.
This implementation caches Unstructured structs for any kinds not
covered by a shared informer. The existing meta-only codec only supports
compiled types; an improved codec which supports arbitrary types could
be introduced to optimize caching to store only metadata for all
non-informer types.
Automatic merge from submit-queue (batch tested with PRs 49619, 49598, 47267, 49597, 49638)
Remove default binding of system:node role to system:nodes group
part of https://github.com/kubernetes/features/issues/279
deprecation of this automatic binding announced in 1.7 in https://github.com/kubernetes/kubernetes/pull/46076
```release-note
RBAC: the `system:node` role is no longer automatically granted to the `system:nodes` group in new clusters. It is recommended that nodes be authorized using the `Node` authorization mode instead. Installations that wish to continue giving all members of the `system:nodes` group the `system:node` role (which grants broad read access, including all secrets and configmaps) must create an installation-specific `ClusterRoleBinding`.
```
Automatic merge from submit-queue (batch tested with PRs 49619, 49598, 47267, 49597, 49638)
improve log for pod deletion poll loop
**What this PR does / why we need it**:
It improves some logging related to waiting for a pod to reach a passed-in condition. Specifically, related to issue [49529](https://github.com/kubernetes/kubernetes/issues/49529) where better logging may help to debug the root cause.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49619, 49598, 47267, 49597, 49638)
Flag support in kubectl plugins
Adds support for flags in `kubectl` plugins. Flags are declared in the plugin descriptor and are passed to plugins through env vars, similar to global flags (which already work).
Fixes https://github.com/kubernetes/kubernetes/issues/49122
**Release note**:
```release-note
Added flag support to kubectl plugins
```
PTAL @monopole @kubernetes/sig-cli-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 47738, 49196, 48907, 48533, 48822)
Move e2e dependent images from kubernetes/kubernetes.github.io repo
**What this PR does / why we need it**:
Move e2e dependent images from kubernetes/kubernetes.github.io repo
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48530
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49238, 49595, 43494, 47897, 48905)
Add apps/v1beta2.ReplicaSet
~Depends on #48746~ (merged)
~Depends on #49357~ (merged)
xref: #49135
```release-note
Add a new API object apps/v1beta2.ReplicaSet
```
Automatic merge from submit-queue (batch tested with PRs 49665, 49689, 49495, 49146, 48934)
Add Integration tests for Inter-Pod Affinity(AntiAffinity)
Signed-off-by: vikaschoudhary16 <choudharyvikas16@gmail.com>
**What this PR does / why we need it**:
Adds integration tests for inter-pod affinity(anti-affinity)
**Special notes for your reviewer**:
Once @bsalamat's #48847 gets merged, changes in this PR will be restructured according to the merged changes.
ref/ #48176 #48847
@kubernetes/sig-scheduling-pr-reviews
@davidopp @k82cn
Automatic merge from submit-queue (batch tested with PRs 46358, 49408)
[Federation] Updates to enable hpa controllers test in integration and e2e
Enables the APIs on the API server in both scenarios.
Adds additional logic to enable and run the CRUD portion of objects in integration, for controllers which implement additional logic in reconcile.
**Special notes for your reviewer**:
This on top of an existing PR https://github.com/kubernetes/kubernetes/pull/45497.
The last 2 commits are reviewable here
@kubernetes/sig-federation-pr-reviews
cc @marun @perotinus
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46913, 48910, 48858, 47160)
Add e2e test for readOnlyRootFilesystem containers
**What this PR does / why we need it**:
This PR adds node e2e test for readOnlyRootFilesystem containers.
**Which issue this PR fixes**
Part of #44118.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46913, 48910, 48858, 47160)
move sig-node related e2e tests to node subdir
I need help making sure I picked the right ones and/or didn't miss anything.
Potential additions include: `logging_soak.go`, `ssh.go`, `kubelet_perf.go`.
/cc @dchen1107 @vishh @tallclair @yujuhong @Random-Liu @abgworrall @dashpole @yguo0905
Automatic merge from submit-queue (batch tested with PRs 48976, 49474, 40050, 49426, 49430)
Use presence of kubeconfig file to toggle standalone mode
Fixes #40049
```release-note
The deprecated --api-servers flag has been removed. Use --kubeconfig to provide API server connection information instead. The --require-kubeconfig flag is now deprecated. The default kubeconfig path is also deprecated. Both --require-kubeconfig and the default kubeconfig path will be removed in Kubernetes v1.10.0.
```
/cc @kubernetes/sig-cluster-lifecycle-misc @kubernetes/sig-node-misc
Automatic merge from submit-queue (batch tested with PRs 48976, 49474, 40050, 49426, 49430)
Remove duplicated import and wrong alias name of api package
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #48975
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48224, 45431, 45946, 48775, 49396)
Update cos-dev image in benchmark tests to cos-dev-61-9759-0-0
Ref: https://github.com/kubernetes/kubernetes/issues/42926
`cos-dev-61-9759-0-0` contains a fix in Linux utility `du` that would affect the measurement of docker performance in kubelet. I'd like to update the benchmark to use the new image.
**Release note**:
```
None
```
/assign @tallclair
/cc @kewu1992 @abgworrall
Automatic merge from submit-queue (batch tested with PRs 49286, 49550)
Remove myself from a bunch of places
I am assigned to reviews which I never get to do. I prefer drive-bys whenever I can do them, rather than the bot choosing me at random, which ends up being mere spam.
@smarterclayton please approve.
Automatic merge from submit-queue (batch tested with PRs 48846, 49483, 49341)
Add [sig-network] prefix to network e2e tests
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Associated PR #48784
Umbrella issue #49161
**Special notes for your reviewer**:
/assign @wojtek-t @freehan
/cc @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Remove flags low-diskspace-threshold-mb and outofdisk-transition-frequency
issue: #48843
This removes two flags replaced by the eviction manager. These have been deprecated for two releases, which I believe correctly follows the Kubernetes deprecation guidelines.
```release-note
Remove deprecated flags: --low-diskspace-threshold-mb and --outofdisk-transition-frequency, which are replaced by --eviction-hard
```
cc @mtaufen since I am changing kubelet flags
cc @vishh @derekwaynecarr
/sig node
Automatic merge from submit-queue (batch tested with PRs 49358, 49253)
Remove hostname label condition in SchedulerPredicates
**What this PR does / why we need it**:
```
validates that NodeSelector is respected if matching [Conformance]
validates that required NodeAffinity setting is respected if matching
```
The two tests above make the assumption that the node names are equal to the `kubernetes.io/hostname` labels. Unfortunately, this is not necessarily true all the time. For instance, when using the AWS Cloud Provider + Container Linux:
- The node name is set using the AWS SDK's `ec2.Instance.PrivateDnsName` and has the form `ip-10-0-35-57.ca-central-1.compute.internal` [[1](https://github.com/kubernetes/kubernetes/blob/v1.7.1/pkg/cloudprovider/providers/aws/aws.go#L3343-L3346)] [[2](https://raw.githubusercontent.com/aws/aws-sdk-go/master/service/ec2/api.go)]
- The node's hostname, however, is a simple call to `os.Hostname()`, itself reading `/proc/sys/kernel/hostname`, which contains what the AWS DHCP assigned to the instance, typically the hostname short-form: `ip-10-0-16-137`. [[1](https://github.com/kubernetes/kubernetes/blob/v1.7.1/pkg/util/node/node.go#L43-L54)]
Consequently, we are trying to assign a pod to a node having the following label: `kubernetes.io/hostname=ip-10-0-35-57.ca-central-1.compute.internal` (in addition to the randomly generated label), whereas the actual label on the node is `kubernetes.io/hostname=ip-10-0-35-57`.
Furthermore, this inaccurate `kubernetes.io/hostname=<nodename>` condition is actually useless, given we already match on a random label that was assigned to that node. Later, the test ensures that the scheduled pod was scheduled to the right node by comparing the pod's node name with the node name we expected the pod to be on:
```
framework.ExpectNoError(framework.WaitForPodNotPending(cs, ns, labelPodName))
labelPod, err := cs.Core().Pods(ns).Get(labelPodName, metav1.GetOptions{})
framework.ExpectNoError(err)
Expect(labelPod.Spec.NodeName).To(Equal(nodeName))
```
The `k8s.io/apimachinery/pkg/types/nodename` data structure actually [warns](55bee3ad21/staging/src/k8s.io/apimachinery/pkg/types/nodename.go (L40-L43)) about the fact that the node name might be different from the hostname on AWS.
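A hypothetical sketch (not the change made in this PR) of reading the node's actual `kubernetes.io/hostname` label instead of assuming it equals the node name; the client and framework calls follow the e2e snippet shown above:
```go
// Hypothetical: look up the label on the node object rather than assuming
// nodeName == kubernetes.io/hostname (they can differ on AWS, as explained above).
node, err := cs.Core().Nodes().Get(nodeName, metav1.GetOptions{})
framework.ExpectNoError(err)
hostnameLabel := node.Labels["kubernetes.io/hostname"]
framework.Logf("node %q carries hostname label %q", nodeName, hostnameLabel)
```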
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added sig-storage labels to upgrade tests and moved them to appropriate directory
**What this PR does / why we need it**: Adding necessary sig identifier for storage upgrade tests.
/release-note-none
Automatic merge from submit-queue (batch tested with PRs 48636, 49088, 49251, 49417, 49494)
Fix bug in command retrying in kubemark
This should fix some of the flakes mentioned in https://github.com/kubernetes/kubernetes/issues/46195.
It was reporting that all subsequent retries had failed whenever the first one failed, because `ret_val` was not being reassigned on success.
@bskiba Thanks for noticing :)
Automatic merge from submit-queue (batch tested with PRs 48636, 49088, 49251, 49417, 49494)
StatefulSet: Remove `pod.alpha.kubernetes.io/initialized` annotation.
The `pod.alpha.kubernetes.io/initialized` annotation was originally a tool for validating StatefulSet's ordered Pod creation guarantees during the feature's alpha phase.
If set to "false" on a given Pod, it would interrupt StatefulSet's normal behavior. In v1.5.0, the annotation was deprecated and the default became "true" as part of StatefulSet's graduation to beta.
The annotation is now ignored, meaning it cannot be used to interrupt StatefulSet Pod management.
```release-note
StatefulSet: The deprecated `pod.alpha.kubernetes.io/initialized` annotation for interrupting StatefulSet Pod management is now ignored. If you were setting it to `true` or leaving it unset, no action is required. However, if you were setting it to `false`, be aware that previously-dormant StatefulSets may become active after upgrading.
```
ref #41605
Automatic merge from submit-queue (batch tested with PRs 48636, 49088, 49251, 49417, 49494)
Fix issues for local storage allocatable feature
This PR fixes the following issues:
1. Use ResourceStorageScratch instead of ResourceStorage API to represent
local storage capacity
2. In eviction manager, use container manager instead of node provider
(kubelet) to retrieve the node capacity and reserved resources. Node
provider (kubelet) has a feature gate so that storagescratch information
may not be exposed if feature gate is not set. On the other hand,
container manager has all the capacity and allocatable resource
information.
This PR fixes issue #47809
Replaces use of --api-servers with --kubeconfig in Kubelet args across
the turnup scripts. In many cases this involves generating a kubeconfig
file for the Kubelet and placing it in the correct location on the node.
Automatic merge from submit-queue
Allow nodes to create evictions for its own pods in NodeRestriction admission controller
**What this PR does / why we need it**: This PR adds support for `pods/eviction` sub-resource to the NodeRestriction admission controller so it allows a node to evict pods bound to itself.
**Which issue this PR fixes**: fixes #48666
**Special notes for your reviewer**: The NodeRestriction already allows nodes to delete pods bound to itself, so allowing nodes to also delete pods via the Eviction API probably makes sense.
```release-note
NodeRestriction allows a node to evict pods bound to itself
```
Automatic merge from submit-queue (batch tested with PRs 49409, 49352, 49266, 48418)
[e2e] Also verify content returned by kube-proxy healthz url
**What this PR does / why we need it**: Enhance the kube-proxy healthz URL test. This helps to detect the port collision case: node-problem-detector also serves /healthz and returns 200 OK. Verify the content to confirm /healthz is served by kube-proxy.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: From #49263
**Special notes for your reviewer**:
/assign @freehan @nicksardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49326, 49394, 49346, 49379, 49399)
Update to version gate CRDs to 1.7 and greater
**What this PR does / why we need it**:
Allows the e2e tests to be run against earlier versions due to a version check.
xref: #49313
**Release note**:
```
NONE
```
/cc @kubernetes/sig-api-machinery-bugs @kubernetes/sig-testing-bugs
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
Refactoring taint functions to reduce sprawl
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45060
**Special notes for your reviewer**:
@gmarek @timothysc @k82cn @jayunit100 - I moved some fn's to helpers and some to utils. LMK, if you are ok with this change.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add a new API version apps/v1beta2
xref: #49135
This PR adds a new API version `apps/v1beta2` which contains a copy (of types, conversions, and defaults) of `apps/v1beta1` StatefulSet, Deployment, and their subresources. Note that `apps/v1beta2` is still WIP and we will make breaking changes to it before releasing 1.8.
Moving core controllers (StatefulSet, Deployment, ReplicaSet, DaemonSet) to `apps/v1beta2` is the first step of moving them to `apps/v1` (GA).
This PR is a starting point for DaemonSet and ReplicaSet to move from `/extensions` to `/apps` and for Deployment and StatefulSet to make some breaking changes (e.g. new defaults and/or remove deprecated fields).
```release-note
Add a new API version apps/v1beta2
```
Automatic merge from submit-queue
Reduce hollow proxy mem/node
As likely expected, kubemark-scale failed to even start with n1-standard-8 nodes, because 1/3rd of our hollow nodes didn't even get scheduled due to their requests:
```
I0720 17:45:08.139] Found only 3325 ready hollow-nodes while waiting for 5000.
I0720 17:45:20.435] 3326 hollow-nodes are reported as 'Running'
I0720 17:45:20.442] 1675 hollow-nodes are reported as NOT 'Running'
```
If we want to experiment with smaller nodes anyway, then this change is needed. Though we most likely will end up OOM'ing.
Explanation for the new value (a quick check of the arithmetic follows below):
We have 62.5 hollow-node / real-node
=> mem available per hollow node = 30GB / 62.5 = 480MB
minus 100MB (kubelet)
minus 20MB (npd)
=> 360MB for proxy should be = 100MB + 5000*(mem/node)
=> 50KB mem/node (with some slight slack)
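A quick, illustrative re-check of that arithmetic (values taken from the derivation above):
```go
package main

import "fmt"

func main() {
	const (
		hollowPerReal   = 62.5    // hollow-nodes per real node
		realNodeMemMB   = 30000.0 // ~30GB per real node
		kubeletMB       = 100.0
		npdMB           = 20.0
		proxyBaseMB     = 100.0
		hollowNodeCount = 5000.0
	)
	perHollowMB := realNodeMemMB / hollowPerReal     // 480MB per hollow node
	proxyBudgetMB := perHollowMB - kubeletMB - npdMB // 360MB left for the proxy
	perNodeKB := (proxyBudgetMB - proxyBaseMB) * 1024 / hollowNodeCount
	fmt.Printf("proxy memory per node ~= %.0f KB\n", perNodeKB) // ~53KB, taken as 50KB to leave slack
}
```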
cc @kubernetes/sig-scalability-misc
Automatic merge from submit-queue
Add an integration test library and some integration tests for scheduler
**What this PR does / why we need it**:
1. Add an integration test library (utils.go) for scheduler testing.
2. Cleaned up some of the tests in scheduler_test.go with the new integration test library.
3. Add priority_test.go with a couple of examples on how to test scheduler priority function in integration tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
ref/ #48176
@kubernetes/sig-scheduling-pr-reviews
@davidopp @k82cn @vikaschoudhary16
Automatic merge from submit-queue (batch tested with PRs 49328, 49285, 49307, 49127, 49163)
Cleanup storage e2e test names
**What this PR does / why we need it**:
Some test names had redundant [sig-storage] tags. Also, some tests still had [Volume] tag. This PR removes those tags.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Release note**:
```release-note
NONE
```
/release-note-none
/sig storage
Automatic merge from submit-queue (batch tested with PRs 49330, 49252, 49262, 49278, 49334)
Add project to pd delete node gcloud command
**What this PR does / why we need it**: Add `--project=` to `gcloud compute instances list` calls from `Pod Disks should be able to detach from a node which was deleted`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: Fixes https://github.com/kubernetes/kubernetes/issues/49185
**Special notes for your reviewer**:
CC @kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 49330, 49252, 49262, 49278, 49334)
Enable garbage collector e2e tests
These tests are not running in pre-submit: see 753266cb7d/jobs/config.json (L9207)
Automatic merge from submit-queue
remove redundant param in e2e_node/remote
**What this PR does / why we need it**:
* remove redundant param in e2e_node/remote/remote.go
* fix a small typo
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 49316, 46117, 49064, 48073, 49323)
add e2e tests for the bootstrapsigner and tokencleaner controllers, integration testing for bootstrap token auth
**What this PR does / why we need it**:
Add e2e test for bootstrap signer
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 49316, 46117, 49064, 48073, 49323)
Test Ubuntu image using GKE image spec on master
Ref: https://github.com/kubernetes/kubernetes/issues/46891
This PR changes the files referenced in test-infra for running Ubuntu image tests against GKE system spec on master.
The two properties files are shared by the tests against all k8s branches but the `SYSTEM_SPEC_NAME` is only available on master. This should be fine because the tests in the non master branches will just ignore the unknown env variable.
**Release note**:
```
None
```
/assign @yujuhong
Automatic merge from submit-queue (batch tested with PRs 49316, 46117, 49064, 48073, 49323)
Modular extensions for kube scheduler perf testing framework
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45973
**Special notes for your reviewer**:
It is not the same as the existing one: the previous one has a single nodeaffinity key with multiple values, while this one has multiple keys and values.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49107, 47177, 49234, 49224, 49227)
tighten quota controller interface
While debugging a quota performance problem, I had to chase some references deeper than necessary because the interfaces were overly broad. This tightens them.
```release-note
NONE
```
Automatic merge from submit-queue
Fix too extensive logging in Stackdriver Logging e2e tests
Since the logging pod now includes a cache, printing it out makes the build log unusable.
Automatic merge from submit-queue (batch tested with PRs 48377, 48940, 49144, 49062, 49148)
Add cos-dev-61-9733-0-0 to the benchmark tests
Ref: https://github.com/kubernetes/kubernetes/issues/42926
m60 has docker 1.13.1 while m61 has 17.03. This PR adds m61 to the benchmark tests so that we will have more data to compare.
PS: We will support fetching the latest image in an image family in the node e2e tests in the future.
**Release note**:
```
None
```
/assign @yujuhong
/cc @kewu1992 @abgworrall
Automatic merge from submit-queue (batch tested with PRs 48377, 48940, 49144, 49062, 49148)
fixit: break sig-cluster-lifecycle tests into subpackage
this is part of fixit week. ref #49161
@kubernetes/sig-cluster-lifecycle-misc
Automatic merge from submit-queue
Add PriorityClass API object under new "scheduling" API group
**What this PR does / why we need it**: This PR is a part of a series of PRs to add pod priority to Kubernetes. This PR adds a new API group called "scheduling" with a new API object called "PriorityClass". PriorityClass maps the string value of priority to its integer value.
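A hedged sketch of the object's shape as described above (illustrative field set; see the actual scheduling API group for the authoritative definition):
```go
package scheduling

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PriorityClass (illustrative): maps the name of a class to an integer priority value.
type PriorityClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Value is the integer priority given to pods that reference this class by name.
	Value int32 `json:"value"`
	// GlobalDefault marks this class as the default for pods that specify no class.
	GlobalDefault bool `json:"globalDefault,omitempty"`
	// Description tells users of the cluster when this class should be used.
	Description string `json:"description,omitempty"`
}
```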
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: Given the size of this PR, I will add the admission controller for the PriorityClass in a separate PR.
**Release note**:
```release-note
Add PriorityClass API object under new "scheduling" API group
```
ref/ #47604
ref/ #48646
Automatic merge from submit-queue (batch tested with PRs 47509, 46821, 45319, 49121, 49125)
Tolerate a missing MasterName (for GKE)
When testing, GKE-created clusters don't provide a `MasterName`, so don't throw a warning and give up when that happens.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47509, 46821, 45319, 49121, 49125)
volume i/o tests for storage plugins
**What this PR does / why we need it**:
Addresses issues [25268](https://github.com/kubernetes/kubernetes/issues/25268) and [28367](https://github.com/kubernetes/kubernetes/issues/28367), though it may be weak re. the streaming i/o issue. @matchstick
**Special notes for your reviewer**:
This is a new file. Plugins other than NFS, GlusterFS, iSCSI, and Ceph-RBD will need to be supported in a separate PR.
```release-note
NONE
```
Automatic merge from submit-queue
Bump e2e mounttest image version to 0.8
Reduce the number of image files required for e2e test run
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49143, 49211)
Add more logging to PD node delete test
**What this PR does / why we need it**: Adds more logging to `Pod Disks should be able to detach from a node which was deleted` test which started failing because `gcloud compute instances list` is failing with `exit status 1`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: https://github.com/kubernetes/kubernetes/issues/49185
**Special notes for your reviewer**:
**Release note**: none
Automatic merge from submit-queue (batch tested with PRs 49058, 49072, 49137, 49182, 49045)
*: remove --insecure-allow-any-token option
~Since the authenticator is still used in e2e tests, don't remove
the actual package. Maybe a follow up?~
edit: e2e and integration tests have been switched over to the tokenfile
authenticator instead.
```release-note
The --insecure-allow-any-token flag has been removed from kube-apiserver. Users of the flag should use impersonation headers instead for debugging.
```
closes #49031
cc @kubernetes/sig-auth-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 49116, 49095)
Move pkg/api/v1/ref -> client-go/tools/reference
`pkg/api/v1/ref` is the only remaining package copied from pkg/api/v1 to client-go via staging/copy.sh.
Automatic merge from submit-queue (batch tested with PRs 48043, 48200, 49139, 36238, 49130)
Implement equivalence cache by caching and re-using predicate result
This is the last part of #30844. I opened a new PR instead of overwriting the old one because we changed a basic assumption by allowing invalidation of equivalence cache items by individual predicate.
The idea of this PR is based on discussion in https://github.com/kubernetes/kubernetes/issues/32024; a generic sketch of the caching idea follows this PR description.
- [x] Pods belonging to the same controllerRef are considered to be equivalent
- [x] `podFitsOnNode` will use the cached predicate result if it's available
- [x] The equivalence cache will be updated when a fresh predicate evaluation is done
- [x] `factory.go` will invalidate specific predicate cache(s) based on the object change
- [x] Since `schedule` and `bind` are async, we need to optimistically invalidate affected cache(s) before `bind`
- [x] Full unit tests of affected files
- [x] e2e test to verify the cache update/invalidation workflow
- [x] performance test results
- [x] Some related nit fixes are expected to result in `needs-rebase`, so they are split into: #36060 #35968 #37512
cc @wojtek-t @davidopp
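A generic sketch of the caching idea from the checklist above (assumed names, not the scheduler's actual implementation):
```go
package equivcache

// resultKey identifies one cached predicate result: the pod's equivalence
// class (e.g. derived from its controllerRef), the predicate name, and the node.
type resultKey struct {
	class     string
	predicate string
	node      string
}

// EquivalenceCache stores fit/unfit results so equivalent pods can reuse them.
type EquivalenceCache struct {
	results map[resultKey]bool
}

func New() *EquivalenceCache {
	return &EquivalenceCache{results: make(map[resultKey]bool)}
}

// RunPredicate returns a cached fit result when one exists, otherwise it
// evaluates the predicate and stores the result for later equivalent pods.
func (c *EquivalenceCache) RunPredicate(class, predicate, node string, eval func() bool) bool {
	key := resultKey{class, predicate, node}
	if fit, ok := c.results[key]; ok {
		return fit
	}
	fit := eval()
	c.results[key] = fit
	return fit
}

// Invalidate drops cached results for one predicate on one node, e.g. after an
// object change, or optimistically before bind as described above.
func (c *EquivalenceCache) Invalidate(predicate, node string) {
	for key := range c.results {
		if key.predicate == predicate && key.node == node {
			delete(c.results, key)
		}
	}
}
```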
Automatic merge from submit-queue (batch tested with PRs 49055, 49128, 49132, 49134, 49110)
Remove affinity annotations leftover
**What this PR does / why we need it**:
This is a further cleanup for affinity annotations, following #47869.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: #47869
**Special notes for your reviewer**:
- I remove the commented test cases and just leave TODOs instead. I think converting these untestable test cases for now is not necessary. We can add new test cases in future.
- I remove the e2e test case `validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work` because we have a test case `validates that InterPod Affinity and AntiAffinity is respected if matching` to test the same thing.
/cc @aveshagarwal @bsalamat @gyliu513 @k82cn @timothysc
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48914, 48535, 49099, 48935, 48871)
Adopt debian-base as baseimage
**What this PR does / why we need it**:
Based on discussion from - https://github.com/kubernetes/kubernetes/pull/44910/files#r125150263
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49169
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48481, 48256)
Refactor: pkg/util into sub-pkgs
**What this PR does / why we need it**:
- move code in pkg/util into sub-pkgs
- delete some unused funcs
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #15634
**Special notes for your reviewer**:
This is the final work of #15634. It will close that issue.
/cc @thockin
**Release note**:
```release-note
NONE
```
Add PriorityClass to pkg/registry
Add PriorityClass to pkg/master/master.go
Add PriorityClass to import_known_versions.go
Update linted packages
minor fix
Automatic merge from submit-queue (batch tested with PRs 46094, 48544, 48807, 49102, 44174)
Shared Informer Run blocks until all goroutines finish
**What this PR does / why we need it**:
Makes the Shared Informer's Run method block until all goroutines it spawned finish. See #45454.
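A generic sketch (assumed names, not the actual SharedInformer code) of the blocking behavior this describes:
```go
package informerdemo

import "sync"

// run returns only after every goroutine it spawned has finished; each worker
// is expected to return once stopCh is closed.
func run(stopCh <-chan struct{}, workers int, worker func(<-chan struct{})) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			worker(stopCh)
		}()
	}
	wg.Wait() // block until all spawned goroutines have returned
}
```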
**Which issue this PR fixes**
Fixes #45454
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49019, 48919, 49040, 49018, 48874)
correcting spelling mistake
**What this PR does / why we need it**:
Correcting a spelling mistake.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Sanitize test names before using them as namespaces
**What this PR does / why we need it**: It sanitizes upgrade test names before creating namespaces.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48876
Automatic merge from submit-queue (batch tested with PRs 48231, 47377, 48797, 49020, 49033)
Update yaml and json with multi arch test images
**What this PR does / why we need it**:
This PR updates the yaml and json files under the test/images folder with multi-arch images
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48494, 48733)
Move test-webserver from contrib/for-demos to kubernetes/test/images
**What this PR does / why we need it**:
This PR is for
- Moving the https://github.com/kubernetes/contrib/tree/master/for-demos/test-webserver to kubernetes/test/images - Refer https://github.com/kubernetes/contrib/pull/2544 for more information
- Multi architecture support for test-webserver image
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 48991, 48908)
Group every two services into one in load test
Ref https://github.com/kubernetes/kubernetes/issues/48938
Following from discussion with @bowei and @freehan .
This reduces the number of services to 8200 while keeping the number of backends the same.
/cc @wojtek-t @gmarek
Automatic merge from submit-queue
Remove old, core/v1 specific constructs from RESTClient
Now that metav1 is abstracted from the APIs, RESTClient should also be agnostic to the core API.
* Remove `LabelSelectorParam` and `FieldSelectorParam` - use `VersionedParams` with `ListOptions` (see the sketch after this list)
* Remove `UintParam`
* Remove all legacy field selector logic from `VersionedParams` - ParameterCodec now handles that
* Remove special parameters (like `timeout`) which are no longer set by most clients
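A hedged sketch of the replacement suggested in the first bullet (the `restClient` setup and exact package paths are assumed): selectors are encoded through `ListOptions` and `VersionedParams` rather than the removed helpers.
```go
opts := metav1.ListOptions{
	LabelSelector: "app=nginx",
	FieldSelector: "status.phase=Running",
}
// result can then be decoded into the expected list type.
result := restClient.Get().
	Namespace("default").
	Resource("pods").
	VersionedParams(&opts, scheme.ParameterCodec).
	Do()
```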
Automatic merge from submit-queue (batch tested with PRs 47417, 47638, 46930)
Added scheduler integration test owners.
**What this PR does / why we need it**:
Add OWNER file into scheduler integration test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note-none
```
Automatic merge from submit-queue (batch tested with PRs 48890, 46893, 48872, 48896)
Support customized system spec in the node conformance test and create the GKE system spec
ref: https://github.com/kubernetes/kubernetes/issues/46891
- System specs are located in `test/e2e_node/system/specs`. Created one for validating GKE images in `test/e2e_node/system/specs/gke.yaml`.
- `--image-spec-name` can be used to specify a system spec in node e2e and conformance tests. This option maps to `SYSTEM_SPEC_NAME` in a test properties file, which is the user facing configuration. So, users can specify `SYSTEM_SPEC_NAME=gke` to run the image validation using the GKE system spec.
- If `SYSTEM_SPEC_NAME` is unspecified, the default spec (`system.DefaultSysSpec`) will be used.
- We can also use `make test-e2e-node SYSTEM_SPEC_NAME=gke` to run tests using GKE image spec.
**Release note**:
`None`
Automatic merge from submit-queue
Move api-machinery related e2e tests to an 'api-machinery' e2e test subdirectory.
**What this PR does / why we need it**:
Moves all e2e tests belonging to sig-api-machinery into a dedicated `test/e2e/apimachinery ` directory and updates the tests to use `SigDescribe` to prepend `[sig-api-machinery]` to the testnames of all apimachinery owned tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48578, 48895, 48958)
move sig-apps upgrade tests to its directory
**What this PR does / why we need it**: This PR moves the sig-apps upgrade tests to their own directory, in accordance with the fixit requirements.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48839
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Azure PD (Managed/Blob)
This is exactly the same code as this [PR](https://github.com/kubernetes/kubernetes/pull/41950). It has a clean set of generated items. We created a separate PR to accelerate accepting/merging the PR.
CC @colemickens
CC @brendandburns
**What this PR does / why we need it**:
1. Adds K8S support for Azure Managed Disks.
2. Adds support for dedicated blob disks (1:1 to storage account) in addition to shared blob disks (n:1 to storage account).
3. Automatically manages the underlying storage accounts. New storage accounts are created at 50% utilization. Max is 100 disks, 60 disks per storage account.
4. Addresses the current issues with Blob Disks:
..* Significantly faster attach process. Disks are now usually available for pods on nodes under 30 sec if formatted, under a min if not formatted.
..* Adds support to move disks between nodes.
..* Adds consistent attach/detach behavior, checks if the disk is leased/attached on a different node before attempting to attach to target nodes.
..* Fixes a random hang behavior on Azure VMs during mount/format (for both blob + managed disks).
..* Fixes a potential conflict by avoiding the use of disk names for mount paths. The new plugin uses hashed disk uri for mount path.
The existing AzureDisk is used as is. Additional "kind" property was added allowing the user to decide if the pd will be shared, dedicated or managed (Azure Managed Disks are used).
Due to the change in mounting paths, existing PDs need to be recreated as PV or PVCs on the new plugin.
Automatic merge from submit-queue (batch tested with PRs 48864, 48651, 47703)
Enable logexporter mechanism to dump logs from k8s nodes to GCS directly
Ref https://github.com/kubernetes/kubernetes/issues/48513
This adds support for logexporter from k8s side. Next I'll send a PR adding support from test-infra side.
/cc @kubernetes/sig-scalability-misc @kubernetes/test-infra-maintainers @fejta @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 48082, 48815, 48901, 48824)
Import kubectl tests in e2e_test.go so they start running.
```release-note
NONE
```
Automatic merge from submit-queue
add dockershim checkpoint node e2e test
Add a bunch of disruptive cases to test kubelet/dockershim's checkpoint work flow.
Some steps are quite hacky. Not sure if there are better ways to do things.
Automatic merge from submit-queue (batch tested with PRs 47999, 47890)
[Federation] Update namespace support to use the sync controller
This PR moves namespaces to use the sync controller.
cc: @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 47999, 47890)
LocalPV e2e test with two pods writing/reading to a shared file on one PVC
**What this PR does / why we need it**:
This PR adds an e2e test where one pod writes to a localPV and another reads from it. This e2e test is for the PersistentLocalVolumes alpha feature.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/48832
**Special notes for your reviewer**:
You need to set up the PersistentLocalVolumes feature gate and the NODE_LOCAL_SSDS environment variable.
For example:
```
KUBE_FEATURE_GATES="PersistentLocalVolumes=true" NODE_LOCAL_SSDS=1 go run hack/e2e.go -- -v --up
go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=\[Feature:LocalPersistentVolumes\]"
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48555, 48849)
GCE: Fix panic when service loadbalancer has static IP address
Fixes #48848
```release-note
Fix service controller crash loop when Service with GCP LoadBalancer uses static IP (#48848, @nicksardo)
```
Automatic merge from submit-queue
add [sig-apps] identifier to relevant upgrade tests
**What this PR does / why we need it**: This PR adds [sig-apps] identifier to relevant upgrade tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #48839
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46738, 48827, 48831)
Moving disruption controller e2es to workload/
Based on #45301
Moving to track sig-apps in a single directory
cc @kubernetes/sig-contributor-experience-misc @kubernetes/sig-apps-misc @erictune @kow3ns @crimsonfaith91
Automatic merge from submit-queue
StatefulSet upgrade test - replicated database (mysql)
**What this PR does / why we need it**:
Adds a new upgrade test. The test creates a statefulset with a replicated mysql database. It populates the database and then continually polls the data while performing an upgrade.
Ultimately, this PR increases confidence of reliability during upgrades. It helps show that StatefulSets and Pod Disruption Budgets are doing what they're supposed to. Code to pay attention to this was added for #38336.
Also vendors in a golang mysql client driver, for use in the test.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48781, 48817, 48830, 48829, 48053)
Move kubectl e2e tests to their own directory and prefix the test nam…
```release-note
NONE
```
Automatic merge from submit-queue
Make storage e2e tests start with [sig-storage] instead of [k8s.io].
This makes understanding sig ownership from a test name very easy for
tools and humans.
- Use a SIGDescribe helper function that adds a [sig-storage] prefix instead of [k8s.io] for tests in storage/
- Move a test that should be in storage into storage.
- Make tests owned by multiple SIGs (configmap test) have [sig-storage] instead of [Volume] labels.
This means that all tests that sig-storage directly owns can be found with a simple regex.
/cc @kubernetes/sig-storage-pr-reviews
**What this PR does / why we need it**:
This will be used to make a testgrid dashboard for sig-storage.
**Release note**:
```release-note
NONE
```
Issue #48779
Automatic merge from submit-queue (batch tested with PRs 48594, 47042, 48801, 48641, 48243)
Prepare to introduce websockets for exec and portforward
Refactor the code in remotecommand to better represent the structure of
what is common between portforward and exec.
Ref #48633
Automatic merge from submit-queue (batch tested with PRs 48425, 41680, 48457, 48619, 48635)
[Federation] Remove redundant e2e
Now that federation of replicasets and deployments is implemented with the sync controller, the previous crud e2e duplicates coverage provided by the crudtester integration and e2e testing.
cc: @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 48405, 48742, 48748, 48571, 48482)
Remove unused parameter
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Move performance tests to test/e2e/scalability subdirectory
Following the move to make e2e tests more organized and accountable to the respective SIGs.
cc @kubernetes/sig-scalability-pr-reviews @wojtek-t @gmarek @grodrigues3
Automatic merge from submit-queue (batch tested with PRs 48698, 48712, 48516, 48734, 48735)
Name change: s/timstclair/tallclair/
I changed my name, and I'm migrating my user name to be consistent.
Automatic merge from submit-queue (batch tested with PRs 46865, 48661, 48598, 48658, 48614)
Move metrics_grabber to test/e2e
cc @aleksandra-malinowska
Automatic merge from submit-queue
kube-apiserver: tests for aggregation and CRDs via delegation
In our integration tests we do not use the real kube-apiserver setup code, but mock our own. Here I use the actual `cmd/kube-apiserver/app.Run()` func with a test etcd server. This can test the whole delegation chain of aggregator, apiextensions and kube-apiserver.
Automatic merge from submit-queue (batch tested with PRs 47232, 48625, 48613, 48567, 39173)
Include leaderelection in client-go;
Fixes #39117
Fix https://github.com/kubernetes/client-go/issues/28
This PR:
* includes the leaderelection package in the staging client-go (see the sketch below)
* to avoid conflict with golang's testing package, renames package /testing to /testutil, and renames cache/testing to cache/testframework
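For illustration, leader election with the included package looks roughly like this (a sketch against a recent client-go API; the lock type, names, and durations are placeholders):
```go
// Sketch: acquiring leadership with client-go's leaderelection package.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The lock object that leader candidates compete to hold.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-lock", Namespace: "default"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "candidate-1"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start controller loops */ },
			OnStoppedLeading: func() { /* relinquish leadership */ },
		},
	})
}
```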
```release-note
client-go now includes the leaderelection package
```
Automatic merge from submit-queue (batch tested with PRs 48497, 48604, 48599, 48560, 48546)
remove dead code
This removes the dead code cruft since we stopped serving TPRs.
ref #48152
Automatic merge from submit-queue (batch tested with PRs 48497, 48604, 48599, 48560, 48546)
GCE: Use network project id for firewall/route mgmt and zone listing
- Introduces a new environment variable for plumbing the network project id which will be used for firewall and route management. fixes #48515
- onXPN is determined by metadata if config is not specified
- Split `if` conditions: fixes #48521
- Remove `getNetworkNameViaAPICall` which was used as a last resort for the `networkURL` (if empty) which was previously filled with the metadata network project & name.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Delete unused return
**What this PR does / why we need it**:
The function's return value is never used, so it's better not to declare one.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48374, 48524, 48519, 42548, 48615)
Add a node upgrade test for etcd statefulset with a PDB.
Tests for #38336
Automatic merge from submit-queue
Small fix for number of pods and nodes in test function
**What this PR does / why we need it**:
Small fix so the test function uses the correct numbers of pods and nodes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Scheduler perf: fix the node and pod counts for the 100-node / 3k-pod configuration.
**Release note**:
```release-note
NONE
```
Now that federation of deployments is implemented with the sync
controller, the existing e2e coverage duplicates that provided by the
crudtester integration and e2e testing.
Now that replicaset federation is implemented with the sync
controller, the old replicaset-specific e2e tests duplicate coverage
provided by crudtester integration and e2e testing.
Automatic merge from submit-queue
Add ownership for the future of scheduler_perf and kubemark
**What this PR does / why we need it**:
The scheduler_perf project is cross-cutting with the other goals of the performance and scale initiatives, so I've put together a list of interested parties who have been running, using, and contributing to it.
cc @kubernetes/sig-scheduling-pr-reviews @ravisantoshgudimetla @sjug
Automatic merge from submit-queue
Launch kubemark with an existing Kubemark master
In order to expand the use of kubemark, allow developers to use kubemark with a pre-existing Kubernetes cluster.
Ref issue #44393
Automatic merge from submit-queue
[e2e-ingress] Get node tag from instance under GKE
**What this PR does / why we need it**: Making ingress CI green again.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48167
**Special notes for your reviewer**:
/assign @nicksardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47162, 48444, 48445)
Introducing a cluster-scoped resource in the wardle.k8s.io group.
**What this PR does / why we need it**:
This PR adds a cluster-scoped resource to the wardle.k8s.io group.
The cluster scoped resource has a field that indicates Flunder.Names that are disallowed.
The resource is going to be used by an admission plugin.
The admission plugin will list the cluster-scope resources and check against banned names.
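Hypothetically, the admission check could look something like the sketch below; the package, type, and field names are placeholders, not the wardle API as merged:
```go
// Hypothetical sketch of an admission check against a cluster-scoped "banned names" list.
package bannednames

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apiserver/pkg/admission"
)

// policy stands in for the cluster-scoped resource described above.
type policy struct {
	DisallowedNames []string // names that must be rejected
}

type bannedNamesPlugin struct {
	*admission.Handler
	listPolicies func(labels.Selector) ([]policy, error) // stands in for a lister
}

func (p *bannedNamesPlugin) Admit(a admission.Attributes) error {
	// List the cluster-scoped policy objects and reject disallowed names.
	policies, err := p.listPolicies(labels.Everything())
	if err != nil {
		return admission.NewForbidden(a, err)
	}
	for _, pol := range policies {
		for _, banned := range pol.DisallowedNames {
			if a.GetName() == banned {
				return admission.NewForbidden(a, fmt.Errorf("name %q is disallowed", banned))
			}
		}
	}
	return nil
}
```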
**Special notes for your reviewer**:
Issue: #47868
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48480, 48353)
remove tpr api access
xref https://github.com/kubernetes/kubernetes/issues/48152
TPR tentacles go pretty deep. This gets us started by removing API access and we'll move down from there.
@kubernetes/sig-api-machinery-misc
@ironcladlou this should free up the GC implementation since TPRs will no longer be present and failing.
```release-note
Removing TPR api access per https://github.com/kubernetes/kubernetes/issues/48152
```
Automatic merge from submit-queue (batch tested with PRs 47043, 48448, 47515, 48446)
Fix secret/configmap/projected volume update tests to work for large clusters
Fixes https://github.com/kubernetes/kubernetes/issues/48359
/cc @kubernetes/sig-node-pr-reviews @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 46926, 48468)
Fix typo in cluster size autoscaling tests selector
This caused the tests not to be run automatically.
Automatic merge from submit-queue (batch tested with PRs 47784, 47793, 48334, 48435, 48354)
Convert Stackdriver Logging load e2e tests to soak tests
Instead of loading the logging mechanism for 10 minutes, load it for 21 hours to detect regressions that take some time to build up.
Made possible by switching to pub/sub. Only merge after corresponding test suites have appropriate timeouts: https://github.com/kubernetes/test-infra/pull/3119
/cc @piosz @fgrzadkowski
Automatic merge from submit-queue (batch tested with PRs 47918, 47964, 48151, 47881, 48299)
Add ApiEndpoint support to GCE config.
**What this PR does / why we need it**:
Add the ability to change ApiEndpoint for GCE.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 48295, 48298, 47339, 44910, 48037)
Make Makefiles in `test/images/` compatible with multiple architectures
**What this PR does / why we need it**:
This PR makes the test images multi-architecture for platforms like amd64, arm, arm64, and ppc64le.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31331
**Special notes for your reviewer**:
- Actual tests need to be modified to use these images based on the architecture later.
- Not covering the cross-building of docker images for the `s390x` platform due to a problem faced while running containers with `qemu-s390x-static`
- Will submit separate PR for `volume and pet` test images
- This PR depends on - https://github.com/kubernetes/ingress/pull/587
**Release note**:
```NONE```
Automatic merge from submit-queue (batch tested with PRs 46336, 47643)
Add node e2e tests for runAsUser
**What this PR does / why we need it**:
This PR adds node e2e tests for runAsUser.
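For context, such a test builds a pod around a `securityContext` with `runAsUser`, roughly like this sketch (image, name, and UID are arbitrary):
```go
// Minimal sketch of a pod spec that exercises runAsUser (values are arbitrary).
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func runAsUserTestPod() *v1.Pod {
	uid := int64(1001)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "run-as-user-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"}, // the test can assert this prints 1001
				SecurityContext: &v1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
}
```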
**Which issue this PR fixes**
Part of #44118.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47850, 47835, 46197, 47250, 48284)
Do not fail on error when deleting ingress
Fixes #48239
If the api server or master is unavailable, the test should manually tear down load balancer resources.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47850, 47835, 46197, 47250, 48284)
Allocate clusterIP when changing service type from ExternalName to ClusterIP
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #35354, #46190
**Special notes for your reviewer**:
/cc @smarterclayton @thockin
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48214, 48154)
Adding a retry and traceroute to the master version checking
This is hitting a lot of connection refused errors in the e2e upgrade tests. We should make this more robust in case these are intermittent network errors. In the event of an error, attempt to log a traceroute to the master.
cc @kubernetes/sig-cluster-lifecycle-bugs @dchen1107
#47379
Automatic merge from submit-queue (batch tested with PRs 48168, 48199)
Fix some flakes in autoscaler e2e on gke
This PR should fix some of the flakes we found in e2e runs, while testing for 1.7 release:
- if one of the nodes is unschedulable during setup (causing setup to fail), we used to wait for the wrong number of nodes during cleanup, adding an unnecessary 20-minute wait to an already failing test
- we did not check for errors when creating the RC in the test, leading to tests failing later in hard-to-debug ways (added a retry loop and an explicit test failure)
Automatic merge from submit-queue (batch tested with PRs 48004, 48205, 48130, 48207)
Add e2e tests for CA scale up when pending pod requests volume
Test verifying pending pods with PVC don't interfere with scale up, issue: kubernetes/autoscaler#22
Automatic merge from submit-queue
Changes node e2e tests to use the new Ubuntu image
ref: https://github.com/kubernetes/kubernetes/issues/46891
`ubuntu-docker10` and `ubuntu-docker12` images are deprecated in favor of the new one.
**Release note**:
```
None
```
/sig node
/area node-e2e
/assign @dchen1107
Automatic merge from submit-queue (batch tested with PRs 48118, 48159)
Ensures node becomes schedulable at the end of tests that delete nodes
**What this PR does / why we need it**: Further fixes the flakiness of "Pod Disk should be able to detach from a node which was deleted". When a node became ready but was not schedulable, it was not included in the final node count.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48008
**Special notes for your reviewer**: Updated a similar test, "Pod Disk should be able to detach from a node whose api object was deleted", to use an "Expect" instead of a soft error because the test needs to guarantee that the environment is *completely* reset.
**Release note**:
```release-note-none
```
Automatic merge from submit-queue (batch tested with PRs 45610, 47628)
Replace capacity with allocatable to calculate pod resource
It is not accurate to use capacity to do the calculation.
**What this PR does / why we need it**:
The current CPU resource calculation for a pod in the end-to-end test is incorrect.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #47627
**Special notes for your reviewer**:
More details about capacity and allocatable:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md
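To illustrate the difference, a sketch of summing schedulable CPU from allocatable rather than capacity (the helper name is hypothetical):
```go
// Sketch: compute schedulable milli-CPU from Allocatable rather than Capacity.
package main

import v1 "k8s.io/api/core/v1"

func totalSchedulableMilliCPU(nodes []v1.Node) int64 {
	var total int64
	for _, node := range nodes {
		// Allocatable = Capacity minus system/kube reserved and eviction thresholds,
		// so it reflects what the scheduler can actually place pods against.
		cpu := node.Status.Allocatable[v1.ResourceCPU]
		total += cpu.MilliValue()
	}
	return total
}
```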
**Release note**:
NONE
Automatic merge from submit-queue (batch tested with PRs 48139, 48042, 47645, 48054, 48003)
Pipe clusterID into gce_loadbalancer_external.go
**What this PR does / why we need it**: Small cleanup for GCE ELB codes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48002
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Make big clusters work again after introduction of subnets
This PR does two things:
- make IP aliases automatically pick Node IP Range based on number of Nodes,
- fix logic for starting clusters >4095 Nodes that was broken by introduction of subnets,
cc @wojtek-t @shyamjvs
```release-note
Setting env var ENABLE_BIG_CLUSTER_SUBNETS=true will allow kube-up.sh to start clusters bigger than 4095 Nodes on GCE.
```
Ref https://github.com/kubernetes/kubernetes/issues/47344
Automatic merge from submit-queue (batch tested with PRs 48092, 47894, 47983)
Skip Deployment upgrade test on 1.5 and earlier.
The test relies on implementation details and would need a rewrite to work for older clusters.
xref #47685
Automatic merge from submit-queue (batch tested with PRs 48012, 47443, 47702, 47178)
Extending timeout waiting for delete node to become ready before the test ends.
**What this PR does / why we need it**: It seems to take longer than 5 minutes for the node to recover. Changing the timeout to 10 minutes.
This is an extension of PR #46746
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48008
/release-note-none
Automatic merge from submit-queue (batch tested with PRs 48074, 47971, 48044, 47514, 47647)
e2e: bump kubelet's resource usage limit
We don't have per-OS image limits. Bump these to more generous numbers so the tests do not fail.
Automatic merge from submit-queue (batch tested with PRs 47656, 47488)
Checked whether balanced Pods were created.
**What this PR does / why we need it**:
In #47382, it seems the balanced Pods were not created; this PR checks whether the balanced Pods were created, and shows logs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: partially fixes #47382
**Release note**:
```release-note-none
```
Automatic merge from submit-queue
Move the workload e2e tests to their own package
**What this PR does / why we need it**:
Move the workload e2e tests to their own package.
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 47961, 46276)
Bumped Heapster to v1.4.0-beta.0
Heapster release candidate for Kubernetes 1.7
cc @dchen1107 @caesarxuchao
Automatic merge from submit-queue (batch tested with PRs 47650, 47936, 47939, 47986, 48006)
Remove e2e test that checked scheduler priority function for spreading of ReplicationController pods
**What this PR does / why we need it**:
Remove e2e test that checked scheduler priority function for spreading of ReplicationController pods.
The test is flaky in our test clusters, but passes in our local clusters. It looks like our e2e test clusters are not deterministic enough for checking all of the scheduler priority functions. Besides, this functionality is checked extensively by our unit tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47769
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47650, 47936, 47939, 47986, 48006)
[esipp-e2e] Change service port to avoid collision
**What this PR does / why we need it**: As https://github.com/kubernetes/kubernetes/issues/47745#issuecomment-310750499 indicates, changing service port in test to avoid collision.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47745
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Move e2e fromManifest funcs to manifest package
fixes https://github.com/kubernetes/kubernetes/issues/37007 again
The goal here is to remove e2e/framework's dependence on e2e/generated so that external projects can import e2e/framework without issue. I reorganize and try to make all the 'fromManifest' funcs consistent (all use generated and return err) to prevent any reintroduction of generated into framework.
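Roughly, each helper ends up with a shape like the sketch below (illustrative only; `generated.ReadOrDie` and the concrete types are assumptions here):
```go
// Sketch of a consistent "fromManifest" helper: read the file through the
// generated bindata package and return an error rather than failing in framework.
package manifest

import (
	extensions "k8s.io/api/extensions/v1beta1"
	"k8s.io/kubernetes/test/e2e/generated"
	"sigs.k8s.io/yaml"
)

func IngressFromManifest(file string) (*extensions.Ingress, error) {
	var ing extensions.Ingress
	data := generated.ReadOrDie(file) // bindata lookup keeps generated out of framework
	if err := yaml.Unmarshal(data, &ing); err != nil {
		return nil, err
	}
	return &ing, nil
}
```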
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47993, 47892, 47591, 47469, 47845)
deprecate created-by annotation for cronjob
**What this PR does / why we need it**: This PR deprecates created-by annotation for cronjob. This is needed as we now have ControllerRef.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #44407
**Special notes for your reviewer**: I will create 3 PRs to fix the issue as the annotation is used in various parts of the codebase: cronjob, pod drain, and e2e test framework. This is the first PR. Other PRs can be found here: #47471, #47475
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47993, 47892, 47591, 47469, 47845)
Bump up npd version to v0.4.1
```
Bump up npd version to v0.4.1
```
Fixes #47219
Automatic merge from submit-queue (batch tested with PRs 47883, 47179, 46966, 47982, 47945)
Remove e2e test for least requested priority function
**What this PR does / why we need it**:
Deletes a flaky e2e test for "least requested" priority function of the scheduler. This priority function is tested more extensively in our unit tests.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47769
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47915, 47856, 44086, 47575, 47475)
deprecate created-by annotation for e2e test framework
**What this PR does / why we need it**: This PR deprecates created-by annotation for e2e test framework. This is needed as we now have ControllerRef.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #44407
**Special notes for your reviewer**: This is the third PR for deprecating created-by annotation. Other PRs can be found here: #47469, #47471
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47403, 46646, 46906, 46527, 46792)
[Federation] Convert the ReplicaSet controller to a sync controller.
See #45563 for previous discussion: that PR was split into two, with this one containing the actual conversion work and that one containing the implementation of the scheduling methods in the sync controller.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47403, 46646, 46906, 46527, 46792)
E2E: Delete unnecessary check
**What this PR does / why we need it**:
Delete a meaningless check; the deleted check is redundant.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47922, 47195, 47241, 47095, 47401)
Run cAdvisor on the same interface as kubelet
**What this PR does / why we need it**:
cAdvisor currently binds to all interfaces, and the only workaround is to use iptables to block access to the port. For better security, we are better off making cAdvisor bind to the interface that kubelet uses.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #11710
**Special notes for your reviewer**:
**Release note**:
```release-note
cAdvisor binds only to the interface that kubelet is running on instead of all interfaces.
```
Automatic merge from submit-queue
Remove initial resources e2e
This is just a cleanup PR. We won't continue the initial resources effort; it is dropped in favor of VPA, developed in kubernetes/autoscaler.
cc: @piosz @jszczepkowski @kgrygiel
Automatic merge from submit-queue (batch tested with PRs 47878, 47503, 47857)
restore working aggregator and avoid duplicate informers
Fixes https://github.com/kubernetes/kubernetes/issues/47866
This runs the informer all the way through and makes sure it's started.
@lavalamp ptal
@kubernetes/sig-api-machinery-bugs
Automatic merge from submit-queue (batch tested with PRs 47851, 47824, 47858, 46099)
Revert "[Federation] Fix federated service reconcilation issue due to addition of External…"
Reverts kubernetes/kubernetes#45798
Reverting the temporary fix, as the problem is fixed in #45869.
With that fix, federation can also default ExternalTrafficLocalOnly if not set.
Issue: #45812
cc @MrHohn @madhusudancs @kubernetes/sig-federation-bugs
Automatic merge from submit-queue (batch tested with PRs 47851, 47824, 47858, 46099)
Revert 44714 manually
#44714 broke backward compatibility for the old swagger spec that kubectl still uses. The decision on #47448 was to revert this change, but the change was not automatically revertible. Here I semi-manually remove all references to UnixUserID and UnixGroupID and update the generated files accordingly.
Please wait for tests to pass before reviewing, as there may still be failing tests.
Fixes #47448
Adding release note just because the original PR has a release note. If possible, we should remove both release notes as they cancel each other.
**Release note**: (removed by caesarxuchao)
UnixUserID and UnixGroupID are reverted back to int64 to keep backward compatibility.
Automatic merge from submit-queue
Add E2E tests for Azure internal loadbalancer support, fix an issue for public IP resource deletion.
**What this PR does / why we need it**:
- Add E2E tests for Azure internal loadbalancer support: https://github.com/kubernetes/kubernetes/pull/43510
- Fix an issue where the public IP resource does not get deleted when switching from an external load balancer to an internal load balancer with a static IP.
**Special notes for your reviewer**:
1. Add a new Azure resource tag to Public IP resources to indicate Kubernetes-managed resources.
Currently we determine whether the public IP resource should be deleted by looking at the LoadBalancerIP property on the spec. In the scenario of switching from an external load balancer to an internal load balancer with a static IP, that value might have been updated for the internal load balancer. So here we add an explicit tag for Kubernetes-managed resources.
2. Merge cleanupPublicIP logic into cleanupLoadBalancer
**Release note**:
NONE
CC @brendandburns @colemickens
Automatic merge from submit-queue
Further reduce cluster-autoscaler e2e flakiness
Ref: https://github.com/kubernetes/autoscaler/issues/89
Add PDBs for an additional kube-system pod, and move adding PDBs into a separate function, as it will need to be reused in new tests we're working on (e.g. scale to 0).
Automatic merge from submit-queue
Added an e2e test timing HPA + CA scaling up from 1 to 8 pods and from 3 to >=4 nodes
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#46847
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add new e2e test for cluster size autoscaler (evicting system pods)
This test verifies that cluster autoscaler drains nodes with system pods running if they have a PDB.
Automatic merge from submit-queue (batch tested with PRs 47726, 47693, 46909, 46812)
Plumb service resolver into webhook AC
This is the last piece of plumbing needed for https://github.com/kubernetes/features/issues/209
Automatic merge from submit-queue (batch tested with PRs 47726, 47693, 46909, 46812)
Additional e2e for StatefulSet Update
**What this PR does / why we need it**:
This PR adds additional e2e tests for StatefulSet update
fixes: #46942
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46327, 47166)
mark --network-plugin-dir deprecated for kubelet
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #43967
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Fix flaky cluster-autoscaler e2e
Ref: https://github.com/kubernetes/autoscaler/issues/89
Add a PDB to allow cluster-autoscaler to drain nodes with some kube-system components (it turns out there can be enough of them to deny scale-down even with 5 healthy nodes). Increase scaleDownTimeout to take into account the time it will take to re-schedule pods running on the broken node (this may reset the scale-down timer).
Automatic merge from submit-queue (batch tested with PRs 47530, 47679)
Fix failing CassandraStatefulSet test in examples suite
Fix part of: https://github.com/kubernetes/kubernetes/issues/45677
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47530, 47679)
Use cos-stable-59-9460-64-0 instead of cos-beta-59-9460-20-0.
Remove dead code that has now moved to another repo as part of #47467
**Release note**:
```release-note
NONE
```
/sig node
- It contains a fix for IP aliasing.
- It contains a fix which decouples GPU driver installation from kernel
version.
Automatic merge from submit-queue
Fixed e2e test flake - ClusterDns - should create pod that uses dns
**What this PR does / why we need it**:
The string replaced in this test's example pod YAML file (dns-frontend-pod.yaml) is incorrect.
**Which issue this PR fixes**: fixes #45915
Internal attach/detach controller timers should be configurable, and tests should use much shorter values.
reconcilerSyncDuration is deliberately left out of TimerConfig because it is the only one that is not a constant; it is configurable by the user.
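A rough sketch of the idea; the field names and defaults below are hypothetical, not the controller's actual TimerConfig:
```go
// Hypothetical sketch of a configurable timer struct for the attach/detach controller.
package attachdetach

import "time"

type TimerConfig struct {
	ReconcilerLoopPeriod                        time.Duration
	ReconcilerMaxWaitForUnmountDuration         time.Duration
	DesiredStateOfWorldPopulatorLoopSleepPeriod time.Duration
	// reconcilerSyncDuration is intentionally not here: it is user-configurable
	// rather than a constant, so it keeps its own flag.
}

// DefaultTimerConfig returns production values; tests can substitute much
// shorter ones to keep integration and e2e runs fast.
func DefaultTimerConfig() TimerConfig {
	return TimerConfig{
		ReconcilerLoopPeriod:                        100 * time.Millisecond,
		ReconcilerMaxWaitForUnmountDuration:         6 * time.Minute,
		DesiredStateOfWorldPopulatorLoopSleepPeriod: 1 * time.Minute,
	}
}
```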
Automatic merge from submit-queue
Add some debug info for deployment e2e testing
Add some debug info to print out all the ReplicaSets if no deployment object is created, and add an enhancement that waits for the pod to become ready.
**Release note**:
```
None
```
Automatic merge from submit-queue
Fix flaking cluster-autoscaler e2e
Ref: https://github.com/kubernetes/autoscaler/issues/89
Scale-down e2e tests require 5 healthy nodes to pass reliably; this PR should fix the flaking "should correctly scale down after a node is not needed and one node is broken" e2e.
Automatic merge from submit-queue (batch tested with PRs 47492, 47542, 46800, 47545, 45764)
delete dependent pods for rs when deleting deployments
Fixes #44046, where a user reported that the garbage collector didn't delete pods when a deployment was deleted with PropagationPolicy=Background.
Automatic merge from submit-queue (batch tested with PRs 47492, 47542, 46800, 47545, 45764)
separate group and version priority
Fixes https://github.com/kubernetes/kubernetes/issues/46322
This just modifies the API and does the minimal plumbing. I can extend this pull or do another to fix the priority problem.
Automatic merge from submit-queue
Don't test the debug /logs endpoint on GKE.
GKE will not enable the /logs endpoint in 1.7. I'd like this test to still test the other cluster level endpoints.
Automatic merge from submit-queue
Do not add unique label to DaemonSet
**What this PR does / why we need it**:
It's mainly for #46925. The DaemonSet controller adds a unique label to the DaemonSet, which is unexpected by federation.
The 1st commit addressed #46981 to construct history once and pass it around, so that we can avoid adding that unique label in DaemonSet in the 2nd commit. ~The 3rd commit just reverts the band-aid PR #47103.~
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46925, xref #46981
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47470, 47260, 47411, 46852, 46135)
Write reports for each upgrade test
Due to the way Ginkgo runs individual test cases and the level of coordination required for the upgrade tests, they were all run under a single Ginkgo test case. This PR generates an auxiliary report that breaks out the results of each upgrade test. This is accomplished by:
1) Wrapping `ginkgo.Fail` and `ginkgo.Skip` to get the actual failure or skip messages (see the sketch after this list).
2) Recovering that info in the upgrade test to generate an auxiliary report.
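A minimal sketch of the wrapping idea from step 1 (the recorded variable is illustrative):
```go
// Sketch of wrapping ginkgo.Fail so the real failure message can be recovered
// later for the per-upgrade-test report; lastFailure is an illustrative variable.
package framework

import "github.com/onsi/ginkgo"

var lastFailure string

func Fail(message string, callerSkip ...int) {
	lastFailure = message // remember the message for the auxiliary report
	if len(callerSkip) > 0 {
		callerSkip[0]++ // account for this extra stack frame
	} else {
		callerSkip = []int{1}
	}
	ginkgo.Fail(message, callerSkip...)
}
```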
I suggest reviewing commit by commit.
Sample report: https://storage.googleapis.com/krouseytestreports/logs/results/1/artifacts/junit_upgrades.xml
Fixes: #47371
Automatic merge from submit-queue
test/kubemark/resources: configure custom etcd endpoints
We want to stress our own etcd cluster with Kubernetes
workloads, using kubemark e2e tests. This PR adds a new
environment variable 'ETCD_SERVERS' to configure custom
etcd endpoints.
/cc @xiang90 @hongchaodeng
Automatic merge from submit-queue (batch tested with PRs 47302, 47389, 47402, 47468, 47459)
Update to kube-addon-manager:v6.4-beta.2: kubectl v1.6.4 and refreshed base images
**What this PR does / why we need it**: refreshes base images for kube-addon-manager with fixes for CVE-2016-9841 and CVE-2016-9843.
x-ref https://github.com/kubernetes/kubernetes/issues/47386
**Special notes for your reviewer**: the updated images are not yet pushed, so tests will fail until that's done.
**Release note**:
```release-note
```
/assign @MrHohn
Automatic merge from submit-queue
Update GPU e2e tests.
* Use nvidia driver installer from external repo.
That installer decouples itself from COS image version (as long as the
image version is newer than cos-stable-59-9460-60-0).
A separate commit in the test-infra repo will update the cos version
used for this test to cos-stable-59-9460-60-0.
* Use cos-stable-59-9460-60-0 and newer installer for GPU node e2e tests.
This is to enable #47388.
This supersedes #47091.
**Release note**:
```release-note
NONE
```
/sig node
In 1.7, we add controller history to avoid unnecessary DaemonSet pod
restarts during pod adoption. We will not restart pods with matching
templateGeneration for backward compatibility, and will not restart pods
when template hash label matches current DaemonSet history, regardless
of templateGeneration.
Automatic merge from submit-queue (batch tested with PRs 47084, 46016, 46372)
Update adoption/release of DaemonSet controller history, and wait for history store sync
**What this PR does / why we need it**:
~Depends on #47075, so that DaemonSet controller can update history's controller ref. Ignore that commit when reviewing.~ (merged)
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #46981
**Special notes for your reviewer**: @kubernetes/sig-apps-bugs
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46441, 43987, 46921, 46823, 47276)
Enable Node authorizer and NodeRestriction admission in kubemark
xref https://github.com/kubernetes/features/issues/279
We want to ensure scale testing covers use of the authorizer/admission pair that partitions nodes. This includes enabling the authorizer, which populates a graph of existing nodes and pods.
Kubemark is still running all nodes with a single credential, so a follow-up step is to generate unique credentials per node (or enable TLS bootstrapping) and remove the temporary rolebinding added in this PR so the node authorizer is the one authorizing each call by a hollow node.
Automatic merge from submit-queue
Shorten eviction tests, and increase test suite timeout
After #43590, the eviction manager is less aggressive when evicting pods. Because of that, many runs in the flaky suite time out.
To shorten the inode eviction test, I have lowered the eviction threshold.
To shorten the allocatable eviction test, I now set KubeReserved = NodeMemoryCapacity - 200Mb, so that any pod using 200Mb will be evicted. This shortens this test from 40 minutes, to 10 minutes.
While this should be enough to not hit the flaky suite timeout anymore, it is better to keep lower individual test timeouts than a lower suite timeout, since hitting the suite timeout means that even successful test runs are not reported.
/assign @Random-Liu @mtaufen
issue: #31362