This moves plugin/pkg/scheduler to pkg/scheduler and
plugin/cmd/kube-scheduler to cmd/kube-scheduler.
The bulk of the work was done with gomvpkg, except for the kube-scheduler main
package.
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. Splitting the node
controller into its two major pieces: lifecycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle controller's dependency on the cloud provider will be fixed in a follow-up PR.
Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
utils/image/manifest has an additional `arch` parameter, which determines
whether an image ends in `-$ARCH` (like `-amd64`).
All locations that previously had gcr.io URLs referenced in constants or inline
have been updated to refer to test/utils/image.
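Roughly, the arch handling can be thought of like the following sketch (field and function names here are illustrative, not the exact ones in test/utils/image/manifest):
```go
package image

import "fmt"

// Config describes an e2e test image; hasArch controls whether the image
// name is suffixed with the architecture (e.g. "-amd64").
type Config struct {
	registry string
	name     string
	version  string
	hasArch  bool
	arch     string
}

// GetE2EImage builds the full image reference, appending the architecture
// only when the manifest asks for it.
func GetE2EImage(c Config) string {
	if c.hasArch {
		return fmt.Sprintf("%s/%s-%s:%s", c.registry, c.name, c.arch, c.version)
	}
	return fmt.Sprintf("%s/%s:%s", c.registry, c.name, c.version)
}
```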
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Log actual return code, not the default value.
**What this PR does / why we need it**:
Previously the code would log 127 no matter what, which is quite
useless.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52748, 56623). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add brackets around IPv6 addrs in e2e test IP:port endpoints
There are several locations in the e2e tests where endpoints of the
form IPv6:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.
Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.
fixes #52746
**What this PR does / why we need it**:
This PR adds brackets around IPv6 addresses when they appear as part of an IPv6-addr:port endpoint
in the e2e tests. This is needed because any connection that attempts to use an IPv6-addr:port
endpoint without brackets surrounding the IPv6-addr will fail.
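For reference, Go's standard library already produces the bracketed form; a minimal standalone illustration (not code from this PR):
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort adds brackets only when the host is an IPv6 literal.
	fmt.Println(net.JoinHostPort("fd00:1234::2", "8080")) // [fd00:1234::2]:8080
	fmt.Println(net.JoinHostPort("10.0.0.2", "8080"))     // 10.0.0.2:8080

	// Naive concatenation is ambiguous for IPv6 and will not parse as host:port.
	fmt.Println("fd00:1234::2" + ":" + "8080") // fd00:1234::2:8080 (wrong)
}
```
The `-g` flag is needed because curl otherwise treats `[` and `]` as URL-globbing characters.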
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52746
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve messages around waiting for pods.
**What this PR does / why we need it**:
This is a step towards solving #55785
**Release note**:
```release-note
NONE
```
There are several locations in the e2e tests where endpoints of the
form IP:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.
Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.
fixes #52746
installer and device plugin containers.
To support this, exports certain functions and fields in
framework/resource_usage_gatherer.go so that it can be used in any
e2e test to track any specified pod resource usage with the specified
probe interval and duration.
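A usage sketch from a test might look roughly like this; the exported names and option fields below (`NewResourceUsageGatherer`, `ResourceGathererOptions`, `StopAndSummarize`) are assumptions about the gatherer's API rather than verified signatures:
```go
package e2e

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

// gatherPluginPodUsage tracks resource usage of the given pods (e.g. the
// installer and device plugin containers) while the test runs.
func gatherPluginPodUsage(f *framework.Framework, pods *v1.PodList) {
	gatherer, err := framework.NewResourceUsageGatherer(f.ClientSet,
		framework.ResourceGathererOptions{
			ProbeDuration: 15 * time.Second, // assumed probe-interval knob
		}, pods)
	if err != nil {
		framework.Failf("failed to create resource gatherer: %v", err)
	}
	go gatherer.StartGatheringData()
	// ... exercise the installer and device plugin containers here ...
	summary, err := gatherer.StopAndSummarize([]int{50, 90, 99}, nil)
	if err != nil {
		framework.Failf("failed to summarize resource usage: %v", err)
	}
	framework.Logf("resource usage: %#v", summary)
}
```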
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support collecting log for alternative container runtime in e2e test.
Fixes https://github.com/kubernetes/kubernetes/issues/55629.
Add support to collect logs for alternative container runtime in e2e.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```
```release-note
none
```
/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
In scalability testing, InfluxDB was recently disabled, but we still
try to execute the corresponding test; as a result, it fails all the time.
Skip the test if InfluxDB is disabled.
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add ephemeral storage e2e tests
Add e2e tests of limitrange/quota/downward_api for local ephemeral storage
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463
**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
**Release note**:
```release-note
Add limitrange/resourcequota/downward_api e2e tests for local ephemeral storage
```
/assign @jingxu97
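For context, the resource these tests exercise is `ephemeral-storage`; a container resources stanza requesting it can be sketched as follows (illustrative quantities, not values from the tests):
```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Example container resources using the local ephemeral-storage resource.
var ephemeralStorageResources = v1.ResourceRequirements{
	Requests: v1.ResourceList{
		v1.ResourceEphemeralStorage: resource.MustParse("256Mi"),
	},
	Limits: v1.ResourceList{
		v1.ResourceEphemeralStorage: resource.MustParse("512Mi"),
	},
}
```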
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test to verify volume attach status after master kubelet restart
**What this PR does / why we need it**:
This PR adds test to verify volume remains attached after the kubelet is restarted on master node.
**Which issue this PR fixes** :
fixes vmware#274
**Special notes for your reviewer**:
This test does not run as part of existing sig-storage test grid. It has been tested internally at VMware.
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57: The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58: The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022 29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs
Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command: curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout: ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s
• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1
Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **Deprecated** group clients are still used, replace them with versioned group clients.
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs
The HPA e2es failed to actually set `apiVersion` on the created HPAs, which previously was ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).
Additionally, the `apps` resources have their own version of scale. Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.
Together, these broke some of the HPA e2es.
Fixes #54574
```release-note
NONE
```
apps/v1betaX inadvertently contains its own variant of Scale. In
order to support scaling Deployments, ReplicaSets, etc., we need to support
these versions of Scale as well.
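With the polymorphic scale client, callers address the `/scale` subresource by group-resource instead of a fixed Scale type; a rough sketch (client-go signatures have changed across releases, so treat this as illustrative):
```go
package e2e

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// scaleWorkload resizes the /scale subresource of a deployment regardless of
// which API group (extensions, apps/v1beta1, apps/v1beta2) serves it.
func scaleWorkload(scales scale.ScalesGetter, namespace, name string, replicas int32) error {
	gr := schema.GroupResource{Group: "apps", Resource: "deployments"}
	s, err := scales.Scales(namespace).Get(gr, name)
	if err != nil {
		return err
	}
	s.Spec.Replicas = replicas
	_, err = scales.Scales(namespace).Update(gr, s)
	return err
}
```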
Automatic merge from submit-queue (batch tested with PRs 54593, 54607, 54539, 54105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move deployment e2e test for hash label adoption to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113
**Special notes for your reviewer**: depends on #53918
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
PodReady should be replaced with podutil.IsPodReady
**What this PR does / why we need it**:
PodReady should be replaced with podutil.IsPodReady.
Thanks.
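A minimal usage sketch of the shared helper (assuming the standard `pkg/api/v1/pod` import path):
```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

// isReady reports whether the pod's Ready condition is True, using the shared
// helper instead of a locally duplicated PodReady check.
func isReady(pod *v1.Pod) bool {
	return podutil.IsPodReady(pod)
}
```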
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
e2e tests provide only an (IPv4) ping test for external connectivity.
We need a way to conditionally run a ping6 external connectivity check,
and disable the (IPv4) ping-based external connectivity check,
for end-to-end testing on IPv6-only clusters.
This feature will be needed for creating gating IPv6 CI tests.
fixes #53383
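A conditional check could be sketched along these lines; the boolean flag and the probed addresses are illustrative, not the framework's actual configuration knobs:
```go
package e2e

// externalConnectivityCommand chooses ping or ping6 for the external
// connectivity probe, so IPv6-only clusters skip the IPv4-based check.
func externalConnectivityCommand(ipv6Cluster bool) []string {
	if ipv6Cluster {
		return []string{"ping6", "-c", "3", "2001:4860:4860::8888"}
	}
	return []string{"ping", "-c", "3", "8.8.8.8"}
}
```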
Automatic merge from submit-queue (batch tested with PRs 50988, 50509, 52660, 52663, 52250). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added device plugin e2e kubelet failure test
Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>
**What this PR does / why we need it**:
This is part of issue #52859 (fixes #52859)
This PR adds an e2e_node test for the device plugin.
Specifically it implements testing of failure handling by the device plugin components in case Kubelet restart / crashes.
I might try to refactor the GPU tests in a later PR.
**Special notes for your reviewer**:
@jiayingz @vishh
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51067, 52319, 52803, 52961, 51972). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support for skeleton in GetSigner
Adding support for skeleton to GetSigner to be able to run
e2e tests against a bare metal multinode cluster.
Closes #35613
Automatic merge from submit-queue (batch tested with PRs 52905, 52766). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor parsing cluster autoscaler status, add logging error
Minor improvements to autoscaling test suite and e2e framework.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix GCE LB resource cleanup for service e2e tests.
**What this PR does / why we need it**: Fix GCE LB resource cleanup logic.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52347
**Special notes for your reviewer**:
/assign @shyamjvs @nicksardo
**Release note**:
```release-note
NONE
```
It turns out that at some point while the Node is recovering from a
reboot, we get a different kind of error ("unable to upgrade
connection"). Since we can't distinguish these transient errors from an
error encountered after successfully executing the remote command,
let's just retry all errors for 5min. If this doesn't work, I'm gonna
blame it on sig-node.
The initial retry up to 20s was giving up too soon.
I'm seeing this test flake because the Node rebooted and it takes ~2min
to recover.
Now StatefulSet RunHostCmd calls will use the same 5min timeout as with
other Pod state checks.
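The retry being described is essentially a poll loop that treats every error as transient until the timeout; a minimal sketch using `wait.PollImmediate` (not the exact test code):
```go
package e2e

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runHostCmdWithRetries retries the remote command for up to 5 minutes,
// treating every error (including "unable to upgrade connection") as
// transient while the node recovers from its reboot.
func runHostCmdWithRetries(run func() (string, error)) (string, error) {
	var out string
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		var runErr error
		out, runErr = run()
		if runErr != nil {
			return false, nil // retry on any error
		}
		return true, nil
	})
	return out, err
}
```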
Automatic merge from submit-queue
AllowedNotReadyNodes allowed to be not ready for absolutely *any* reason
It's effectively the same as allowing that many nodes to not be part of the cluster at all, ever.
Btw - currently our 5k-node correctness test fails if "kubelet stopped posting node status" or "route not created", etc (ref: https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/3/build-log.txt)
cc @kubernetes/sig-scalability-misc
We seem to get a lot of flakes due to "connection refused" while running
`kubectl exec`. I can't find any reason this would be caused by the test
flow, so I'm adding retries to see if that helps.
Automatic merge from submit-queue
Migrate to controller references helpers in meta/v1
**What this PR does / why we need it**:
This is a follow up for #48319 that migrates all method usages to new methods in meta/v1.
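For illustration, call sites now go through the meta/v1 helper directly; a minimal sketch:
```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// controllerName returns the name of the pod's managing controller, if any,
// using the meta/v1 helper rather than the older controller-ref utilities.
func controllerName(pod *v1.Pod) string {
	if ref := metav1.GetControllerOf(pod); ref != nil {
		return ref.Name
	}
	return ""
}
```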
**Special notes for your reviewer**:
Looking at each commit individually might be easier.
**Release note**:
```release-note
NONE
```
/sig api-machinery
/kind cleanup
Automatic merge from submit-queue (batch tested with PRs 49651, 49707, 49662, 47019, 49747)
improve detectability of deleted pods
**What this PR does / why we need it**:
Adds comment to `waitForPodTerminatedInNamespace` to better explain how it's implemented.
~~It improves pod deletion detection in the e2e framework as follows:~~
~~1. the `waitForPodTerminatedInNamespace` func looks for pod.Status.Phase == _PodFailed_ or _PodSucceeded_ since both values imply that all containers have terminated.~~
~~2. the `waitForPodTerminatedInNamespace` func also ignores the pod's Reason if the passed-in `reason` parm is "". Reason is not really relevant to the pod being deleted or not, but if the caller passes a non-blank `reason` then it will be lower-cased, de-blanked and compared to the pod's Reason (also lower-cased and de-blanked). The idea is to make Reason checking more flexible and to prevent a pod from being considered running when all of its containers have terminated just because of a Reason mis-match.~~
Related to PR [49597](https://github.com/kubernetes/kubernetes/pull/49597) and issue [49529](https://github.com/kubernetes/kubernetes/issues/49529).
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45813, 49594, 49443, 49167, 47539)
Add node e2e tests for GKE environment
Ref: https://github.com/kubernetes/kubernetes/issues/46891
This PR adds node e2e tests for validating images used on GKE.
- We pass the `SYSTEM_SPEC_NAME` to the node e2e test process via the flag `--system-spec-name` so that we can skip the environment specific tests using `RunIfSystemSpecNameIs()`.
- Also added `SkipIfContainerRuntimeIs()` as the opposite of `RunIfContainerRuntimeIs()`.
**Release note**:
```
None
```
Automatic merge from submit-queue (batch tested with PRs 49619, 49598, 47267, 49597, 49638)
improve log for pod deletion poll loop
**What this PR does / why we need it**:
It improves some logging related to waiting for a pod to reach a passed-in condition. Specifically, related to issue [49529](https://github.com/kubernetes/kubernetes/issues/49529) where better logging may help to debug the root cause.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
Refactoring taint functions to reduce sprawl
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45060
**Special notes for your reviewer**:
@gmarek @timothysc @k82cn @jayunit100 - I moved some fn's to helpers and some to utils. LMK, if you are ok with this change.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48377, 48940, 49144, 49062, 49148)
fixit: break sig-cluster-lifecycle tests into subpackage
this is part of fixit week. ref #49161
@kubernetes/sig-cluster-lifecycle-misc
Automatic merge from submit-queue (batch tested with PRs 48139, 48042, 47645, 48054, 48003)
Pipe clusterID into gce_loadbalancer_external.go
**What this PR does / why we need it**: Small cleanup for GCE ELB codes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48002
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47915, 47856, 44086, 47575, 47475)
deprecate created-by annotation for e2e test framework
**What this PR does / why we need it**: This PR deprecates the created-by annotation for the e2e test framework. This is needed as we now have ControllerRef.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #44407
**Special notes for your reviewer**: This is the third PR for deprecating created-by annotation. Other PRs can be found here: #47469, #47471
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47470, 47260, 47411, 46852, 46135)
Write reports for each upgrade test
Due to the way Ginkgo runs individual test cases and the level of coordination required for the upgrade tests, they were all run under a single Ginkgo test case. This PR generates an auxiliary report that breaks out the results of each upgrade test. This is accomplished by:
1) Wrapping `ginkgo.Fail` and `ginkgo.Skip` to get the actual failure or skip messages.
2) Recovering that info in the upgrade test to generate an auxiliary report.
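A rough sketch of the wrapping in (1) (simplified; not the exact upgrade-test code):
```go
package upgrades

import "github.com/onsi/ginkgo"

// failureMessages collects the real failure text so an auxiliary report can
// attribute it to the individual upgrade test that produced it.
var failureMessages []string

// Fail records the message before delegating to ginkgo.Fail, which would
// otherwise bury it inside the single enclosing Ginkgo test case.
func Fail(message string, callerSkip ...int) {
	failureMessages = append(failureMessages, message)
	ginkgo.Fail(message, callerSkip...)
}
```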
I suggest reviewing commit by commit.
Sample report: https://storage.googleapis.com/krouseytestreports/logs/results/1/artifacts/junit_upgrades.xml
Fixes: #47371
Automatic merge from submit-queue (batch tested with PRs 46835, 46856)
Made WaitForReplicas and EnsureDesiredReplicas use PollImmediate and improved logging.
**What this PR does / why we need it**: Most importantly, this results in better logging: timeout is logged at the level of the caller, not the helper function, helping debugging.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46252, 45524, 46236, 46277, 46522)
Make GCE load-balancers create health checks for nodes
From #14661. Proposal on kubernetes/community#552. Fixes #46313.
Bullet points:
- Create nodes health check and firewall (for health checking) for non-OnlyLocal service.
- Create local traffic health check and firewall (for health checking) for OnlyLocal service.
- Version skew:
- Don't create the nodes health check if any node has a version < 1.7.0.
- Don't backfill nodes health check on existing LBs unless users explicitly trigger it.
**Release note**:
```release-note
GCE Cloud Provider: Newly created LoadBalancer-type Services now have health checks for nodes by default.
An existing LoadBalancer will have a health check attached to it when:
- Change Service.Spec.Type from LoadBalancer to others and flip it back.
- Any effective change on Service.Spec.ExternalTrafficPolicy.
```
Automatic merge from submit-queue
move from daemon_restart.go to framework/util.go
**What this PR does / why we need it**:
Moves the func `nodeExec` from daemon_restart.go to framework/util.go. This is the correct file for this func and a more intuitive package for other callers to use. It is a small step in the larger effort of restructuring the e2e tests to be more logically organized and easier for newcomers to understand.
```release-note
NONE
```
cc @timothysc @copejon
Automatic merge from submit-queue
util.go: format for
**What this PR does / why we need it**:
Format `for` statements, delete redundant code, and make the code clean.
**Release note**:
```release-note
NONE
```
Not all CI systems allow SSH keys to be present on the node. This
supports the case where the "local" provider is being used when running
e2e tests but the environment does not have an SSH key.
Automatic merge from submit-queue (batch tested with PRs 44424, 44026, 43939, 44386, 42914)
Try in cluster config when input KubeConfig is empty
**What this PR does / why we need it**:
Allows downstream providers to run e2e tests "in-cluster" without a kubeconfig
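The fallback follows the usual client-go pattern; a minimal sketch (not the exact framework change):
```go
package framework

import (
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// loadConfig prefers an explicit kubeconfig and falls back to the in-cluster
// service account configuration when no kubeconfig path was provided.
func loadConfig(kubeconfig string) (*restclient.Config, error) {
	if kubeconfig == "" {
		return restclient.InClusterConfig()
	}
	return clientcmd.BuildConfigFromFlags("", kubeconfig)
}
```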
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
/cc @kubernetes/sig-testing-pr-reviews @marun
Automatic merge from submit-queue (batch tested with PRs 44447, 44456, 43277, 41779, 43942)
Clean up pre-ControllerRef compatibility logic
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43323
**Special notes for your reviewer**:
No
**Release note**:
```
NONE
```
Automatic merge from submit-queue
fix wrong error return
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
**What this PR does / why we need it**: The err return here is wrong, correct it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 41728, 42231)
Adding new tests to e2e/vsphere_volume_placement.go
**What this PR does / why we need it**:
Adding new tests to e2e/vsphere_volume_placement.go
Below is the tests description and test steps.
**Test Back-to-back pod creation/deletion with different volume sources on the same worker node**
1. Create volumes - vmdk2 (vmdk1 is created in the test setup).
2. Create pod Spec - pod-SpecA with volume path of vmdk1 and NodeSelector set to label assigned to node1.
3. Create pod Spec - pod-SpecB with volume path of vmdk2 and NodeSelector set to label assigned to node1.
4. Create pod-A using pod-SpecA and wait for pod to become ready.
5. Create pod-B using pod-SpecB and wait for POD to become ready.
6. Verify volumes are attached to the node.
7. Create empty file on the volume to make sure volume is accessible. (Perform this step on pod-A and pod-B)
8. Verify the file created in step 7 is present on the volume. (Perform this step on pod-A and pod-B.)
9. Delete pod-A and pod-B
10. Repeat steps 4 to 9 five times and verify that the associated volumes' contents match.
11. Wait for vmdk1 and vmdk2 to be detached from node.
12. Delete vmdk1 and vmdk2
**Test multiple volumes from different datastore within the same pod**
1. Create volumes - vmdk2 on a non-default shared datastore.
2. Create pod Spec with volume path of vmdk1 (vmdk1 is created in test setup on default datastore) and vmdk2.
3. Create pod using spec created in step-2 and wait for pod to become ready.
4. Verify both volumes are attached to the node on which the pod is created. Write some data to make sure the volumes are accessible.
5. Delete pod.
6. Wait for vmdk1 and vmdk2 to be detached from node.
7. Create pod using spec created in step-2 and wait for pod to become ready.
8. Verify both volumes are attached to the node on which the pod is created. Verify the volume contents match the content written in step 4.
9. Delete the pod.
10. Wait for vmdk1 and vmdk2 to be detached from node.
11. Delete vmdk1 and vmdk2
**Test multiple volumes from same datastore within the same pod**
1. Create volumes - vmdk2 (vmdk1 is created in the test setup).
2. Create a pod spec with the volume paths of vmdk1 (created in the test setup) and vmdk2; a sketch of such a pod spec follows these steps.
3. Create pod using spec created in step-2 and wait for pod to become ready.
4. Verify both volumes are attached to the node on which the pod is created. Write some data to make sure the volumes are accessible.
5. Delete pod.
6. Wait for vmdk1 and vmdk2 to be detached from node.
7. Create pod using spec created in step-2 and wait for pod to become ready.
8. Verify both volumes are attached to the node on which the pod is created. Verify the volume contents match the content written in step 4.
9. Delete the pod.
10. Wait for vmdk1 and vmdk2 to be detached from node.
11. Delete vmdk1 and vmdk2
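For illustration, a pod spec mounting two vSphere volumes as described in step 2 above might be sketched like this (volume paths, datastore names, and the image are placeholders, not values from the tests):
```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Example pod mounting two vSphere volumes (vmdk1 and vmdk2).
var twoVolumePod = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "vsphere-e2e-two-volumes"},
	Spec: v1.PodSpec{
		Containers: []v1.Container{{
			Name:  "test",
			Image: "busybox",
			VolumeMounts: []v1.VolumeMount{
				{Name: "vmdk1", MountPath: "/mnt/volume1"},
				{Name: "vmdk2", MountPath: "/mnt/volume2"},
			},
		}},
		Volumes: []v1.Volume{
			{Name: "vmdk1", VolumeSource: v1.VolumeSource{
				VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{VolumePath: "[datastore1] kubevols/vmdk1.vmdk"},
			}},
			{Name: "vmdk2", VolumeSource: v1.VolumeSource{
				VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{VolumePath: "[datastore2] kubevols/vmdk2.vmdk"},
			}},
		},
	},
}
```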
**Which issue this PR fixes**
fixes #
**Special notes for your reviewer**:
Executed tests against K8S v1.5.3 release
**Release note**:
```release-note
NONE
```
cc: @kerneltime @abrarshivani @BaluDontu @tusharnt @pdhamdhere