Automatic merge from submit-queue
Extend ingress e2e
Splits the test into a cross-platform conformance list and platform-specific bits that exercise features through annotations. Also exercises the features in https://github.com/kubernetes/contrib/pull/1133. Assigning to Girish, simply because I assigned the other PR to Minhan.
- Dropped the regex test; now we just test that nslookup exits 0.
- Moved more setup into BeforeEach and used a nested Context for the non-local case.
- Poll inside the container using a bash loop (see the sketch after this list).
- Aim for less console noise unless something goes wrong.
- Commented out the tests trying to verify that a DNS name is absent.
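For illustration, the in-container poll can be expressed as a shell loop handed to the probe container; a minimal Go sketch of building such a command, assuming a hypothetical `buildProbeCmd` helper and a `/results` marker directory (not the actual test code):

```go
package main

import "fmt"

// buildProbeCmd returns a shell loop that retries the DNS lookup inside the
// container for up to `seconds` seconds, writing a marker file on success;
// the test then polls for the marker instead of spamming the console output
// with per-attempt retries.
func buildProbeCmd(name string, seconds int) string {
	return fmt.Sprintf(
		"for i in $(seq 1 %d); do nslookup %s >/dev/null && echo OK > /results/%s && break; sleep 1; done",
		seconds, name, name)
}

func main() {
	fmt.Println(buildProbeCmd("kubernetes.default", 600))
}
```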
Automatic merge from submit-queue
In each PD test, create and delete the pod on every iteration so that each exec targets a new pod name.
Fixes #26141
Based on a chat with @ncdc.
The following is a snapshot of the log; each iteration now has a new pod name (a sketch of the loop follows the log):
```text
[It] should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Flaky]
/srv/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:277
STEP: creating PD1
Jun 10 15:55:45.878: INFO: Successfully created a new PD: "rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c".
STEP: creating PD2
Jun 10 15:55:49.794: INFO: Successfully created a new PD: "rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c".
Jun 10 15:55:49.794: INFO: PD Read/Writer Iteration #0
STEP: submitting host0Pod to kubernetes
W0610 15:55:49.860308 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: writing a file in the container
Jun 10 15:56:09.792: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '988876932586416926' > '/testpd1/tracker0''
Jun 10 15:56:12.003: INFO: Wrote value: "988876932586416926" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:12.003: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '8414937992264649637' > '/testpd2/tracker0''
Jun 10 15:56:13.170: INFO: Wrote value: "8414937992264649637" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:13.170: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:14.325: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:14.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:15.590: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: deleting host0Pod
Jun 10 15:56:15.841: INFO: PD Read/Writer Iteration #1
STEP: submitting host0Pod to kubernetes
W0610 15:56:15.905485 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:16.832: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:18.132: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:18.132: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:19.354: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:19.354: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7639503234625274799' > '/testpd1/tracker1''
Jun 10 15:56:20.526: INFO: Wrote value: "7639503234625274799" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:20.526: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7400445987108171911' > '/testpd2/tracker1''
Jun 10 15:56:21.694: INFO: Wrote value: "7400445987108171911" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:21.694: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:22.904: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:22.905: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:24.080: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:24.081: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:25.290: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:25.290: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:26.491: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: deleting host0Pod
Jun 10 15:56:26.756: INFO: PD Read/Writer Iteration #2
STEP: submitting host0Pod to kubernetes
W0610 15:56:26.821828 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:27.898: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:29.096: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:29.096: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:30.325: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:30.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:31.528: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:31.529: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:32.972: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:32.972: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '1846555975530999997' > '/testpd1/tracker2''
Jun 10 15:56:34.157: INFO: Wrote value: "1846555975530999997" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:34.157: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '2775947264799611726' > '/testpd2/tracker2''
Jun 10 15:56:35.661: INFO: Wrote value: "2775947264799611726" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:35.662: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:36.868: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:36.868: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:38.062: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:38.062: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:39.221: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:39.221: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:40.397: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:40.397: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker2'
Jun 10 15:56:41.584: INFO: Read file "/testpd1/tracker2" with content: 1846555975530999997
STEP: reading a file in the container
Jun 10 15:56:41.585: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker2'
Jun 10 15:56:42.800: INFO: Read file "/testpd2/tracker2" with content: 2775947264799611726
STEP: deleting host0Pod
```
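The shape of the change, as a rough sketch (helper names such as `makePDPod` and the `podClient` calls are placeholders, not the actual test code):

```go
// Each iteration now creates a brand-new pod (hence a new generated name)
// and deletes it at the end, so every kubectl exec targets a fresh pod.
for i := 0; i < numIterations; i++ {
	framework.Logf("PD Read/Writer Iteration #%v", i)
	pod := makePDPod(disk1Name, disk2Name) // placeholder: builds a pod with both PDs mounted
	pod = podClient.Create(pod)            // new pod, new unique name
	// ... verify tracker files written by earlier pods, then write new ones ...
	podClient.Delete(pod.Name, nil) // tear down before the next iteration
}
```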
@saad-ali
Automatic merge from submit-queue
Dumping logs of federation pods (federation-apiserver, federation-controller-manager) on e2e test failure
Ref https://github.com/kubernetes/kubernetes/issues/26762
This should help with debugging failures.
Right now there is no way to access those logs.
@kubernetes/sig-cluster-federation @colhom
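A minimal sketch of the idea, using plain `os/exec` for illustration (the e2e suite uses its own helpers, and the real pod names carry generated suffixes):

```go
package main

import (
	"fmt"
	"os/exec"
)

// dumpFederationLogs prints the logs of the federation control-plane pods so
// they land in the e2e output when a test fails.
func dumpFederationLogs(namespace string, pods ...string) {
	for _, pod := range pods {
		out, err := exec.Command("kubectl", "logs", pod, "--namespace", namespace).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to get logs for %s: %v\n", pod, err)
			continue
		}
		fmt.Printf("=== logs for %s ===\n%s\n", pod, out)
	}
}

func main() {
	dumpFederationLogs("federation", "federation-apiserver", "federation-controller-manager")
}
```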
Automatic merge from submit-queue
Call NewFramework constructor instead of hand creating framework.
Fixes https://github.com/kubernetes/kubernetes/issues/27486, which probably happened because we defined a new clientConfigGetter for the node e2es while this test was hand-creating the framework.
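Roughly, the change replaces a hand-built struct literal with the constructor, so suite-level defaults (including the client config getter) are applied; a sketch, with the base name as a placeholder:

```go
// Before: hand-creating the framework skips constructor defaults such as
// the client config getter, which broke under the node e2e setup.
//   f := &framework.Framework{BaseName: "some-test"}
//
// After: the constructor wires in the defaults.
f := framework.NewDefaultFramework("some-test")
```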
Automatic merge from submit-queue
Kubelet Volume Attach/Detach/Mount/Unmount Redesign
This PR redesigns the Volume Attach/Detach/Mount/Unmount in Kubelet as proposed in https://github.com/kubernetes/kubernetes/issues/21931
```release-note
A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if the attach/detach controller is not enabled).
This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the `syncPod()` path, so volume cleanup never blocks the `syncPod` loop.
```
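Conceptually, the new manager reconciles a desired state of the world (what pods want mounted) against an actual state (what is mounted); the loop below is only a schematic sketch of that idea, not the real kubelet code:

```go
// Schematic reconciler: compare desired vs. actual volume state and issue
// mounts/unmounts asynchronously, instead of cleaning up inline in syncPod().
func reconcile(desired, actual map[string]bool, mount, unmount func(volume string)) {
	for vol := range desired {
		if !actual[vol] {
			mount(vol) // wanted by a pod but not yet mounted
		}
	}
	for vol := range actual {
		if !desired[vol] {
			unmount(vol) // orphaned volume; safe to clean up out of band
		}
	}
}
```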
Automatic merge from submit-queue
federation: choosing a default federation name in test instead of failing
The tests are failing right now:
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-federation/
```
[k8s.io] Service [Feature:Federation] should be able to discover a non-local federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:35.091: FEDERATION_NAME environment variable must be set
[k8s.io] Service [Feature:Federation] should be able to discover a federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:40.802: FEDERATION_NAME environment variable must be set
```
This is to fix them.
cc @kubernetes/sig-cluster-federation @mml
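A minimal sketch of the fallback (the default name used here is an assumption, not necessarily the one the PR chose):

```go
package main

import (
	"fmt"
	"os"
)

// federationName returns FEDERATION_NAME if set, and otherwise falls back to
// a default instead of failing the test.
func federationName() string {
	if name := os.Getenv("FEDERATION_NAME"); name != "" {
		return name
	}
	return "e2e-federation" // assumed default
}

func main() {
	fmt.Println(federationName())
}
```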
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if the attach/detach
controller is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path, so volume cleanup never blocks the
`syncPod` loop.
Automatic merge from submit-queue
Make timeout for starting system pods configurable
Context: in 2000-node clusters (if only one node is big enough to fit heapster, which is our testing configuration), heapster won't be scheduled until that node has a route. However, creating routes is pretty expensive and can currently take up to 2 hours.
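The shape of the change, sketched with Go's flag package (the flag name and default are hypothetical; the real e2e option may differ):

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

// systemPodsStartupTimeout makes the previously hard-coded wait configurable,
// so huge clusters with slow route creation can raise it.
var systemPodsStartupTimeout = flag.Duration("system-pods-startup-timeout", 10*time.Minute,
	"How long to wait for system pods (e.g. heapster) to be scheduled and running.")

func main() {
	flag.Parse()
	fmt.Printf("waiting up to %v for system pods\n", *systemPodsStartupTimeout)
}
```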
@zmerlynn @gmarek
Automatic merge from submit-queue
Add image pulling node e2e
Fixes #27007.
Based on #27309; will rebase after #27309 merges.
This PR adds all the tests mentioned in #27007:
* Pull an image from an invalid registry;
* Pull an invalid image from GCR;
* Pull an image from GCR;
* Pull an image from Docker Hub;
* Pull an image that needs auth, with and without image pull secrets.
For the imagePullSecrets test, I created a new gcloud project "authenticated-image-pulling", and the service account in the code only has "Storage Object Viewer" permission.
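For reference, the imagePullSecrets part of such a test pod looks roughly like this (sketched with today's `k8s.io/api` types; the secret name and image path are placeholders):

```go
import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// imagePullTestPod builds a pod that pulls a private image using a
// pre-created image pull secret.
func imagePullTestPod(secretName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "image-pull-test",
				Image: "gcr.io/authenticated-image-pulling/some-image:1.0", // placeholder
			}},
			ImagePullSecrets: []v1.LocalObjectReference{{Name: secretName}},
		},
	}
}
```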
/cc @pwittrock @vishh
Automatic merge from submit-queue
Add description to created node images
Make it a little easier to see who to contact about important node e2e images.
Automatic merge from submit-queue
Updating federation up scripts to work in a non-e2e setup
Ref: https://github.com/kubernetes/kubernetes.github.io/pull/656
Updating the federation up scripts so that they work as per steps in https://github.com/kubernetes/kubernetes.github.io/pull/656.
Changes are:
* Updating the default namespace to be "federation" instead of "federation-e2e"
* Updating the kubeconfig context to be named "federation-cluster" instead of "federated-context"
* Fixing federation-up so that FEDERATION_IMAGE_TAG is set even when federation-up is run without running `e2e.go --up`. e2e-up.sh sets it here: 6a388d4a0d/hack/e2e-internal/e2e-up.sh (L44).
* Adding a "missingkey=zero" option to the template parser. Without this, the parser emits `"<no value>"` in place of any env var that is not set. With this change, it instead substitutes the corresponding zero value (for example, "" for strings). This is required for the FEDERATION_DNS_PROVIDER_CONFIG env var.
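`missingkey=zero` is a standard `text/template` option; a runnable illustration of the difference:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// FEDERATION_DNS_PROVIDER_CONFIG is deliberately absent from the map.
	env := map[string]string{"FEDERATION_NAME": "federation"}
	const manifest = "name: {{.FEDERATION_NAME}}\ndnsConfig: {{.FEDERATION_DNS_PROVIDER_CONFIG}}\n"

	// Without Option("missingkey=zero"), the missing key renders as "<no value>";
	// with it, the key renders as the string zero value "".
	t := template.Must(template.New("manifest").Option("missingkey=zero").Parse(manifest))
	if err := t.Execute(os.Stdout, env); err != nil {
		panic(err)
	}
}
```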
cc @kubernetes/sig-cluster-federation @colhom @mml
Automatic merge from submit-queue
Implement first set of federated service e2e tests.
These tests are untested and there is no guarantee that they work. The ongoing auth problems are blocking these e2es from being tested, and upon @quinton-hoole's request I am submitting them now.
Only the last commit here needs review.
Depends on #26953
cc @nikhiljindal @colhom @mfanjie @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Fix node e2e coreos kubelet cgroup detection
Fixes #26979 #26431
The root issue, as best I can tell, is that cgroup detection does not work when the kubelet is started under an ssh session and the systemd `*Accounting` variables are set. I added additional logging and noted some differences between the cgroup slice names cadvisor returns and those the kubelet detects for itself.
This difference does not occur if the kubelet is properly running under a systemd unit, which is also the more common and sane environment.
See also discussion in #26903
cc @derekwaynecarr @vishh @pwittrock
Note that these tests are untested and there is no guarantee that they work.
The ongoing auth problems are blocking these e2es from being tested, and upon
@quinton-hoole's request I am submitting them now.
Automatic merge from submit-queue
Add pending pod check in cluster autoscaler e2e tests
The tests should wait until all pods are running before declaring success and resizing the MIG.
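A sketch of the pending-pod check (written against modern client-go for illustration; the suite has its own helpers):

```go
import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNoPendingPods polls until no pod in the namespace is Pending, so the
// test only declares success and resizes the MIG once everything is scheduled.
func waitForNoPendingPods(c kubernetes.Interface, ns string, timeout time.Duration) error {
	return wait.Poll(5*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodPending {
				return false, nil // still waiting on at least one pod
			}
		}
		return true, nil
	})
}
```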
cc: @fgrzadkowski @piosz @jszczepkowski
Automatic merge from submit-queue
volume integration: wait for PVs before creating PVCs
The test should wait until all volumes are processed by the volume controller (i.e. present in the controller cache) before creating a PVC.
Without that, the best-matching PV might not be in the cache yet, and the controller might bind the PVC to a suboptimal one.
This fixes the integration test flake "Bind mismatch! Expected pvc-2 capacity 50000000000 but got pvc-2 capacity 52000000000".
Fixes #27179 (together with #26894)
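A sketch of the wait, approximating "processed by the controller" with each PV reaching phase Available (modern client-go shown for illustration):

```go
import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVsAvailable blocks until every named PV has been seen and processed
// by the controller, so PVC binding considers all candidates.
func waitForPVsAvailable(c kubernetes.Interface, names []string) error {
	return wait.Poll(time.Second, time.Minute, func() (bool, error) {
		for _, name := range names {
			pv, err := c.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil || pv.Status.Phase != v1.VolumeAvailable {
				return false, nil // not in the cache / not processed yet
			}
		}
		return true, nil
	})
}
```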
Automatic merge from submit-queue
Fix integration pv flakes
There are two fixes in this PR:
- run tests in separate functions and use objects with different names; otherwise, events from the beginning of the function are caught later when we watch for events of a different PV/PVC
- don't set PV.Spec.ClaimRef.UID on pre-bound PVs; PVs with the UID set are considered bound, and they get deleted/recycled when the appropriate PVC does not exist yet (see the sketch below)
Fixes #26730 and probably also ~~#26894~~ #26256
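The second fix in sketch form (today's types for illustration): a pre-bound PV should reference the claim by name and namespace only, leaving the UID empty:

```go
import v1 "k8s.io/api/core/v1"

// preBindPV points a PV at a future claim without marking it bound.
func preBindPV(pv *v1.PersistentVolume, claimName, namespace string) {
	pv.Spec.ClaimRef = &v1.ObjectReference{
		Name:      claimName,
		Namespace: namespace,
		// UID deliberately left empty: a PV whose ClaimRef carries a UID is
		// treated as bound and may be deleted/recycled while the matching
		// PVC does not exist yet.
	}
}
```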
The test should wait until all volumes are processed by the volume controller
(i.e. in the controller cache) before creating a PVC.
Without that, the best-matching PV might not be in the cache yet, and the
controller might bind the PVC to a suboptimal one.
This fixes integration test flake "Bind mismatch! Expected pvc-2 capacity
50000000000 but got pvc-2 capacity 52000000000".
Automatic merge from submit-queue
e2e: actually check error when we fail to GET the scheduled pod
Also GET the pod before we try to gracefully delete it, and add a debug log.
ref #26224
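The fixed pattern, sketched (the client helpers and the 30s grace period are illustrative, not the exact test code):

```go
// Check the error from the GET, fetch the pod before attempting graceful
// deletion, and leave a debug log behind.
pod, err := podClient.Get(podName)
if err != nil {
	framework.Failf("failed to GET scheduled pod %q: %v", podName, err)
}
framework.Logf("pod %q is on node %q", pod.Name, pod.Spec.NodeName) // debug log
if err := podClient.Delete(pod.Name, api.NewDeleteOptions(30)); err != nil {
	framework.Failf("failed to gracefully delete pod %q: %v", pod.Name, err)
}
```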
Automatic merge from submit-queue
Considering all nodes for the scheduler cache to allow lookups
Fixes the actual issue that led me to create https://github.com/kubernetes/kubernetes/issues/22554
Currently the node cache provided to the predicates excludes the unschedulable nodes, using field-level filtering on the watch results. This causes the above issue, as the `ServiceAffinity` predicate uses the cached node list to look up node metadata for a peer pod (another pod belonging to the same service). Since this peer pod could be hosted on a node that is currently unschedulable, the lookup could fail, resulting in the pod failing to be scheduled.
As part of the fix, we now include all nodes in the watch results and exclude the unschedulable nodes using `NodeCondition`.
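In sketch form: drop the field selector from the watch so all nodes land in the cache, and skip unschedulable nodes only when deciding where a pod may run (illustrative, not the exact scheduler code):

```go
// Before: the watch excluded unschedulable nodes entirely, so they never
// reached the cache and ServiceAffinity lookups for peer pods could fail:
//   fields.ParseSelectorOrDie("spec.unschedulable!=true")
//
// After: cache every node; filter at scheduling time so lookups by node
// name still succeed.
func isNodeSchedulable(node *v1.Node) bool {
	return !node.Spec.Unschedulable
}
```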
@derekwaynecarr PTAL
Automatic merge from submit-queue
Port the downward api test to the node e2e suite
Also extend the framework to allow a custom client config loading function, so
that the node e2e suite can reuse the same framework across tests.
This fixes #26609.
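The framework extension amounts to an injectable config loader, roughly like this (field and type names are assumptions, not the actual framework code):

```go
// A pluggable loader lets the node e2e suite supply its own way of building
// the client config, while everything else reuses the shared framework.
type ClientConfigGetter func() (*restclient.Config, error)

type Framework struct {
	BaseName           string
	clientConfigGetter ClientConfigGetter // nil means use the default cluster-e2e loader
}
```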
/cc @timstclair @pwittrock
Automatic merge from submit-queue
Add e2e test for node problem detector
Based on https://github.com/kubernetes/node-problem-detector/pull/12.
This PR adds an e2e test for the node problem detector. It starts a node problem detector with a test configuration and tests its functionality.
Questions:
* Should this be a node e2e test or a regular e2e test?
* Should we mark this as Serial? The test should not affect other tests, but it adds a `TestCondition` to the node conditions and removes it in cleanup.
@dchen1107
/cc @kubernetes/sig-node