Automatic merge from submit-queue
Fix mount collision timeout issue
Short- or medium-term workaround for #29555. The root issue being fixed here is that the recent attach/detach work in the kubelet uses a unique volume name as a key that tracks the work that has to be done for each volume in a pod to attach/mount/umount/detach. However, the non-attachable volume plugins do not report unique names for themselves, which causes collisions when a single secret or configmap is mounted multiple times in a pod.
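For illustration, a minimal sketch (helper name and shape assumed, not the actual patch) of the kind of disambiguation needed: include the pod UID in the unique name for non-attachable plugins so two mounts of the same secret in one pod no longer collide.
```go
package volumehelper

import (
	"fmt"

	"k8s.io/kubernetes/pkg/types"
)

// uniqueVolumeNameForNonAttachable builds a per-pod unique key for a
// non-attachable volume: including the pod UID keeps two mounts of the
// same secret/configmap in one pod from colliding in the
// attach/mount/umount/detach bookkeeping.
func uniqueVolumeNameForNonAttachable(podUID types.UID, pluginName, volumeName string) string {
	return fmt.Sprintf("%s/%v-%s", pluginName, podUID, volumeName)
}
```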
This is still a WIP -- I need to add a couple of E2E tests that ensure tests break in the future if there is a regression -- but posting for early review.
cc @kubernetes/sig-storage
Ultimately, I would like to refine this a bit further. A couple things I would like to change:
1. `GetUniqueVolumeName` should be a property ONLY of attachable volumes
2. I would like to see the kubelet apparatus for attach/mount/umount/detach handle non-attachable volumes specifically to avoid things like the `WaitForControllerAttach` call that has to be done for those volume types now
Automatic merge from submit-queue
Fix killing child sudo process in e2e_node tests
Fixes #29211.
The context is we are trying to kill a process started as `sudo kube-apiserver`, but `sudo` ignores signals from the same process group. Applying `Setpgid` means the `sudo kill` process won't be in the same process group, so will not fall foul of this nifty feature.
I also took the liberty of removing some code setting `Pdeathsig` because it claims to be doing something in the same area, but actually it doesn't do that at all. The setting is applied to the forked process, i.e. `sudo`, and it means the `sudo` will get killed if we (`e2e_node.test`) die. This (a) isn't what the comment says and (b) doesn't help because sending SIGKILL to the sudo process leaves sudo's child alive.
I didn't use the "hack for linux-only" approach because I think `Setpgid` is available on all platforms that `e2e_node` builds on.
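For illustration, a minimal sketch (not the actual test code) of the approach:
```go
package main

import (
	"os/exec"
	"syscall"
)

// startSudoCommand launches a sudo-wrapped command in its own process
// group. With Setpgid set, a later `sudo kill` issued from the test's
// process group is no longer "from the same process group" as this
// sudo, so the signal is delivered rather than ignored.
func startSudoCommand(args ...string) (*exec.Cmd, error) {
	cmd := exec.Command("sudo", args...)
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	return cmd, cmd.Start()
}
```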
This is the default for etcd2, but etcd3 only listens on 2379.
Specifying the ports keeps things consistent no matter which
version the user has installed.
Automatic merge from submit-queue
Faster test
In attempting to troubleshoot flakes with this test case I actually wanted to understand how it worked.
There are some poor comments that need work.
I added some additional output which may or may not help in debugging the flakes.
I doubt this fixes the flake.
My major concern is the 'refactor' I did of the test case to batch up runs by sub-test-case. As it stood there was a 200ms pause between each sub, so they should not have interfered with each other. Now they are just started as fast as possible, but only 20 run at a time before moving on to the next 20. I am not sure if I am violating the ethos of the original test case.
Runs on my computer are down from 2m40s -> 40s.
Getting rid of the arbitrary client limiting brings it down to ~12 seconds. 11 to fetch the image and <1 to actually run the tests against the proxies. I can add a zero to the number of loops if you want to hit it harder. It would result in 10x as much text output though.
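Roughly, the batching looks like this (a sketch with hypothetical names, not the actual test code):
```go
package main

import "sync"

// runInBatches runs proxy attempts in fixed-size groups with no
// artificial pause: each batch starts all at once, and the next batch
// begins only after the current one finishes.
func runInBatches(attempts []func(), batchSize int) {
	for start := 0; start < len(attempts); start += batchSize {
		end := start + batchSize
		if end > len(attempts) {
			end = len(attempts)
		}
		var wg sync.WaitGroup
		wg.Add(end - start) // add outside the loop that spawns goroutines
		for _, attempt := range attempts[start:end] {
			go func(run func()) {
				defer wg.Done()
				run()
			}(attempt)
		}
		wg.Wait() // drain this batch before starting the next one
	}
}
```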
Automatic merge from submit-queue
Add support for kubectl create quota command
Follow-up of https://github.com/kubernetes/kubernetes/pull/19625
```
Create a resourcequota with the specified name, hard limits and optional scopes
Usage:
kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=bool] [flags]
Aliases:
quota, q
Examples:
// Create a new resourcequota named my-quota
$ kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
// Create a new resourcequota named best-effort
$ kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
```
Automatic merge from submit-queue
Rework pod waiting mechanism in e2e tests to accept pod and watch based
This PR re-applies #28212, which was reverted in #29223. The only difference is that the initial PR also contained a `PodStartTimeout` shortening (see [here](4b0c0bd924)) which might have caused the problems. Let's give it a 2nd try. I've tested all the flakes and they were passing on my machine.
@smarterclayton @apelisse ptal
- what the test is doing
- how the test is set up
- subsections of the test setup
additional output
- print time spent getting ready to run proxy attempts
- number of test cases
- multiple attempts of each test case
- how many total proxying attempts will be made
- fast path output now has numerical identity of attempt like error output
- error output has time taken and http status like fast path output
batching runs
- run groups of test cases vs starting all 34*20=680 proxy attempts at
the same time.
- don't wait between starting proxy attempts anymore.
proxy e2e changes
- disable the client side rate limiter
- use `By` construct of ginkgo for inline `STEP` logging
- move the waitGroup add outside of the loop
Automatic merge from submit-queue
Syncing image pulling backoff logic
- Syncing the backoff logic in the parallel image puller and the sequential image puller to prepare for merging the two pullers into one.
- Moving image error definitions under kubelet/images
Automatic merge from submit-queue
Change SETUP_NODE to True for node e2e docker validation test.
The continuous node e2e docker validation test is failing because:
```
W0722 00:48:52.163940 1265 image_list.go:85] Could not pre-pull image gcr.io/google_containers/netexec:1.4 exit status 1 output: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
```
This is because jenkins is not added to the docker user group.
For other images tested in node e2e, jenkins is added to the docker user group when the images are initially created https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/environment/setup_host.sh#L102.
However, in the node e2e docker validation test, we are using the GCI image, which doesn't do that.
So we should use the `SETUP_NODE` option to add the user to the docker group before running the test b6c87904f6/test/e2e_node/e2e_remote.go (L150-L159).
This is only a one-line change; could you help me review the PR? @wonderfly
Thanks a lot! :)
Automatic merge from submit-queue
test/e2e: plug time.Ticker resource leak.
This commit ensures that `logPodStartupStatus` does not leak
running `time.Ticker` instances. Upon termination of the consuming
routine, we stop the ticker.
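The pattern, as a minimal sketch (the real function's signature may differ):
```go
package main

import "time"

// logPodStartupStatus ties the ticker's lifetime to the consuming
// goroutine so it cannot leak.
func logPodStartupStatus(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop() // stop the ticker when this routine terminates
	for {
		select {
		case <-ticker.C:
			// ... log current startup status ...
		case <-stop:
			return
		}
	}
}
```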
Automatic merge from submit-queue
use regular client instead of kubectl in scheduler predicate tests when checking/setting/cleaning taints/labels
The existing implementation in scheduler predicate tests uses kubectl to check/set/clean taints/labels on nodes, which tightly couples the tests to kubectl.
This PR is to use regular client instead.
Automatic merge from submit-queue
Revert "Drop support for --gce-service-account, require activated creds"
Reverts kubernetes/kubernetes#28802
This appears to break the soak tests with "invalid grant" errors -- see the recent batch of errors in #27920.
Automatic merge from submit-queue
Allow for overriding throughput in load test
We already seem to support higher throughput than the default.
I'm going to increase the throughput in our tests:
- speed up scalability tests
- ensure that what I'm seeing locally is really the repeatable case
This PR is a short preparation for those experiments.
[Ideally, I would like kubemark-500 to finish within 30 minutes. I think this should be doable pretty soon.]
@gmarek
Automatic merge from submit-queue
Change some node e2e test to use the prepull image framework.
Fix https://github.com/kubernetes/kubernetes/issues/28868.
Node e2e test framework pre-pulls all images in [image_list.go](bc2f223f5a/test/e2e_node/image_list.go)
All node e2e tests should use images from the "image_list". If a test needs a new image, we should update the image_list to include it.
/cc @kubernetes/sig-node to remind people to use `image_list` when adding tests. :)
Automatic merge from submit-queue
add tokenreviews endpoint to implement webhook
Wires up an API resource under `apis/authentication.k8s.io/v1beta1` to expose the webhook token authentication API as an API resource. This allows one API server to use another for authentication, and uses existing policy engines for the "authoritative" API server to control access to the endpoint.
@cjcullen you wrote the initial type
Automatic merge from submit-queue
Start namespace controller in node e2e
Fix https://github.com/kubernetes/kubernetes/issues/28320.
Based on https://github.com/kubernetes/kubernetes/pull/28807, only the last 2 commits are new.
Before this PR, there was no namespace controller running in the node e2e test infrastructure. We could not enable the [`delete-namespace`](f2ddd60eb9/test/e2e/framework/test_context.go (L109)) flag in the test framework.
So after the tests run, there will be running pods left on the test node. This seems acceptable in our test infrastructure because we create a new instance each time.
However, in 1.4 we may want to provide part of the tests as a node conformance test to users, and they definitely don't want the tests to leave tons of pods on their nodes after the test run.
Currently, there is no easy way to start only the namespace controller in kube-controller-manager (confirmed with @mikedanese), so in this PR I started an "uncontainerized" one in the test infrastructure.
This PR:
* Started the namespace controller in the node e2e test infrastructure and enabled automatic namespace deletion.
* Changed the privileged test to use the framework (@yujuhong), so that all node e2e tests are using the framework and test pods will be cleaned up by the namespace controller.
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Switched watches in tests require ResourceVersion to be passed
For testing, Watches are not sufficient: they might miss the event of a Pod transitioning from one state to another if it happens before we start watching. To remedy this, I'm proposing to switch to Gets, which always read the actual state of a Pod.
@smarterclayton this fixes https://github.com/openshift/origin/issues/9192 and hopefully all `gave up waiting for pod...` flakes
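A sketch of the Get-based wait (helper name assumed; client calls per the unversioned client of this era):
```go
package framework

import (
	"time"

	"k8s.io/kubernetes/pkg/api"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/util/wait"
)

// waitForPodRunning polls with Get instead of relying on a Watch that
// may have missed a transition event fired before the watch started.
func waitForPodRunning(c *client.Client, ns, name string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.Pods(ns).Get(name)
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == api.PodRunning, nil
	})
}
```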
Automatic merge from submit-queue
Don't repeat the program name in healthCheckCommand.String()
The name is in both `Path` and `Args[0]`, so start printing args at 1.
Also refactor to avoid an extra space character in the output.
I pondered whether `healthCheckCommand.String()` should check if the slice is empty, to avoid a panic, but it didn't check for `Cmd==nil` before.
Fixes #29107
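A sketch of the resulting String() (the struct shape is assumed here; the real type lives in the kubelet code):
```go
package dockertools

import "strings"

// healthCheckCommand's shape is assumed here for illustration.
type healthCheckCommand struct {
	Path string
	Args []string
}

// String prints the program name once: Args[0] repeats Path, so args
// are joined starting at index 1, and strings.Join avoids the extra
// space character. As noted above, an empty Args slice would panic.
func (c healthCheckCommand) String() string {
	return strings.Join(append([]string{c.Path}, c.Args[1:]...), " ")
}
```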
Automatic merge from submit-queue
Change the docker validation node e2e test to use gci-canary-test
This PR changed the continuous docker validation node e2e test to use the image config file introduced in https://github.com/kubernetes/kubernetes/pull/28708. @euank
This PR also changed the gci image family from `gci-preview-test` to `gci-canary-test`. @wonderfly
Automatic merge from submit-queue
Return (bool, error) in Authorizer.Authorize()
Before this change, the Authorize() method just returned an error, regardless of whether the user was unauthorized or some other unrelated error occurred. Returning a boolean with the authorization decision and an error (which should be unrelated to the authorization) separately will make it easier to debug.
Fixes #27974
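The new contract, sketched (interface shape assumed):
```go
package authorizer

// Attributes stands in for the request-attributes type (assumed here).
type Attributes interface{}

// Authorizer separates the decision from failures: authorized reports
// whether the user may proceed, while err is reserved for errors
// unrelated to the authorization decision itself.
type Authorizer interface {
	Authorize(a Attributes) (authorized bool, err error)
}
```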
Automatic merge from submit-queue
Node E2E: Make it possible to share test between e2e and node e2e
This PR is part of the plan to improve node e2e test coverage.
* Now, to improve test coverage, we have to copy tests from e2e to node e2e.
* When adding a new test, we have to decide its destiny at the very beginning - whether it is a node e2e or e2e.
This PR makes it possible to share test between e2e and node e2e.
By leveraging the mechanism of ginkgo, as long as we can import the test package in the test suite, the corresponding `Describe` will be run to initialize the global variable `_`, and the test will be inserted into the test suite. (See https://github.com/onsi/composition-ginkgo-example)
In the future, we just need to use the framework to write the test, and put the test into `test/e2e/node`, then it will be automatically shared by the 2 test suites.
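A sketch of the mechanism (package path and spec text assumed):
```go
package node

import . "github.com/onsi/ginkgo"

// Importing this package evaluates the package-level `_`, which runs
// Describe at init time and registers the spec with whichever suite
// (e2e or node e2e) did the import.
var _ = Describe("Container probing", func() {
	It("should restart the container on a failing liveness probe", func() {
		// ... framework-based test body ...
	})
})
```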
This PR:
1) Refactored the framework to make it automatically differentiate e2e and node e2e (Mainly refactored the `PodClient` and the apiserver client initialization).
2) Created a new directory `test/e2e/node` and made it shared by e2e and node e2e.
3) Moved `container_probe.go` into `test/e2e/node` to verify the change.
@kubernetes/sig-node
Automatic merge from submit-queue
[flake fix] Wait for the podInformer to observe the pod
Fix #29065
The problem is that the rc manager hasn't observed pod1, so it creates another pod; when it scales down, pod1 might get deleted. To fix it, wait for the podInformer to observe the pod before running the rc manager.
Marked as P0 as it's fixing a P0 flake.
Automatic merge from submit-queue
Drop support for --gce-service-account, require activated creds
Now that `gcloud auth activate-service-account` is in, remove support in the test framework for default service accounts -- testing GCE/GKE now requires prior gcloud activation.
Automatic merge from submit-queue
Fix verify results in MaxPods
As we already have the "unschedulable" PodCondition, we can stop relying on Events, which should make the tests more reliable.
cc @davidopp
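A sketch of the condition-based check (helper name assumed; the condition fields are the real API):
```go
package e2e

import "k8s.io/kubernetes/pkg/api"

// podUnschedulable reads the PodScheduled condition directly instead
// of scanning Events for a scheduler failure message.
func podUnschedulable(pod *api.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == api.PodScheduled && cond.Status == api.ConditionFalse &&
			cond.Reason == "Unschedulable" {
			return true
		}
	}
	return false
}
```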
Automatic merge from submit-queue
authorize based on user.Info
Update the `authorization.Attributes` to use the `user.Info` instead of discrete getters for each piece.
@kubernetes/sig-auth
Automatic merge from submit-queue
Fix a bug in mirror pod node e2e test.
Fixed a bug in test/e2e_node/mirror_pod_test.go. The function 'checkMirrorPodDisappear' returns nil even when the pod does not disappear. It should return a non-nil error.
@Random-Liu
Automatic merge from submit-queue
[GarbageCollector] Let the RC manager set/remove ControllerRef
What's done:
* RC manager sets Controller Ref when creating new pods
* RC manager sets Controller Ref when adopting pods with matching labels but having no controller
* RC manager clears Controller Ref when pod labels change
* RC manager clears pods' Controller Ref when rc's selector changes
* RC manager stops adoption/creating/deleting pods when rc's DeletionTimestamp is set
* RC manager bumps up ObservedGeneration: The [original code](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replication/replication_controller_utils.go#L36) will do this.
* Integration tests:
* verifies that changing RC's selector or Pod's Labels triggers adoption/abandoning
* e2e tests (separated to #27151):
* verifies GC deletes the pods created by RC if DeleteOptions.OrphanDependents=false, and orphans the pods if DeleteOptions.OrphanDependents=true.
TODO:
- [x] we need to be able to select Pods that have a specific ControllerRef. Then each time we sync the RC, we will iterate through all the Pods that have a controllerRef pointing to the RC, even if the Pod's labels no longer match the RC's selector. This will prevent a Pod from being stuck with a stale controllerRef, which could be caused by the race between the abandoner (the goroutine that removes controllerRef) and the worker (the goroutine that adds controllerRef to pods).
- [ ] use controllerRef instead of calling `getPodController`. This might be carried out by the control-plane team.
- [ ] according to the controllerRef proposal (#25256): "For debugging purposes we want to add an adoptionTime annotation prefixed with kubernetes.io/ which will keep the time of last controller ownership transfer." This might be carried out by the control-plane team.
cc @lavalamp @gmarek
Automatic merge from submit-queue
[garbage collector] add e2e test
This PR also includes some changes to plumb the controller-manager's `--enable_garbage_collector` flag from an environment variable.
The e2e test will not be run by the core suite because it's marked `[Feature:GarbageCollector]`.
The corresponding jenkins job configuration PR is https://github.com/kubernetes/test-infra/pull/132.
Automatic merge from submit-queue
Support terminal resizing for exec/attach/run
```release-note
Add support for terminal resizing for exec, attach, and run. Note that for Docker, exec sessions
inherit the environment from the primary process, so if the container was created with tty=false,
that means the exec session's TERM variable will default to "dumb". Users can override this by
setting TERM=xterm (or whatever is appropriate) to get the correct "smart" terminal behavior.
```
Fixes #13585
This allows us to start building real dependencies into Makefile.
Leave old hack/* scripts in place but advise to use 'make'. There are a few
rules that call things like 'go run' or 'build/*' that I left as-is for now.
Automatic merge from submit-queue
node_e2e: configure gce images via config file
This file provides the ability to specify an image project on a per-image basis and is more extensible for future changes.
For backwards compatibility and local development convenience, the
existing flags are kept and should work.
The eventual goal is to be able to source some images, such as the CoreOS one (and possibly containervm one) from their upstream projects and do all new configuration changes via a cloud-init key added to the image config.
This PR is a first step there. A following PR will add a config key of `cloud-init` or `user-data` and migrate the CoreOS e2e to use that.
This is motivated by the fact that currently the changes needed for the CoreOS image can all be done quickly in cloud-init, and this will make it much easier to update the image and ensure that changes are applied consistently.
/cc @timstclair @vishh @yifan-gu @pwittrock
Automatic merge from submit-queue
Node E2E: Prep for continuous Docker validation node e2e test
Based on https://github.com/kubernetes/kubernetes/pull/28516, for https://github.com/kubernetes/kubernetes/issues/25215.
https://github.com/kubernetes/kubernetes/pull/26813 added support for running e2e tests on the GCI preview image and the newest docker version.
This PR added the same support to node e2e test.
The main dependencies of node e2e test are `docker`, `kubelet`, `etcd` and `apiserver`.
Currently, node e2e test builds `kubelet` and `apiserver` locally, and copies them into `/tmp` directory in VM instance. GCI also has built-in `docker`. So the only dependency missing is `etcd`.
This PR injected a simple cloud-init script when creating the instance to install `etcd` during node startup.
@andyzheng0831 for the cloud init script.
@wonderfly for the gci instance setup.
@pwittrock for the node e2e test change.
/cc @dchen1107
Automatic merge from submit-queue
Deprecate the term "Ubernetes"
Deprecate the term "Ubernetes" in favor of "Cluster Federation" and "Multi-AZ Clusters"
Automatic merge from submit-queue
Fix path for examples - storage/volume directories changed
Added /volume and /storage in a couple of spots.
Fixes #27978
Automatic merge from submit-queue
Return server's representation of pod from framework pod creation functions
Since PodInterface.Create returns the server's representation of the pod, which may differ from the api.Pod object passed to Create, we do the same from the framework's pod creation functions. This is useful if e.g. you create pods using Pod.GenerateName rather than Pod.Name, and you still want to refer to pods by name later on (e.g. for deletion).
cc @timstclair
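The pattern, sketched (helper name assumed):
```go
package framework

import (
	"k8s.io/kubernetes/pkg/api"
	client "k8s.io/kubernetes/pkg/client/unversioned"
)

// createPod returns the server's representation rather than the input
// object: when GenerateName is used, only the returned pod carries the
// final Name needed for later lookups and deletion.
func createPod(c *client.Client, ns string, pod *api.Pod) (*api.Pod, error) {
	return c.Pods(ns).Create(pod)
}
```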
The previous volume binder code was not able to cope with PVs or PVCs getting modified during the binding process. The current one should be resilient to these changes, so let's test it.
It makes the test approximately twice as long as before, from ~2 seconds to
~4-5.
Automatic merge from submit-queue
Update coreos node e2e image to a version that uses cgroupfs
Temporary fix for #28192. This PR updates coreos node e2e image to a version that uses cgroupfs.
cc @vishh @yifan-gu
Search and replace for references to moved examples
Reverted find and replace paths on auto gen docs
Reverting changes to changelog
Fix bugs in test-cmd.sh
Fixed path in examples README
ran update-all successfully
Updated verify-flags exceptions to include renamed files
Automatic merge from submit-queue
Node E2E: Disable kubenet for local node e2e test.
After https://github.com/kubernetes/kubernetes/pull/28196, we must manually set up cni and nsenter on the local node to run `make test_e2e_node`, which may not be necessary for local development.
I've tried to move the CNI downloading logic into `BeforeSuite`; however, it is still hard to figure out who should install nsenter: every developer manually? the `setup_host.sh` script? `BeforeSuite`?
This PR:
* Added a flag to disable kubenet and disabled kubenet in local test.
* Cleaned up the CNI installation logic a bit.
/cc @yujuhong @freehan
Automatic merge from submit-queue
E2E: Add UpdatePod function in e2e framework and change the test to use it.
Fix https://github.com/kubernetes/kubernetes/issues/28096.
Some e2e tests need to update a pod, but pod updates are a bit complex because of potential conflicts. #28096 happened just because the test called pod `Update` only once.
This PR moves the pod update logic into a util function `UpdatePod` in the e2e framework, and changes the tests to use it.
Marked P2 because the original issue is P0 but in fact it happens rather infrequently. :)
Automatic merge from submit-queue
Add test/test_owners.csv, for automatic assignment of test failures.
This file will be read by the munger -- see kubernetes/contrib#1264
This also includes a simple script to do minor automatic updates to the CSV.
I'd like to get `update_owners.py` into a more usable state -- right now the CSV is based directly on the Google Sheets data. It has 9 outdated tests and is missing 80 new tests.
I can randomly assign new tests to people on kubernetes-maintainers, but are there any caveats to how the assignment should work? Should they be load balanced? Should some people in the group not receive issues? Etc.
Automatic merge from submit-queue
Fix node e2e issues on selinux enabled systems
It fixes the following 3 node e2e failures:
```
[Fail] [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits [It] it should run with the expected status [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:114
[Fail] [k8s.io] Kubelet metrics api when querying /stats/summary [It] it should report resource usage through the stats api
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:158
```
```
[Fail] [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits [It] should report termination message if TerminationMessagePath is set [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:150
```
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
e2e: increase timeout when waiting for deployment pods to be deleted
Use the same timeout as the one used for waiting for the deployment
reaper to complete.
Takes a stab at https://github.com/kubernetes/kubernetes/issues/28067
@kubernetes/deployment PTAL
Automatic merge from submit-queue
Reorganize volume controllers and manager
* Move both PV and attach/detach volume controllers to `controllers/volume` (closes #26222)
* Rename `kubelet/volume` to `kubelet/volumemanager`
* Add/update OWNER files
Automatic merge from submit-queue
Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, which allows specifying the number of seconds to wait after a pod becomes "ready" (because its readiness probe passed) before counting it as available.
Automatic merge from submit-queue
Fix node conformance test
Fixes https://github.com/kubernetes/kubernetes/issues/28255, https://github.com/kubernetes/kubernetes/issues/28250, https://github.com/kubernetes/kubernetes/issues/28341.
The main reason for the flake is that the failing test expects the `PodPhase` to remain `Pending`. It did an `Eventually` check and a `Consistently` check for 5 seconds. However, the default `PodPhase` is `Pending`, so when the check passes, the `PodStatus` could still be in its default state.
After that, the test expects the container status to be `Waiting`, which may not be the case, because the default `ContainerStatuses` is empty and the pod could still be in the default state.
This PR changes the test to check `ContainerStatuses` first and then check the `PodPhase`.
Marked P1 because the test fails relatively frequently and does block some PRs.
@pwittrock
/cc @liangchenye @ncdc
Automatic merge from submit-queue
Use slices of items to clean up after tests
Fixes #27582.
We used to maintain a pointer variable for each process to kill after the
tests finish. @lavalamp suggested using a slice instead, which is a much
cleaner solution. This implements @lavalamp's suggestion and also extends
the idea to tracking directories that need to be removed after the tests finish.
This also means that we should no longer check for nil `killCmd`s inside
`func (k *killCmd) Kill() error {...}` (see #27582 and #27589). If a nil
`killCmd` makes it in there, something is bad elsewhere and we want to see
the nil pointer exception immediately.
Mentioning @timstclair and @euank wrt the original issue/PR.
Automatic merge from submit-queue
Federated Services e2e: Simplify logic and logging around verificatio…
Simplify logic and logging around verification of underlying services.
Fixes #28269.
Without this PR, service verification in 4 of our e2e tests sometimes fails.
Automatic merge from submit-queue
Remove duplicated nginx image. Use nginx-slim instead
This PR removes the image `gcr.io/google_containers/nginx:1.7.9` and uses `gcr.io/google_containers/nginx-slim:0.7`.
Besides removing the duplication, `1.7.9` is 16 months old.
Automatic merge from submit-queue
Fix federation e2e tests by correctly managing cluster clients
1. The main fix: Correct the overall BeforeEach() to create a new set of cluster clients, rather than just appending to the set created by all previous tests. This was screwing up a lot of stuff in difficult-to-diagnose ways.
2. Add lots of debug logging.
3. Be better about cleaning up after each test.
```
SUCCESS! -- 6 Passed | 0 Failed :-)
```
cc @nikhiljindal @madhusudancs @mfanjie @colhom FYI
Automatic merge from submit-queue
Add two pd tests with default grace period
Add two tests in pd.go. They are the same as the flaky test, but the pod deletion has the default grace period.
Automatic merge from submit-queue
Refactored, expanded and fixed federated-services e2e tests.
1. Moved BeforeEach() and AfterEach() to an inner scope, to prevent clashes with Framework's BeforeEach() and AfterEach(). More to come on this, as it's a major bug in our use of Ginkgo, and affects many other tests.
2. Keep track of which clusters we have created namespaces in, so that we don't try to delete namespaces out of clusters that we didn't create them in (e.g. the primary cluster, where the framework already creates and deletes the required namespace).
3. Separate tests for federated service creation and verification that underlying services are created correctly.
4. For DNS resolution tests, create backend pods where required (and delete them on cleanup).
5. For non-local DNS resolution, delete a backend pod in one cluster to test, and in the remainder of clusters on cleanup.
6. Lots of refactoring to make code re-usable across multiple tests.
7. Lots of debugging/fixing to make sure that everything the tests create is cleaned up properly afterwards and doesn't clash with the cleanups done by the e2e Framework.
Automatic merge from submit-queue
TLS bootstrap API group (alpha)
This PR only covers the new types and related client/storage code -- the vast majority of the line count is codegen. The implementation differs slightly from the current proposal document, based on discussions in the design thread (#20439). The controller logic and kubelet support mentioned in the proposal are forthcoming in separate requests.
I submit that #18762 ("Creating a new API group is really hard") is, if anything, understating it. I've tried to structure the commits to illustrate the process.
@mikedanese @erictune @smarterclayton @deads2k
```release-note-experimental
An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md.
```
Automatic merge from submit-queue
Add EndpointReconcilerConfig to master Config
Add EndpointReconcilerConfig to master Config to allow downstream integrators to customize the reconciler and reconciliation interval when starting a customized master
@kubernetes/sig-api-machinery @deads2k @smarterclayton @liggitt @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Skip multi-zone e2e tests unless provider is GCE, GKE or AWS
No need to fail the tests: if the label is not present, it means the node is not in any zone.
Related issue: #27372
Automatic merge from submit-queue
Convert service account token controller to use a work queue
Converts the service account token controller to use a work queue. This allows parallelization of token generation (useful when several namespaces or service accounts are being created simultaneously). It also lets us requeue failures to be retried sooner than the next sync period (which can be very long).
Fixes an issue seen when a namespace is created with a quota on secrets, and the token controller tries to create a token secret prior to the quota status having been initialized. In that case, the secret is rejected at admission, and the token controller wasn't retrying until the resync period.
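A sketch of the queue-based worker loop (sync function and worker count assumed):
```go
package serviceaccount

import "k8s.io/kubernetes/pkg/util/workqueue"

// runWorkers drains the queue with several goroutines so token
// generation parallelizes, and re-queues failures so they retry
// sooner than the next full sync period.
func runWorkers(queue *workqueue.Type, workers int, sync func(key interface{}) error) {
	for i := 0; i < workers; i++ {
		go func() {
			for {
				key, quit := queue.Get()
				if quit {
					return
				}
				if err := sync(key); err != nil {
					queue.Add(key) // retry instead of waiting for the resync
				}
				queue.Done(key)
			}
		}()
	}
}
```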
Automatic merge from submit-queue
Mark "RW PD, remove it, then schedule" test flaky
Mark test as flaky while it is being investigated. Tracked by https://github.com/kubernetes/kubernetes/issues/27691
Assigning to @jlowdermilk since he's on call
Automatic merge from submit-queue
e2e: Allow skipping tests for specific runtimes, skip a few tests under rkt
The main benefit of this is that it gives a developer more useful output (more signal to noise) for things that are known broken on that runtime.
cc @kubernetes/rktnetes-maintainers , @ixdy
I'll run this PR through our jenkins and make sure things look happy and compare to the e2e results for this PR.
Automatic merge from submit-queue
[Refactor] QOS to have QOS Class type for QoS classes
This PR adds a QOSClass type and initializes QOSClass constants for the three QoS classes.
It would be good to use this in all future QoS-related features.
This would be good to have for the [Pod level cgroups isolation proposal](https://github.com/kubernetes/kubernetes/pull/26751) that I am working on as well.
@vishh PTAL
Signed-off-by: Buddha Prakash <buddhap@google.com>
Automatic merge from submit-queue
e2e.framework.util.StartPods: panic if the number of replicas is zero
The number of pods to start must be non-zero.
Otherwise the function waits for pods forever if `waitForRunning` is true.
If the number of replicas is zero, panic so the mistake is heard all over the e2e realm.
Update all callers of StartPods to test for non-zero number of replicas.
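The guard, sketched with the full signature elided:
```go
package framework

// StartPods refuses a zero replica count loudly: with waitForRunning
// set, zero replicas would otherwise make the wait block forever.
func StartPods(replicas int, waitForRunning bool /* ...remaining args elided... */) {
	if replicas < 1 {
		panic("StartPods: the number of replicas must be non-zero")
	}
	// ... create the pods and optionally wait for them to run ...
}
```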
Automatic merge from submit-queue
Set grace period to 0 when deleting namespaces after the test.
Otherwise, we try to run the next test and the pods are still there.
Automatic merge from submit-queue
Proportionally scale paused and rolling deployments
Enable paused and rolling deployments to be proportionally scaled.
Also have cleanup policy work for paused deployments.
Fixes #20853, fixes #20966, fixes #20754
@bgrant0607 @janetkuo @ironcladlou @nikhiljindal
Automatic merge from submit-queue
e2e: Delete old code
These tests were added commented out over a year ago. Now they don't compile. The port forward test has a whole file devoted to replacing it (`e2e/portforward.go`) and while the exec test doesn't have a perfect replacement, it has several tests that cover for it (exec over a websocket, an e2e_node test, all the kubectl execs). If we want that test, it would be better to write it fresh anyways.
cc @ncdc
Automatic merge from submit-queue
Use gcloud for default node pool and api for other in cluster autoscaler e2e test
cc: @piosz @jszczepkowski @fgrzadkowski
Currently there is a problem with gcloud when a non-default pool is used for a cluster update, so we temporarily switch to the old ca-enable method for non-default pools until it is fixed.
Automatic merge from submit-queue
A few changes to federated-service e2e test.
Most of the changes that get the test to pass have been made already or
elsewhere. Here we restructure a bit fixing a nesting problem, extend the
timeouts, and start creating distinct backend pods that I'll delete in the
non-local test (coming shortly).
Also some extra debugging info in the DNS code. I made some upstream
changes to skydns in https://github.com/skynetservices/skydns/pull/283
For #27739
Includes a commit from @madhusudancs that I will remove once his merges.
Automatic merge from submit-queue
e2e_node: lower the log verbosity level
The current level is so high that the logs are almost unreadable.
This fixes #27593
Automatic merge from submit-queue
Fixes a node e2e test error
Fixes the following node e2e test error:
[k8s.io] Kubelet metrics api when querying /stats/summary [It] it should report resource usage through the stats api
The logs show the following error:
```
Jun 21 15:57:13 localhost journal: tee: /test-empty-dir-mnt: Is a directory
```
And the test fails with:
```
------------------------------
• Failure [310.665 seconds]
[k8s.io] Kubelet
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
metrics api
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:161
when querying /stats/summary
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:160
it should report resource usage through the stats api [It]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:159
Timed out after 300.000s.
Expected
<*errors.errorString | 0xc82026b6f0>: {
s: "expected \"volume used\" to not be zero",
}
to be nil
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:158
------------------------------
```
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
increase addon check interval
Do static pods have a crash loop back off? If so, this test would be much faster if we restarted the kubelet to clear that.
Fixes #26770
Automatic merge from submit-queue
Add integration test for binding PVs using label selectors
Adds an integration test for persistent volume claim 'MatchExpressions' label selector.
Automatic merge from submit-queue
Fix 7 broken example e2e tests
Fixes #27325, fixes #27727
7 broken example e2e tests:
- [x] Spark
* `namespace` is specified in example yaml files which conflict with e2e test namespaces, fixed by removing the namespace in yaml (the yaml files of [spark example](https://github.com/kubernetes/kubernetes/tree/master/examples/spark) doesn't need the namespace specified since it's specified in its context) -- cc @k82 who added namespace to Spark example in #23807
* wait for pods to exist before determining if it's running
- [x] Hazelcast
* wait for pods to exist before determining if it's running
- [x] Redis
* image `kubernetes/redis:v2` is not found, changed to `kubernetes/redis:v1` instead
* wait for pods to exist before determining if it's running
- [x] Celery-RabbitMQ
* remove 1 redundant call to `forEachPod`
* wait for pods to exist before determining if it's running
- [x] Cassandra
* fix `kubectl exec` on incorrect pod name
* fix getting endpoint ip addresses before creating pods
* wait for pods to exist before determining if it's running
- [x] Storm
* wait for pods to exist before determining if it's running
- [x] RethinkDB
* wait for pods to exist before determining if it's running
Automatic merge from submit-queue
Reapply ScheduledJob tests (2ab885a53a)
Re-applied the ScheduledJob tests (#25737) which were reverted due to an integration test error in #27184.
The problem was in `TestBatchGroupBackwardCompatibility` which is testing backwards compatibility for storing jobs (`extensions/v1beta1` vs `batch/v1`), which is not needed for `batch/v2alpha1`. I've added a skip to aforementioned test for that group. See `test/integration/master_test.go` for the actual fix.
@caesarxuchao @mikedanese ptal
@piosz @jszczepkowski @erictune fyi
Automatic merge from submit-queue
GCE provider: Limit Filter calls to regexps rather than insane blobs
Filters can't exceed 4k, and GET requests against the GCE API are also limited, so these break down in different ways at different cluster counts. Fix it by introducing an advisory `node-instance-prefix` configuration in the GCE provider that can hint the `EnsureLoadBalancer`/`UpdateLoadBalancer` code (and the firewall creation/update code). If it's not there, or wrong (a hostname that's registered violates it), just ignore it and grab the whole project.
Fixes #27731
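A sketch of how the advisory prefix narrows the listing (names assumed):
```go
package gce

import "fmt"

// instanceFilter turns the advisory node-instance-prefix into a GCE
// list filter regexp; with no hint (or a violated one) the caller
// falls back to grabbing the whole project.
func instanceFilter(nodeInstancePrefix string) string {
	if nodeInstancePrefix == "" {
		return ""
	}
	return fmt.Sprintf("name eq %s.*", nodeInstancePrefix)
}
```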
Automatic merge from submit-queue
Migrate most of remaining tests from cmd/integration to test/integration to use framework
Ref #25940
Built on top of https://github.com/kubernetes/kubernetes/pull/27182 - only the last commit is unique
Automatic merge from submit-queue
Add possibility to run integration tests in parallel
- add an env. variable with the etcd URL to integration tests
- update documentation with an example of how to use it to find flakes
Automatic merge from submit-queue
Add integration test for binding PVs using label selectors
Adds an integration test for persistent volume claim label selector.
Many integration tests delete all keys in etcd as part of their cleanup.
To run these tests in parallel we must run several etcd daemons, each on a different port, and pass the etcd URL to the test suite.
Automatic merge from submit-queue
Node E2E: add termination message test
Based on #23658.
This PR:
1) Cleans up the `ConformanceContainer` a bit
2) Adds a termination message test
This test reproduces #23639; without #23658, it could not pass.
@liangchenye @kubernetes/sig-node
Automatic merge from submit-queue
add unit and integration tests for rbac authorizer
This PR adds lots of tests for the RBAC authorizer.
The plan over the next couple days is to add a lot more test cases.
Updates #23396
cc @erictune
Automatic merge from submit-queue
WaitForRunningReady also waits for PodsSuccess
Ref. #27095 - fixes the test, doesn't fix the problem.
cc @yujuhong @fejta
Automatic merge from submit-queue
Add integration test for provisioning/deleting many PVs.
The test is configurable via KUBE_INTEGRATION_PV_OBJECTS for load tests; 100 objects are created by default.
@kubernetes/sig-storage
Automatic merge from submit-queue
Filter seccomp profile path from malicious .. and /
Without this patch, with `localhost/<some-relative-path>` as the seccomp profile, one can load any file on the host, e.g. `localhost/../../../../dev/mem`, which is not healthy for the kubelet.
/cc @jfrazelle
Unit tests depend on https://github.com/kubernetes/kubernetes/pull/26710.
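A sketch of the sanitization (not the exact kubelet code): resolve the localhost profile under the seccomp root and reject any path that escapes it.
```go
package kubelet

import (
	"fmt"
	"path/filepath"
	"strings"
)

// seccompProfilePath resolves a localhost/<path> profile under root;
// filepath.Join cleans ".." segments, so an escape like
// localhost/../../../../dev/mem resolves outside root and is rejected.
func seccompProfilePath(root, profile string) (string, error) {
	name := strings.TrimPrefix(profile, "localhost/")
	resolved := filepath.Join(root, name)
	if !strings.HasPrefix(resolved, root+string(filepath.Separator)) {
		return "", fmt.Errorf("seccomp profile %q escapes root %q", profile, root)
	}
	return resolved, nil
}
```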
Automatic merge from submit-queue
Revert revert of downward api node defaults
Reverts the revert of https://github.com/kubernetes/kubernetes/pull/27439. Fixes #27062.
@dchen1107 - who at Google can help debug why this caused issues with GKE infrastructure but not GCE merge queue?
/cc @wojtek-t @piosz @fgrzadkowski @eparis @pmorie
Automatic merge from submit-queue
Cleanups following #27587
- Add back the negative assertions, but mark them [Slow].
- Use the current DNS TTL of 180 sec as our timeout for all DNS tests.
- Assorted cleanups and refactoring.
Automatic merge from submit-queue
Extend ingress e2e
Splits the test into a cross-platform conformance list, and platform-specific bits that exercise features through annotations. Also exercises the features in https://github.com/kubernetes/contrib/pull/1133. Assigning to Girish, simply because I assigned the other PR to Minhan.
- Dropped the regex test and just test for nslookup exiting 0.
- Moved more setup into BeforeEach and used nested Context for non-local
case.
- Poll inside the container using a bash loop.
- Aim for less console noise unless something goes wrong.
- Commented out the tests trying to verify that a DNS name is absent.
Automatic merge from submit-queue
in each pd test, create and delete the pod for every iteration to give a new pod name for exec
Fixes #26141
based on chat with @ncdc
The following is a snapshot of the log. Each iteration now has a new Pod name
```text
[It] should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Flaky]
/srv/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:277
STEP: creating PD1
Jun 10 15:55:45.878: INFO: Successfully created a new PD: "rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c".
STEP: creating PD2
Jun 10 15:55:49.794: INFO: Successfully created a new PD: "rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c".
Jun 10 15:55:49.794: INFO: PD Read/Writer Iteration #0
STEP: submitting host0Pod to kubernetes
W0610 15:55:49.860308 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: writing a file in the container
Jun 10 15:56:09.792: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '988876932586416926' > '/testpd1/tracker0''
Jun 10 15:56:12.003: INFO: Wrote value: "988876932586416926" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:12.003: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '8414937992264649637' > '/testpd2/tracker0''
Jun 10 15:56:13.170: INFO: Wrote value: "8414937992264649637" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:13.170: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:14.325: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:14.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:15.590: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: deleting host0Pod
Jun 10 15:56:15.841: INFO: PD Read/Writer Iteration #1
STEP: submitting host0Pod to kubernetes
W0610 15:56:15.905485 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:16.832: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:18.132: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:18.132: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:19.354: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:19.354: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7639503234625274799' > '/testpd1/tracker1''
Jun 10 15:56:20.526: INFO: Wrote value: "7639503234625274799" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:20.526: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7400445987108171911' > '/testpd2/tracker1''
Jun 10 15:56:21.694: INFO: Wrote value: "7400445987108171911" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:21.694: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:22.904: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:22.905: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:24.080: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:24.081: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:25.290: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:25.290: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:26.491: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: deleting host0Pod
Jun 10 15:56:26.756: INFO: PD Read/Writer Iteration #2
STEP: submitting host0Pod to kubernetes
W0610 15:56:26.821828 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:27.898: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:29.096: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:29.096: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:30.325: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:30.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:31.528: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:31.529: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:32.972: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:32.972: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '1846555975530999997' > '/testpd1/tracker2''
Jun 10 15:56:34.157: INFO: Wrote value: "1846555975530999997" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:34.157: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '2775947264799611726' > '/testpd2/tracker2''
Jun 10 15:56:35.661: INFO: Wrote value: "2775947264799611726" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:35.662: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:36.868: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:36.868: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:38.062: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:38.062: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:39.221: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:39.221: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:40.397: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:40.397: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker2'
Jun 10 15:56:41.584: INFO: Read file "/testpd1/tracker2" with content: 1846555975530999997
STEP: reading a file in the container
Jun 10 15:56:41.585: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker2'
Jun 10 15:56:42.800: INFO: Read file "/testpd2/tracker2" with content: 2775947264799611726
STEP: deleting host0Pod
```
@saad-ali
Automatic merge from submit-queue
Dumping logs of federation pods (federation-apiserver, federation-controller-manager) on e2e test failure
Ref https://github.com/kubernetes/kubernetes/issues/26762
This should help with debugging failures.
Right now there is no way to access those logs.
@kubernetes/sig-cluster-federation @colhom
Automatic merge from submit-queue
Call NewFramework constructor instead of hand creating framework.
Fixes https://github.com/kubernetes/kubernetes/issues/27486, which probably broke because we defined a new clientConfigGetter for node e2es while this test was still hand-creating the framework.
Automatic merge from submit-queue
Kubelet Volume Attach/Detach/Mount/Unmount Redesign
This PR redesigns the Volume Attach/Detach/Mount/Unmount in Kubelet as proposed in https://github.com/kubernetes/kubernetes/issues/21931
```release-note
A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled).
This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the `syncPod()` path so volume clean up never blocks the `syncPod` loop.
```
Automatic merge from submit-queue
federation: choosing a default federation name in test instead of failing
The tests are failing right now:
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-federation/
```
[k8s.io] Service [Feature:Federation] should be able to discover a non-local federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:35.091: FEDERATION_NAME environment variable must be set
[k8s.io] Service [Feature:Federation] should be able to discover a federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:40.802: FEDERATION_NAME environment variable must be set
```
This is to fix them.
cc @kubernetes/sig-cluster-federation @mml
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if attach/detach controller
is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path so volume clean up never blocks the
`syncPod` loop.
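To make the new design concrete, here is a minimal, hypothetical Go sketch of the desired-state/actual-state reconciliation pattern the volume manager is built around; the type and method names below are invented for illustration and are not the kubelet's actual code.
```go
package volumemanager

// volumeReconciler sketches the pattern: the manager tracks which volumes
// should be mounted (desired state of the world) and which are mounted
// (actual state), and a dedicated loop converges the two, so syncPod()
// never blocks on mount/unmount work.
type volumeReconciler struct {
	desired map[string]bool // unique volume name -> some pod requires this volume
	actual  map[string]bool // unique volume name -> volume is currently mounted
}

// reconcile runs one pass: mount anything missing, unmount anything orphaned.
func (r *volumeReconciler) reconcile() {
	for name := range r.desired {
		if !r.actual[name] {
			r.mount(name) // attach+mount happens here, outside the pod sync loop
		}
	}
	for name := range r.actual {
		if !r.desired[name] {
			r.unmount(name) // orphaned-volume cleanup, also off the syncPod() path
		}
	}
}

func (r *volumeReconciler) mount(name string)   { r.actual[name] = true }
func (r *volumeReconciler) unmount(name string) { delete(r.actual, name) }
```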
Automatic merge from submit-queue
Make timeout for starting system pods configurable
Context: in 2000-node clusters (if only one node is big enough to fit heapster, which is our testing configuration), heapster won't be scheduled until that node has a route. However, creating routes is pretty expensive and can currently take as long as 2 hours.
@zmerlynn @gmarek
Automatic merge from submit-queue
Add image pulling node e2e
Fixes #27007.
Based on #27309, will rebase after #27309 gets merged.
This PR added all tests mentioned in #27007:
* Pull an image from invalid registry;
* Pull an invalid image from gcr;
* Pull an image from gcr;
* Pull an image from docker hub;
* Pull an image that needs auth, with and without secrets.
For the imagePullSecrets test, I created a new gcloud project "authenticated-image-pulling", and the service account in the code only has "Storage Object Viewer" permission.
/cc @pwittrock @vishh
Automatic merge from submit-queue
Add description to created node images
Make it a little easier to see who to contact about important node e2e images.
The number of pods to start must be non-zero.
Otherwise the function waits for pods forever if waitForRunning is true.
If the number of replicas is zero, panic so the mistake is heard all over the e2e realm.
Update all callers of StartPods to test for non-zero number of replicas.
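A minimal sketch of that guard; the signature is simplified here, since the real `StartPods` helper takes more parameters:
```go
package e2esketch

// StartPods panics on a zero replica count: with waitForRunning set, zero
// replicas would otherwise make the wait below spin forever.
func StartPods(replicas int, waitForRunning bool) {
	if replicas < 1 {
		panic("StartPods: number of replicas must be non-zero")
	}
	// ... create the pods ...
	if waitForRunning {
		// ... block until every created pod reports Running ...
	}
}
```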
Automatic merge from submit-queue
Updating federation up scripts to work in non e2e setup
Ref: https://github.com/kubernetes/kubernetes.github.io/pull/656
Updating the federation up scripts so that they work as per steps in https://github.com/kubernetes/kubernetes.github.io/pull/656.
Changes are:
* Updating the default namespace to be "federation" instead of "federation-e2e"
* Updating the kubeconfig context to be named "federation-cluster" instead of "federated-context"
* Fixing federation-up so that FEDERATION_IMAGE_TAG is set even when federation-up is run without running `e2e.go --up`. e2e-up.sh sets it here: 6a388d4a0d/hack/e2e-internal/e2e-up.sh (L44).
* Adding a "missingkey=zero" option to template parser. Without this, the parser adds `"<no value>"` at the place of an env var that is not set. With this change, it instead replaces it with the corresponding zero value (for ex "" for strings). This is required for the FEDERATION_DNS_PROVIDER_CONFIG env var.
cc @kubernetes/sig-cluster-federation @colhom @mml
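For reference, Go's `text/template` exposes this behavior via `Option("missingkey=zero")`; a small standalone sketch (the template text and data map below are made up for illustration):
```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// With "missingkey=zero", a key absent from the data map renders as the
	// zero value of the map's element type ("" for strings) instead of the
	// literal string "<no value>".
	tmpl := template.Must(template.New("cfg").
		Option("missingkey=zero").
		Parse("dns-provider-config={{.FEDERATION_DNS_PROVIDER_CONFIG}}\n"))

	env := map[string]string{} // the env var is intentionally left unset
	if err := tmpl.Execute(os.Stdout, env); err != nil {
		panic(err)
	}
	// Prints: dns-provider-config=
}
```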
Automatic merge from submit-queue
Implement first set of federated service e2e tests.
These tests are untested and there is no guarantee that they work. The ongoing auth problems are blocking these e2es from being tested, and upon @quinton-hoole's request I am submitting them now.
Only the last commit here needs review.
Depends on #26953
cc @nikhiljindal @colhom @mfanjie @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Fix node e2e coreos kubelet cgroup detection
Fixes #26979, #26431
The root issue, as best I can tell, is that cgroup detection does not work when the kubelet is started under an ssh session and the systemd `*Accounting` variables are set. I added additional logging and noted some differences in the cgroup slice names between those cadvisor returns and those the kubelet detects for itself.
This difference does not occur if the kubelet is properly running under a systemd unit, which is also the more common and sane environment.
See also discussion in #26903
cc @derekwaynecarr @vishh @pwittrock
Note that these tests are untested and there is no guarantee that they work.
The ongoing auth problems are blocking these e2es from being tested, and upon
@quinton-hoole's request I am submitting them now.
Automatic merge from submit-queue
Add pending pod check in cluster autoscaler e2e tests
The tests should wait until all pods are running before declaring a success and resizing the MIG (managed instance group).
cc: @fgrzadkowski @piosz @jszczepkowski
Automatic merge from submit-queue
volume integration: wait for PVs before creating PVCs
The test should wait until all volumes are processed by the volume controller (i.e. present in the controller cache) before creating a PVC.
Without that, the "best" matching PV might not be in the cache yet and the controller might bind the PVC to a suboptimal one.
This fixes integration test flake "Bind mismatch! Expected pvc-2 capacity 50000000000 but got pvc-2 capacity 52000000000".
Fixes #27179 (together with #26894)
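A hypothetical sketch of such a wait, using the modern `k8s.io/apimachinery` poll helper; `countCachedPVs` is an invented stand-in for however the integration test reads the controller cache:
```go
package volumetest

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPVsInCache polls until the volume controller's cache holds the
// expected number of PVs, so a PVC created afterwards sees every candidate
// PV and binds to the best match instead of a suboptimal one.
func waitForPVsInCache(expected int, countCachedPVs func() int) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		return countCachedPVs() >= expected, nil
	})
}
```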
Automatic merge from submit-queue
Fix integration pv flakes
There are two fixes in this PR:
- run tests in separate functions and use objects with different names; otherwise events from the beginning of the function are caught later when we watch for events on a different PV/PVC
- don't set PV.Spec.ClaimRef.UID on pre-bound PVs. PVs with the UID set are considered bound, and they are deleted/recycled when the corresponding PVC does not exist yet (see the sketch below).
Fixes #26730 and probably also ~~#26894~~ #26256
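A minimal sketch of the second fix, written against the modern core/v1 types with invented object names: pre-bind by pointing ClaimRef at the claim's namespace/name while leaving the UID empty.
```go
package volumetest

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newPreBoundPV returns a PV reserved for a specific claim. The ClaimRef
// names the claim, but the UID stays empty: a non-empty UID marks the PV
// as already bound, and the controller would delete/recycle it while the
// matching PVC does not exist yet.
func newPreBoundPV() *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-prebound"},
		Spec: v1.PersistentVolumeSpec{
			ClaimRef: &v1.ObjectReference{
				Namespace: "default",
				Name:      "pvc-1",
				// UID deliberately omitted
			},
		},
	}
}
```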
The test should wait until all volumes are processed by the volume controller (i.e.
present in the controller cache) before creating a PVC.
Without that, the "best" matching PV might not be in the cache yet and the
controller might bind the PVC to a suboptimal one.
This fixes integration test flake "Bind mismatch! Expected pvc-2 capacity
50000000000 but got pvc-2 capacity 52000000000".
Automatic merge from submit-queue
e2e: actually check error when we fail to GET the scheduled pod
and GET the pod before we try to gracefully delete it.
and add a debug log.
ref #26224
Automatic merge from submit-queue
Considering all nodes for the scheduler cache to allow lookups
Fixes the actual issue that led me to create https://github.com/kubernetes/kubernetes/issues/22554
Currently the nodes in the cache provided to the predicates exclude the unschedulable nodes, using field-level filtering of the watch results. This causes the above issue, because the `ServiceAffinity` predicate uses the cached node list to look up the node metadata for a peer pod (another pod belonging to the same service). Since this peer pod could currently be hosted on an unschedulable node, the lookup can fail, leaving the pod unscheduled.
As part of the fix, we now include all nodes in the watch results and exclude the unschedulable nodes using `NodeCondition` (see the sketch below).
@derekwaynecarr PTAL
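A minimal sketch of the shape of the fix (not the scheduler's actual code; it filters on `Spec.Unschedulable` here for brevity): keep every node in the cache so peer-pod lookups always succeed, and drop unschedulable nodes only when selecting scheduling candidates.
```go
package schedulersketch

import v1 "k8s.io/api/core/v1"

// schedulableNodes narrows a fully populated node cache down to scheduling
// candidates. Unschedulable nodes are skipped at candidate-selection time
// rather than being excluded from the cache itself, so predicates such as
// ServiceAffinity can still look up metadata for nodes hosting peer pods.
func schedulableNodes(all []*v1.Node) []*v1.Node {
	out := make([]*v1.Node, 0, len(all))
	for _, n := range all {
		if n.Spec.Unschedulable {
			continue // not a candidate, but still present in the cache
		}
		out = append(out, n)
	}
	return out
}
```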
Automatic merge from submit-queue
Port the downward api test to the node e2e suite
Also extend the framework to allow a custom client config loading function, so
that the node e2e suite can reuse the same framework across tests.
This fixes #26609
/cc @timstclair @pwittrock
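A hypothetical sketch of that extension point; the field and type names are invented, and the modern `client-go` rest package stands in for the config type:
```go
package framework

import "k8s.io/client-go/rest"

// ClientConfigGetter lets a suite supply its own way of building the REST
// config, so the node e2e suite can plug in a custom loader and reuse the
// same framework across tests.
type ClientConfigGetter func() (*rest.Config, error)

type Framework struct {
	// If nil, the framework falls back to its default config loading.
	ClientConfigGetter ClientConfigGetter
}

func (f *Framework) loadConfig(defaultLoad ClientConfigGetter) (*rest.Config, error) {
	if f.ClientConfigGetter != nil {
		return f.ClientConfigGetter()
	}
	return defaultLoad()
}
```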
Automatic merge from submit-queue
Add e2e test for node problem detector
Based on https://github.com/kubernetes/node-problem-detector/pull/12.
This PR adds an e2e test for the node problem detector. It starts a node problem detector with a test configuration and tests its functionality.
Questions:
* Should this be a node e2e test or an e2e test?
* Should we mark this as Serial? The test should not affect other tests, but it will add a `TestCondition` to the node conditions and remove it in cleanup.
@dchen1107
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Listing pods only once when getting pods for RS in deployment
Fixes#26834
1. Avoid ranging over RSes and calling `List` for each RS's pods. Instead, `List` the deployment's pods once, and then filter the pods of each RS (see the sketch below).
2. Avoid using the clientset to `List` pods in the deployment controller; use the podStore instead. (TODO in some functions, because the unit tests don't have a podStore.)
@kubernetes/deployment
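A sketch of point 1 above, written against the modern apps/v1 types with an invented helper name: one `List` for the whole deployment, then per-RS filtering by each ReplicaSet's label selector.
```go
package deploymentsketch

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// groupPodsByRS buckets pods, already fetched with a single List over the
// deployment, per ReplicaSet by matching each RS's own selector, instead of
// issuing one List call per ReplicaSet.
func groupPodsByRS(pods []*v1.Pod, rss []*appsv1.ReplicaSet) (map[string][]*v1.Pod, error) {
	out := make(map[string][]*v1.Pod, len(rss))
	for _, rs := range rss {
		sel, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
		if err != nil {
			return nil, err
		}
		for _, p := range pods {
			if sel.Matches(labels.Set(p.Labels)) {
				out[rs.Name] = append(out[rs.Name], p)
			}
		}
	}
	return out, nil
}
```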