The GC needs to build clients based only on Resource or Kind. Hoist the
restmapper out of the controller and the clientpool, support a new
ClientForGroupVersionKind and ClientForGroupVersionResource, and use the
appropriate one in both places.
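Roughly, the resulting pool surface looks like this (a sketch only; exact signatures in the dynamic client package may differ):

```go
import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// Sketch of the pool surface described above; the shared RESTMapper lives
// behind it instead of in the controller or the clientpool.
type ClientPool interface {
	// Map the kind to a resource via the RESTMapper, then build a client.
	ClientForGroupVersionKind(kind schema.GroupVersionKind) (dynamic.Interface, error)
	// Build a client directly from an already-known resource.
	ClientForGroupVersionResource(resource schema.GroupVersionResource) (dynamic.Interface, error)
}
```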
Automatic merge from submit-queue
Allow using GetSigner with the vagrant provider
In order to run tests that require ssh access to a node on vagrant,
we need to provide a path to the private ssh key.
This is now possible using the VAGRANT_SSH_KEY environment variable.
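A minimal sketch of how the vagrant path might consume the variable (hypothetical function, not the exact provider code):

```go
import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// getVagrantSigner reads the private key path from VAGRANT_SSH_KEY and
// turns it into an ssh.Signer for GetSigner's vagrant branch.
func getVagrantSigner() (ssh.Signer, error) {
	keyFile := os.Getenv("VAGRANT_SSH_KEY")
	if keyFile == "" {
		return nil, fmt.Errorf("VAGRANT_SSH_KEY is not set")
	}
	buf, err := os.ReadFile(keyFile)
	if err != nil {
		return nil, err
	}
	return ssh.ParsePrivateKey(buf)
}
```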
Automatic merge from submit-queue
Add test for --quiet flag for kubectl run
This adds a test for the changes introduced in #30247 and #28801.
Ref #28695
Automatic merge from submit-queue
Provide an e2e skip helper checking for available resource
@janetkuo @dims this is the promised util function. Unfortunately, I just learned that the dynamic client suffers from the problem I fixed in the manually written one (https://github.com/kubernetes/kubernetes/pull/29187); I need to look into the dynamic client in that case :/
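The helper could look roughly like this (hypothetical name and shape, using the discovery client rather than the dynamic one for the reason above):

```go
import (
	"fmt"

	"github.com/onsi/ginkgo"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
)

// skipUnlessResourceAvailable is a hypothetical helper: consult discovery
// and skip the test unless the server advertises the given resource.
func skipUnlessResourceAvailable(d discovery.DiscoveryInterface, gvr schema.GroupVersionResource) {
	list, err := d.ServerResourcesForGroupVersion(gvr.GroupVersion().String())
	if err != nil {
		ginkgo.Skip(fmt.Sprintf("could not discover %v: %v", gvr.GroupVersion(), err))
	}
	for _, r := range list.APIResources {
		if r.Name == gvr.Resource {
			return // resource is served; run the test
		}
	}
	ginkgo.Skip(fmt.Sprintf("resource %q not available", gvr.Resource))
}
```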
Automatic merge from submit-queue
update taints e2e, restrict taint operations by key and effect
Since taints are now unique by (key, effect) on a node, this PR restricts the existing taint add/remove/update operations in the taints e2e.
Also fixes https://github.com/kubernetes/kubernetes/issues/31066#issuecomment-242870101
Related prior Issue/PR #29362 and #30590
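The uniqueness rule the operations are restricted by, as a sketch (assuming the taint type with Key/Value/Effect fields):

```go
import v1 "k8s.io/api/core/v1"

// taintExists implements the "unique by key and effect" rule: value is
// deliberately ignored when checking for a collision.
func taintExists(taints []v1.Taint, t v1.Taint) bool {
	for _, existing := range taints {
		if existing.Key == t.Key && existing.Effect == t.Effect {
			return true
		}
	}
	return false
}
```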
Automatic merge from submit-queue
Log pressure condition, memory usage, events in memory eviction test
I want to log this to help us debug some of the latest memory eviction test flakes, where we are seeing burstable "fail" before the besteffort. I saw (in the logs) attempts by the eviction manager to evict besteffort a while before burstable phase changed to "Failed", but the besteffort's phase appeared to remain "Running". I want to see the pressure condition interleaved with the pod phases to get a sense of the eviction manager's knowledge vs. pod phase.
Automatic merge from submit-queue
Return detailed error message for better debugging.
Try to provide a more detailed error message for debugging when flake #31561 happens again.
@pwittrock
Automatic merge from submit-queue
e2e: log wget output on CheckConnectivityToHost error
Log output might help to diagnose e2e flakes, whether they are caused by dns issues or connection timeouts.
Might help with flake https://github.com/kubernetes/kubernetes/issues/28188.
Automatic merge from submit-queue
add retries for add/update/remove taints on node in taints e2e
fixes taint update conflicts in the taints e2e by adding retries for adding/updating/removing taints on a node.
ref #27655 and #31066
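The usual client-go pattern for such retries looks roughly like this (a sketch with current types, not the exact test code):

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// addTaintWithRetry re-reads the node and re-applies the change on every
// conflict, so a concurrent node update doesn't fail the test.
func addTaintWithRetry(c kubernetes.Interface, nodeName string, taint v1.Taint) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		node.Spec.Taints = append(node.Spec.Taints, taint) // or remove/update
		_, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
		return err
	})
}
```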
Automatic merge from submit-queue
Continue on #30774: Change podNamespacer API
continue on #30774, credit to @wojtek-t, Ref #30759
I just fixed a test and converted IsActivePod to operate on *Pod.
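For reference, a sketch of the converted helper; "active" here means the pod has not terminated (the real predicate may check more, e.g. deletion):

```go
import v1 "k8s.io/api/core/v1"

// IsActivePod, sketched against *v1.Pod instead of a value type.
func IsActivePod(p *v1.Pod) bool {
	return p.Status.Phase != v1.PodSucceeded && p.Status.Phase != v1.PodFailed
}
```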
Automatic merge from submit-queue
Let load and density e2e tests use GC if it's on
I've run the 100 and 500 nodes tests and they both pass.
The test-infra half of the PR is https://github.com/kubernetes/test-infra/pull/369
cc @lavalamp
Automatic merge from submit-queue
federation: Adding secret API
Adding secret API to federation-apiserver and updating the federation client to include secrets
Automatic merge from submit-queue
Fix deployment e2e test: waitDeploymentStatus should error when entering an invalid state
Follow up #28162
1. We should check that max unavailable and max surge aren't violated at all times in e2e tests (this isn't checked in the deployment scaled rollout test yet; we should wait for the deployment to become valid and then keep doing the check until the rollout finishes; see the sketch below)
2. Fix some minor bugs in e2e tests
@kubernetes/deployment
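The invariant in point 1 boils down to two bounds that must hold at every observation during a rollout (sketch with hypothetical names):

```go
import "fmt"

// checkRolloutBounds is a sketch with hypothetical names: during a rollout,
// total pods may exceed the desired count by at most maxSurge, and available
// pods may fall below it by at most maxUnavailable.
func checkRolloutBounds(total, available, desired, maxSurge, maxUnavailable int32) error {
	if total > desired+maxSurge {
		return fmt.Errorf("total pods %d exceed desired+maxSurge (%d)", total, desired+maxSurge)
	}
	if available < desired-maxUnavailable {
		return fmt.Errorf("available pods %d below desired-maxUnavailable (%d)", available, desired-maxUnavailable)
	}
	return nil
}
```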
bindata and yaml, go-bindata automation
bindata utils for generating, go generate
match server version
gitignore for dirty, ca, rbase, KUBE_ROOT, build fix
(rebased Jul 25, 29)
Automatic merge from submit-queue
Rework pod waiting mechanism in e2e tests to accept a pod and be watch-based
This PR re-applies #28212, which was reverted in #29223. The only difference is that the initial PR also shortened `PodStartTimeout` (see [here](4b0c0bd924)), which might have caused the problems. Let's give it a second try. I've tested all the flakes and they were passing on my machine.
@smarterclayton @apelisse ptal
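For context, the watch-based waiting style looks roughly like this (sketch; the framework's actual helper names differ):

```go
import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/watch"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitPodRunning is a sketch of the watch-based style: block on events from
// an open watch (e.g. c.CoreV1().Pods(ns).Watch(...)) instead of polling.
func waitPodRunning(w watch.Interface, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	_, err := watchtools.UntilWithoutRetry(ctx, w, func(e watch.Event) (bool, error) {
		pod, ok := e.Object.(*v1.Pod)
		if !ok {
			return false, fmt.Errorf("unexpected object type %T", e.Object)
		}
		return pod.Status.Phase == v1.PodRunning, nil
	})
	return err
}
```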
Automatic merge from submit-queue
Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, which allows specifying the number of seconds to wait after a pod becomes "ready" (i.e. its readiness probe passed) before treating it as available.
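The semantics, roughly: a pod only counts once its Ready condition has been true for at least MinReadySeconds (sketch with a hypothetical helper name):

```go
import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// podAvailable is a sketch: the pod must be Ready, and must have been Ready
// for at least minReadySeconds before "now".
func podAvailable(pod *v1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
			return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}
```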
Automatic merge from submit-queue
e2e: Allow skipping tests for specific runtimes, skip a few tests under rkt
The main benefit of this is that it gives a developer more useful output (a better signal-to-noise ratio) for things that are known to be broken on that runtime.
cc @kubernetes/rktnetes-maintainers , @ixdy
I'll run this PR through our jenkins and make sure things look happy and compare to the e2e results for this PR.
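Shape-wise, the skip helper amounts to something like this (sketch; `TestContext` and `Skipf` stand in for the framework's config and skip plumbing):

```go
// Sketch of the skip helper: compare the suite's configured runtime
// against the given names and skip the current test on a match.
func SkipIfContainerRuntimeIs(runtimes ...string) {
	for _, r := range runtimes {
		if r == TestContext.ContainerRuntime {
			Skipf("not supported under container runtime %q", r)
		}
	}
}
```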
Automatic merge from submit-queue
e2e.framework.util.StartPods: panic if the number of replicas is zero
The number of pods to start must be non-zero.
Otherwise the function waits for pods forever if ``waitForRunning`` is true.
If the number of replicas is zero, panic so the mistake is heard all over the e2e realm.
Update all callers of StartPods to test for non-zero number of replicas.
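The guard itself is tiny (sketch of the check at the top of `StartPods`):

```go
// Fail loudly instead of hanging: with zero replicas there will never be a
// pod carrying the label the function waits for.
if replicas == 0 {
	panic(fmt.Sprintf("StartPods: %d replicas requested; must be non-zero", replicas))
}
```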
Automatic merge from submit-queue
WaitForRunningReady also waits for PodsSuccess
Ref. #27095 - fixes the test, doesn't fix the problem.
cc @yujuhong @fejta
Automatic merge from submit-queue
Port the downward api test to the node e2e suite
Also extend the framework to allow a custom client config loading function, so
that the node e2e suite can reuse the same framework across tests.
This fixes #26609
/cc @timstclair @pwittrock
Now that GCE routes take an extremely long time to come up and there's
a variance in "Ready" and "Schedulable", start cherry-picking tests
where we really want to have all nodes routable/schedulable for
testing. Adding logging. This will increase test times on large
clusters but should have 0 impact on normal testing.
Automatic merge from submit-queue
Fix some gce-only tests to run on gke as well
Enable "Services should work after restarting apiserver [Disruptive]" and DaemonRestart tests, except the 2 that require master ssh access.
Move restart/upgrade related test helpers into their own file in framework package.
Automatic merge from submit-queue
Use pause image depending on the server's platform when testing
Removed all pause image constant strings; the pause image is now chosen by arch. Part of the effort of making e2e arch-agnostic.
The pause image name and version now live in only two places, and it's documented that both must be bumped together.
Also removed "amd64" constants in the code. Such constants should be replaced by `runtime.GOARCH` or by looking up the server platform
Fixes: #22876 and #15140
Makes it easier for: #25730
Related: #17981
This is for `v1.3`
@ixdy @thockin @vishh @kubernetes/sig-testing @andyzheng0831 @pensu
Many tests expect all kube-system pods to be running and ready. The newly
added image prepull add-on pod can be in the "succeeded" state. This commit fixes
the tests to allow kube-system pods to be succeeded.
Automatic merge from submit-queue
Prepull images in e2e
Quick and dirty image puller because the SQ stalled multiple times just *today* on image pull flake (https://github.com/kubernetes/kubernetes/issues/25277).
@kubernetes/sig-node @kubernetes/sig-testing wdyt?
I removed the netexec and goproxy pods from the proxy exec test. Instead
it now runs kubectl locally and the proxy is running in-process. Since
Go won't proxy for localhost requests, this test cannot pass if the API
server is local. However it was already disabled for local clusters.
Automatic merge from submit-queue
e2e/framework/util.StartPods: don't wait for pods that are not created
When running ``[k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]``, pods can be created in a way that requires additional pods to be created to fully saturate a node's CPU capacity in the cluster. The additional pods are created by calling ``framework.StartPods``. The function creates pods with a given label and waits for them (if ``waitForRunning`` is ``true``). This is fine as long as the number of pods to be created is non-zero. If there are zero pods to be created and ``waitForRunning`` is ``true``, the function waits forever, because no pod with the requested label will ever show up. The result is ``Error waiting for 0 pods to be running - probably a timeout``, causing the e2e test to fail even though it should not.
Adding a condition to return from the function immediately if there are no pods to create.
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
Automatic merge from submit-queue
Add timeout to e2e network connectivity checks
Some e2e tests use wget to check connectivity, and the default e2e
timeout is 900s. This change allows the timeout to be specified on a
check-by-check basis. This will also make the check useful for negative
checks (like those used by openshift to validate isolation) since a
short timeout is suggested where connectivity is not expected.
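Concretely, the check shells out to wget with a per-call timeout, along these lines (sketch; the framework's exact flags may differ):

```go
// -T bounds wget's connect/read phases and --tries=1 keeps a negative
// check from retrying until the suite-level timeout.
cmd := fmt.Sprintf("wget -q --tries=1 -T %d -O - %s", timeoutSeconds, host)
```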