Commit Graph

451 Commits

Author SHA1 Message Date
Antonio Ojea
6d3fd8353c don't panic if nodeIPs are not found 2021-06-21 10:59:09 +02:00
Kubernetes Prow Robot
6bac142190
Merge pull request #102138 from damemi/balance-pods-parallel
(scheduler e2e) Create balanced pods in parallel
2021-05-27 14:04:23 -07:00
Mike Dame
36cdb72eb6 (scheduler e2e) Create balanced pods in parallel 2021-05-27 16:01:18 -04:00
Konstantin Misyutin
351f4e9c9c cleanup: remove TODO at e2e scheduling preemption test
Signed-off-by: Konstantin Misyutin <konstantin.misyutin@huawei.com>
2021-05-19 17:34:50 +08:00
Mike Dame
07029c941a Remove Limits from scheduling e2e balanced pod resources
The purpose of the pod created by `createBalancedPodForNodes()` is to ensure
that all nodes have equal resource requests (as seen by the scheduler). This
prevents the default scheduling behavior (which attempts to balance resource requests)
from interfering with e2e tests that exercise other priority/score plugins.

Because the scheduler only worries about requests, specifying `Limits` in this pod
is unnecessary. In fact, if the calculated "balancing" limit is too low, it can cause
the balancing pod to never start due to OOMKill errors, leading to flakes and failures.
2021-04-21 15:58:00 -04:00
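
A minimal sketch of what this change implies for the balancing pod's spec, assuming the standard client-go/API types; the pod name and image are illustrative, not the helper's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// balancedPod sketches the shape described above: the balancing pod sets
// only resource Requests (all the scheduler scores on) and no Limits, so
// a too-low computed memory limit can never OOMKill it.
func balancedPod(name string, cpu, mem resource.Quantity) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Resources: corev1.ResourceRequirements{
					// Requests only; deliberately no Limits.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    cpu,
						corev1.ResourceMemory: mem,
					},
				},
			}},
		},
	}
}

func main() {
	p := balancedPod("balance-node-a", resource.MustParse("100m"), resource.MustParse("128Mi"))
	fmt.Println(p.Spec.Containers[0].Resources.Limits == nil) // true: no limits set
}
```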
Kubernetes Prow Robot
2147937c41
Merge pull request #100128 from ingvagabund/sig-scheduling-single-node-e2e
[sig-scheduling] SchedulerPreemption|SchedulerPredicates|SchedulerPriorities: adjust some e2e tests to run in a single node cluster scenario
2021-04-13 10:31:09 -07:00
Jan Chaloupka
bf2fc250a4 validates basic preemption works|validates lower priority pod preemption by critical pod: allocate 4/5 instead of 2/3
To run the tests in a single-node cluster, create two pods each consuming 2/5 of the extended resource instead of one consuming 2/3.
The low priority pod now consumes 2/5 of the extended resource, so even when there is only a single node,
a high priority pod consuming another 2/5 can still be scheduled. This ensures that only the low priority pod
gets preempted once the preemptor pod consuming the remaining 2/5 is scheduled, while the high priority pod stays untouched.
2021-04-13 09:47:28 +02:00
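
The arithmetic behind the 4/5 split, as a hedged Go sketch (the extended-resource name and helper are illustrative): with 2/5 + 2/5 already allocated, a 2/5 preemptor overcommits the node by exactly one pod's worth, so the scheduler must evict the low priority pod and only it.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// The resource name is a stand-in for whatever extended resource the test
// advertises on the node with a capacity of 5 units.
const extendedResource = corev1.ResourceName("example.com/fake-resource")

// requestUnits builds a requests list consuming `units` of the node's
// capacity of 5.
func requestUnits(units int64) corev1.ResourceList {
	return corev1.ResourceList{
		extendedResource: *resource.NewQuantity(units, resource.DecimalSI),
	}
}

func main() {
	lowPriority := requestUnits(2)  // 2/5: the only preemption candidate
	highPriority := requestUnits(2) // 2/5: must stay untouched
	preemptor := requestUnits(2)    // 2/5: forces exactly one preemption
	fmt.Println(lowPriority, highPriority, preemptor)
}
```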
Kubernetes Prow Robot
dbf1102437
Merge pull request #100762 from AliceZhang2016/multi-az-timeout
List pod list once to avoid timeout in Multi-AZ Clusters
2021-04-10 20:29:43 -07:00
Mengxue Zhang
200ef16f1d list pod list once to avoid timeout 2021-04-06 13:44:11 +00:00
Maru Newby
253df78f1b Tag Multi-AZ scheduling tests as serial
As per mdame, we can't ensure that the cluster is actually balanced
if other tests are adding or deleting pods in parallel.
2021-03-18 10:31:03 -07:00
Jan Chaloupka
34c2672115 Skip PodTopologySpread tests in a single node cluster scenario 2021-03-11 12:32:09 +01:00
wojtekt
006dc7477f Make sig-storage be the owner of ubernetes_lite_volumes test 2021-03-03 15:17:28 +01:00
Benjamin Elder
56e092e382 hack/update-bazel.sh 2021-02-28 15:17:29 -08:00
Kubernetes Prow Robot
84483a5aac
Merge pull request #98073 from damemi/fix-priority-balancedpods
(e2e/scheduler) Ensure minimum memory limit in createBalancedPodForNodes
2021-02-19 06:34:25 -08:00
Mike Dame
32b0333c15 (e2e/scheduler) Ensure minimum memory limit in createBalancedPodForNodes 2021-02-18 09:30:33 -05:00
Benjamin Elder
ad325377b5 shorten scheduling priorities taint key 2021-02-12 01:30:30 -08:00
Stephen Heywood
ee7ee85669 Update conformance metadata for relocated test 2021-02-10 13:32:58 +13:00
Kubernetes Prow Robot
23a46d8843
Merge pull request #97819 from damemi/bz1876918-priorities-test-refactor
Move deferred taint cleanup call to ensure all are removed
2021-02-06 21:37:12 -08:00
Mike Dame
6579cd8e9e Balance nodes in scheduling e2e
This adds a call to createBalancedPods during the ubernetes_lite scheduling e2es,
which are prone to improper score balancing due to unbalanced utilization.
2021-02-05 09:40:34 -05:00
Mike Dame
cc1eab1ca2 Move deferred taint cleanup call to ensure all are removed 2021-01-29 10:17:21 -05:00
Antonio Ojea
08a8e80c9f move e2e hostport conflict test to sig-network
The test "validates that there is no conflict between pods with same
hostPort but different hostIP and protocol" was testing the scheduler
capability to schedule pods on the same node with hostPorts, however,
it wasn´t validating that the HostPorts was working, causing false
positives, because the pods were scheduled, but the HostPort exposed
wasn´t working.

In order to test the HostPort functionality, we have to use HostNetwork
pods, that are incompatible with Windows platforms. Also, since this
is touching both network and scheduling, there is no clear the ownership,
but sig-network is happy to adopt it.

We also add a new test for scheduling only under "scheduling", so Windows
folks can use it to test the scheduled in that platform.
2021-01-27 21:55:36 +01:00
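
A hedged sketch of the HostNetwork pod the message refers to (pod name, image tag, and port are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod on the host network binds directly in the node's network
	// namespace, so connecting to <nodeIP>:8080 exercises the actual
	// data path instead of only proving that scheduling succeeded.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostport-check"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // the reason the test cannot run on Windows
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative tag
				Args:  []string{"netexec", "--http-port=8080"},
			}},
		},
	}
	fmt.Println(pod.Spec.HostNetwork)
}
```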
Jan Chaloupka
2318992227 e2e: Pod should avoid nodes that have avoidPod annotation: clean remaining pods
The test does not clean up all the pods it creates.
Memory balancing pods are deleted only once the test namespace is,
leaving them running or in a terminating state when the next test runs.
If the next test is "[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run",
it can fail.
2021-01-21 17:08:40 +01:00
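
A sketch of the cleanup pattern the message describes, assuming a client-go clientset; the namespace and pod names are illustrative, and the real test would also wait for the deletions to complete:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// cleanupPods deletes the memory-balancing pods explicitly instead of
// waiting for namespace teardown, so the next [Serial] test starts with
// the node's resources actually free.
func cleanupPods(ctx context.Context, cs kubernetes.Interface, ns string, names []string) {
	for _, name := range names {
		// Errors ignored here for brevity; the real test should assert on them.
		_ = cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
	}
}

func main() {
	cs := fake.NewSimpleClientset()
	cleanupPods(context.Background(), cs, "e2e-avoid-pods", []string{"memory-balancer-0"})
}
```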
Kubernetes Prow Robot
33ee864e92
Merge pull request #97003 from ravisantoshgudimetla/remove-scheduler-preemption-test-from-conformance
make hostPort match test linuxonly
2021-01-14 19:39:51 -08:00
ravisantoshgudimetla
c183ac16d1 make hostPort match test linuxonly 2021-01-14 16:40:39 -05:00
Saikat Roychowdhury
bc4977724f Skip multi az PD storage test if no extra zone detected 2020-12-22 21:06:44 +00:00
zzzkl
987562bb8e
Fix typo in e2e test log 2020-12-15 15:39:09 +08:00
KeZhang
e3ba42324b remove unused funcs for e2e predicates 2020-12-12 09:43:01 +08:00
Tomas Nozicka
e4d7915a2e Use non privileged ports 2020-12-11 13:57:46 +01:00
Kubernetes Prow Robot
83b2c7a1bf
Merge pull request #96311 from thockin/kep-1659-topology-labels
Convert users of old failure-domain labels to new
2020-12-08 17:28:27 -08:00
Kubernetes Prow Robot
bc63d37155
Merge pull request #96042 from bertinatto/custom-timeouts
[sig-storage] Add custom timeouts in E2E tests
2020-12-08 16:29:44 -08:00
Fabio Bertinatto
c82626f96f e2e: use custom timeouts in all storage E2E tests 2020-12-02 15:57:58 -03:00
Wei Huang
8cf3347d87
Increase preemption timeout from 1 minute to 2 minutes 2020-12-01 14:17:12 -08:00
Antonio Ojea
d5b7ef86bb correct e2e test predicates conflict hostport
The e2e test, included as part of Conformance,
"validates that there is no conflict between
 pods with same hostPort but different hostIP and protocol"
was only testing that the pods were scheduled without conflict,
but never testing the functionality itself.

The test should check that pods with containers forwarding the same
hostPort can be scheduled without conflict, and that those exposed
HostPorts forward traffic to the corresponding pods.

The predicate tests were using loopback addresses for the
hostPort test; however, those have different semantics depending
on the IP family, e.g. you cannot bind to ::1 and ::2 simultaneously.
In addition, IP forwarding from localhost to localhost does not work
in IPv6, since it lacks the kernel's route_localnet hack.
2020-11-17 15:28:29 +01:00
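
For reference, the (HostIP, HostPort, Protocol) triples involved, as a minimal sketch with illustrative addresses and port:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Three port declarations that must not conflict at schedule time:
	// same HostPort, but a differing HostIP or Protocol. Per the message,
	// the forwarding check should use real node addresses rather than
	// loopbacks, whose semantics differ between IP families.
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 8080, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP},
		{ContainerPort: 8080, HostPort: 8080, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP},
		{ContainerPort: 8080, HostPort: 8080, HostIP: "127.0.0.1", Protocol: corev1.ProtocolUDP},
	}
	for _, p := range ports {
		fmt.Printf("%s %s:%d\n", p.Protocol, p.HostIP, p.HostPort)
	}
}
```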
Tim Hockin
3bd337baf4 Make tests deal with old and new topology labels 2020-11-12 11:22:47 -08:00
Kubernetes Prow Robot
3e43d5b92a
Merge pull request #96292 from wangyx1992/cleanup-scheduler-log-capatilization
cleanup: fix log capitalization in scheduler
2020-11-12 11:20:45 -08:00
Yixiang2019
842cc6b4e2 cleanup: fix log capitalization in scheduler
Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>
2020-11-12 20:10:26 +08:00
Sergey Kanzhelev
06da0e5e74 GA of RuntimeClass feature gate and API 2020-11-11 19:22:32 +00:00
Tim Hockin
819ff9b087
Use topology labels instead of old beta names (#96033)
* Rename const for topology.../zone

* Rename const for topology.../region

* Rename const for failure-domain.../zone

* Rename const for failure-domain.../region

* Restore old names for compat
2020-11-05 20:26:50 -08:00
Wei Huang
6ccbd3c9a9
Update PriorityClass conformance test to cover DeleteCollection 2020-10-28 15:35:46 -07:00
Stephen Heywood
017de540eb Promote verify PriorityClass endpoints e2e test to Conformance 2020-10-27 12:36:28 +13:00
Aldo Culquicondor
1840fcd4bb Add more Pods and relax skew in E2E spread test
A spreading test is more meaningful with a greater number of Pods. However, we cannot always expect perfect spreading. We accept a skew of 2 for 5*z Pods, where z is the number of zones.

Change-Id: Iab0de06a95974fbfec604f003b550f15db618ebd
2020-10-21 14:10:43 -04:00
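
The accepted tolerance as a worked check (a sketch; the test's actual skew computation may differ):

```go
package main

import "fmt"

// skew returns max(counts) - min(counts), i.e. how unevenly the pods
// landed across zones.
func skew(counts []int) int {
	min, max := counts[0], counts[0]
	for _, c := range counts[1:] {
		if c < min {
			min = c
		}
		if c > max {
			max = c
		}
	}
	return max - min
}

func main() {
	// z = 3 zones, 5*z = 15 pods: a 6/5/4 landing has skew 2, which the
	// relaxed test accepts; a 7/5/3 landing (skew 4) still fails.
	fmt.Println(skew([]int{6, 5, 4}), skew([]int{7, 5, 3})) // 2 4
}
```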
Kubernetes Prow Robot
943b1dbf53
Merge pull request #95495 from deads2k/remove-secondary-retry
remove secondary client retries in e2e tests
2020-10-15 08:06:39 -07:00
David Eads
64c099d670 remove secondary client retries in e2e tests 2020-10-15 08:30:42 -04:00
Wei Huang
f8cfbc8ac1
PriorityClass lifecycle tests 2020-10-13 12:06:07 -07:00
Adhityaa Chandrasekar
9970966844 ubernetes_lite.go: remove image argument from SpreadServiceOrFail
Signed-off-by: Adhityaa Chandrasekar <adtac@google.com>
2020-09-14 20:32:06 +00:00
Wei Huang
24bbedb27d
Deflake LimitRange e2e test 2020-08-07 17:22:35 -07:00
Jefftree
ace97738e2 Update formatting of conformance comment 2020-07-29 20:50:44 -07:00
hasheddan
e990698d5f
Use local daemonset manifest for installing Nvidia drivers
Updates sig-scheduling e2e Nvidia GPU tests to install drivers using a
local manifest by default. Currently the DaemonSet is fetched from the
GoogleCloudPlatform/container-engine-accelerators repo.
Using a local manifest allows manually specifying the
cos-gpu-installer image rather than always using the latest. A remote
manifest can still be fetched by setting the
NVIDIA_DRIVER_INSTALLER_DAEMONSET env var.

Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
2020-07-18 21:01:00 -05:00
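
A sketch of the default-local / opt-in-remote behaviour described above; the local path is an assumption for illustration, not the test's actual manifest location:

```go
package main

import (
	"fmt"
	"os"
)

// manifestSource defaults to a local manifest but returns a remote one
// when the NVIDIA_DRIVER_INSTALLER_DAEMONSET env var is set.
func manifestSource() string {
	if url := os.Getenv("NVIDIA_DRIVER_INSTALLER_DAEMONSET"); url != "" {
		return url // remote manifest explicitly requested
	}
	return "testing-manifests/scheduling/nvidia-driver-installer.yaml" // assumed path
}

func main() {
	fmt.Println(manifestSource())
}
```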
hasheddan
5f904f5e79
Do not raise exception if unscheduled Pod status is unknown
Currently, when checking for unscheduled pods, an exception is raised
if a pod is not scheduled and its status is unknown. This update modifies
the logic to include any pod without a NodeName in the returned set of
not-scheduled pods.

Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
2020-06-29 14:16:15 -05:00
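
A hedged sketch of the NodeName-based check (the function name is illustrative, not the e2e helper's):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// splitScheduled partitions pods by whether they are bound to a node.
// Keying on Spec.NodeName rather than on pod phase means a pod whose
// status is unknown but that has no node still counts as not scheduled,
// matching the behaviour described above.
func splitScheduled(pods []corev1.Pod) (scheduled, notScheduled []corev1.Pod) {
	for _, p := range pods {
		if p.Spec.NodeName != "" {
			scheduled = append(scheduled, p)
		} else {
			notScheduled = append(notScheduled, p)
		}
	}
	return scheduled, notScheduled
}

func main() {
	pods := []corev1.Pod{
		{Spec: corev1.PodSpec{NodeName: "node-1"}},
		{Spec: corev1.PodSpec{}}, // pending; status may be unknown
	}
	s, ns := splitScheduled(pods)
	fmt.Println(len(s), len(ns)) // 1 1
}
```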
Kubernetes Prow Robot
18db08b813
Merge pull request #92545 from hasheddan/scheduling-part-two
Do not ignore unscheduled pods when NodeName not in set of worker nodes
2020-06-27 19:02:13 -07:00