- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Move scheduler plugin unit tests to use the testing PodWrapper
where applicable, to reduce duplicated pod creation
code and cut down on the number of lines.
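For context, a minimal sketch of the builder pattern, assuming the
PodWrapper from pkg/scheduler/testing (field values are illustrative):

```go
package plugintest

import (
	v1 "k8s.io/api/core/v1"
	st "k8s.io/kubernetes/pkg/scheduler/testing"
)

// makeTestPod replaces a multi-line &v1.Pod{...} struct literal with a
// single chained builder call from the scheduler testing helpers.
func makeTestPod() *v1.Pod {
	return st.MakePod().Name("p1").Namespace("ns1").Node("node-a").
		Label("app", "web").Obj()
}
```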
Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
GetNamespaceLabelsSnapshot has a fallback for errors encountered while looking
up a namespace, so reporting the error is informational rather than a real
error. In particular, not finding the namespace is normal when running
test/integration/scheduler_perf and happens so frequently that there is a lot
of output on stderr:
E0120 12:19:09.204768 95305 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"namespace-1\" not found" namespace="namespace-1"
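A minimal sketch of the intended behavior, assuming a simplified version of
the lookup (names and verbosity level are illustrative, not the actual fix):

```go
package plugin

import (
	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
	"k8s.io/klog/v2"
)

// namespaceLabels sketches GetNamespaceLabelsSnapshot's fallback: lookup
// failures yield an empty label set, so the message is logged as
// low-verbosity info rather than as an error.
func namespaceLabels(nsLister listersv1.NamespaceLister, nsName string) labels.Set {
	ns, err := nsLister.Get(nsName)
	if err != nil {
		// Normal during test/integration/scheduler_perf runs where the
		// namespace may simply not exist; the fallback keeps scheduling correct.
		klog.V(2).InfoS("getting namespace, assuming empty set of namespace labels",
			"namespace", nsName, "err", err)
		return labels.Set{}
	}
	return labels.Set(ns.Labels)
}
```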
Both `ErrReasonAffinityRulesNotMatch` and `ErrReasonAntiAffinityRulesNotMatch` are
more precise than `ErrReasonAffinityNotMatch`.
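A hedged sketch of how Filter can distinguish the two cases (the boolean
flags and the status code are illustrative; only the reason constants and
framework.NewStatus come from the plugin and framework API):

```go
// checkAffinity returns distinct reasons so users can see whether the
// affinity or the anti-affinity rules failed, instead of one generic message.
func checkAffinity(satisfiesAffinity, satisfiesAntiAffinity bool) *framework.Status {
	if !satisfiesAffinity {
		return framework.NewStatus(framework.Unschedulable, ErrReasonAffinityRulesNotMatch)
	}
	if !satisfiesAntiAffinity {
		return framework.NewStatus(framework.Unschedulable, ErrReasonAntiAffinityRulesNotMatch)
	}
	return nil
}
```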
Signed-off-by: Dave Chen <dave.chen@arm.com>
Currently, the interpodaffinity plugin processes all nodes whenever the incoming
pod has affinity. Actually, it only needs to look at all nodes when the incoming
pod has preferred affinity, so this change reduces the number of nodes that need
to be processed.
This is a performance optimization that reduces the overhead of inter-pod
affinity PreFilter calculations, and essentially eliminates that overhead when
no pods in the cluster use required pod anti-affinity. This offered a 20%
improvement on 5k clusters in the preferred anti-affinity benchmarks.
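A minimal sketch of the gating condition, under the assumption that only
preferred terms contribute to the all-nodes pass (the helper name is
hypothetical):

```go
package interpodaffinity

import v1 "k8s.io/api/core/v1"

// hasPreferredAffinity reports whether the incoming pod declares any
// preferred (anti-)affinity terms; only then does the plugin need to
// walk every node in the cluster.
func hasPreferredAffinity(pod *v1.Pod) bool {
	a := pod.Spec.Affinity
	if a == nil {
		return false
	}
	return (a.PodAffinity != nil && len(a.PodAffinity.PreferredDuringSchedulingIgnoredDuringExecution) > 0) ||
		(a.PodAntiAffinity != nil && len(a.PodAntiAffinity.PreferredDuringSchedulingIgnoredDuringExecution) > 0)
}
```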
When checking the incoming pod's anti-affinity rules, this change returns
early when no anti-affinity terms match anywhere in the whole cluster.
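A hedged sketch of the early return (the map shape and names are
illustrative, not the plugin's actual state type):

```go
// satisfiesIncomingAntiAffinity sketches the Filter-side check:
// topologyCounts maps a topology key/value pair to the number of existing
// pods that matched the incoming pod's required anti-affinity terms there.
func satisfiesIncomingAntiAffinity(topologyCounts map[[2]string]int, nodeLabels map[string]string) bool {
	// Early return: nothing matched anywhere in the cluster, so no node
	// can possibly violate the incoming pod's anti-affinity rules.
	if len(topologyCounts) == 0 {
		return true
	}
	for pair, count := range topologyCounts {
		if count > 0 && nodeLabels[pair[0]] == pair[1] {
			return false
		}
	}
	return true
}
```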
The lack of this validation on incoming pods causes unpredictable cluster
outcomes when later calculating affinity results against existing pods (see
#92714). This fix quickly addresses the main place where these problems should
be caught. It is unfortunately difficult to add this validation directly to the
API server because it may break migrations for existing pods that fail this
check. This is a compromise to address the current issue.
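A minimal sketch of where such validation can live in the plugin, assuming
the term selectors are parsed eagerly (function and type names are
illustrative, not the actual fix):

```go
package interpodaffinity

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// parseAffinityTerms converts each term's LabelSelector up front, so a
// malformed selector on the incoming pod fails the scheduling attempt
// with a clear error instead of producing undefined match results later.
func parseAffinityTerms(terms []v1.PodAffinityTerm) ([]labels.Selector, error) {
	selectors := make([]labels.Selector, 0, len(terms))
	for i := range terms {
		s, err := metav1.LabelSelectorAsSelector(terms[i].LabelSelector)
		if err != nil {
			return nil, fmt.Errorf("invalid label selector in affinity term %d: %w", i, err)
		}
		selectors = append(selectors, s)
	}
	return selectors, nil
}
```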