This uses the generic ptr.To in k8s.io/utils to replace functions and
code constructs whose only purpose is to return pointers to intstr
values. Other uses of the deprecated pointer package in the modified
files are updated as well.
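A minimal before/after sketch of the pattern being replaced (the helper
name here is illustrative, not a real in-tree function):

    package example

    import (
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/utils/ptr"
    )

    // Before: a one-off helper whose only job is to take an address.
    // (illustrative name, standing in for the removed helpers)
    func intstrPtr(v intstr.IntOrString) *intstr.IntOrString {
        return &v
    }

    var maxUnavailableOld = intstrPtr(intstr.FromInt32(1))

    // After: the generic ptr.To covers all such cases.
    var maxUnavailableNew = ptr.To(intstr.FromInt32(1))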
Signed-off-by: Stephen Kitt <skitt@redhat.com>
When defining a ClusterIP Service, we can specify externalIPs, and
traffic arriving via an externalIP is governed by
externalTrafficPolicy. However, the policy can't be set when the type
is neither NodePort nor LoadBalancer, and kube-proxy defaults it to
Cluster when processing the Service.
This commit updates the defaulting and validation of Service to allow
specifying ExternalTrafficPolicy for ClusterIP Services with
ExternalIPs.
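For illustration, a Service like the following becomes expressible
after this change (a sketch; the name and address are made up):

    package example

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A ClusterIP Service with an external IP; ExternalTrafficPolicy can
    // now be set so traffic arriving via the external IP only goes to
    // node-local endpoints.
    var svc = &v1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "example"},
        Spec: v1.ServiceSpec{
            Type:                  v1.ServiceTypeClusterIP,
            Selector:              map[string]string{"app": "example"},
            Ports:                 []v1.ServicePort{{Port: 80}},
            ExternalIPs:           []string{"192.0.2.10"},
            ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyLocal,
        },
    }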
Signed-off-by: Quan Tian <qtian@vmware.com>
PVCs and containers shared the same ResourceRequirements struct to define
their API. When resource claims were added, that struct got extended, which
accidentally also changed the PVC API. To keep such a mistake from happening
again, PVC now uses its own VolumeResourceRequirements struct.
The `Claims` field gets removed because the risk of breaking someone is low:
theoretically, YAML files which have a claims field for volumes now get
rejected when validating against the OpenAPI schema. Such files have never
made sense and should be fixed.
Code that uses the struct definitions needs to be updated.
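For code building PVCs the update is mechanical; a sketch of a PVC
literal against the new struct:

    package example

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // Resources is now a VolumeResourceRequirements (no Claims field)
    // rather than the shared ResourceRequirements struct.
    var pvc = &v1.PersistentVolumeClaim{
        Spec: v1.PersistentVolumeClaimSpec{
            Resources: v1.VolumeResourceRequirements{
                Requests: v1.ResourceList{
                    v1.ResourceStorage: resource.MustParse("1Gi"),
                },
            },
        },
    }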
The fact that the .status.loadBalancer field can be set while .spec.type
is not "LoadBalancer" is a flub. Any spec update will already clear
.status.ingress, so it's hard to really rely on this. After this
change, updates which try to set this combination will fail validation.
Existing cases of this will not be broken. Any spec/metadata update
will clear it (no error) and this is the only stanza of status.
New gate "AllowServiceLBStatusOnNonLB" is off by default, but can be
enabled if this change actually breaks someone, which seems exceedingly
unlikely.
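The check amounts to roughly the following (a hedged sketch, not the
exact in-tree validation; gateEnabled stands in for querying the
AllowServiceLBStatusOnNonLB feature gate):

    package example

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // Reject a populated load-balancer status on a Service whose type is
    // not LoadBalancer, unless the escape-hatch gate is enabled.
    func validateLBStatus(svc *v1.Service, gateEnabled bool) error {
        if gateEnabled || svc.Spec.Type == v1.ServiceTypeLoadBalancer {
            return nil
        }
        if len(svc.Status.LoadBalancer.Ingress) > 0 {
            return fmt.Errorf("status.loadBalancer may only be set when spec.type is LoadBalancer")
        }
        return nil
    }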
Use wait.PollUntilContextTimeout instead of the deprecated wait.Poll in
test/integration/scheduler.
Use wait.PollUntilContextTimeout instead of the deprecated wait.Poll in
test/e2e/scheduling.
Use wait.ConditionWithContextFunc for
PodScheduled/PodIsGettingEvicted/PodScheduledIn/PodUnschedulable/PodSchedulingError.
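The migration looks roughly like this (podScheduled is a hypothetical
stand-in for the real condition helpers):

    package example

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func podScheduled() bool { return true } // hypothetical stand-in

    func waitForPodScheduled(ctx context.Context) error {
        // Before (deprecated):
        //   wait.Poll(100*time.Millisecond, 30*time.Second,
        //       func() (bool, error) { return podScheduled(), nil })
        //
        // After: context-aware polling with a wait.ConditionWithContextFunc;
        // the fourth argument controls whether the condition also runs
        // immediately, before the first tick.
        return wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, 30*time.Second, false,
            func(ctx context.Context) (bool, error) {
                return podScheduled(), nil
            })
    }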
The new test case covers pods with multiple claims from multiple drivers. This
leads to different behavior (the scheduler waits for information from all
drivers instead of optimistically selecting one node right away) and to more
concurrent updates of the PodSchedulingContext objects.
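Roughly, such a pod references claims backed by different drivers, e.g.
(a sketch against the DRA alpha API of this era; driver and template
names are made up):

    package example

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/utils/ptr"
    )

    // Two claims, each generated from a template owned by a different
    // (hypothetical) driver, both wired into the container.
    var pod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-claim-pod"},
        Spec: v1.PodSpec{
            ResourceClaims: []v1.PodResourceClaim{
                {Name: "claim-a", Source: v1.ClaimSource{ResourceClaimTemplateName: ptr.To("driver-a-template")}},
                {Name: "claim-b", Source: v1.ClaimSource{ResourceClaimTemplateName: ptr.To("driver-b-template")}},
            },
            Containers: []v1.Container{{
                Name:  "app",
                Image: "registry.k8s.io/pause:3.9",
                Resources: v1.ResourceRequirements{
                    Claims: []v1.ResourceClaim{{Name: "claim-a"}, {Name: "claim-b"}},
                },
            }},
        },
    }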
The test case is currently not enabled for unit testing or integration
testing. It can be used manually with:
-bench=BenchmarkPerfScheduling/SchedulingWithMultipleResourceClaims/2000pods_100nodes
... -perf-scheduling-label-filter=