Fix ingress validation so that the rules of an ingress that specifies a
wildcard host are validated. Commit 60f4fbf4f2
added an errant continue statement that caused this validation to be
skipped. For backwards compatibility, this change restores the validation for
the v1 API but still skips it for v1beta1.
* pkg/apis/networking/validation/validation.go (IngressValidationOptions):
Add AllowInvalidWildcardHostRule field to indicate that validation of rules
should be skipped for ingresses that specify wildcard hosts.
(ValidateIngressCreate): Set AllowInvalidWildcardHostRule to true if the
request is using the v1beta1 API version.
(ValidateIngressUpdate): Set AllowInvalidWildcardHostRule to true if the
request or old ingress is using the v1beta1 API version.
(validateIngressRules): Don't skip validation of the ingress rules unless
the ingress has a wildcard host and AllowInvalidWildcardHostRule is true.
(allowInvalidWildcardHostRule): New helper for ValidateIngressCreate and
ValidateIngressUpdate.
* pkg/apis/networking/validation/validation_test.go
(TestValidateIngressCreate, TestValidateIngressUpdate): Add test cases to
ensure that validation is performed on v1 objects and skipped on v1beta1
objects for backwards compatibility.
(TestValidateIngressTLS): Specify PathType so that the test passes.
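Below is a minimal sketch of the shape of this change; the option and helper
names follow the entries above, but the types and the wildcard check are
simplified stand-ins, not the real networking API or validation code.

// Sketch only: simplified stand-ins for the real networking API and
// validation code; the field and function names mirror the entries above.
package validation

import (
    "fmt"
    "strings"
)

// ingressRule is a simplified stand-in for networking.IngressRule.
type ingressRule struct {
    Host string
    // HTTP paths omitted for brevity.
}

// IngressValidationOptions carries per-request validation toggles.
type IngressValidationOptions struct {
    // AllowInvalidWildcardHostRule indicates that rule validation should be
    // skipped for ingresses that specify wildcard hosts (v1beta1 compatibility).
    AllowInvalidWildcardHostRule bool
}

// allowInvalidWildcardHostRule returns true only for the legacy v1beta1 API,
// where the old (broken) behavior is preserved for backwards compatibility.
// The real helper inspects the request's group/version; a plain string keeps
// this sketch self-contained.
func allowInvalidWildcardHostRule(requestGroupVersion string) bool {
    return requestGroupVersion == "networking.k8s.io/v1beta1"
}

// validateIngressRules skips wildcard-host rules only when the compatibility
// option is set; otherwise every rule is validated.
func validateIngressRules(rules []ingressRule, opts IngressValidationOptions) []error {
    var errs []error
    for i, rule := range rules {
        wildcard := strings.HasPrefix(rule.Host, "*")
        if wildcard && opts.AllowInvalidWildcardHostRule {
            // Pre-fix behavior, kept only for v1beta1: skip this rule entirely.
            continue
        }
        if wildcard && !strings.HasPrefix(rule.Host, "*.") {
            errs = append(errs, fmt.Errorf("rules[%d].host: wildcard host %q must be of the form *.example.com", i, rule.Host))
        }
        // Further host and HTTP path validation elided.
    }
    return errs
}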
Co-authored-by: jordan@liggitt.net
The apiserver is expected to send pod deletion events, which might arrive at a different time. However, sometimes a node can be recreated without its pods being deleted.
Partial revert of https://github.com/kubernetes/kubernetes/pull/86964
Signed-off-by: Aldo Culquicondor <acondor@google.com>
Change-Id: I51f683e5f05689b711c81ebff34e7118b5337571
Previously the controllers would proceed with additional creates, updates,
or deletes even if one failed. That could result in scenarios where an
EndpointSlice create or update failed while a delete succeeded. This updates
the logic so that removals do not happen if additions fail.
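A minimal sketch of that ordering, assuming hypothetical applyCreate,
applyUpdate, and applyDelete helpers in place of the real reconciler and
client code:

// Sketch of the safe ordering with simplified stand-in types.
package endpointslice

import (
    "errors"
    "fmt"
)

// endpointSlice is a simplified stand-in for the real EndpointSlice type.
type endpointSlice struct{ name string }

// finalize applies pending changes in a safe order: creates and updates run
// first, and removals are attempted only if every addition succeeded, so a
// failed create or update can no longer be paired with a successful delete.
func finalize(toCreate, toUpdate, toDelete []endpointSlice) error {
    var errs []error
    for _, s := range toCreate {
        if err := applyCreate(s); err != nil {
            errs = append(errs, fmt.Errorf("create %s: %w", s.name, err))
        }
    }
    for _, s := range toUpdate {
        if err := applyUpdate(s); err != nil {
            errs = append(errs, fmt.Errorf("update %s: %w", s.name, err))
        }
    }
    // Skip removals when any addition failed, so endpoints are never dropped
    // before their replacements exist.
    if len(errs) == 0 {
        for _, s := range toDelete {
            if err := applyDelete(s); err != nil {
                errs = append(errs, fmt.Errorf("delete %s: %w", s.name, err))
            }
        }
    }
    return errors.Join(errs...)
}

// Placeholder client calls; the real controller talks to the API server here.
func applyCreate(endpointSlice) error { return nil }
func applyUpdate(endpointSlice) error { return nil }
func applyDelete(endpointSlice) error { return nil }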
The lack of this validation on incoming pods causes unpredictable cluster
outcomes when affinity results are later calculated against existing pods
(see #92714). This fix quickly addresses the main place where these problems
should be caught. It is unfortunately difficult to add this validation
directly to the API server, because it could break migrations of existing
pods that fail the check. This is a compromise to address the current issue.
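The exact checks are not spelled out here; as one hedged illustration of
catching such problems at the scheduling entry point, the sketch below
rejects pods whose required affinity terms carry label selectors that cannot
be parsed (the function name and its placement are assumptions):

// Illustrative sketch only, not the actual change.
package scheduling

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validateAffinityTerms rejects an incoming pod whose required pod-affinity or
// pod-anti-affinity terms carry label selectors that cannot be parsed, so the
// later affinity calculation against existing pods never sees malformed terms.
func validateAffinityTerms(pod *v1.Pod) error {
    if pod.Spec.Affinity == nil {
        return nil
    }
    var terms []v1.PodAffinityTerm
    if pa := pod.Spec.Affinity.PodAffinity; pa != nil {
        terms = append(terms, pa.RequiredDuringSchedulingIgnoredDuringExecution...)
    }
    if paa := pod.Spec.Affinity.PodAntiAffinity; paa != nil {
        terms = append(terms, paa.RequiredDuringSchedulingIgnoredDuringExecution...)
    }
    for i, term := range terms {
        if _, err := metav1.LabelSelectorAsSelector(term.LabelSelector); err != nil {
            return fmt.Errorf("invalid label selector in affinity term %d: %w", i, err)
        }
    }
    return nil
}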
The KEP specifies that the controller will "mirror all labels from the
Endpoints resource and all endpoints and ports from the corresponding subset".
I'd missed that in my initial implementation; this change fixes that.
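A minimal sketch of the mirroring rule quoted above, using simplified
stand-in types rather than the real Endpoints and EndpointSlice API types:

// Sketch of the mirroring rule with simplified stand-in types.
package mirroring

// endpointsSubset and endpoints are simplified stand-ins for the core API types.
type endpointsSubset struct {
    Addresses []string
    Ports     []int32
}

type endpoints struct {
    Labels  map[string]string
    Subsets []endpointsSubset
}

// endpointSlice is a simplified stand-in for the discovery API type.
type endpointSlice struct {
    Labels    map[string]string
    Endpoints []string
    Ports     []int32
}

// mirrorSubset builds the desired EndpointSlice for one subset: it copies all
// labels from the Endpoints resource and all endpoints and ports from the
// corresponding subset, as the KEP requires.
func mirrorSubset(ep endpoints, subset endpointsSubset) endpointSlice {
    mirroredLabels := make(map[string]string, len(ep.Labels))
    for k, v := range ep.Labels {
        mirroredLabels[k] = v
    }
    return endpointSlice{
        Labels:    mirroredLabels,
        Endpoints: append([]string(nil), subset.Addresses...),
        Ports:     append([]int32(nil), subset.Ports...),
    }
}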
- Move "ForgetPod" after "RunReservePluginsUnreserve", so that the cache still holds the pod
and keeps it from being retried concurrently until Unreserve is completed.
- Move "assume" ahead of "RunReservePluginsReserve". This is based on the fact that "ForgetPod" is
the last step of the failure path, so "assume" should, conversely, be treated as the first step. The
current failure path looks like this (see the sketch after this list):
assume -> reserve -> unreserve -> forgetPod -> recordingFailure
- Make the subtests of TestReservePluginUnreserve stateless
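A hedged sketch of the reordered flow described in these bullets; the method
names mirror the ones above, but the surrounding scheduler machinery is
reduced to minimal placeholders:

// Sketch of the reordered flow with placeholder types and interfaces.
package scheduler

type pod struct{ name string }

// schedulerCache and reserveFramework are minimal placeholder interfaces.
type schedulerCache interface {
    AssumePod(p *pod, node string) error
    ForgetPod(p *pod)
}

type reserveFramework interface {
    RunReservePluginsReserve(p *pod, node string) error
    RunReservePluginsUnreserve(p *pod, node string)
}

type scheduler struct {
    cache     schedulerCache
    framework reserveFramework
}

func (s *scheduler) recordSchedulingFailure(p *pod, err error) { /* elided */ }

// reserve runs "assume" first, and on a Reserve failure unwinds in reverse:
// Unreserve, then ForgetPod, then record the failure. The cache therefore
// keeps the assumed pod until Unreserve has completed, which prevents the pod
// from being retried concurrently in the meantime.
func (s *scheduler) reserve(p *pod, node string) error {
    if err := s.cache.AssumePod(p, node); err != nil {
        s.recordSchedulingFailure(p, err)
        return err
    }
    if err := s.framework.RunReservePluginsReserve(p, node); err != nil {
        s.framework.RunReservePluginsUnreserve(p, node)
        s.cache.ForgetPod(p)
        s.recordSchedulingFailure(p, err)
        return err
    }
    // Permit and bind phases elided.
    return nil
}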
When creating fake data for the nodeTree unit tests, 'append' is invoked on
the shared fake data set. That leaves subsequent unit tests running with
unexpected fake data.
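A hedged illustration of the underlying Go pitfall (not the actual nodeTree
test code): appending to a shared slice with spare capacity writes into its
backing array, so each test should work on a copy.

// Demonstrates how appending to a shared fixture slice corrupts later tests.
package main

import "fmt"

func main() {
    // Shared fixture with spare capacity, like a package-level test data set.
    shared := make([]string, 0, 4)
    shared = append(shared, "node-1", "node-2")

    // Test A appends to the shared slice: "node-a" lands in the shared backing array.
    caseA := append(shared, "node-a")

    // Test B does the same and silently overwrites test A's element.
    caseB := append(shared, "node-b")

    fmt.Println(caseA) // [node-1 node-2 node-b]  <- unexpected fake data
    fmt.Println(caseB) // [node-1 node-2 node-b]

    // Fix: copy the shared data before appending so each test owns its slice.
    isolated := append(append([]string(nil), shared...), "node-a")
    fmt.Println(isolated) // [node-1 node-2 node-a]
}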