Fix ingress validation so that it validates the rules of an ingress that
specifies a wildcard host. Commit 60f4fbf4f2
added an inopportune continue statement that caused this validation to be
skipped. For backwards compatibility, this change restores validation for
v1 of the API but still skips it on v1beta1.
* pkg/apis/networking/validation/validation.go (IngressValidationOptions):
Add AllowInvalidWildcardHostRule field to indicate that validation of rules
should be skipped for ingresses that specify wildcard hosts.
(ValidateIngressCreate): Set AllowInvalidWildcardHostRule to true if the
request is using the v1beta1 API version.
(ValidateIngressUpdate): Set AllowInvalidWildcardHostRule to true if the
request or old ingress is using the v1beta1 API version.
(validateIngressRules): Don't skip validation of the ingress rules unless
the ingress has a wildcard host and AllowInvalidWildcardHostRule is true.
(allowInvalidWildcardHostRule): New helper for ValidateIngressCreate and
ValidateIngressUpdate (see the sketch after this list).
* pkg/apis/networking/validation/validation_test.go
(TestValidateIngressCreate, TestValidateIngressUpdate): Add test cases to
ensure that validation is performed on v1 objects and skipped on v1beta1
objects for backwards compatibility.
(TestValidateIngressTLS): Specify PathType so that the test passes.
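A minimal sketch of the restored gating, assuming simplified surrounding
types: the field and helper names come from this change, while the
version check and the wildcard test below are illustrative.

```go
package validation

import (
	"strings"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// IngressValidationOptions carries per-request knobs for ingress validation.
type IngressValidationOptions struct {
	// AllowInvalidWildcardHostRule skips rule validation for ingresses
	// that specify a wildcard host, preserving the lenient v1beta1 behavior.
	AllowInvalidWildcardHostRule bool
}

// allowInvalidWildcardHostRule reports whether the request (or, on
// update, the old object) was served through a v1beta1 API.
func allowInvalidWildcardHostRule(gv schema.GroupVersion) bool {
	return gv.Version == "v1beta1"
}

// ruleNeedsValidation shows the corrected condition inside
// validateIngressRules: a rule is only skipped when the host is a
// wildcard AND the lenient option is set.
func ruleNeedsValidation(host string, opts IngressValidationOptions) bool {
	wildcardHost := strings.HasPrefix(host, "*.")
	return !wildcardHost || !opts.AllowInvalidWildcardHostRule
}
```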
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
The apiserver is expected to send pod deletion events, which might arrive at a different time. However, sometimes a node could be recreated without its pods being deleted.
Partial revert of https://github.com/kubernetes/kubernetes/pull/86964
Signed-off-by: Aldo Culquicondor <acondor@google.com>
Change-Id: I51f683e5f05689b711c81ebff34e7118b5337571
Add a test that verifies that an ingress with an empty TLS value or with a
TLS value that specifies an empty list of hosts passes validation.
* pkg/apis/networking/validation/validation_test.go
(TestValidateEmptyIngressTLS): New test.
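A hedged sketch of the test's shape; `newValidIngress` and the
`ValidateIngress` entry point are stand-ins for the actual helpers and
signatures in validation_test.go.

```go
package validation

import (
	"testing"

	"k8s.io/kubernetes/pkg/apis/networking"
)

func TestValidateEmptyIngressTLS(t *testing.T) {
	cases := map[string][]networking.IngressTLS{
		"empty TLS value":          {{}},
		"TLS with empty host list": {{Hosts: []string{}, SecretName: "secret"}},
	}
	for name, tls := range cases {
		ingress := newValidIngress() // assumed helper returning a valid Ingress
		ingress.Spec.TLS = tls
		// ValidateIngress is a stand-in for the package's entry point.
		if errs := ValidateIngress(&ingress, IngressValidationOptions{}); len(errs) != 0 {
			t.Errorf("%s: expected no errors, got %v", name, errs)
		}
	}
}
```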
Previously the controllers would proceed with additional creates,
updates, or deletes if one failed. That could result in scenarios where
an EndpointSlice create or update fails while a delete succeeds. This
updates the logic so that removals will not happen if additions fail.
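A simplified sketch of the reordering, with an assumed client interface
in place of the controller's real helpers.

```go
package endpointslice

import (
	discovery "k8s.io/api/discovery/v1beta1"
	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// sliceClient is an illustrative stand-in for the API calls the
// reconciler makes.
type sliceClient interface {
	Create(*discovery.EndpointSlice) error
	Update(*discovery.EndpointSlice) error
	Delete(*discovery.EndpointSlice) error
}

// applyChanges issues creates and updates first, and only proceeds to
// deletes when every addition succeeded, so a partial failure can no
// longer remove endpoints that were meant to be replaced.
func applyChanges(c sliceClient, creates, updates, deletes []*discovery.EndpointSlice) error {
	var errs []error
	for _, slice := range creates {
		if err := c.Create(slice); err != nil {
			errs = append(errs, err)
		}
	}
	for _, slice := range updates {
		if err := c.Update(slice); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		// Additions failed: skip removals entirely.
		return utilerrors.NewAggregate(errs)
	}
	for _, slice := range deletes {
		if err := c.Delete(slice); err != nil {
			errs = append(errs, err)
		}
	}
	return utilerrors.NewAggregate(errs)
}
```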
Only wait for finished binding or error, but not both
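A minimal illustration of the intent, with assumed channel names: return
on whichever signal arrives first instead of waiting for both.

```go
package scheduler

// waitForBinding returns as soon as either the binding finishes or an
// error is reported; waiting on both could block forever when only one
// of the two signals ever fires.
func waitForBinding(bound <-chan struct{}, errCh <-chan error) error {
	select {
	case <-bound:
		return nil
	case err := <-errCh:
		return err
	}
}
```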
Signed-off-by: Aldo Culquicondor <acondor@google.com>
Change-Id: I13d16e6c7c45c6527591aa05cc79fc5e96d47a68
As a part of cleaning up inactive members (who haven't been active since
the beginning of 2019) from OWNERS files, this commit moves abrarshivani
to the emeritus_approvers section.
When checking the incoming pod's anti-affinity rules, there is a chance
to return early when there are no matching anti-affinity terms in the
whole cluster.
The lack of this validation on incoming pods causes unpredictable cluster
outcomes when later calculating affinity results against existing pods
(see #92714). This fix addresses the main place where these problems
should be caught. It is unfortunately difficult to add this validation
directly to the API server, because it may break migrations of existing
pods that fail this check. This is a compromise to address the current
issue.
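An illustrative sketch of the early return under assumed names; the real
check lives in the scheduler's inter-pod affinity code.

```go
package interpodaffinity

// topologyPair identifies a topology domain, e.g. {"zone", "us-east-1a"}.
type topologyPair struct {
	key   string
	value string
}

// satisfiesExistingPodsAntiAffinity sketches the early return: when no
// existing pod anywhere in the cluster has an anti-affinity term that
// matches the incoming pod, there is nothing to violate and the
// per-node work can be skipped.
func satisfiesExistingPodsAntiAffinity(nodePairs []topologyPair, matchedTermCounts map[topologyPair]int64) bool {
	if len(matchedTermCounts) == 0 {
		return true // early return: no matched terms in the whole cluster
	}
	for _, pair := range nodePairs {
		if matchedTermCounts[pair] > 0 {
			return false // an existing pod's anti-affinity matches this node
		}
	}
	return true
}
```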
This code can be called not only when a container is dead and restarted,
but also when it is started for the first time. For example, any pod with
initContainers and regular containers will exhibit this behaviour. The
reason is that in that case, the "if createPodSandbox" path will return
only the initContainers, and on the next call to this function this code
is executed to start the containers for the first time.
In that case, it is wrong to log that the container is dead and will be
restarted, as it was never started. In fact, the restart count will not
be increased.
This commit just changes this to say that the container is not in the
desired state and should be started. In the end, the kubelet is a state
machine and that is all we really care about.
No tests are added, as the behaviour was correct and tests don't check
log messages.
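A hedged sketch of the reworded message (surrounding kubelet code
simplified; the exact wording in the tree may differ).

```go
package kuberuntime

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// logStartDecision records why a container will be started without
// claiming it died: it may simply never have run yet, e.g. when regular
// containers start after the initContainers complete.
func logStartDecision(container v1.Container) {
	message := fmt.Sprintf("Container %s is not in the desired state and should be started", container.Name)
	klog.V(3).Info(message)
}
```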
Signed-off-by: Rodrigo Campos <rodrigo@kinvolk.io>
As of now, the kubelet passes the security context to the container
runtime even if the security context has invalid options for a particular
OS. As a result, the pod fails to come up on the node. This error is
particularly pronounced on Windows nodes, where the kubelet allows Linux
specific options like SELinux, RunAsUser, etc., whereas the
[documentation](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-container)
clearly states they are not supported. This PR ensures that the kubelet
strips such security context options from the pod if they don't make
sense on Windows.
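A minimal sketch of the stripping idea with an assumed helper name; the
real change lives in the kubelet's security-context handling.

```go
package kubelet

import (
	"runtime"

	v1 "k8s.io/api/core/v1"
)

// dropLinuxOnlySecurityContext clears options that have no meaning on
// Windows before the pod's config reaches the container runtime, so the
// runtime no longer rejects the pod outright. Helper name is illustrative.
func dropLinuxOnlySecurityContext(sc *v1.SecurityContext) {
	if sc == nil || runtime.GOOS != "windows" {
		return
	}
	sc.SELinuxOptions = nil // Linux-only
	sc.RunAsUser = nil      // Linux-only; Windows uses RunAsUserName instead
}
```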