Only wait for finished binding or error, but not both
Signed-off-by: Aldo Culquicondor <acondor@google.com>
Change-Id: I13d16e6c7c45c6527591aa05cc79fc5e96d47a68
As part of cleaning up inactive members (who haven't been active since the
beginning of 2019) from OWNERS files, this commit moves abrarshivani to the
emeritus_approvers section.
When checking the incoming pod's anti-affinity rules, this change returns
early when there are no matching anti-affinity terms anywhere in the
whole cluster.
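Roughly, the early-return idea looks like the following sketch (helper names are illustrative and namespace checks are elided; the actual plugin code differs):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// antiAffinityTermMatches is a simplified stand-in for the real term-matching
// logic: it only checks the label selector against the incoming pod's labels.
func antiAffinityTermMatches(term v1.PodAffinityTerm, incoming *v1.Pod) bool {
	sel, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
	if err != nil {
		return false
	}
	return sel.Matches(labels.Set(incoming.Labels))
}

// existingPodMatchesIncoming reports whether an existing pod has any required
// anti-affinity term that matches the incoming pod.
func existingPodMatchesIncoming(existing, incoming *v1.Pod) bool {
	aff := existing.Spec.Affinity
	if aff == nil || aff.PodAntiAffinity == nil {
		return false
	}
	for _, term := range aff.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution {
		if antiAffinityTermMatches(term, incoming) {
			return true
		}
	}
	return false
}

// shouldComputeAntiAffinity lets the caller skip the expensive per-node
// anti-affinity pass entirely when no pod in the cluster has a matching term.
func shouldComputeAntiAffinity(incoming *v1.Pod, allPods []*v1.Pod) bool {
	for _, p := range allPods {
		if existingPodMatchesIncoming(p, incoming) {
			return true
		}
	}
	return false
}
```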
The lack of this validation on incoming pods leads to unpredictable cluster outcomes
when affinity results are later calculated against existing pods (see #92714). This fix
addresses the main place where these problems should be caught.
Unfortunately, it is difficult to add this validation directly to the API server, because
it could break migrations involving existing pods that fail the check. This
is a compromise to address the current issue.
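Conceptually, the scheduler-side check amounts to something like the sketch below (function name and the exact set of rules are illustrative, not the PR's code):

```go
package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validateIncomingPodAffinityTerms rejects a pod at scheduling time if any of
// its required anti-affinity terms carries a selector that cannot be parsed,
// instead of letting a bad term silently skew later affinity calculations
// against existing pods.
func validateIncomingPodAffinityTerms(pod *v1.Pod) error {
	aff := pod.Spec.Affinity
	if aff == nil || aff.PodAntiAffinity == nil {
		return nil
	}
	for _, term := range aff.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution {
		if _, err := metav1.LabelSelectorAsSelector(term.LabelSelector); err != nil {
			return fmt.Errorf("invalid label selector in required anti-affinity term: %w", err)
		}
	}
	return nil
}
```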
This code can be called not only when a container has died and is being restarted,
but also when it is started for the first time. For example, any pod with both
init containers and regular containers will exhibit this behaviour. The reason is
that, in that case, the "if createPodSandbox" path will return only the
initContainers, and on the next call to this function this code is
executed to start the containers for the first time.
In that case, it is wrong to log that the container is dead and will be
restarted, as it was never started. In fact, the restart count will not
be increased.
This commit just changes this to say that the container is not in the
desired state and should be started. In the end, the kubelet is a state
machine and that is all we really care about.
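A self-contained toy sketch of the state-machine view (types and message text here are illustrative, not the kubelet's actual code):

```go
package main

import "fmt"

// containerState is a simplified stand-in for the kubelet's container status.
type containerState int

const (
	stateNeverStarted containerState = iota
	stateRunning
	stateExited
)

// startMessage returns the log text for a container that needs to be started.
// The point of the change: the same branch handles both "never started" and
// "exited" containers, so the message only claims the container is not in the
// desired state rather than asserting that it died.
func startMessage(name string, state containerState, restartCount int) string {
	if state == stateRunning {
		return fmt.Sprintf("Container %q is already running", name)
	}
	return fmt.Sprintf("Container %q is not in the desired state and will be started (restart count %d)", name, restartCount)
}

func main() {
	// First start after the init containers finished: never ran, restart count 0.
	fmt.Println(startMessage("app", stateNeverStarted, 0))
	// Genuine restart of a container that exited earlier.
	fmt.Println(startMessage("app", stateExited, 1))
}
```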
No tests are added, as the behaviour was already correct and the tests don't check
log messages.
Signed-off-by: Rodrigo Campos <rodrigo@kinvolk.io>
As of now, the kubelet passes the security context to the container runtime even
if the security context has options that are invalid for a particular OS. As a result,
the pod fails to come up on the node. This error is particularly pronounced on
Windows nodes, where the kubelet allows Linux-specific options like SELinux,
RunAsUser, etc., even though the [documentation](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-container)
clearly states they are not supported. This PR ensures that the kubelet strips
from the pod's security context any options that don't make sense on Windows.
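A rough sketch of the stripping idea, assuming an illustrative helper name and field list (the actual kubelet change may differ):

```go
package sketch

import (
	"runtime"

	v1 "k8s.io/api/core/v1"
)

// stripLinuxOnlySecurityContext drops Linux-only security context fields such
// as SELinux options and RunAsUser before the pod is handed to the runtime on
// a Windows node, instead of letting the container fail to start.
func stripLinuxOnlySecurityContext(pod *v1.Pod) {
	if runtime.GOOS != "windows" {
		return
	}
	for i := range pod.Spec.Containers {
		if sc := pod.Spec.Containers[i].SecurityContext; sc != nil {
			// These fields have no meaning on Windows nodes.
			sc.SELinuxOptions = nil
			sc.RunAsUser = nil
			sc.RunAsGroup = nil
		}
	}
	if psc := pod.Spec.SecurityContext; psc != nil {
		psc.SELinuxOptions = nil
		psc.RunAsUser = nil
		psc.RunAsGroup = nil
		psc.FSGroup = nil
	}
}
```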
The KEP specifies that the controller will "mirror all labels from the
Endpoints resource and all endpoints and ports from the corresponding subset".
I'd missed that in my initial implementation; this should fix it.
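In essence, the mirroring rule boils down to something like the sketch below (helper name is illustrative, and endpoint/address conversion and slice naming are elided):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1beta1"
)

// newMirroredSlice copies every label from the Endpoints object onto the
// generated EndpointSlice and carries over all ports from the subset being
// mirrored, as the KEP requires.
func newMirroredSlice(endpoints *v1.Endpoints, subset v1.EndpointSubset) *discovery.EndpointSlice {
	slice := &discovery.EndpointSlice{}

	// Mirror all labels from the Endpoints resource.
	slice.Labels = make(map[string]string, len(endpoints.Labels))
	for k, v := range endpoints.Labels {
		slice.Labels[k] = v
	}

	// Mirror all ports from the corresponding subset.
	for i := range subset.Ports {
		p := subset.Ports[i]
		slice.Ports = append(slice.Ports, discovery.EndpointPort{
			Name:     &p.Name,
			Port:     &p.Port,
			Protocol: &p.Protocol,
		})
	}
	return slice
}
```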