This allows us to return with a timeout error as soon as the
context is canceled. Previously, in cases where the mount would
never succeed, pods could get stuck deleting for 2 minutes.
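A minimal sketch of the idea, using invented names (waitForMounts,
mounted) rather than the actual VolumeManager code:

    package volumewait

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForMounts polls until the volumes are mounted or the context is
    // canceled; it returns immediately with a timeout-style error on
    // cancellation instead of letting pod deletion hang for the full window.
    func waitForMounts(ctx context.Context, mounted func() bool) error {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for {
            if mounted() {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("timed out waiting for volumes to mount: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }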
In the Sync*Pod methods that call VolumeManager.WaitFor*, we
must keep wait.Interrupted errors out of the logs, since they are
part of control flow, not runtime problems. Any early interruption
should result in exiting the Sync*Pod method as quickly as possible
without logging intermediate errors.
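The callers can filter roughly like this; syncPod and waitForVolumes
are placeholders, while wait.Interrupted is the apimachinery helper
these errors come from:

    package podsync

    import (
        "context"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/klog/v2"
    )

    func syncPod(ctx context.Context, waitForVolumes func(context.Context) error) error {
        if err := waitForVolumes(ctx); err != nil {
            // Interruption is control flow: return quickly and let the
            // caller decide, without logging it as a runtime failure.
            if !wait.Interrupted(err) {
                klog.ErrorS(err, "Unexpected error waiting for volumes")
            }
            return err
        }
        return nil
    }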
The pod worker may receive a new pod which is marked as terminal in
the runtime cache. This can occur if a pod is marked as terminal and the
kubelet is restarted.
The kubelet needs to drive these pods through the termination state
machine. If, upon restart, the kubelet receives a pod which is terminal
based on the runtime cache, it indicates that the pod finished
`SyncTerminatingPod` but did not complete `SyncTerminatedPod`. The
pod worker needs to ensure that `SyncTerminatedPod` will run on these
pods. To accomplish this, set `finished=false` on the pod sync status
to drive the pod through the rest of the state machine.
This will ensure that the status manager and other kubelet subcomponents
(e.g. the volume manager) are aware of this pod and properly clean up
all of the pod's resources after the kubelet is restarted.
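A rough sketch of the idea, with a simplified podSyncStatus standing
in for the real pod worker types:

    package podworker

    import "time"

    // podSyncStatus is a simplified stand-in for the pod worker's sync status.
    type podSyncStatus struct {
        terminatingAt time.Time
        terminatedAt  time.Time
        finished      bool
    }

    // observeRuntimeTerminalPod handles a newly synced pod that the runtime
    // cache already reports as terminal (e.g. after a kubelet restart).
    func observeRuntimeTerminalPod(status *podSyncStatus) {
        now := time.Now()
        if status.terminatingAt.IsZero() {
            status.terminatingAt = now
        }
        if status.terminatedAt.IsZero() {
            status.terminatedAt = now
        }
        // SyncTerminatingPod already ran, but SyncTerminatedPod may not have:
        // finished=false drives the pod through the rest of the state machine.
        status.finished = false
    }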
While making this change, also update the comments to provide a bit
more background on why the kubelet needs to read the runtime pod cache
for newly synced terminal pods.
Signed-off-by: David Porter <david@porter.me>
A pod that cannot be started yet (due to static pod fullname
exclusion when UIDs are reused) must be accounted for in the
pod worker since it is considered to have been admitted and will
eventually start.
Due to a bug we accidentally cleared pendingUpdate for pods that
cannot start yet, which means we can't report the right metric to
users in kubelet_working_pods and, in theory, we might fail to start
the pod in the future (although we have not observed that in tests
that should catch such an error). Describe, implement, and test the
invariant that on every return path from startPodSync either
activeUpdate or pendingUpdate is set on the status, but never both,
and both are nil only when the pod can never start.
This bug was detected by a "programmer error" assertion we added on
metrics that were not being reported, suggesting that we should be
more aggressive about using log assertions and about automating
their detection in tests.
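The invariant can be asserted along these lines; the types and field
names are simplified, not the actual pod worker code:

    package podworker

    import "k8s.io/klog/v2"

    type UpdatePodOptions struct{} // elided

    type podSyncStatus struct {
        activeUpdate  *UpdatePodOptions
        pendingUpdate *UpdatePodOptions
    }

    // assertStartPodSyncInvariant is a "programmer error" log assertion:
    // at most one of activeUpdate/pendingUpdate is set when startPodSync
    // returns, and both are nil only if the pod can never start.
    func assertStartPodSyncInvariant(status *podSyncStatus, canEverStart bool) {
        switch {
        case status.activeUpdate != nil && status.pendingUpdate != nil:
            klog.ErrorS(nil, "Programmer error: both activeUpdate and pendingUpdate are set")
        case status.activeUpdate == nil && status.pendingUpdate == nil && canEverStart:
            klog.ErrorS(nil, "Programmer error: no update recorded for a pod that can still start")
        }
    }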
`findAndRemoveDeletedPods()` processes only pods from the volume
manager cache: `dswp.desiredStateOfWorld.GetVolumesToMount()`. The
podWorker calls the volume manager's `WaitForUnmount()` asynchronously.
If that call happens after the populator has already cleaned up its
resources, an entry is added to `processedPods` that will never be
cleaned up. Clean up such entries when they have no pod and are
marked for deletion.
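A sketch of the cleanup, with simplified types standing in for the
populator and the pod manager lookups:

    package populator

    type uniquePodName string

    type desiredStateOfWorldPopulator struct {
        processedPods map[uniquePodName]bool
        podExists     func(uniquePodName) bool
        podIsDeleted  func(uniquePodName) bool
    }

    // cleanupOrphanedProcessedPods removes processedPods entries whose pod
    // is gone and marked for deletion, so a WaitForUnmount call that raced
    // with findAndRemoveDeletedPods cannot leave an entry behind forever.
    func (dswp *desiredStateOfWorldPopulator) cleanupOrphanedProcessedPods() {
        for podName := range dswp.processedPods {
            if !dswp.podExists(podName) && dswp.podIsDeleted(podName) {
                delete(dswp.processedPods, podName)
            }
        }
    }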
A previous commit added the capability to read the DNS configuration options
from a Windows host, while removing the capability to read from a resolv.conf-like
file.
This commit addresses that issue: if the given ``--resolv-conf``
option is not set to ``Host``, the kubelet treats it as a file path,
preserving the previous behavior.
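The dispatch looks roughly like this; the reader functions are
placeholders, not the actual kubelet DNS code:

    package dns

    const hostResolvConf = "Host"

    type dnsConfig struct {
        servers  []string
        searches []string
        options  []string
    }

    // getDNSConfig preserves the old behavior: anything other than "Host"
    // is treated as a resolv.conf-like file path.
    func getDNSConfig(resolverConfig string,
        fromHost func() (*dnsConfig, error),
        fromFile func(path string) (*dnsConfig, error)) (*dnsConfig, error) {
        if resolverConfig == hostResolvConf {
            return fromHost()
        }
        return fromFile(resolverConfig)
    }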
This previously existed so that the dockershim networking code had
different approvers than general sig-network code, but that code is
gone now, and the remaining kubelet DNS code should be owned by
sig-network in general (and sig-node via inheritance).
HandlePodCleanups is responsible for restarting pods that are no
longer running (usually due to delete and recreation with the same
UID in quick succession). To get the list of admitted pods to
restart, we filter the pods from podManager with the kubelet's
filterOutInactivePods method, which excludes pods the pod worker
has already terminated. Since a restarted
pod will be in the terminated state before HandlePodCleanups
calls SyncKnownPods, we have to call filterOutInactivePods after
SyncKnownPods, otherwise the to-be-restarted pod is ignored and
we have to wait for the next housekeeping cycle to restart it.
Since static pods are often critical system components, this
extra 2s wait is undesirable and we should restart as soon as
we can. Add a failing test that passes after we move the filter
call after SyncKnownPods.
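The ordering constraint, sketched with simplified stand-ins for the
pod worker and pod manager:

    package kubelet

    type pod struct{ uid, name string }

    type podCleaner struct {
        syncKnownPods         func(desired []*pod)        // lets the pod worker forget terminated pods
        filterOutInactivePods func(desired []*pod) []*pod // drops pods the worker still considers terminated
        restartIfNotRunning   func(p *pod)
    }

    func (kl *podCleaner) handlePodCleanups(desired []*pod) {
        // SyncKnownPods must run first: a pod deleted and recreated with
        // the same UID stays "terminated" until the worker forgets the
        // old instance.
        kl.syncKnownPods(desired)

        // Filter only afterwards, otherwise the to-be-restarted pod is
        // dropped here and waits for the next housekeeping cycle (~2s).
        for _, p := range kl.filterOutInactivePods(desired) {
            kl.restartIfNotRunning(p)
        }
    }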
The status manager channel forces all container status to be
processed, even if multiple updates are generated in succession.
Instead of queueing the updates, just remember which ones changed
and process them in a batch. This should reduce the kubelet's QPS
load for status updates, reduce the latency of status propagation
to the API in general, and make the flow easier to reason about.
This also prevents status from being lost when the channel is
full - all updates sent by SetPodStatus are guaranteed to be
recorded. Removing the channel also allows us to set a marker
flag when the pod worker state machine completes, which avoids
the status manager having to call into the pod worker directly.
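The batching shape, reduced to a hypothetical syncer rather than the
real status manager types:

    package status

    import (
        "sync"

        "k8s.io/apimachinery/pkg/types"
    )

    type podStatusSyncer struct {
        lock    sync.Mutex
        dirty   map[types.UID]struct{} // pods whose status changed since the last sync
        trigger chan struct{}          // coalesced wakeup, capacity 1
    }

    // SetPodStatus only remembers which pod changed, so no update is ever
    // lost to a full channel; successive changes to one pod collapse into
    // a single sync.
    func (m *podStatusSyncer) SetPodStatus(uid types.UID) {
        m.lock.Lock()
        m.dirty[uid] = struct{}{}
        m.lock.Unlock()
        select {
        case m.trigger <- struct{}{}:
        default: // a sync is already pending; it will pick this pod up
        }
    }

    // syncBatch drains the dirty set and reconciles each changed pod once.
    func (m *podStatusSyncer) syncBatch(syncPod func(types.UID)) {
        m.lock.Lock()
        batch := m.dirty
        m.dirty = make(map[types.UID]struct{})
        m.lock.Unlock()
        for uid := range batch {
            syncPod(uid)
        }
    }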
Though not obvious as currently written, PreferNodeIP() has different
semantics with legacy and external cloud providers, since one kind of
node IP value never gets passed in the external cloud provider case.
Split it into two functions to make this clearer, to prepare for
adding new external-cloud-only semantics, and to make it obvious
which code can be deleted when legacy cloud providers go away.
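The split, illustrated with invented names and a deliberately
simplified "prefer the node IP" policy:

    package nodestatus

    import v1 "k8s.io/api/core/v1"

    // preferredAddressesLegacyCloudProvider keeps the old semantics for
    // legacy (in-tree) cloud providers, where a detected node IP is
    // reconciled against the cloud-reported addresses; this path can be
    // deleted along with the legacy providers.
    func preferredAddressesLegacyCloudProvider(nodeIP string, addrs []v1.NodeAddress) []v1.NodeAddress {
        out := make([]v1.NodeAddress, 0, len(addrs))
        for _, a := range addrs {
            if a.Address == nodeIP {
                out = append(out, a)
            }
        }
        for _, a := range addrs {
            if a.Address != nodeIP {
                out = append(out, a)
            }
        }
        return out
    }

    // preferredAddressesExternalCloudProvider covers external cloud
    // providers, where that node IP value is never passed in, so this
    // variant can grow external-cloud-only semantics independently.
    func preferredAddressesExternalCloudProvider(addrs []v1.NodeAddress) []v1.NodeAddress {
        return addrs
    }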