Commit Graph

16 Commits

Author SHA1 Message Date
Clayton Coleman
6b9a381185
kubelet: Force deleted pods can fail to move out of terminating
If a CRI error occurs during the terminating phase after a pod is
force deleted (API or static), then the housekeeping loop will not
deliver updates to the pod worker, which prevents the pod's state
machine from progressing. The pod will remain in the terminating
phase but no further attempts to terminate or cleanup will occur
until the kubelet is restarted.

The pod worker now maintains a store of the state of the pods it is
attempting to reconcile, and uses that store to resync unknown pods when
SyncKnownPods() is invoked, so that failures in sync methods for
unknown pods no longer hang forever.

The pod worker's store tracks desired updates and the last update
applied on podSyncStatuses. Each goroutine now synchronizes to
acquire the next work item, context, and whether the pod can start.
This synchronization moves the pending update to the stored last
update, which will ensure third parties accessing pod worker state
don't see updates before the pod worker begins synchronizing them.

As a consequence, the update channel becomes a simple notifier
(struct{}) so that SyncKnownPods can coordinate with the pod worker
to create a synthetic pending update for unknown pods (i.e. no one
besides the pod worker has data about those pods). Otherwise the
pending update info would be hidden inside the channel.
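
The shape of this bookkeeping is roughly the following Go sketch. The
identifiers (podSyncStatus, pendingUpdate, activeUpdate, startWork)
paraphrase the description above and are not guaranteed to match the
kubelet source verbatim:

    package podworkers

    import "sync"

    // UpdatePodOptions stands in for the full set of update options
    // (pod spec, update type, running-pod data) tracked per pod.
    type UpdatePodOptions struct{}

    // podSyncStatus records what the pod worker intends to apply
    // (pendingUpdate) and what it has begun applying (activeUpdate).
    type podSyncStatus struct {
        pendingUpdate *UpdatePodOptions // delivered but not yet picked up by the worker
        activeUpdate  *UpdatePodOptions // last update the worker started synchronizing
    }

    type podWorkers struct {
        lock            sync.Mutex
        podSyncStatuses map[string]*podSyncStatus
    }

    // startWork promotes the pending update to the active update under the
    // lock, so observers of pod worker state never see an update that the
    // worker has not yet begun synchronizing.
    func (p *podWorkers) startWork(uid string) (*UpdatePodOptions, bool) {
        p.lock.Lock()
        defer p.lock.Unlock()
        s, ok := p.podSyncStatuses[uid]
        if !ok || s.pendingUpdate == nil {
            return nil, false
        }
        update := s.pendingUpdate
        s.activeUpdate, s.pendingUpdate = update, nil
        return update, true
    }

    // podWorkerLoop is woken by empty-struct notifications; the payload lives
    // in podSyncStatuses, which lets SyncKnownPods inject a synthetic pending
    // update for pods only the pod worker knows about.
    func (p *podWorkers) podWorkerLoop(uid string, work <-chan struct{}) {
        for range work {
            if update, ok := p.startWork(uid); ok {
                _ = update // synchronize the pod using the promoted update (elided)
            }
        }
    }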

In order to properly track pending updates, we have to be very
careful not to mix RunningPods (which are calculated from the
container runtime and are missing all spec info) and config-
sourced pods. Update the pod worker to avoid using ToAPIPod()
and instead pass update.Options.Pod or update.Options.RunningPod
directly to the correct methods. Add a new SyncTerminatingRuntimePod
to prevent accidental invocations with runtime-only pod data.
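
As a rough illustration of that split, with APIPod and RunningPod as
hypothetical stand-ins for *v1.Pod and the runtime-derived pod, and the
method signatures paraphrased rather than copied from the kubelet:

    package podworkers

    import "context"

    // APIPod stands in for *v1.Pod: full spec data from the config sources.
    type APIPod struct{ Name, Namespace, UID string }

    // RunningPod stands in for the runtime-derived pod: reconstructed from
    // the container runtime and missing almost all spec information.
    type RunningPod struct{ ID, Name, Namespace string }

    // podSyncer keeps the two kinds of pod data on separate methods so a
    // runtime-only pod can never be handed to a path that expects spec data.
    type podSyncer interface {
        // SyncTerminatingPod requires a config-sourced pod.
        SyncTerminatingPod(ctx context.Context, pod *APIPod) error
        // SyncTerminatingRuntimePod accepts only runtime-derived data,
        // preventing accidental use of a RunningPod where a full pod is needed.
        SyncTerminatingRuntimePod(ctx context.Context, runningPod *RunningPod) error
    }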

Finally, fix SyncKnownPods to replay the last valid update for
undesired pods, which drives the pod state machine towards
termination, and alter HandlePodCleanups (sketched below) to:

- terminate runtime pods that aren't known to the pod worker
- launch admitted pods that aren't known to the pod worker
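
A minimal sketch of that cleanup pass; the helper names (workerKnows,
terminateRuntimePod, startAdmittedPod) are hypothetical and the control
flow is paraphrased from the description above:

    package podworkers

    import "context"

    // handlePodCleanups terminates runtime pods the pod worker does not know
    // about and (re)starts admitted pods that are missing from the worker,
    // for example a static pod deleted and recreated with the same UID.
    func handlePodCleanups(
        ctx context.Context,
        runtimePodUIDs, admittedPodUIDs []string,
        workerKnows func(uid string) bool,
        terminateRuntimePod func(ctx context.Context, uid string),
        startAdmittedPod func(ctx context.Context, uid string),
    ) {
        for _, uid := range runtimePodUIDs {
            if !workerKnows(uid) {
                terminateRuntimePod(ctx, uid) // orphaned at the runtime level
            }
        }
        for _, uid := range admittedPodUIDs {
            if !workerKnows(uid) {
                startAdmittedPod(ctx, uid) // desired but unknown to the worker
            }
        }
    }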

Any started pods receive a replay until they reach the finished
state, and then are removed from the pod worker. When a desired
pod is detected as not being in the worker, the usual cause is
that the pod was deleted and recreated with the same UID (almost
always a static pod since API UID reuse is statistically
unlikely). This simplifies the previous restartable pod support.
We are careful to filter for active pods (those not already
terminal and not previously rejected by admission). We also
force a refresh of the runtime cache to
ensure we don't see an older version of the state.

Future changes will allow other components that need to view the
pod worker's actual state (not the desired state the podManager
represents) to retrieve that info from the pod worker.

Several bugs in pod lifecycle have been undetectable at runtime
because the kubelet does not clearly describe the number of pods
in use. To report this more clearly, add the following metrics
(a registration sketch follows the list):

  kubelet_desired_pods: Pods the pod manager sees
  kubelet_active_pods: "Admitted" pods that gate new pods
  kubelet_mirror_pods: Mirror pods the kubelet is tracking
  kubelet_working_pods: Breakdown of pods from the last sync in
    each phase, orphaned state, and static or not
  kubelet_restarted_pods_total: A counter for pods that saw a
    CREATE before the previous pod with the same UID was finished
  kubelet_orphaned_runtime_pods_total: A counter for pods detected
    at runtime that were not known to the kubelet. Will be
    populated at Kubelet startup and should never be incremented
    after.
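
A registration sketch for metrics of this shape is below. It uses the
upstream Prometheus client rather than the kubelet's actual
k8s.io/component-base/metrics wiring, and the label sets shown are
assumptions inferred from the descriptions (and the "23 series" note)
rather than the exact labels:

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var (
        desiredPods = prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: "kubelet_desired_pods",
            Help: "Pods the pod manager sees.",
        }, []string{"static"}) // assumed label split

        activePods = prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: "kubelet_active_pods",
            Help: "Admitted pods that gate the admission of new pods.",
        }, []string{"static"}) // assumed label split

        mirrorPods = prometheus.NewGauge(prometheus.GaugeOpts{
            Name: "kubelet_mirror_pods",
            Help: "Mirror pods the kubelet is tracking.",
        })

        workingPods = prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: "kubelet_working_pods",
            Help: "Pods from the last sync, broken down by phase, orphaned state, and whether they are static.",
        }, []string{"config", "lifecycle", "static"}) // assumed labels

        restartedPodsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
            Name: "kubelet_restarted_pods_total",
            Help: "Pods that saw a CREATE before the previous pod with the same UID finished.",
        }, []string{"static"}) // assumed label split

        orphanedRuntimePodsTotal = prometheus.NewCounter(prometheus.CounterOpts{
            Name: "kubelet_orphaned_runtime_pods_total",
            Help: "Pods detected at the runtime that were not known to the kubelet; populated at startup.",
        })
    )

    func init() {
        prometheus.MustRegister(desiredPods, activePods, mirrorPods,
            workingPods, restartedPodsTotal, orphanedRuntimePodsTotal)
    }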

Add a metric check to our e2e tests that verifies the values are
captured correctly during a serial test, and then verify them in
detail in unit tests.

Adds 23 series to the kubelet /metrics endpoint.
2023-03-08 22:03:51 -06:00
David Ashpole
64af1adace
Second attempt: Plumb context to Kubelet CRI calls (#113591)
* plumb context from CRI calls through kubelet

* clean up extra timeouts

* try fixing incorrectly cancelled context
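
As a sketch of what "plumbing context" means here, with a hypothetical
runtimeService interface standing in for the real CRI client surface
(the actual method names and signatures are simplified):

    package kubelet

    import (
        "context"
        "time"
    )

    // runtimeService is a stand-in for the CRI client; the point is that the
    // context now comes from the caller instead of being created internally.
    type runtimeService interface {
        ListPodSandbox(ctx context.Context) ([]string, error)
    }

    // listSandboxes derives a bounded context from the caller's context
    // rather than context.Background(), so cancellation of the kubelet's
    // sync loop also cancels the in-flight CRI call.
    func listSandboxes(ctx context.Context, rs runtimeService) ([]string, error) {
        ctx, cancel := context.WithTimeout(ctx, 2*time.Minute)
        defer cancel()
        return rs.ListPodSandbox(ctx)
    }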
2022-11-05 06:02:13 -07:00
Antonio Ojea
9c2b333925 Revert "plumb context from CRI calls through kubelet"
This reverts commit f43b4f1b95.
2022-11-02 13:37:23 +00:00
David Ashpole
f43b4f1b95
plumb context from CRI calls through kubelet 2022-10-28 02:55:28 +00:00
vikram Jadhav
0de4397490 mockery to mockgen conversion 2021-09-25 16:15:08 +00:00
Sergey Kanzhelev
4e94144d1e Fix golint failures for kubelet/container 2020-05-20 19:01:23 +00:00
David McMahon
ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00
Yu-Ju Hong
dc42d25f4a kubelet: remove background updating thread in RuntimeCache
This feature is no longer useful since pods don't sync as often. For batch
creations/deletions/syncs, the cache will be up-to-date for most pods since it
will be updated frequently. For other cases, continuing to update for two more
seconds doesn't usually help, as temporal locality doesn't hold across pod syncs.
2015-11-19 17:25:51 -08:00
Yifan Gu
f197a9db4e kubelet: Minor refactors.
Remove some TODOs.
Unexport DockerManager.Puller and DockerManager.PodInfraContainerImage.
Add "docker" for all "go-dockerclient" imports.
2015-06-04 16:08:45 -07:00
Prashanth B
da42f13941 Merge pull request #7749 from yujuhong/stale_cache
Kubelet: record the timestamp correctly in the runtime cache
2015-05-06 09:20:30 -07:00
Yu-Ju Hong
c075719f05 Kubelet: fix the runtime cache to not cache the stale pods
If a pod worker sees stale pods from the runtime cache which were retrieved
before their last sync finished, it may think that the pods were not started
correctly, and attempt to fix that by killing/restarting containers.
There are two issues that may cause runtime cache to store stale pods:
  1. The timestamp is recorded *after* getting the pods from the container
     runtime. This may lead the consumer to think the pods are newer than they
     actually are.
  2. The cache updates are triggered by many goroutines (pod workers, and the
     updating thread). There is no mechanism to enforce that the cache would
     only be updated to newer pods.

This change fixes the above two issues by making sure one always records the
timestamp before getting pods from the container runtime, and updates the
cached pods only if the timestamp is newer.
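
A minimal sketch of the fixed update path; runtimeCache, getPods, and
updateCache are illustrative names rather than the exact kubelet
identifiers:

    package container

    import (
        "sync"
        "time"
    )

    // runtimeCache sketches the fixed update path: record the timestamp
    // before asking the runtime for pods, and only replace the cached copy
    // when that timestamp is newer than what is already stored.
    type runtimeCache struct {
        sync.Mutex
        pods      []string                 // stand-in for the cached pod list
        cacheTime time.Time                // instant at which pods was known accurate
        getPods   func() ([]string, error) // stand-in for querying the container runtime
    }

    func (r *runtimeCache) updateCache() error {
        // Record the timestamp first: the fetched data can only be older than
        // this instant, so consumers are never told it is fresher than it is.
        start := time.Now()
        pods, err := r.getPods()
        if err != nil {
            return err
        }
        r.Lock()
        defer r.Unlock()
        // Multiple goroutines may race to update the cache; accept the result
        // only if it is newer than what is already cached.
        if start.After(r.cacheTime) {
            r.pods, r.cacheTime = pods, start
        }
        return nil
    }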
2015-05-05 18:28:38 -07:00
Paul Morie
d90689bb71 Fix typo in runtime_cache.go 2015-05-04 16:33:48 -04:00
Eric Paris
6b3a6e6b98 Make copyright ownership statement generic
Instead of saying "Google Inc." (which is not always correct) say "The
Kubernetes Authors", which is generic.
2015-05-01 17:49:56 -04:00
Yifan Gu
e1feed9a8b kubelet/container: Replace DockerCache with RuntimeCache. 2015-04-13 18:16:05 -07:00
Yifan Gu
a5e6bea9b5 kubelet/container: Update the cache interface. 2015-04-13 17:38:18 -07:00
Yifan Gu
487d34e409 kubelet: add container runtime cache and fake runtime. 2015-03-20 13:15:20 -07:00