As part of this change, the code responsible for managing the sandbox
image within the kubelet has been removed. Previously, the kubelet
prevented the sandbox image from being garbage collected. However,
with this update, the responsibility of managing the sandbox containers
has been shifted to the CRI implementation itself. By allowing sandbox
image pinning from CRI, we improve efficiency and simplify the kubelet's
interaction with the container runtime. As a result, the kubelet can now
rely on the container runtime's built-in mechanisms for sandbox container
lifecycle management.
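As a rough illustration of what pinning enables (the helper and type
names here are hypothetical, not the kubelet's actual code), image
garbage collection can simply skip anything the runtime reports as
pinned:

    type imageRecord struct {
        id     string
        pinned bool // reported by the runtime via the CRI Image message
    }

    // keepUnpinned drops pinned images (e.g. the sandbox/pause image)
    // from the garbage collection candidate list.
    func keepUnpinned(images []imageRecord) []imageRecord {
        var candidates []imageRecord
        for _, image := range images {
            if image.pinned {
                continue
            }
            candidates = append(candidates, image)
        }
        return candidates
    }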
Signed-off-by: Sohan Kunkerkar <sohank2602@gmail.com>
The two are not coupled except accidentally. Separate them and
update callsites. This will reduce the scope of PodManager interface
to make exposing the pod worker cleaner.
The HandlePod* methods are all structurally similar, but have
accrued subtle differences. In general, the only purpose of these
methods is to process admission and to update the pod worker with
the desired state from the kubelet's config (so that the pod worker
can make it the actual state).
Add a new GetPodAndMirrorPod() method that handles when the config
pod is ambiguous (pod or mirror pod) and inline the structure.
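The shape of that method, approximately (a sketch; resolution details
elided):

    // GetPodAndMirrorPod resolves an ambiguous config pod: given either
    // a pod or a mirror pod, return both halves of the pair (either may
    // be nil) and report whether the input was the mirror pod.
    GetPodAndMirrorPod(pod *v1.Pod) (aPod, aMirrorPod *v1.Pod, wasMirror bool)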
Add comments on questionable additions in the config methods for
future improvement.
Move the metric observation of container count closer to where
pods are actually started (in the pod worker). A future change
can likely move it to syncPod.
This allows us to return with a timeout error as soon as the
context is canceled. Previously, in cases where the mount would
never succeed, pods could get stuck deleting for 2 minutes.
In the Sync*Pod methods that call VolumeManager.WaitFor*, we
must filter out wait.Interrupted errors from being logged as
they are part of control flow, not runtime problems. Any
early interruption should result in exiting the Sync*Pod method
as quickly as possible without logging intermediate errors.
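The call sites end up roughly like this (a sketch; signatures
approximate):

    if err := kl.volumeManager.WaitForAttachAndMount(ctx, pod); err != nil {
        if !wait.Interrupted(err) {
            // Only log genuine failures; an interrupted wait is control
            // flow and the sync is simply exiting early.
            klog.ErrorS(err, "Unable to attach or mount volumes for pod; skipping pod", "pod", klog.KObj(pod))
        }
        return false, err
    }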
If a CRI error occurs during the terminating phase after a pod is
force deleted (API or static) then the housekeeping loop will not
deliver updates to the pod worker which prevents the pod's state
machine from progressing. The pod will remain in the terminating
phase but no further attempts to terminate or cleanup will occur
until the kubelet is restarted.
The pod worker now maintains a store of the state of each pod it is
attempting to reconcile, and uses that to resync unknown pods when
SyncKnownPods() is invoked, so that failures in sync methods for
unknown pods no longer hang forever.
The pod worker's store tracks desired updates and the last update
applied on podSyncStatuses. Each goroutine now synchronizes to
acquire the next work item, context, and whether the pod can start.
This synchronization moves the pending update to the stored last
update, which will ensure third parties accessing pod worker state
don't see updates before the pod worker begins synchronizing them.
As a consequence, the update channel becomes a simple notifier
(struct{}) so that SyncKnownPods can coordinate with the pod worker
to create a synthetic pending update for unknown pods (i.e. no one
besides the pod worker has data about those pods). Otherwise the
pending update info would be hidden inside the channel.
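Conceptually, the per-pod bookkeeping now looks something like this
(field names are illustrative):

    type podSyncStatus struct {
        // pendingUpdate holds the next desired state; a worker goroutine
        // moves it to activeUpdate when it picks up the work item.
        pendingUpdate *UpdatePodOptions
        // activeUpdate is the last state actually handed to a sync
        // method, i.e. what third parties may safely observe.
        activeUpdate *UpdatePodOptions
    }

    // The channel carries no data; it only wakes the worker goroutine,
    // which then acquires the pending update under the pod worker lock.
    podUpdates chan struct{}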
In order to properly track pending updates, we have to be very
careful not to mix RunningPods (which are calculated from the
container runtime and are missing all spec info) and config-
sourced pods. Update the pod worker to avoid using ToAPIPod()
and instead require the pod worker to directly use
update.Options.Pod or update.Options.RunningPod for the
correct methods. Add a new SyncTerminatingRuntimePod to prevent
accidental invocation of sync methods with runtime-only pod data.
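The split roughly yields sync methods along these lines (a sketch;
parameter lists abbreviated):

    // SyncTerminatingPod terminates a pod known from config and may use
    // update.Options.Pod, which has full spec information.
    SyncTerminatingPod(ctx context.Context, pod *v1.Pod /* ... */) error

    // SyncTerminatingRuntimePod terminates a pod known only from the
    // container runtime; it may use update.Options.RunningPod and must
    // not assume any spec information is present.
    SyncTerminatingRuntimePod(ctx context.Context, runningPod *kubecontainer.Pod) error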
Finally, fix SyncKnownPods to replay the last valid update for
undesired pods which drives the pod state machine towards
termination, and alter HandlePodCleanups to:
- terminate runtime pods that aren't known to the pod worker
- launch admitted pods that aren't known to the pod worker
Any started pods receive a replay until they reach the finished
state, and then are removed from the pod worker. When a desired
pod is detected as not being in the worker, the usual cause is
that the pod was deleted and recreated with the same UID (almost
always a static pod since API UID reuse is statistically
unlikely). This simplifies the previous restartable pod support.
We are careful to filter for active pods (excluding those that are
already terminal or were previously rejected by admission). We also
force a refresh of the runtime cache to
ensure we don't see an older version of the state.
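A compressed sketch of the HandlePodCleanups additions (the map and
variable names here are approximations):

    // Terminate runtime pods that aren't known to the pod worker.
    for _, runningPod := range runningRuntimePods {
        if _, known := workerPods[runningPod.ID]; !known {
            kl.podWorkers.UpdatePod(UpdatePodOptions{
                UpdateType: kubetypes.SyncPodKill,
                RunningPod: runningPod, // runtime-only data; no spec
            })
        }
    }
    // Symmetrically, admitted pods missing from the pod worker are
    // relaunched via UpdatePod with UpdateType kubetypes.SyncPodCreate.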
Future changes will allow other components that need to view the
pod worker's actual state (not the desired state the podManager
represents) to retrieve that info from the pod worker.
Several bugs in pod lifecycle have been undetectable at runtime
because the kubelet does not clearly describe the number of pods
in use. To better report this, add the following metrics (see the
sketch after the list):
kubelet_desired_pods: Pods the pod manager sees
kubelet_active_pods: "Admitted" pods that gate new pods
kubelet_mirror_pods: Mirror pods the kubelet is tracking
kubelet_working_pods: Breakdown of pods from the last sync in
  each phase, orphaned state, and static or not
kubelet_restarted_pods_total: A counter for pods that saw a
  CREATE before the previous pod with the same UID was finished
kubelet_orphaned_runtime_pods_total: A counter for pods detected
  at runtime that were not known to the kubelet. Will be
  populated at Kubelet startup and should never be incremented
  after.
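For example, one of the gauges, defined with the usual
component-base/metrics pattern (a sketch; labels and registration
elided):

    DesiredPodCount = metrics.NewGauge(
        &metrics.GaugeOpts{
            Subsystem:      "kubelet",
            Name:           "desired_pods",
            Help:           "The number of pods the kubelet is being instructed to run.",
            StabilityLevel: metrics.ALPHA,
        },
    )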
Add a metric check to our e2e tests that verifies the values are
captured correctly during a serial test, and then verify them in
detail in unit tests.
Adds 23 series to the kubelet /metrics endpoint.
1. Core Kubelet changes to implement In-place Pod Vertical Scaling.
2. E2E tests for In-place Pod Vertical Scaling.
3. Refactor kubelet code and add missing tests (Derek's kubelet review).
4. Add a new hash over container fields, excluding the Resources field,
   to allow feature gate toggling without restarting containers that
   don't use the feature (see the sketch after this list).
5. Fix a corner case where a resize A->B->A gets ignored.
6. Add cgroup v2 support to pod resize E2E test.
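A minimal sketch of the idea behind item 4 (the helper name is
hypothetical, and the real code lives in the kubelet's container
helpers; this uses hash/fnv and k8s.io/api/core/v1):

    // hashContainerWithoutResources hashes all container fields except
    // Resources, so toggling the feature gate does not change the hash
    // of (and therefore restart) containers that never used resize.
    func hashContainerWithoutResources(container *v1.Container) uint32 {
        c := container.DeepCopy()
        c.Resources = v1.ResourceRequirements{} // excluded from the hash
        h := fnv.New32a()
        fmt.Fprintf(h, "%#v", *c)
        return h.Sum32()
    }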
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources
Co-authored-by: Chen Wang <Chen.Wang1@ibm.com>
This change promotes the local storage capacity isolation feature to GA.
At the same time, to allow rootless systems to disable this feature
(they are unable to get root fs stats), it introduces a new kubelet
config field "localStorageCapacityIsolation". By default it is set to
true. Rootless systems can set this configuration to false to disable
the feature. Once it is disabled, users cannot set ephemeral-storage
requests/limits because capacity and allocatable will not be set.
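For a rootless setup the relevant KubeletConfiguration stanza would
look like this (the field is from this change; the rest is the usual
boilerplate):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    localStorageCapacityIsolation: false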
Change-Id: I48a52e737c6a09e9131454db6ad31247b56c000a
When adding functionality to the kubelet package and a test file, it
is kind of painful to run unit tests locally today.
We usually can't run just the test file: since xx_test.go and
xx.go use the same package, we need to specify all the dependencies. As
soon as xx.go uses the Kubelet type (which we need to do to fake a
kubelet in the unit tests), this is completely impossible to do in
practice.
So the other option is to run the unit tests for the whole package or
run only a specific function. Running a single function can work in some
cases, but it is painful when we want to test all the functions we
wrote. On the other hand, running the test for the whole package is very
slow.
Today some unit tests try to connect to the API server (with
retries), create and list a lot of pods/volumes, etc. This makes
running the unit tests for the kubelet package slow.
This patch tries to make running the unit tests for the whole package
more palatable. It adds a skip when the short version is
requested (go test -short ...), so we don't try to connect
to the API server, and we skip other slow tests.
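The skip is the standard Go pattern (the test name is just an
example):

    func TestKubeletSomethingSlow(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping slow test in short mode")
        }
        // ... test body that talks to the API server, creates many
        // pods/volumes, etc. ...
    }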
Before this patch, running the unit tests took the following on my
computer (I've run it several times so the compilation is already done):
$ time go test -v
real 0m21.303s
user 0m9.033s
sys 0m2.052s
With this patch it takes ~1/3 of the time:
$ time go test -short -v
real 0m7.825s
user 0m9.588s
sys 0m1.723s
Around 8 seconds is a wait I can live with when running the tests :)
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
Other components must know when the Kubelet has released critical
resources for terminal pods. Do not set the phase in the apiserver
to terminal until all containers are stopped and cannot restart.
As a consequence of this change, the Kubelet must explicitly transition
a terminal pod to the terminating state in the pod worker which is
handled by returning a new isTerminal boolean from syncPod.
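Sketched as a signature change (this mirrors the new shape of
syncPod; parameter list abbreviated):

    // syncPod now reports whether the pod has reached a terminal phase
    // and can no longer restart, so the pod worker knows to transition
    // it to terminating.
    func (kl *Kubelet) syncPod(ctx context.Context, updateType kubetypes.SyncPodType,
        pod, mirrorPod *v1.Pod, podStatus *kubecontainer.PodStatus) (isTerminal bool, err error)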
Finally, if a pod with init containers hasn't been initialized yet,
don't default the statuses of regular containers or not-yet-attempted
init containers to the unknown failure state.