Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
add missing LastTransitionTime of ContainerReady condition
**What this PR does / why we need it**:
add missing LastTransitionTime of ContainerReady condition
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #64646
**Special notes for your reviewer**:
/cc freehan yujuhong
**Release note**:
```release-note
add missing LastTransitionTime of ContainerReady condition
```
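For context, a minimal sketch of the kind of fix the title describes: stamp LastTransitionTime only when a condition's status actually changes, and carry the old timestamp forward otherwise. The helper names below are illustrative, not the exact kubelet code.

```go
package status

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// updateLastTransitionTime (illustrative name) keeps the previous timestamp
// when the condition status is unchanged, and stamps the current time on a
// real transition.
func updateLastTransitionTime(status, oldStatus *v1.PodStatus, condType v1.PodConditionType) {
	newCond := findCondition(status, condType)
	if newCond == nil {
		return
	}
	oldCond := findCondition(oldStatus, condType)
	if oldCond != nil && oldCond.Status == newCond.Status {
		// No transition: carry the old timestamp forward.
		newCond.LastTransitionTime = oldCond.LastTransitionTime
		return
	}
	newCond.LastTransitionTime = metav1.NewTime(time.Now())
}

func findCondition(status *v1.PodStatus, condType v1.PodConditionType) *v1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == condType {
			return &status.Conditions[i]
		}
	}
	return nil
}
```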
Automatic merge from submit-queue (batch tested with PRs 42973, 41582)
Improve status manager unit testing
This is designed to simplify testing logic in the status manager, and decrease reliance on syncBatch. This is a smaller portion of #37119, and should be easier to review than that change.
It makes the following changes:
- creates convenience functions for get, update, and delete core.Action
- prefers calling syncPod on elements in the podStatusChannel over syncBatch, to reduce unintended reliance on syncBatch
- combines consuming, validating, and clearing actions into a single verifyActions function, replacing separate calls to testSyncBatch(), verifyActions(), and ClearActions()
- changes comments in testing functions into log statements for easier debugging
@Random-Liu
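For illustration, a minimal sketch of the kind of helpers described above, built on k8s.io/client-go/testing and the fake clientset. These are not the PR's exact helpers; the names and the verb-only comparison are simplifications.

```go
package status

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/fake"
	core "k8s.io/client-go/testing"
)

var podsResource = schema.GroupVersionResource{Version: "v1", Resource: "pods"}

// Convenience constructors so test cases can say "expect a get, then an
// update" without spelling out full action structs each time.
func getAction() core.GetActionImpl {
	return core.NewGetAction(podsResource, "default", "")
}

func updateAction() core.UpdateActionImpl {
	return core.NewUpdateAction(podsResource, "default", nil)
}

func deleteAction() core.DeleteActionImpl {
	return core.NewDeleteAction(podsResource, "default", "")
}

// verifyActions consumes the fake client's recorded actions, checks that the
// action verbs match the expectation, and clears them for the next case.
func verifyActions(t *testing.T, client *fake.Clientset, expected []core.Action) {
	t.Helper()
	actual := client.Actions()
	defer client.ClearActions()
	if len(actual) != len(expected) {
		t.Fatalf("got %d actions, expected %d: %v", len(actual), len(expected), actual)
	}
	for i := range actual {
		if actual[i].GetVerb() != expected[i].GetVerb() {
			t.Errorf("action %d: got verb %q, expected %q", i, actual[i].GetVerb(), expected[i].GetVerb())
		}
	}
}
```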
Automatic merge from submit-queue
Print dereferenced pod status fields when logging status update
Before: "Terminated:0xc421932af0"
After:"Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-07 14:50:48 -0500 EST,ContainerID:docker://bd453bb969264b3ace2b3934a568af7679a0d51fee543a5f8a82429ff654970e,}"
"Ignoring same status for pod" messages already print status fully, these "Status for pod updated" messages should too IMO
```release-note
NONE
```
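The underlying Go behavior: fmt prints pointer fields nested inside a struct as hex addresses unless they are dereferenced or implement fmt.Stringer (the generated String() methods on the API types produce the readable "After" output). A standalone illustration, not using the Kubernetes types:

```go
package main

import "fmt"

type terminated struct {
	ExitCode int32
	Reason   string
}

type state struct {
	Terminated *terminated
}

func main() {
	s := state{Terminated: &terminated{ExitCode: 0, Reason: "Completed"}}
	// A nested pointer field prints as an address, e.g. "{0xc000010030}".
	fmt.Printf("before: %v\n", s)
	// Dereferencing first yields the readable fields: "{0 Completed}".
	fmt.Printf("after: %v\n", *s.Terminated)
}
```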
Automatic merge from submit-queue (batch tested with PRs 40964, 42967, 43091, 43115)
Improve code coverage for pkg/kubelet/status/generate.go
**What this PR does / why we need it**:
Improve code coverage for pkg/kubelet/status/generate.go, part of #39559.
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
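For illustration, a table-driven sketch in the usual Kubernetes test style. The GeneratePodReadyCondition signature below is assumed from the era of this PR and has changed across releases; the cases are illustrative, not the PR's actual test table.

```go
package status

import (
	"testing"

	v1 "k8s.io/api/core/v1"
)

func TestGeneratePodReadyCondition(t *testing.T) {
	// One container in the spec; vary the reported statuses per case.
	spec := &v1.PodSpec{Containers: []v1.Container{{Name: "c"}}}
	tests := []struct {
		desc     string
		statuses []v1.ContainerStatus
		expected v1.ConditionStatus
	}{
		{desc: "unknown container statuses", statuses: nil, expected: v1.ConditionFalse},
		{desc: "all containers ready", statuses: []v1.ContainerStatus{{Name: "c", Ready: true}}, expected: v1.ConditionTrue},
	}
	for _, tc := range tests {
		cond := GeneratePodReadyCondition(spec, tc.statuses, v1.PodRunning)
		if cond.Status != tc.expected {
			t.Errorf("%s: got %v, expected %v", tc.desc, cond.Status, tc.expected)
		}
	}
}
```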
Automatic merge from submit-queue (batch tested with PRs 40239, 40397, 40449, 40448, 40360)
move the discovery and dynamic clients
Moved the dynamic client, the discovery client, testing/core, and testing/cache to `client-go`. Dependencies on API groups we don't have generated clients for have been dropped: federation, kubeadm, and imagepolicy.
@caesarxuchao @sttts
approved based on https://github.com/kubernetes/kubernetes/issues/40363
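After the move, both clients are imported from client-go. A minimal usage sketch against today's client-go API; the constructors (and the /path/to/kubeconfig placeholder) may differ from the code at merge time:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Discovery client now lives under k8s.io/client-go/discovery.
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	fmt.Println("API groups:", len(groups.Groups))

	// Dynamic client now lives under k8s.io/client-go/dynamic.
	if _, err := dynamic.NewForConfig(config); err != nil {
		panic(err)
	}
}
```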
Enforce the following limits:
- 12kb for the total message length in container status
- 4kb for the termination message path file
- 2kb or 80 lines (whichever is shorter) from the log on error
Fall back to log output if the user requests it.
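A hedged sketch of how such limits might be enforced; the constants mirror the numbers above, and tailTruncate is a hypothetical helper, not the kubelet's actual implementation:

```go
package status

import "strings"

// Illustrative limits matching the numbers above.
const (
	maxTotalMessageLength = 12 * 1024 // total message length in container status
	maxMessageFileLength  = 4 * 1024  // termination message path file
	maxLogLength          = 2 * 1024  // log tail on error
	maxLogLines           = 80        // or 80 lines, whichever is shorter
)

// tailTruncate (hypothetical) keeps at most maxLines lines and maxBytes bytes
// from the tail of s, mirroring the "whichever is shorter" rule for the log
// fallback.
func tailTruncate(s string, maxBytes, maxLines int) string {
	lines := strings.Split(s, "\n")
	if len(lines) > maxLines {
		lines = lines[len(lines)-maxLines:]
	}
	out := strings.Join(lines, "\n")
	if len(out) > maxBytes {
		out = out[len(out)-maxBytes:]
	}
	return out
}
```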
cleanupTerminatedPods is responsible for checking whether a pod has been
terminated and forcing a status update to trigger the pod deletion. However,
this function is called in the periodic cleanup routine, which runs every 2
seconds. In other words, it forces a status update for each non-running (and
not yet deleted in the apiserver) pod. When batch deleting tens of pods, the
rate of new updates surpasses what the status manager can handle, causing
numerous redundant requests (and the status channel to fill up).
This change forces a status update only when detecting that the
DeletionTimestamp is set for a terminated pod. Note that for other,
non-terminated pods, the pod workers should be responsible for setting the
correct status after killing all the containers.
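A sketch of the guard described above; the function name is illustrative, not the actual kubelet code:

```go
package status

import v1 "k8s.io/api/core/v1"

// needsReconcileOnCleanup (illustrative) decides whether the periodic cleanup
// routine should force a status update for this pod.
func needsReconcileOnCleanup(pod *v1.Pod, terminated bool) bool {
	if !terminated {
		// Non-terminated pods are left to their pod workers, which set the
		// final status after killing the containers.
		return false
	}
	// Only pods the apiserver has already marked for deletion need a forced
	// update to unblock their removal.
	return pod.DeletionTimestamp != nil
}
```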