Commit Graph

11 Commits

Author SHA1 Message Date
zhoumingcheng
ca8f3dff9a add unit test coverage for pkg/kubelet/util/queue
Signed-off-by: zhoumingcheng <zhoumingcheng@beyondcent.com>
2022-06-23 17:57:17 +08:00
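
Coverage for a delay-based work queue typically pivots on a fake clock rather than real sleeps. Below is a minimal, self-contained sketch of such a test; the string-keyed workQueue type is a stand-in invented for the sketch (the upstream queue keys on types.UID and its exact API may differ), while the fake clock comes from k8s.io/utils/clock/testing:

```go
package queue_test

import (
	"sync"
	"testing"
	"time"

	"k8s.io/utils/clock"
	testingclock "k8s.io/utils/clock/testing"
)

// workQueue is a simplified stand-in for the queue under test: each item
// maps to the time at which it becomes ready. A sketch, not the verbatim
// pkg/kubelet/util/queue implementation.
type workQueue struct {
	clock clock.PassiveClock
	lock  sync.Mutex
	queue map[string]time.Time
}

func newWorkQueue(c clock.PassiveClock) *workQueue {
	return &workQueue{clock: c, queue: map[string]time.Time{}}
}

// Enqueue schedules item to become ready after delay.
func (q *workQueue) Enqueue(item string, delay time.Duration) {
	q.lock.Lock()
	defer q.lock.Unlock()
	q.queue[item] = q.clock.Now().Add(delay)
}

// GetWork returns and removes all items whose ready time has passed.
func (q *workQueue) GetWork() []string {
	q.lock.Lock()
	defer q.lock.Unlock()
	var ready []string
	now := q.clock.Now()
	for item, due := range q.queue {
		if !now.Before(due) {
			ready = append(ready, item)
			delete(q.queue, item)
		}
	}
	return ready
}

func TestEnqueueGetWork(t *testing.T) {
	fakeClock := testingclock.NewFakeClock(time.Now())
	q := newWorkQueue(fakeClock)

	q.Enqueue("pod-a", 30*time.Second)
	if got := q.GetWork(); len(got) != 0 {
		t.Fatalf("no work should be ready yet, got %v", got)
	}

	// Stepping the fake clock past the delay makes the item ready,
	// with no real waiting in the test.
	fakeClock.Step(time.Minute)
	if got := q.GetWork(); len(got) != 1 || got[0] != "pod-a" {
		t.Fatalf("expected [pod-a], got %v", got)
	}
}
```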
wojtekt
53ce79a18a Migrate to k8s.io/utils/clock in pkg/kubelet 2021-09-10 12:20:09 +02:00
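
This migration is mostly an import swap: k8s.io/apimachinery/pkg/util/clock was deprecated in favor of k8s.io/utils/clock, which kept the interface shape but moved the fakes into a testing subpackage. A hedged sketch of the before/after, not the verbatim commit:

```go
// Before the migration, the clock came from apimachinery:
//
//	import "k8s.io/apimachinery/pkg/util/clock"
//	fakeClock := clock.NewFakeClock(time.Now())
//
// After it, the interface lives in k8s.io/utils/clock and the fakes
// under its testing subpackage:
package queue

import (
	"time"

	"k8s.io/utils/clock"
	testingclock "k8s.io/utils/clock/testing"
)

func clocks() (clock.Clock, clock.Clock) {
	realClock := clock.RealClock{}                     // production callers
	fakeClock := testingclock.NewFakeClock(time.Now()) // tests
	return realClock, fakeClock
}
```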
Clayton Coleman
3e095d12b4 Refactor move of client-go/util/clock to apimachinery 2017-05-20 14:19:48 -04:00
deads2k
5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
deads2k
c47717134b move utils used in restclient to client-go 2017-01-19 07:55:14 -05:00
deads2k
6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Harry Zhang
cb14b35bde Refactor util clock into its own pkg 2016-07-28 02:29:04 -04:00
David McMahon
ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00
Daniel Smith
4a7d70aef1 extend fake clock 2016-02-01 15:36:15 -08:00
Yu-Ju Hong
4ab505606b Always overwrite items in kubelet's work queue
This allows kubelet to change the next sync time based on the last result.
2016-01-12 16:25:19 -08:00
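
The overwrite behavior follows directly from a map-backed queue: Enqueue is a plain assignment, so re-enqueueing a UID reschedules the single existing entry instead of adding a duplicate. A small fragment building on the workQueue sketch above (string keys here; the real queue keys on types.UID):

```go
// reschedule shows "always overwrite": the second Enqueue replaces the
// pending ready time, letting the last sync result pull the next sync
// earlier. q is the workQueue sketch from the test example above.
func reschedule(q *workQueue) {
	q.Enqueue("pod-a", 10*time.Minute) // scheduled: due in 10 minutes
	// The last sync asked for a faster retry; the plain map assignment
	// overwrites the entry rather than queueing the pod twice.
	q.Enqueue("pod-a", 10*time.Second) // rescheduled: due in 10 seconds
}
```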
Yu-Ju Hong
2eb17df46b kubelet: independent pod syncs and backoff on error
Currently kubelet syncs all pods every 10s. This is not preferred because
 * Some pods may have been sync'd recently.
 * This may cause all the pods to be sync'd at once, causing undesirable
   CPU spikes.

This PR replaces the global syncs with independent, periodic pod syncs. At the
end of syncing, each pod worker will enqueue itself with a future timestamp
(current time + sync interval) at which it will be due for another sync.
 * If the pod worker encounters a sync error, it may requeue with a different
   timestamp to retry sooner.
 * If a sync is triggered by the update channel (events or spec changes), the
   pod worker would enqueue a new sync time.

This change is necessary for moving to a long (or no) periodic sync period once
the pod lifecycle event generator is completed. We will still rely on this
mechanism to requeue the pod on sync error.

This change also makes sure that if a sync does not succeed (whether due to a
real error or the per-container backoff mechanism), an error is propagated back
to the pod worker, which is responsible for requeuing.
2015-11-03 13:29:08 -08:00
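
A minimal runnable sketch of the requeue pattern this commit describes: on success the worker enqueues itself for now + sync interval, and on error it requeues with a shorter backoff so the retry does not wait for the next period. The names (syncAndRequeue, printQueue) and intervals are illustrative, not the kubelet's actual identifiers:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const (
	syncInterval  = 10 * time.Second // periodic resync cadence (illustrative)
	backoffOnFail = 2 * time.Second  // faster retry after an error (illustrative)
)

// queue is the Enqueue half of the work-queue sketch above.
type queue interface {
	Enqueue(item string, delay time.Duration)
}

// syncAndRequeue runs one sync for a pod and schedules the next one.
// On success the pod is due again after a full sync interval; on error
// it is requeued sooner, because sync errors propagate back to the
// worker, which owns requeuing.
func syncAndRequeue(q queue, podUID string, syncPod func(string) error) {
	if err := syncPod(podUID); err != nil {
		fmt.Printf("sync %s failed (%v); retrying in %v\n", podUID, err, backoffOnFail)
		q.Enqueue(podUID, backoffOnFail)
		return
	}
	q.Enqueue(podUID, syncInterval)
}

// printQueue is a stub queue for the demo.
type printQueue struct{}

func (*printQueue) Enqueue(item string, delay time.Duration) {
	fmt.Printf("enqueue %s due in %v\n", item, delay)
}

func main() {
	q := &printQueue{}
	syncAndRequeue(q, "pod-a", func(string) error { return nil })
	syncAndRequeue(q, "pod-b", func(string) error { return errors.New("image pull failed") })
}
```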