Merge pull request #23914 from sky-uk/make-etcd-cache-size-configurable
Automatic merge from submit-queue

Make etcd cache size configurable

Instead of the prior 50K limit, allow users to specify a size that suits their cluster. I'm not sure what a sensible default is here; I'm still experimenting on my own clusters. A size of 50 gives me a 270MB maximum footprint, while 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.

There are some other fundamental issues that I'm not addressing here:

- Old etcd items are cached and potentially never removed: the cache stores objects by modifiedIndex and doesn't remove the old object when it gets updated.
- The cache isn't LRU, so there's no guarantee it remains hot. This makes its performance difficult to predict, and it's more of an issue with a smaller cache size. (See the sketch after the diff below.)
- etcd entries in 1.2 seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists on the node status.

This is provided as a fix for #23323.
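As a rough illustration of what the PR enables, here is a minimal sketch of plumbing the new deserialization-cache-size flag from the command line into an options struct. This is not the actual apiserver wiring; ServerOptions and AddFlags are hypothetical names, though the flag name comes from the diff below. As a back-of-envelope check on the numbers above: if most of that 2GB was cache, 50K entries works out to roughly 40KB per cached object on average.

```go
package main

import (
	"flag"
	"fmt"
)

// ServerOptions is a hypothetical stand-in for the apiserver's option
// struct; the real wiring lives elsewhere in the kubernetes codebase.
type ServerOptions struct {
	// DeserializationCacheSize bounds how many decoded etcd objects
	// are kept in memory.
	DeserializationCacheSize int
}

// AddFlags registers the flag so operators can size the cache for
// their cluster instead of living with a hard-coded 50K.
func (o *ServerOptions) AddFlags(fs *flag.FlagSet) {
	fs.IntVar(&o.DeserializationCacheSize, "deserialization-cache-size",
		50000, "Number of deserialized json objects to cache in memory.")
}

func main() {
	opts := &ServerOptions{}
	opts.AddFlags(flag.CommandLine)
	flag.Parse()
	fmt.Printf("deserialization cache sized at %d entries\n",
		opts.DeserializationCacheSize)
}
```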
@@ -82,6 +82,7 @@ deleting-pods-burst
 deleting-pods-qps
 deployment-controller-sync-period
 deployment-label-key
+deserialization-cache-size
 dest-file
 disable-filter
 docker-email
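On the "cache isn't LRU" point in the issues list above: a size-bounded LRU would give the eviction guarantee the description says is missing. Below is a minimal sketch in Go using container/list, illustrative only and not the apiserver's actual cache.

```go
package main

import "container/list"

type entry struct {
	key uint64      // e.g. an etcd modifiedIndex
	obj interface{} // the decoded object
}

// LRUCache keeps at most max entries, evicting the least recently
// used entry when the bound is exceeded.
type LRUCache struct {
	max   int
	ll    *list.List               // front = most recently used
	items map[uint64]*list.Element // key -> list node
}

func NewLRUCache(max int) *LRUCache {
	return &LRUCache{
		max:   max,
		ll:    list.New(),
		items: make(map[uint64]*list.Element),
	}
}

// Add inserts or refreshes an entry, evicting the oldest if needed.
func (c *LRUCache) Add(key uint64, obj interface{}) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).obj = obj
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key: key, obj: obj})
	if c.ll.Len() > c.max {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

// Get returns the cached object and marks it as recently used.
func (c *LRUCache) Get(key uint64) (interface{}, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).obj, true
	}
	return nil, false
}
```

Bounded eviction like this would also limit the first issue in the list: stale entries keyed by an old modifiedIndex could still linger after an update, but only until they age out of the cache.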