Refactor common validation into methods that validate a single container
and call these methods when iterating the three types of container
lists. Move initContainer-specific validation from validateContainers to
validateInitContainers.
This resolves issues where init and ephemeral containers would return
duplicate or incorrectly formatted errors for problems detected by
validateContainers.
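A rough sketch of the shape of this refactor (names, signatures, and the
specific checks are illustrative, not the exact ones in this change):

    package validation

    import (
        "k8s.io/apimachinery/pkg/util/validation/field"
        core "k8s.io/kubernetes/pkg/apis/core"
    )

    // validateContainerCommon collects the checks shared by regular, init,
    // and ephemeral containers, reporting errors at the given field path.
    func validateContainerCommon(ctr *core.Container, path *field.Path) field.ErrorList {
        allErrs := field.ErrorList{}
        if len(ctr.Name) == 0 {
            allErrs = append(allErrs, field.Required(path.Child("name"), ""))
        }
        // ... other shared checks (image, ports, env, resources) ...
        return allErrs
    }

    // validateInitContainers iterates the init container list, applying the
    // shared checks plus init-specific ones, so each error is reported once
    // at the initContainers[i] path.
    func validateInitContainers(containers []core.Container, fldPath *field.Path) field.ErrorList {
        allErrs := field.ErrorList{}
        for i := range containers {
            idxPath := fldPath.Index(i)
            allErrs = append(allErrs, validateContainerCommon(&containers[i], idxPath)...)
            if containers[i].Lifecycle != nil {
                allErrs = append(allErrs, field.Forbidden(idxPath.Child("lifecycle"), "may not be set for init containers"))
            }
        }
        return allErrs
    }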
Add missing tests identified via KUBE_COVER coverage output and check
that the errors returned by validation have the expected type and field.
Fix tests that asserted multiple errors so that later failures aren't
masked when there's a regression in only one of the errors.
This introduces no changes to unit tests other than switching from
map-based to struct-based tables in TestValidateContainers and
TestValidateInitContainers, in order to make diffs for later commits
easier to read.
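For illustration, the struct-based table style the tests move to, with the
per-error type and field assertions (a simplified sketch reusing the
validateInitContainers sketch above; the real tables cover many more cases):

    package validation

    import (
        "testing"

        "k8s.io/apimachinery/pkg/util/validation/field"
        core "k8s.io/kubernetes/pkg/apis/core"
    )

    func TestValidateInitContainersSketch(t *testing.T) {
        cases := []struct {
            name          string
            containers    []core.Container
            expectedType  field.ErrorType
            expectedField string
        }{
            {
                name:          "missing name",
                containers:    []core.Container{{Image: "busybox"}},
                expectedType:  field.ErrorTypeRequired,
                expectedField: "initContainers[0].name",
            },
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                errs := validateInitContainers(tc.containers, field.NewPath("initContainers"))
                if len(errs) != 1 {
                    t.Fatalf("expected 1 error, got %d: %v", len(errs), errs)
                }
                if errs[0].Type != tc.expectedType || errs[0].Field != tc.expectedField {
                    t.Errorf("got %s on %q, want %s on %q",
                        errs[0].Type, errs[0].Field, tc.expectedType, tc.expectedField)
                }
            })
        }
    }

Asserting exactly one error per case is what keeps a regression in one
error from masking a failure in another.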
The evictorLock only protects zonePodEvictor and zoneNoExecuteTainter.
processTaintBaseEviction showed indications of increased lock contention
among goroutines (see issue 110341 for more details).
The refactor ensures that every codepath in that function that previously
made API calls while holding the evictorLock now makes those calls outside
the lock, so the lock is held only to access zonePodEvictor and/or
zoneNoExecuteTainter.
The same refactor is applied in two other places, the doEvictionPass and
doNoExecuteTaintingPass functions, both of which made multiple API calls
under the evictorLock.
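The shape of the change, as a minimal self-contained sketch (taintQueue,
swapNodeTaints, zoneKey, and markForTainting are hypothetical stand-ins
for the real queue type, API helper, and codepaths):

    package nodelifecycle

    import (
        "sync"

        v1 "k8s.io/api/core/v1"
        clientset "k8s.io/client-go/kubernetes"
    )

    // taintQueue is a hypothetical stand-in for the real rate-limited queue.
    type taintQueue struct{ names []string }

    func (q *taintQueue) Add(name string) { q.names = append(q.names, name) }

    type Controller struct {
        kubeClient           clientset.Interface
        evictorLock          sync.Mutex
        zoneNoExecuteTainter map[string]*taintQueue
    }

    // swapNodeTaints stands in for the API call that updates node taints.
    func swapNodeTaints(c clientset.Interface, node *v1.Node) error {
        // ... issue the update against the API server ...
        return nil
    }

    func zoneKey(node *v1.Node) string { return node.Labels[v1.LabelTopologyZone] }

    // markForTainting: the lock now guards only the queue access; the API
    // call is made after the lock is released, so goroutines no longer
    // serialize behind a network round trip.
    func (nc *Controller) markForTainting(node *v1.Node) error {
        nc.evictorLock.Lock()
        nc.zoneNoExecuteTainter[zoneKey(node)].Add(node.Name)
        nc.evictorLock.Unlock()
        return swapNodeTaints(nc.kubeClient, node)
    }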
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
At present the CSI spec secret name validation for the ControllerPublish,
ControllerExpand, and NodePublish secrets is performed with
ValidateDNS1123Label(), which makes the validation reject any secret
name longer than 63 characters. Kubernetes allows a Secret object name
to be a `DNS SubDomainName`, so a secret name length of up to 253
characters is valid, and the CSI spec validation has to accept the same
range. This commit addresses the issue in the validation for the
above-mentioned functions.
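The fix amounts to validating the secret name against the subdomain rules
rather than the label rules, roughly as follows (a simplified sketch of
one secret reference; the helper name is illustrative, not the exact diff):

    package validation

    import (
        "k8s.io/apimachinery/pkg/util/validation/field"
        core "k8s.io/kubernetes/pkg/apis/core"
    )

    // validateCSISecretRef sketches the check for one secret reference.
    // ValidateDNS1123Label and ValidateDNS1123Subdomain already exist in
    // pkg/apis/core/validation.
    func validateCSISecretRef(ref *core.SecretReference, fldPath *field.Path) field.ErrorList {
        allErrs := field.ErrorList{}
        if len(ref.Name) == 0 {
            allErrs = append(allErrs, field.Required(fldPath.Child("name"), ""))
        } else {
            // Before: ValidateDNS1123Label rejected names over 63 chars.
            // After: secret names are DNS subdomains, valid up to 253 chars.
            allErrs = append(allErrs, ValidateDNS1123Subdomain(ref.Name, fldPath.Child("name"))...)
        }
        if len(ref.Namespace) == 0 {
            allErrs = append(allErrs, field.Required(fldPath.Child("namespace"), ""))
        } else {
            allErrs = append(allErrs, ValidateDNS1123Label(ref.Namespace, fldPath.Child("namespace"))...)
        }
        return allErrs
    }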
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The proxies watch node labels for topology changes, but node labels can
change in bursts, especially in larger clusters. This puts pressure on
all proxies because they can't filter the events: the topology could
match on any label.
Change node event handling to queue a sync request rather than syncing
immediately. The sync runner can already handle short bursts, so this
shouldn't change behavior in most cases.
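A trimmed-down sketch of the handler change (the Proxier fields shown are
only the ones this sketch needs; the real handler also filters events and
holds more state):

    package iptables

    import (
        "sync"

        v1 "k8s.io/api/core/v1"
        "k8s.io/kubernetes/pkg/util/async"
    )

    // Trimmed-down Proxier; the real one has many more fields.
    type Proxier struct {
        mu         sync.Mutex
        nodeLabels map[string]string
        syncRunner *async.BoundedFrequencyRunner // wraps syncProxyRules
    }

    // Sync queues a sync via the bounded-frequency runner, which
    // coalesces bursts of requests instead of syncing per event.
    func (proxier *Proxier) Sync() {
        proxier.syncRunner.Run()
    }

    // OnNodeUpdate records the new labels, then queues a sync rather
    // than running the rules sync inline (the pre-change behavior).
    func (proxier *Proxier) OnNodeUpdate(oldNode, node *v1.Node) {
        proxier.mu.Lock()
        proxier.nodeLabels = node.Labels
        proxier.mu.Unlock()
        proxier.Sync()
    }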
Signed-off-by: Dan Williams <dcbw@redhat.com>