- use the in-cache Pod instead of the real-time Pod (fetched by calling the API server) to mark it as unschedulable in the internal schedulingQ
- remove the backoff logic, since we no longer call the API server
- the whole flow is now a synchronous call (sketched after this list)
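
A minimal sketch of the new flow, using hypothetical Cache/SchedulingQueue interfaces; the real kube-scheduler types and method names differ:

    package scheduler

    import "fmt"

    // Hypothetical stand-ins for the scheduler's cache and queue; the real
    // kube-scheduler interfaces differ.
    type Pod struct{ Name string }

    type Cache interface {
        // GetPod returns the scheduler's in-cache copy of the Pod.
        GetPod(name string) (*Pod, error)
    }

    type SchedulingQueue interface {
        // AddUnschedulable re-queues the Pod as unschedulable.
        AddUnschedulable(p *Pod) error
    }

    // markUnschedulable shows the new flow: no GET against the API server,
    // no backoff/retry; the cached Pod is handed back to the queue in one
    // synchronous call and the error is returned directly to the caller.
    func markUnschedulable(c Cache, q SchedulingQueue, name string) error {
        p, err := c.GetPod(name) // in-cache Pod, not a live API object
        if err != nil {
            return fmt.Errorf("pod %q not found in scheduler cache: %v", name, err)
        }
        return q.AddUnschedulable(p)
    }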
Ensure that when a pod serving UDP traffic is deleted, the conntrack entries
are cleaned up so that another backend can pick up the traffic with minimal
interruption.
When using NodePort services with long-running connections, stale conntrack
entries left behind on pod deletion can halt the flow of traffic. Add a test
case to check that conntrack entries are cleaned up.
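
For illustration, a rough Go sketch of the kind of cleanup the test verifies, shelling out to the conntrack CLI; the exact flags and the way kube-proxy actually flushes entries are assumptions here:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // clearUDPConntrackForNodePort deletes UDP conntrack entries destined to
    // the given NodePort so that new packets get DNATed to a healthy backend
    // instead of following a stale entry to the deleted pod.
    func clearUDPConntrackForNodePort(port int) error {
        // conntrack may exit non-zero when no entries match, so callers may
        // want to tolerate that case.
        out, err := exec.Command("conntrack", "-D", "-p", "udp", "--dport", fmt.Sprint(port)).CombinedOutput()
        if err != nil {
            return fmt.Errorf("conntrack -D failed: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        if err := clearUDPConntrackForNodePort(30080); err != nil {
            log.Println(err)
        }
    }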
When two or more controllers share the informers created through InitTestScheduler,
it is not safe to start the informers until all controllers have set their informer
indexers. Otherwise, some controllers might fail to register their indexers
in time. Thus, it is the responsibility of each consumer to make sure the informers
are started only after all controllers have had time to initialize.
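
A minimal client-go sketch of the intended ordering (the "byNodeName" indexer is made up for the example): indexers are registered first, and the shared factory is started only afterwards.

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes/fake"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        client := fake.NewSimpleClientset()
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        podInformer := factory.Core().V1().Pods().Informer()

        // Every controller must register its indexers BEFORE the shared
        // informers are started; AddIndexers can fail once the informer runs.
        err := podInformer.AddIndexers(cache.Indexers{
            "byNodeName": func(obj interface{}) ([]string, error) { // hypothetical index
                pod, ok := obj.(*v1.Pod)
                if !ok {
                    return nil, nil
                }
                return []string{pod.Spec.NodeName}, nil
            },
        })
        if err != nil {
            fmt.Println("failed to add indexer:", err)
            return
        }

        // Only after all controllers have had a chance to register their
        // indexers does the consumer start the informers.
        stopCh := make(chan struct{})
        defer close(stopCh)
        factory.Start(stopCh)
        factory.WaitForCacheSync(stopCh)
    }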
Ginkgo has a bug where AfterEach does not get called when the
test suite exits on certain kinds of interrupts (Ctrl-C, for example).
More info: https://github.com/onsi/ginkgo/issues/222
We work around this issue in Kubernetes by adding a special hook into
the AfterSuite call, but AfterSuite cannot be used to perform certain
kinds of cleanup because it can race with the AfterEach hook, and the
framework.AfterEach hook will set framework.ClientSet to nil.
This presents a problem when cleaning up the CSI driver and test pods. This
PR removes cleanup of the driver manifest via CleanupAction, because that
approach is unsafe and racy (f.ClientSet may disappear, for example), and makes
the AfterSuite hooks run in an ordered fashion.
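
A sketch of the ordering idea using plain Ginkgo v1 primitives; AddSuiteCleanup is a hypothetical helper, not the actual e2e framework API:

    package e2e

    import "github.com/onsi/ginkgo"

    // suiteCleanups run from a single AfterSuite body, in registration order,
    // so one cleanup cannot race with another or with per-test AfterEach hooks
    // that reset shared state such as the framework client.
    var suiteCleanups []func()

    // AddSuiteCleanup is a hypothetical helper; the real e2e framework wires
    // its suite-level hooks differently.
    func AddSuiteCleanup(f func()) {
        suiteCleanups = append(suiteCleanups, f)
    }

    var _ = ginkgo.AfterSuite(func() {
        for _, f := range suiteCleanups {
            f() // ordered teardown: CSI driver manifest, test pods, etc.
        }
    })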
We should read and verify the data before actually closing the
connection to avoid connection-based races within the test.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
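
Roughly, as a sketch (connection setup and the expected payload are placeholders):

    package udptest

    import (
        "bytes"
        "fmt"
        "io"
        "net"
    )

    // readThenClose drains and verifies the expected payload before closing,
    // so the close cannot race with the peer still writing data.
    func readThenClose(conn net.Conn, want []byte) error {
        got, err := io.ReadAll(conn) // read until the peer finishes
        if err != nil {
            conn.Close()
            return err
        }
        if !bytes.Equal(got, want) {
            conn.Close()
            return fmt.Errorf("unexpected payload: got %q, want %q", got, want)
        }
        return conn.Close() // close only after the data has been verified
    }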
Removes the fatal error from getIP and moves it into the
retry loop so that the application will not immediately
crash on failure.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
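
A sketch of the changed signature, assuming a simple DNS lookup inside getIP; the real body may differ:

    package guestbook

    import "net"

    // getIP now returns an error instead of calling log.Fatal, so a transient
    // lookup failure is handled by the caller's retry loop rather than
    // crashing the application. The body here is illustrative only.
    func getIP(host string) (net.IP, error) {
        ips, err := net.LookupIP(host)
        if err != nil {
            return nil, err // caller retries instead of the process dying
        }
        return ips[0], nil
    }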
This updates the error messages when registering a
node to be more explicit about what error occurred
and how long the application will wait before retrying.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
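
Illustratively, with placeholder wording:

    package guestbook

    import (
        "fmt"
        "time"
    )

    // registrationError spells out both the underlying error and the retry
    // interval instead of a bare "registration failed"; the exact wording of
    // the real message may differ.
    func registrationError(err error, retryIn time.Duration) string {
        return fmt.Sprintf("error registering node: %v, retrying in %v", err, retryIn)
    }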
Currently the guestbook application will fail if it is unable
to resolve the TCP address on the first attempt. If pod networking
is not set up when the application starts, it will be
unable to resolve the address, leading to frequent failures. This moves
the address resolution into the retry block so that it will try
again if the first attempt is unsuccessful.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
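
A sketch of the pattern, with a placeholder helper name and retry interval:

    package guestbook

    import (
        "log"
        "net"
        "time"
    )

    // dialWithRetry resolves and dials inside the same retry loop, so a
    // resolution failure while pod networking is still coming up simply
    // triggers another attempt instead of aborting.
    func dialWithRetry(address string) *net.TCPConn {
        for {
            addr, err := net.ResolveTCPAddr("tcp", address)
            if err != nil {
                log.Printf("failed to resolve %s: %v, retrying", address, err)
                time.Sleep(time.Second)
                continue
            }
            conn, err := net.DialTCP("tcp", nil, addr)
            if err != nil {
                log.Printf("failed to dial %s: %v, retrying", address, err)
                time.Sleep(time.Second)
                continue
            }
            return conn
        }
    }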