Some tests in this test suite expect --max-pods (i.e., the maximum pod capacity
on the kubelet) to be greater than the default, which holds only in the GCE test
environment. Split the tests into two sets so that we can better categorize
them in the Jenkins setup, without making the tests themselves aware of the
environment.
Ref #14084: print output when an error occurs in the e2e test "Services should be able to create a functioning external load balancer with user-provided load balancer ip".
Correct the port-forward data copying logic so that the server closes its
half of the data stream when socat exits, and the client closes its half
when it finishes writing.
Modify the client to wait for both copies (client->server and
server->client) to finish before it unblocks.
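A minimal Go sketch of the intended copy discipline, collapsed into one relaying function for illustration; the real code splits this between client and server, and CloseWrite here assumes a TCP-style half-close (as net.TCPConn provides):

```go
package portforward

import (
	"io"
	"sync"
)

type halfCloser interface {
	io.ReadWriter
	CloseWrite() error // half-close: signal EOF to the peer, keep reading
}

// proxy copies data in both directions and only returns once BOTH
// copies have finished, so the caller does not unblock early.
func proxy(client, server halfCloser) {
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		io.Copy(server, client) // client -> server
		server.CloseWrite()     // client finished writing; half-close toward server
	}()

	go func() {
		defer wg.Done()
		io.Copy(client, server) // server -> client
		client.CloseWrite()     // server side (socat) exited; half-close toward client
	}()

	wg.Wait() // both directions done before unblocking
}
```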
Fix a race condition in the kubelet's handling of incoming port-forward
streams. Have the client generate a connectionID header that is used to
associate the error and data streams of a single connection, instead of
assuming that streams n and n+1 go together. Attempt to generate a
pseudo connectionID in the server when the connectionID header
isn't present (older clients); this best-effort approach only works
reliably with one connection at a time, whereas multiple concurrent
connections work reliably only with a newer client that generates the
connectionID.
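A hedged sketch of the pairing scheme; the header names and the stream-index fallback are illustrative, not the exact Kubernetes identifiers:

```go
package portforward

import (
	"net/http"
	"strconv"
	"sync/atomic"
)

var nextConnectionID int64

// newStreamHeaders builds headers for the error and data streams of one
// port-forward connection. Both carry the same connectionID, so the
// server can pair them without assuming streams n and n+1 go together.
func newStreamHeaders(port int) (errHeaders, dataHeaders http.Header) {
	id := strconv.FormatInt(atomic.AddInt64(&nextConnectionID, 1), 10)

	errHeaders = http.Header{}
	errHeaders.Set("streamType", "error")
	errHeaders.Set("port", strconv.Itoa(port))
	errHeaders.Set("connectionID", id)

	dataHeaders = http.Header{}
	dataHeaders.Set("streamType", "data")
	dataHeaders.Set("port", strconv.Itoa(port))
	dataHeaders.Set("connectionID", id)
	return errHeaders, dataHeaders
}

// connectionID is the server-side fallback for older clients that send
// no connectionID header: streams arrive in (error, data) pairs, so
// index/2 approximates the connection. Best effort; only reliable with
// a single connection at a time.
func connectionID(h http.Header, streamIndex int) string {
	if id := h.Get("connectionID"); id != "" {
		return id
	}
	return strconv.Itoa(streamIndex / 2)
}
```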
Remove an unnecessary YAML file.
Define four specific pod behaviors.
(Sleeping for short periods is flaky during automated testing, and
sleep -1 still exits 0.)
Don't wait for a certain number of active pods in tests
where the pods terminate after a finite time, since this is racy.
Change some tests to use pods that run forever and to not wait
for completion.
Added tests with local restarts.
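A small sketch of the distinction, with made-up fixture names; only the run-forever shape gives a stable "N pods are active" assertion:

```go
package e2e

// Illustrative container commands (names are made up, not the suite's
// actual fixtures). A short sleep races with the test's polling, and
// "sleep -1" exits 0 instead of signaling an error, so tests that count
// active pods should use the run-forever shape.
var (
	runForever  = []string{"/bin/sh", "-c", "while true; do sleep 10; done"}
	exitSuccess = []string{"/bin/sh", "-c", "exit 0"}          // terminates, exit code 0
	exitFailure = []string{"/bin/sh", "-c", "exit 1"}          // terminates, exit code 1
	crashLoop   = []string{"/bin/sh", "-c", "sleep 5; exit 1"} // restarts under RestartAlways
)
```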
Convert the DeleteOptions to the correct api group.
The test was disabled because the restart count sometimes could not reach the
target before the timeout. This change lowers the target restart count, increases
the timeout to 5 minutes, and adds the test to the SLOW suite.
Running the test in a local cluster takes ~1m40s to complete.
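A minimal sketch of the adjusted polling, with an assumed getRestartCount helper and an illustrative target value:

```go
package e2e

import (
	"fmt"
	"time"
)

const (
	targetRestarts = 3               // lowered target (illustrative value)
	pollInterval   = 5 * time.Second
	pollTimeout    = 5 * time.Minute // raised from the old, tighter limit
)

// waitForRestarts polls until the container has restarted at least
// targetRestarts times or the timeout expires.
func waitForRestarts(getRestartCount func() (int, error)) error {
	deadline := time.Now().Add(pollTimeout)
	for time.Now().Before(deadline) {
		n, err := getRestartCount()
		if err != nil {
			return err
		}
		if n >= targetRestarts {
			return nil
		}
		time.Sleep(pollInterval)
	}
	return fmt.Errorf("restart count did not reach %d within %v", targetRestarts, pollTimeout)
}
```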
The kubelet sends status updates that flip a pod's ready condition after the
pod is already in the running state. RunRC should wait until the pod condition
is ready, to make sure there is no pending status update that could affect the
follow-up performance test.
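A minimal sketch of that readiness gate, using a simplified stand-in for the API's pod condition type:

```go
package e2e

// A pod can be Running while its Ready condition is still False; the
// kubelet flips Ready in a later status update. Gating on Ready avoids
// counting a pod whose status is still settling.
type podCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}
```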
- Don't mess with non-test node labels in daemonset e2e test
Other e2e tests expect labels on the nodes; the daemonset test should only
add and remove its own labels.
- Refactor node updating in daemonset e2e test
- pre-create node API objects from the scheduler when offers arrive
- decline offers until nodes are registered
- turn slave attributes into k8s.mesosphere.io/attribute-* labels (see the sketch after this list)
- update labels from executor Register/Reregister
- watch nodes in scheduler to make non-Mesos labels available for NodeSelector matching
- add unit tests for label predicate
- add e2e test to check that slave attributes really end up as node labels
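A hedged sketch of the attribute-to-label conversion mentioned above; the prefix matches the text, but the Attribute type is a simplified stand-in for the mesos-go API:

```go
package scheduler

const attributePrefix = "k8s.mesosphere.io/attribute-"

// Attribute is a simplified stand-in for a Mesos slave attribute; the
// real mesos-go type also carries typed (scalar/range/set) values.
type Attribute struct {
	Name  string
	Value string
}

// labelsForSlave turns slave attributes into node labels so that plain
// NodeSelector matching works against them, e.g. attribute "rack=r1"
// becomes the label "k8s.mesosphere.io/attribute-rack": "r1".
func labelsForSlave(attrs []Attribute) map[string]string {
	labels := make(map[string]string, len(attrs))
	for _, a := range attrs {
		labels[attributePrefix+a.Name] = a.Value
	}
	return labels
}
```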