Remove unnecessary yaml file.
Define 4 specific pod behaviors.
(Sleeping for short periods is flaky during automated testing, and
sleep -1 still exits with status 0.)
Don't wait for a specific number of active pods in tests
where the pods terminate after a finite time, since that is racy.
Changed some tests to use pods that run forever (see the sketch below)
and to not wait for completion.
Added tests with local restarts.
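A minimal sketch of the kind of run-forever container such tests can use; the package name, image, and command are illustrative assumptions, not the exact values from the change:

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    // runForeverContainer returns a container that never exits, so tests can
    // observe restarts or steady-state behavior without racing against pod
    // completion.
    func runForeverContainer() v1.Container {
        return v1.Container{
            Name:    "busybox",
            Image:   "busybox",
            Command: []string{"/bin/sh", "-c", "while true; do sleep 10; done"},
        }
    }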
Convert the DeleteOptions to the correct API group.
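A rough sketch of the conversion, assuming a *runtime.Scheme with the relevant conversions registered; the helper name and the metav1 target type are illustrative assumptions:

    package e2esketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
    )

    // convertDeleteOptions runs a DeleteOptions object registered in one API
    // group through the scheme's conversion functions to get the form the
    // target group expects.
    func convertDeleteOptions(scheme *runtime.Scheme, in runtime.Object) (*metav1.DeleteOptions, error) {
        out := &metav1.DeleteOptions{}
        if err := scheme.Convert(in, out, nil); err != nil {
            return nil, err
        }
        return out, nil
    }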
The test was disabled because the restart count sometimes could not reach the
target before the timeout. This change lowers the target restart count, increases
the timeout to 5 minutes, and adds the test to the SLOW suite.
Running the test against a local cluster takes ~1m40s to complete.
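Roughly, the wait looks like the sketch below; the poll interval, helper name, and client wiring are assumptions, not the test's actual code:

    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForRestartCount polls until the first container of the named pod has
    // restarted at least target times, or the timeout (e.g. 5 minutes) expires.
    func waitForRestartCount(c kubernetes.Interface, ns, name string, target int32, timeout time.Duration) error {
        return wait.Poll(5*time.Second, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if len(pod.Status.ContainerStatuses) == 0 {
                return false, nil
            }
            return pod.Status.ContainerStatuses[0].RestartCount >= target, nil
        })
    }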
The kubelet sends status updates that flip a pod's ready condition after the
pod is already in the running state. RunRC should wait until the pod condition
is Ready to make sure there are no pending status updates that could affect the
follow-up performance test.
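A minimal sketch of the readiness check RunRC can wait on, assuming the standard core/v1 types:

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod has the Ready condition set to True.
    // Waiting on this, rather than just on the Running phase, avoids trailing
    // kubelet status updates landing during a follow-up measurement.
    func isPodReady(pod *v1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }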
- Don't mess with non-test node labels in the daemonset e2e test
Other e2e tests expect existing labels on the nodes to remain intact. The daemonset
test should only add and remove its own labels (see the sketch after these items).
- Refactor node updating in the daemonset e2e test
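A sketch of the label-updating pattern, assuming a test-owned key prefix and the standard client-go conflict-retry helper; the names are illustrative:

    package e2esketch

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // setTestLabels adds or removes only labels under a test-owned prefix,
    // leaving every other node label untouched.
    func setTestLabels(c kubernetes.Interface, nodeName, prefix string, labels map[string]string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if node.Labels == nil {
                node.Labels = map[string]string{}
            }
            // Drop previously set test labels, then apply the new ones.
            for k := range node.Labels {
                if strings.HasPrefix(k, prefix) {
                    delete(node.Labels, k)
                }
            }
            for k, v := range labels {
                node.Labels[k] = v
            }
            _, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
            return err
        })
    }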
- pre-create node API objects from the scheduler when offers arrive
- decline offers until nodes are registered
- turn slave attributes into k8s.mesosphere.io/attribute-* labels (see the sketch after this list)
- update labels from executor Register/Reregister
- watch nodes in the scheduler to make non-Mesos labels available for NodeSelector matching
- add unit tests for label predicate
- add e2e test to check that slave attributes really end up as node labels
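A sketch of the attribute-to-label mapping named above; the helper and the flattened attribute representation are assumptions for illustration:

    package e2esketch

    // attributeLabels maps Mesos slave attributes onto node labels under the
    // k8s.mesosphere.io/attribute- prefix so they can be used in NodeSelector
    // matching. A plain map[string]string is a simplification of the real
    // Mesos attribute types.
    func attributeLabels(attrs map[string]string) map[string]string {
        labels := make(map[string]string, len(attrs))
        for name, value := range attrs {
            labels["k8s.mesosphere.io/attribute-"+name] = value
        }
        return labels
    }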
This test tracks kubelet resource usage over a long period of time (1hr)
when running N pods (e.g., N=0,50), and prints out the resource usage. This
gives us an idea of how large the kubelet's management overhead is in a stable
cluster.
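A sketch of the sampling loop such a test could run; the sample hook and types are hypothetical stand-ins for however the test actually reads kubelet stats:

    package e2esketch

    import (
        "time"
    )

    // ResourceSample is a simplified stand-in for whatever CPU/memory numbers
    // the real test collects from each kubelet.
    type ResourceSample struct {
        CPUUsageCores float64
        MemoryUsageMB float64
    }

    // trackUsage samples kubelet resource usage at a fixed interval for the
    // given duration (e.g. 1 hour) and returns all samples for later summarizing.
    func trackUsage(duration, interval time.Duration, sample func() ResourceSample) []ResourceSample {
        var samples []ResourceSample
        deadline := time.Now().Add(duration)
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for time.Now().Before(deadline) {
            <-ticker.C
            samples = append(samples, sample())
        }
        return samples
    }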
Some followup items:
* Use a more realistic workload (e.g., including probing)
* Fail the test if the resource usage is too high.
Caveats:
* We assume the scheduler does a decent job of distributing the pause pods,
but we should double-check.
* Cluster addon pods could be unevenly distributed and skew the resource
usage on nodes.