The kubelet sends status updates to flip a pod's Ready condition after the
pod is already in the Running state. RunRC should wait until the pods' Ready
condition is true, so that no pending status update can affect the
follow-up performance test.
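A minimal sketch of such a wait, assuming a client-go style interface (the helper names and signatures are illustrative, not the actual RunRC code):

```go
package e2e

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsReady polls the pods matching labelSelector until every one of
// them reports a Ready condition of True, or the timeout expires.
func waitForPodsReady(c kubernetes.Interface, ns, labelSelector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return true, nil
	})
}

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```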
- Don't mess with non-test node labels in daemonset e2e test
Other e2e tests may rely on labels that already exist on the nodes. The
daemonset test should only add and remove its own labels.
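Roughly, the test-owned label handling could look like this sketch (helper names and signatures are illustrative, not the actual e2e code):

```go
package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setNodeTestLabels adds the given test labels to a node, leaving all
// pre-existing labels untouched.
func setNodeTestLabels(c kubernetes.Interface, nodeName string, labels map[string]string) error {
	node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	for k, v := range labels {
		node.Labels[k] = v
	}
	_, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}

// clearNodeTestLabels removes exactly the keys the test added earlier and
// nothing else.
func clearNodeTestLabels(c kubernetes.Interface, nodeName string, keys []string) error {
	node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, k := range keys {
		delete(node.Labels, k)
	}
	_, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}
```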
- Refactor node updating in daemonset e2e test
- pre-create node api objects from the scheduler when offers arrive
- decline offers until nodes are registered
- turn slave attributes into k8s.mesosphere.io/attribute-* labels (a rough sketch follows below this list)
- update labels from executor Register/Reregister
- watch nodes in scheduler to make non-Mesos labels available for NodeSelector matching
- add unit tests for label predicate
- add e2e test to check that slave attributes really end up as node labels
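The attribute-to-label mapping could look roughly like the following sketch (the Attribute type and helper names are illustrative, not the scheduler's actual code):

```go
package mesosutil

import "fmt"

const attributeLabelPrefix = "k8s.mesosphere.io/attribute-"

// Attribute is a simplified stand-in for a Mesos slave attribute
// (real attributes can also be scalars or ranges).
type Attribute struct {
	Name  string
	Value string
}

// attributesToLabels converts slave attributes into node labels under the
// k8s.mesosphere.io/attribute- prefix; non-Mesos labels stay untouched and
// are merged by the caller.
func attributesToLabels(attrs []Attribute) map[string]string {
	labels := map[string]string{}
	for _, a := range attrs {
		labels[fmt.Sprintf("%s%s", attributeLabelPrefix, a.Name)] = a.Value
	}
	return labels
}
```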
This test tracks kubelet resource usage over a long period of time (1hr)
while running N pods (e.g., N=0, 50) and prints out the resource usage. This
gives us an idea of how much management overhead the kubelet incurs in a
stable cluster.
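A minimal sampling sketch, assuming a pluggable per-node stats source (the usageSample type and the sample parameter are hypothetical placeholders, not the test's actual code):

```go
package e2e

import (
	"fmt"
	"time"
)

type usageSample struct {
	CPUCores    float64 // CPU usage in cores
	MemoryBytes uint64  // working set in bytes
}

// trackKubeletUsage polls each node via the supplied sample function until
// the duration elapses, then prints how many samples were collected per node.
func trackKubeletUsage(nodes []string, duration, interval time.Duration,
	sample func(node string) (usageSample, error)) map[string][]usageSample {

	samples := map[string][]usageSample{}
	deadline := time.Now().Add(duration)
	for time.Now().Before(deadline) {
		for _, node := range nodes {
			if s, err := sample(node); err == nil {
				samples[node] = append(samples[node], s)
			}
		}
		time.Sleep(interval)
	}
	for node, s := range samples {
		fmt.Printf("node %s: %d samples\n", node, len(s))
	}
	return samples
}
```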
Some followup items:
* Use a more realistic workload (e.g., including probing)
* Fail the test if the resource usage is too high.
Caveat:
* We assume the scheduler would do a decent job distributing the pause pods,
but we should double check.
* Cluster addon pods could be unevenly distributed and skew the resource
  usage on nodes.
Cluster providers other than gce or gke might have different cgroup layouts.
From the outside we cannot know what these look like (especially in conformance
tests, which do not know the cluster provider at all).
Hence, this PR defaults to collecting stats only for the "/" cgroup. In the case
of gce or gke the full container list is tested.
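A sketch of the provider-dependent selection (the non-root cgroup names are assumptions, not necessarily the PR's exact list):

```go
// targetContainers returns the cgroup containers to collect stats for.
// Only "/" is safe to query on unknown providers, while gce/gke clusters
// have a known layout and get the full list.
func targetContainers(provider string) []string {
	if provider == "gce" || provider == "gke" {
		// Full cgroup layout assumed for GCE/GKE node images.
		return []string{"/", "/docker-daemon", "/kubelet", "/kube-proxy", "/system"}
	}
	// Conformance tests and other providers: only the root cgroup is guaranteed.
	return []string{"/"}
}
```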
Fixes https://github.com/mesosphere/kubernetes-mesos/issues/436
As discussed with @gmarek, the given test does not belong in the conformance
test suite because it makes a lot of static assumptions about the cgroup setup
of the nodes which cannot be fulfilled by all cluster providers. Depending on
the installation, the kubelet is not allowed to move processes into specific
containers.
Fixes https://github.com/mesosphere/kubernetes-mesos/issues/439.