- pre-create node api objects from the scheduler when offers arrive
- decline offers until nodes are registered
- turn slave attributes into k8s.mesosphere.io/attribute-* labels (sketched below)
- update labels from executor Register/Reregister
- watch nodes in scheduler to make non-Mesos labels available for NodeSelector matching
- add unit tests for label predicate
- add e2e test to check that slave attributes really end up as node labels
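A minimal sketch of the attribute-to-label mapping, using a simplified stand-in type instead of the real mesosproto attributes:

```go
package main

import "fmt"

// attribute is a simplified stand-in for a Mesos slave text attribute.
type attribute struct {
	Name, Value string
}

// attributeLabels maps slave attributes onto node labels of the form
// k8s.mesosphere.io/attribute-<name>, making them usable in NodeSelector matching.
func attributeLabels(attrs []attribute) map[string]string {
	labels := make(map[string]string, len(attrs))
	for _, a := range attrs {
		labels["k8s.mesosphere.io/attribute-"+a.Name] = a.Value
	}
	return labels
}

func main() {
	fmt.Println(attributeLabels([]attribute{{"rack", "2b"}, {"gen", "2014"}}))
}
```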
- instantiate framework.Controller for pods in the executor using framework.NewInformer,
in order to watch pod updates for pods on that host
- forward updates like graceful termination to the kubelet.
This might also pave the way for other updates which are supported by the
kubelet (see the sketch below).
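A sketch of the informer wiring, assuming the 2015-era `pkg/controller/framework` API; `client`, `hostname`, `updates`, `podRelistPeriod` and `stopCh` stand in for the executor's existing state:

```go
// Watch only pods bound to this host.
lw := cache.NewListWatchFromClient(
	client, "pods", api.NamespaceAll,
	fields.OneTermEqualSelector("spec.nodeName", hostname),
)
_, ctl := framework.NewInformer(
	lw,
	&api.Pod{},      // watched object type
	podRelistPeriod, // periodic resync
	framework.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// forward changes such as graceful termination to the kubelet
			updates <- newObj.(*api.Pod)
		},
	},
)
go ctl.Run(stopCh)
```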
1. Add EventRecordQPS and EventBurst parameters to the kubelet.
2. If EventRecordQPS and EventBurst are set, rate-limit events in the kubelet
with an independent rate limiter configured from these values, as sketched below.
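A minimal, self-contained sketch of the token-bucket idea, using `golang.org/x/time/rate` in place of the kubelet's internal rate limiter:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// rateLimitedRecorder drops events beyond the configured QPS/burst instead
// of blocking the kubelet's sync loop.
type rateLimitedRecorder struct {
	limiter *rate.Limiter
}

func (r *rateLimitedRecorder) Event(msg string) {
	if !r.limiter.Allow() {
		return // over the limit: drop the event
	}
	fmt.Println("recorded:", msg)
}

func main() {
	// The two arguments correspond to EventRecordQPS and EventBurst.
	rec := &rateLimitedRecorder{limiter: rate.NewLimiter(rate.Limit(5), 10)}
	for i := 0; i < 20; i++ {
		rec.Event("pod synced")
	}
}
```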
Allow the user to specify the resolver configuration file that is used
to determine the default DNS parameters. This defaults to the system's
/etc/resolv.conf.
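A sketch of the flag wiring; the flag name mirrors the kubelet's `--resolv-conf`:

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	// Default to the system's resolver configuration.
	resolvConf := pflag.String("resolv-conf", "/etc/resolv.conf",
		"Resolver configuration file used as the basis for container DNS resolution")
	pflag.Parse()
	fmt.Println("using resolver config:", *resolvConf)
}
```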
Currently, whenever there is any update, kubelet would force all pod workers to
sync again, causing resource contention and hence performance degradation.
This commit flips kubelet to use incremental updates (as opposed to snapshots).
This allows us to know what pods have changed and send updates to those pod
workers only. The `SyncPods` function has been replaced with individual
handlers, each handling an operation (ADD, REMOVE, UPDATE). Pod workers are
still triggered periodically, and kubelet performs periodic cleanup as well.
This commit also spawns a new goroutine solely responsible for killing pods.
This is necessary because pod killing could hold up the sync loop for an
indefinitely long time now that users can define a graceful termination
period in the container spec.
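A sketch of the per-operation dispatch with simplified stand-in types (the kubelet's real types differ):

```go
package main

// PodOperation and PodUpdate are simplified stand-ins for the kubelet's
// incremental update types.
type PodOperation int

const (
	ADD PodOperation = iota
	REMOVE
	UPDATE
)

type Pod struct{ Name string }

type PodUpdate struct {
	Op   PodOperation
	Pods []*Pod
}

type Kubelet struct {
	podKillCh chan *Pod // drained by the dedicated pod-killing goroutine
}

// dispatch routes each incremental update to a per-operation handler, so
// only the affected pod workers are triggered.
func (kl *Kubelet) dispatch(u PodUpdate) {
	switch u.Op {
	case ADD:
		kl.handlePodAdditions(u.Pods)
	case UPDATE:
		kl.handlePodUpdates(u.Pods)
	case REMOVE:
		for _, p := range u.Pods {
			kl.podKillCh <- p // killing may block for the whole grace period
		}
	}
}

func (kl *Kubelet) handlePodAdditions(pods []*Pod) { /* start new pod workers */ }
func (kl *Kubelet) handlePodUpdates(pods []*Pod)   { /* sync changed pods only */ }

func main() {}
```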
In contrast to the docker and system containers (= cgroup paths), this has no
undesired consequences; it only means that the kubelet itself will live in its
own cgroup below the executor cgroup.
pflag can handle IP addresses so use the pflag code instead of doing it
ourselves. This means our code just uses net.IP and we don't have all of
the useless casting back and forth!
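For example, with `pflag`'s built-in IP support:

```go
package main

import (
	"fmt"
	"net"

	"github.com/spf13/pflag"
)

func main() {
	// pflag parses the flag straight into a net.IP; no string field plus
	// manual net.ParseIP and casting back and forth.
	addr := pflag.IP("address", net.ParseIP("127.0.0.1"), "IP address to serve on")
	pflag.Parse()
	fmt.Println("serving on", *addr)
}
```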
The minion server will
- launch the proxy and executor
- relaunch them when they terminate uncleanly
- rotate their logs.
It is a replacement for a full-blown init process like s6, which is not
necessary in this case.
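An illustrative relaunch loop for one child process (log rotation omitted); the `km` subcommand names are assumptions:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// supervise runs the child and relaunches it whenever it terminates uncleanly.
func supervise(name string, args ...string) {
	for {
		cmd := exec.Command(name, args...)
		if err := cmd.Run(); err != nil {
			log.Printf("%s terminated uncleanly (%v), restarting", name, err)
			time.Sleep(5 * time.Second) // brief backoff before relaunching
			continue
		}
		return // clean exit: do not restart
	}
}

func main() {
	go supervise("km", "proxy")
	supervise("km", "executor")
}
```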
Previously, the NodeName in the pod spec was used to decide which pods the
scheduler considers. Hence, pods with a fixed, pre-set NodeName were never
scheduled by the k8sm-scheduler, leading e.g. to a failing e2e intra-pod test.
Fixes mesosphere/kubernetes-mesos#388
This patch
- sets limits (0.25 cpu, 64 MB) on containers which are not limited in the pod spec
(these are also passed to the kubelet such that it uses them for the docker
run limits)
- sums up the container resource limits for cpu and memory inside a pod,
- compares the sums to the offered resources
- puts the sums into the Mesos TaskInfo such that Mesos does the accounting
for the pod.
- parses the static pod spec and adds up the resources
- sets the executor resources to 0.25 cpu, 64 MB plus the static pod resources
- sets the cgroups in the kubelet for system containers, resource containers
and docker to the one of the executor that Mesos assigned
- adds scheduler parameters --default-container-cpu-limit and
--default-container-mem-limit.
The containers themselves are resource-limited by the Docker resource limits
which the kubelet applies when launching them. A sketch of the limit
defaulting and accounting follows below.
Fixes mesosphere/kubernetes-mesos#68 and mesosphere/kubernetes-mesos#304
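A sketch of that defaulting and accounting, with simplified stand-in types; the defaults correspond to --default-container-cpu-limit and --default-container-mem-limit:

```go
package main

import "fmt"

const (
	defaultContainerCPULimit = 0.25 // cpus
	defaultContainerMemLimit = 64.0 // MB
)

// container is a simplified stand-in for a pod spec container.
type container struct {
	cpuLimit float64 // 0 means unlimited in the pod spec
	memLimit float64 // in MB; 0 means unlimited
}

// podResources sums per-container limits, substituting the defaults for
// unlimited containers; the sums end up in the Mesos TaskInfo so that Mesos
// does the accounting for the pod.
func podResources(containers []container) (cpu, mem float64) {
	for _, c := range containers {
		if c.cpuLimit == 0 {
			c.cpuLimit = defaultContainerCPULimit
		}
		if c.memLimit == 0 {
			c.memLimit = defaultContainerMemLimit
		}
		cpu += c.cpuLimit
		mem += c.memLimit
	}
	return cpu, mem
}

func main() {
	cpu, mem := podResources([]container{{cpuLimit: 1, memLimit: 128}, {}})
	fmt.Printf("pod needs %.2f cpus, %.0f MB\n", cpu, mem) // 1.25 cpus, 192 MB
}
```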
The file source was created even when no static pods were configured.
In this case it was never marked as seen. As a consequence, the kubelet's
syncPods function never deleted pods because it was too cautious due to an
unseen pod source, leading to leaked pods.
The TestExecutorFrameworkMessage test sends a "task-lost:foo" message to the
executor in order to mark a pod as lost. For that the pod must be running first.
Otherwise, the executor code will send "TASK_FAILED" status updates, not "TASK_LOST".
Before this patch there was no synchronization between the pod startup and the
test case. Moreover, in order to start up a task, a working apiserver URL must
be passed to the executor, which was not the case either.
Fixes mesosphere/kubernetes-mesos#351
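An illustrative way to synchronize the test with pod startup; the `updates` channel wiring is hypothetical, while the mesos-go status constants are real:

```go
package executor_test

import (
	"testing"
	"time"

	mesos "github.com/mesos/mesos-go/mesosproto"
)

// waitForTaskRunning blocks until the executor reports TASK_RUNNING, so the
// subsequent "task-lost:foo" message hits a running task.
func waitForTaskRunning(t *testing.T, updates <-chan mesos.TaskState) {
	select {
	case state := <-updates:
		if state != mesos.TaskState_TASK_RUNNING {
			t.Fatalf("expected TASK_RUNNING, got %v", state)
		}
	case <-time.After(10 * time.Second):
		t.Fatal("timed out waiting for the task to start")
	}
}
```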
- the mesos scheduler gets a --static-pods-config parameter pointing to a
directory of pod specs. These are zipped and sent over to newly started mesos
executors (see the sketch below).
- the mesos executor receives the zipped static pod config via ExecutorInfo.Data
and starts the pods up via the kubelet FileSource mechanism.
- both the scheduler and the executor side are fully unit tested.
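A sketch of the scheduler-side packing; `zipStaticPods` is an illustrative name:

```go
package staticpods

import (
	"archive/zip"
	"bytes"
	"io"
	"os"
	"path/filepath"
)

// zipStaticPods archives every file under dir into a byte slice suitable for
// shipping to new executors via ExecutorInfo.Data.
func zipStaticPods(dir string) ([]byte, error) {
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		w, err := zw.Create(rel) // store paths relative to the config dir
		if err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(w, f)
		return err
	})
	if err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```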