Commit Graph

12 Commits

Author SHA1 Message Date
Sascha Grunert
de37b9d293
Make CRI v1 the default and allow a fallback to v1alpha2
This patch makes the CRI `v1` API the new project-wide default version.
To allow backwards compatibility, a fallback to `v1alpha2` has been added
as well. This fallback can either be used directly or be determined
automatically by the kubelet.

Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
2021-11-17 11:05:05 -08:00
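
A minimal sketch of the fallback idea described in the commit above, assuming hypothetical client and function names rather than the real kubelet types:

```go
// Package crifallback sketches CRI version negotiation: prefer v1, fall back
// to v1alpha2. All names here are illustrative, not the actual kubelet API.
package crifallback

import (
	"context"
	"errors"
	"time"
)

// criClient is the minimal surface needed to probe one API version.
type criClient interface {
	// Version returns an error if the runtime does not serve this API version.
	Version(ctx context.Context) error
	APIVersion() string
}

// negotiate prefers the v1 client and falls back to v1alpha2 when v1 is
// unavailable, mirroring the behavior described in the commit message.
func negotiate(ctx context.Context, v1, v1alpha2 criClient) (criClient, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	if err := v1.Version(ctx); err == nil {
		return v1, nil // v1 is the project-wide default
	}
	if err := v1alpha2.Version(ctx); err == nil {
		return v1alpha2, nil // backwards-compatible fallback
	}
	return nil, errors.New("runtime supports neither CRI v1 nor v1alpha2")
}
```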
aheng-ch
ff7b94fa5a fix removing pods from podTopologyHints mapping 2021-05-10 19:44:15 +08:00
Artyom Lukianov
95b2777204 memory manager: specify the container cpuset.memory during the creation
Set the container cpuset.memory during creation and avoid an additional
call to update the container's resources afterwards.

Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
2021-03-02 17:01:46 +02:00
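
A rough sketch of the approach in the commit above: write the memory pinning into the creation config rather than issuing a follow-up resources update. The types and the hook name below are simplified placeholders, not the real kubelet or CRI structs:

```go
// Package memsketch illustrates setting the container's memory cpuset in the
// creation config itself, so no follow-up resources update is required.
package memsketch

import (
	"strconv"
	"strings"
)

// linuxResources mimics the small slice of container resources we care about.
type linuxResources struct {
	CpusetMems string
}

// containerConfig is a placeholder for the config passed to CreateContainer.
type containerConfig struct {
	Resources *linuxResources
}

// memoryManager is a stand-in for the kubelet memory manager: it reports
// which NUMA nodes the container's memory should be pinned to.
type memoryManager interface {
	GetMemoryNUMANodes(podUID, containerName string) []int
}

// preCreateContainer fills in cpuset.mems before the container is created,
// avoiding an extra resources-update round trip after creation.
func preCreateContainer(mm memoryManager, podUID, containerName string, cfg *containerConfig) {
	nodes := mm.GetMemoryNUMANodes(podUID, containerName)
	if len(nodes) == 0 {
		return // no memory pinning decided for this container
	}
	if cfg.Resources == nil {
		cfg.Resources = &linuxResources{}
	}
	parts := make([]string, 0, len(nodes))
	for _, n := range nodes {
		parts = append(parts, strconv.Itoa(n))
	}
	cfg.Resources.CpusetMems = strings.Join(parts, ",")
}
```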
Artyom Lukianov
9ae499ae46 memory manager: pass memory manager flags to the container manager
Pass memory manager flags to the container manager and call all relevant memory manager
methods under the container manager.

Signed-off-by: Byonggon Chun <bg.chun@samsung.com>
2021-02-09 00:54:58 +02:00
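
A simplified sketch of what "call all relevant memory manager methods under the container manager" could look like; the field and method names are assumptions for illustration only:

```go
// Package cmsketch shows the container manager owning the memory manager and
// delegating the per-container calls to it. Names are illustrative only.
package cmsketch

// memoryManager is a stand-in for the kubelet memory manager interface.
type memoryManager interface {
	Start() error
	Allocate(podUID, containerName string) error
	RemoveContainer(containerID string) error
}

// containerManager holds the memory manager, constructed from the kubelet's
// memory manager flags (hypothetical wiring).
type containerManager struct {
	memoryManager memoryManager
}

func (cm *containerManager) Start() error {
	// The container manager starts its sub-managers, memory manager included.
	return cm.memoryManager.Start()
}

func (cm *containerManager) AllocateForContainer(podUID, containerName string) error {
	// Per-container allocation is routed through the container manager.
	return cm.memoryManager.Allocate(podUID, containerName)
}

func (cm *containerManager) RemoveContainer(containerID string) error {
	return cm.memoryManager.RemoveContainer(containerID)
}
```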
Artyom Lukianov
38dc7509f8 cpu manager: specify the container CPU set during the creation
We can set the container cpuset.cpus during creation, so there is no need to
call update resources after the container has been created.

An additional side effect of this change is that the runc process responsible
for creating the container runs with the same CPU affinity, because runc runs
on the cpuset provided in the config.json argument.

This helps prevent undesirable interrupts on isolated CPUs.

Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
2021-01-20 17:53:33 +02:00
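
A hedged sketch of the same idea for CPUs, per the commit above: inject cpuset.cpus into the creation config so the cpuset is already part of config.json. The types and method names are placeholders, not the real kubelet or CRI code:

```go
// Package cpusketch shows setting cpuset.cpus in the container creation
// config, so no post-creation update is needed. Per the commit message, the
// runc process that creates the container then runs with the same CPU
// affinity, which helps avoid undesirable interrupts on isolated CPUs.
package cpusketch

// linuxResources mimics the subset of container resources we care about.
type linuxResources struct {
	CpusetCpus string
}

// containerConfig is a placeholder for the config handed to CreateContainer.
type containerConfig struct {
	Resources *linuxResources
}

// cpuManager is a stand-in for the kubelet CPU manager: it returns the CPU
// set already allocated to the container (e.g. "2-3,6") or "" if none.
type cpuManager interface {
	GetCPUSet(podUID, containerName string) string
}

// preCreateContainer injects the exclusive CPU set into the creation config.
func preCreateContainer(cm cpuManager, podUID, containerName string, cfg *containerConfig) {
	cpus := cm.GetCPUSet(podUID, containerName)
	if cpus == "" {
		return // container has no exclusive CPUs; leave the default cpuset
	}
	if cfg.Resources == nil {
		cfg.Resources = &linuxResources{}
	}
	cfg.Resources.CpusetCpus = cpus
}
```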
Chris Friesen
ab5870d808 Fix exclusive CPU allocations being deleted at container restart
The expectation is that exclusive CPU allocations happen at pod
creation time. When a container restarts, it should not have its
exclusive CPU allocations removed, and it should not need to
re-allocate CPUs.

There are a few places in the current code that look for containers
that have exited and call CpuManager.RemoveContainer() to clean up
the container.  This will end up deleting any exclusive CPU
allocations for that container, and if the container restarts within
the same pod it will end up using the default cpuset rather than
what should be exclusive CPUs.

Removing those calls and adding resource cleanup at allocation
time should get rid of the problem.

Signed-off-by: Chris Friesen <chris.friesen@windriver.com>
2020-04-27 11:36:54 -06:00
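
A condensed sketch of the shape of this fix: instead of dropping a container's exclusive CPUs when it exits, stale state is pruned at allocation time, so a restarting container keeps its assignment. All names and types below are illustrative placeholders:

```go
// Package restartsketch illustrates keeping exclusive CPU assignments across
// container restarts by cleaning up stale state at allocation time instead of
// on container exit.
package restartsketch

// state maps podUID -> containerName -> assigned cpuset string.
type state map[string]map[string]string

type manager struct {
	assignments state
	// activePods reports the pods the kubelet still considers active
	// (a stand-in for the real source of that information).
	activePods func() map[string]bool
}

// Allocate reuses an existing assignment if the container restarted, and
// prunes assignments belonging to pods that no longer exist.
func (m *manager) Allocate(podUID, containerName, requested string) string {
	m.removeStaleState()

	if cpus, ok := m.assignments[podUID][containerName]; ok {
		return cpus // restarted container keeps its exclusive CPUs
	}
	if m.assignments[podUID] == nil {
		m.assignments[podUID] = map[string]string{}
	}
	m.assignments[podUID][containerName] = requested
	return requested
}

// removeStaleState drops assignments for pods that are gone. A container
// merely exiting does NOT remove its assignment, which is the point of the
// fix described above.
func (m *manager) removeStaleState() {
	for podUID := range m.assignments {
		if !m.activePods()[podUID] {
			delete(m.assignments, podUID)
		}
	}
}
```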
nolancon
4baa1d967d Check for nil cpuManager 2020-03-05 07:54:33 +00:00
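
A small hedged sketch of such a nil guard, with placeholder types; stub or test configurations may construct the lifecycle without a CPU manager:

```go
// Package nilguard sketches guarding against a nil CPU manager before use.
package nilguard

type cpuManager interface {
	AddContainer(podUID, containerName, containerID string) error
}

type internalContainerLifecycle struct {
	cpuManager cpuManager // may be nil in stubs
}

func (i *internalContainerLifecycle) PreStartContainer(podUID, containerName, containerID string) error {
	if i.cpuManager != nil {
		return i.cpuManager.AddContainer(podUID, containerName, containerID)
	}
	return nil
}
```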
Louise Daly
9f0081cc36 Updates to container manager and internal container lifecycle to accommodate Topology Manager
Co-authored-by: Conor Nolan <conor.nolan@intel.com>
2019-07-24 08:09:38 +01:00
Connor Doyle
81ccd396d7 Fixed nil InternalContainerLifecycle in cm stubs. 2017-09-04 07:24:59 -07:00
Connor Doyle
ec706216e6 Un-revert "CPU manager wiring and none policy"
This reverts commit 8d2832021a.
2017-09-04 07:24:59 -07:00
Shyam JVS
8d2832021a Revert "CPU manager wiring and none policy" 2017-09-01 18:17:36 +02:00
Connor Doyle
7c6e31617d CPU Manager initialization and lifecycle calls. 2017-08-30 08:50:41 -07:00