A recent change made it so that the CPUManager receives a list of
initial containers that exist on the system at startup. This list can be
non-empty, for example, after a kubelet restart.
This commit ensures that the CPUManager's containerMap structure is
initialized with the containers from this list.
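Conceptually, the initialization is just a loop over that list (the names
here are illustrative, not the exact upstream code):
```
for _, c := range initialContainers {
	// re-key by containerID so existing state can be found and migrated
	containerMap.Add(c.PodUID, c.ContainerName, c.ContainerID)
}
```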
The logic has been updated to match the logic of the best-effort policy
except in two places:
1) The hint filtering function has been updated to allow "don't care"
hints encoded with a `nil` affinity mask to pass through the filter, in
addition to hints that have just a single NUMA bit set.
2) After calculating the `bestHint`, we transform "don't care" affinities
encoded as having all NUMA bits set in their affinity masks into "don't
care" affinities encoded as `nil` (both changes are sketched below).
- Initialize bestHint to TopologyHint{}
- Update checks.
- Move generic unit test case into policy-specific tests and update the
expected outcome to reflect these changes.
- Restructure function
- Remove bug fix for catching {nil true}; to be fixed in a later commit
- Restore unit tests to original state for testing filterHints
This is to keep consistency with the other policies.
This change may be made across all policies in a future PR, but it is
removed from the scope of this PR for now.
- Best Effort Policy: Return hint with nil affinity as opposed to
defaultAffinity when the provider has no preference for NUMA affinity or no
possible NUMA affinities.
- Single NUMA Node Policy: Remove defaultHint from mergeProvidersHints.
Instead return appropriate TopologyHint where required.
- Update unit tests to reflect changes. Some test cases moved into
individual policy test functions due to differing returned affinities
per policy.
- Remove getHintMatch method.
- Replace with simplified versions of the mergePermutation and
iterateAllProviderTopologyHints methods, as used in best-effort (see the
sketch after this list).
- Remove getHintMatch unit tests.
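For reference, the permutation merge amounts to intersecting the affinity
masks of one hint per provider. A simplified sketch, reusing the
`nil`-able mask encoding of TopologyHint from the earlier sketch
(iterateAllProviderTopologyHints would feed it every cross-product
permutation of per-provider hints):
```
// mergePermutation merges one hint per provider by AND-ing their affinity
// masks; the result is preferred only if every input hint was preferred.
func mergePermutation(allNUMAs uint64, permutation []TopologyHint) TopologyHint {
	preferred := true
	merged := allNUMAs // start from "every NUMA node allowed"
	for _, h := range permutation {
		if h.Affinity != nil {
			merged &= *h.Affinity // narrow to the intersection
		}
		preferred = preferred && h.Preferred
	}
	return TopologyHint{Affinity: &merged, Preferred: preferred}
}
```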
- Update filterHints test to reflect changes in previous commit.
- Some common test cases yield differing expected results based on
policy due to independent merge strategies. These cases are moved into
individual policy based test functions.
- Only append valid preferred-true hints to filtered
- Return true if allResourceHints consist only of nil-affinity/preferred-true
hints ({nil true}), and update the defaultHint preference accordingly
(both behaviors are sketched below).
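Both behaviors, sketched with the same simplified hint type as before
(names are assumptions, not the upstream code):
```
// filterPreferred appends only the preferred-true hints, and reports
// whether every resource offered nothing but the "don't care" hint
// {nil true}, so the caller can update the defaultHint preference.
func filterPreferred(allResourceHints [][]TopologyHint) ([][]TopologyHint, bool) {
	onlyDontCare := true
	var filtered [][]TopologyHint
	for _, hints := range allResourceHints {
		var keep []TopologyHint
		for _, h := range hints {
			if h.Preferred {
				keep = append(keep, h)
			}
			if h.Affinity != nil || !h.Preferred {
				onlyDontCare = false
			}
		}
		filtered = append(filtered, keep)
	}
	return filtered, onlyDontCare
}
```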
Explanation taken from original commit:
- Change the current method of finding the best hint:
instead of going over all permutations, sort the hints and find
the narrowest hint common to all resources (sketched after this list).
- Break out early when merging to a preferred hint is not possible
- Remove need to pass policy and numaNodes as arguments
- Remove PolicySingleNUMANode special case check in policy_best_effort
- Add mergeProviderHints base to policy_single_numa_node for upcoming
commit
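The "narrowest" ordering can be pictured as sorting affinity masks by
popcount; a minimal sketch with plain uint64 masks standing in for the
real bitmask type:
```
import (
	"math/bits"
	"sort"
)

// Narrower hints have fewer NUMA bits set in their affinity mask, so
// sorting narrowest-first lets the merge return the first hint common
// to all resources and break out early once a preferred merge is
// impossible.
func sortNarrowestFirst(masks []uint64) {
	sort.Slice(masks, func(i, j int) bool {
		return bits.OnesCount64(masks[i]) < bits.OnesCount64(masks[j])
	})
}
```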
This check is redundant since we protect this call with a call to
`m.sourcesReady.AllReady()` earlier on. Moreover, having this check in
place means that we will leave some stale state around in cases where
there are actually no active pods in the system and this loop hasn't
cleaned them up yet. This can happen, for example, if a pod exits while
the kubelet is down for some reason. We see this exact case being
triggered in our e2e tests, where a test has been failing since October
when this change was first introduced.
This change is to prevent problems when we remove the V1->V2 migration
code in the future. Without this, the checksums of all checkpoints would
be hashed with the name CPUManagerCheckpointV2 embedded inside of them,
which is undesirable. We want the checkpoints to be hashed with the name
CPUManagerCheckpoint instead.
The updated CPUManager from PR #84462 implements logic to migrate the
CPUManager checkpoint file from an old format to a new one. To do so, it
defines the following types:
```
type CPUManagerCheckpoint = CPUManagerCheckpointV2
type CPUManagerCheckpointV1 struct { ... }
type CPUManagerCheckpointV2 struct { ... }
```
This replaces the old definition of just:
```
type CPUManagerCheckpoint struct { ... }
```
Code was put in place to ensure proper migration from checkpoints in V1
format to checkpoints in V2 format. However (and this is a big however),
all of the unit tests were performed on V1 checkpoints that were
generated using the type name `CPUManagerCheckpointV1` and not the
original type name of `CPUManagerCheckpoint`. As such, the checksum in
the checkpoint file is calculated from the `CPUManagerCheckpointV1` type
name and not from the original type name of `CPUManagerCheckpoint`.
This causes problems in the real world since all pre-1.18 checkpoint
files will have been generated with the original type name of
`CPUManagerCheckpoint`. When verifying the checksum of the checkpoint
file across an upgrade to 1.18, the checksum is calculated assuming
a type name of `CPUManagerCheckpointV1` (which is incorrect) and the
file is seen to be corrupt.
This patch ensures that all V1 checksums are verified against a type
name of `CPUManagerCheckpoint` instead of `CPUManagerCheckpointV1`.
It also locks the algorithm used to calculate the checksum in place,
since it will never change in the future (for pre-1.18 checkpoint
files at least).
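The upstream checksum is computed from a dump of the checkpoint object
that embeds the concrete type name, which is why renaming the type breaks
verification. A runnable sketch of that failure mode (simplified types;
fnv stands in for the real hash):
```
package main

import (
	"fmt"
	"hash/fnv"
)

// Simplified stand-ins for the two checkpoint types.
type CPUManagerCheckpoint struct{ Entries map[string]string }
type CPUManagerCheckpointV1 struct{ Entries map[string]string }

// checksum hashes the Go-syntax representation of v; since "%#v" embeds
// the concrete type name, renaming the type changes every checksum.
func checksum(v interface{}) uint32 {
	h := fnv.New32a()
	fmt.Fprintf(h, "%#v", v)
	return h.Sum32()
}

func main() {
	entries := map[string]string{"pod/container": "0-3"}
	// Identical data, different type names => different checksums. This
	// is why V1 files must be verified against the original type name.
	fmt.Println(checksum(CPUManagerCheckpoint{entries}))
	fmt.Println(checksum(CPUManagerCheckpointV1{entries}))
}
```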
The information associated with these containers is used to migrate
the CPUManager state from its old format to its new one (i.e. keyed off of
podUID and containerName instead of containerID).
For now, we just pass 'nil' as the set of 'initialContainers' for
migrating from old state semantics to new ones. In a subsequent commit
we will pull this information from higher layers so that we can pass it
down at this stage properly.
Previously, the state was keyed off of containerID instead of podUID and
containerName. Unfortunately, this is no longer possible as we move to a
model where we allocate CPUs to containers at pod admission time rather
than at container start time.
This patch is the first step towards full migration to the new
semantics. Only the unit tests in cpumanager/state are passing. In
subsequent commits we will update the CPUManager itself to use these new
semantics.
This patch also includes code to do migration from the old checkpoint format
to the new one, assuming the existence of a ContainerMap with the proper
mapping of (containerID)->(podUID, containerName). A subsequent commit
will update code in higher layers to make sure that this ContainerMap is
made available to this state logic.
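A rough sketch of the shape of that ContainerMap (simplified; the real
type lives in its own package with a richer API):
```
import "fmt"

type containerRef struct {
	podUID        string
	containerName string
}

// ContainerMap maps containerID -> (podUID, containerName) so that old
// containerID-keyed checkpoint entries can be re-keyed during migration.
type ContainerMap map[string]containerRef

func (cm ContainerMap) Add(podUID, containerName, containerID string) {
	cm[containerID] = containerRef{podUID, containerName}
}

func (cm ContainerMap) GetContainerRef(containerID string) (string, string, error) {
	ref, ok := cm[containerID]
	if !ok {
		return "", "", fmt.Errorf("container %q not found in ContainerMap", containerID)
	}
	return ref.podUID, ref.containerName, nil
}
```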
This patch removes pkg/util/mount completely, and replaces it with the
mount package now located at k8s.io/utils/mount. The code found at
k8s.io/utils/mount was moved there from pkg/util/mount, so the code is
identical, just no longer in-tree to k/k.
This patch moves fake.go to mount_fake.go and follows the principle of
always returning a discrete type rather than an interface. All callers
of "FakeMounter" are changed to instead use "NewFakeMounter()". The
FakeMounter "Log" struct member is changed to be unexported and is
instead accessed only through a new "GetLog()" method.
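The resulting pattern looks roughly like this (an illustrative sketch,
not the exact k8s.io/utils/mount API):
```
type FakeAction struct {
	Action string
	Target string
}

// The log is now unexported...
type FakeMounter struct {
	log []FakeAction
}

// ...constructors return the concrete type rather than an interface...
func NewFakeMounter() *FakeMounter {
	return &FakeMounter{}
}

// ...and callers read the log through an accessor instead of the old
// exported "Log" member.
func (f *FakeMounter) GetLog() []FakeAction {
	return f.log
}
```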
Kubelet startup can be interrupted while setting the list of cgroups:
'cgroupManagerImpl.Exists' does not check for and recreate the hugetlb
cgroup directory, so setting limits on a nonexistent cgroup directory
causes kubelet startup to fail.
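A hypothetical sketch of the fix (names assumed): verify that each
per-subsystem cgroup directory exists and recreate any that are missing
before setting limits:
```
import (
	"fmt"
	"os"
)

// ensureCgroupDirs recreates missing cgroup directories (e.g. a hugetlb
// dir lost when kubelet startup was interrupted) before limits are set.
func ensureCgroupDirs(subsystemDirs map[string]string) error {
	for subsystem, dir := range subsystemDirs {
		if _, err := os.Stat(dir); os.IsNotExist(err) {
			if err := os.MkdirAll(dir, 0755); err != nil {
				return fmt.Errorf("failed to recreate %s cgroup %q: %v", subsystem, dir, err)
			}
		}
	}
	return nil
}
```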
Signed-off-by: bingshen.wbs <bingshen.wbs@alibaba-inc.com>
This ensures that we have the most up-to-date state when generating
topology hints for a container. Without this, it's possible that some
resources will be seen as allocated, when they are actually free.
This will become especially important as we move to a model where
exclusive CPUs are assigned at pod admission time rather than at pod
creation time.
Having this function will allow us to do garbage collection on these
CPUs anytime we are about to allocate CPUs to a new set of containers,
in addition to reclaiming state periodically in the reconcileState()
loop.
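Sketched with hypothetical types, the garbage collection amounts to:
```
type manager struct {
	assignments map[string][]int // containerID -> exclusively assigned CPU IDs
	sharedPool  []int            // CPUs available to the shared pool
}

// removeStaleState returns the CPUs of no-longer-active containers to
// the shared pool; it can run both periodically and right before new
// allocations.
func (m *manager) removeStaleState(active map[string]bool) {
	for id, cpus := range m.assignments {
		if !active[id] {
			m.sharedPool = append(m.sharedPool, cpus...)
			delete(m.assignments, id)
		}
	}
}
```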
These changes make it so that a set of common test cases can be used for
all merge strategies, with policy-specific test cases specified on a
policy-by-policy basis.
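The resulting test layout looks roughly like this (hypothetical names and
a simplified case shape):
```
import "testing"

type mergeTestCase struct {
	name     string
	expected uint64 // simplified stand-in for the expected merged hint
}

// commonMergeTestCases holds cases whose expected outcome is identical
// under every merge strategy.
func commonMergeTestCases() []mergeTestCase {
	return []mergeTestCase{
		{name: "single provider, single preferred hint", expected: 0x1},
	}
}

// Each policy's test appends its policy-specific cases to the common set.
func TestPolicyBestEffortMerge(t *testing.T) {
	tcases := append(commonMergeTestCases(),
		mergeTestCase{name: "best-effort-only case", expected: 0x3},
	)
	for _, tc := range tcases {
		t.Run(tc.name, func(t *testing.T) {
			// run the best-effort merge and compare against tc.expected
		})
	}
}
```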