e2e_node: provide an option to specify hugepages on a specific NUMA node

On a multi NUMA node environment, the kernel splits hugepages allocated via the
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages file equally between NUMA nodes.
That makes it harder to predict where pods will start, because the number of
hugepages on each NUMA node depends on the number of NUMA nodes in the environment.
The memory manager test will now allocate hugepages on a specific NUMA node to make
the test more predictable on multi NUMA node environments.

Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
Artyom Lukianov
2021-11-10 14:31:48 +02:00
parent ea2011d72a
commit ba06be98e5
2 changed files with 18 additions and 7 deletions


@@ -324,7 +324,7 @@ var _ = SIGDescribe("Memory Manager [Disruptive] [Serial] [Feature:MemoryManager
 	if *is2MiHugepagesSupported {
 		ginkgo.By("Configuring hugepages")
 		gomega.Eventually(func() error {
-			return configureHugePages(hugepagesSize2M, hugepages2MiCount)
+			return configureHugePages(hugepagesSize2M, hugepages2MiCount, pointer.IntPtr(0))
 		}, 30*time.Second, framework.Poll).Should(gomega.BeNil())
 	}
 })
@@ -356,7 +356,8 @@ var _ = SIGDescribe("Memory Manager [Disruptive] [Serial] [Feature:MemoryManager
 	if *is2MiHugepagesSupported {
 		ginkgo.By("Releasing allocated hugepages")
 		gomega.Eventually(func() error {
-			return configureHugePages(hugepagesSize2M, 0)
+			// configure hugepages on the NUMA node 0 to avoid hugepages split across NUMA nodes
+			return configureHugePages(hugepagesSize2M, 0, pointer.IntPtr(0))
 		}, 90*time.Second, 15*time.Second).ShouldNot(gomega.HaveOccurred(), "failed to release hugepages")
 	}
 })
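
The change passes a NUMA node ID into configureHugePages so that pages are written to the per-node sysfs file instead of the global one. As a rough sketch of the mechanism (the helper name hugepagesSysfsPath is hypothetical, not the actual test helper): the kernel exposes both a global knob, which splits pages across nodes, and a per-node knob under /sys/devices/system/node.

```go
package main

import "fmt"

// hugepagesSysfsPath returns the sysfs file controlling the number of
// hugepages of the given size (in kB). When numaNodeID is non-nil, the
// per-node path is returned, so pages land on that node only; otherwise
// the global path is returned and the kernel splits the pages equally
// between NUMA nodes. This is a hedged sketch of the idea behind the
// patch, not the real configureHugePages implementation.
func hugepagesSysfsPath(hugepagesSizeKB int, numaNodeID *int) string {
	if numaNodeID != nil {
		return fmt.Sprintf(
			"/sys/devices/system/node/node%d/hugepages/hugepages-%dkB/nr_hugepages",
			*numaNodeID, hugepagesSizeKB)
	}
	return fmt.Sprintf(
		"/sys/kernel/mm/hugepages/hugepages-%dkB/nr_hugepages",
		hugepagesSizeKB)
}

func main() {
	node := 0
	// Per-node path, as used after this commit (NUMA node 0, 2 MiB pages).
	fmt.Println(hugepagesSysfsPath(2048, &node))
	// Global path, as used before this commit.
	fmt.Println(hugepagesSysfsPath(2048, nil))
}
```

Writing a count to the per-node file makes the hugepage placement deterministic regardless of how many NUMA nodes the test machine has.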