The remote runtime implementation now supports the `verbose` fields,
which consumers like cri-tools require in order to support multiple
CRI versions.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
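As an illustration only (not the kubelet's actual code), a minimal sketch of a CRI `Status` call that sets the `Verbose` flag, assuming the gRPC client generated from the `k8s.io/cri-api` v1 protos; the helper name is hypothetical:

```go
package example

import (
	"context"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runtimeInfo is an illustrative helper: with Verbose set, the runtime returns
// extra, runtime-specific details in the response's Info map, which consumers
// such as cri-tools can display.
func runtimeInfo(ctx context.Context, conn *grpc.ClientConn) (map[string]string, error) {
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{Verbose: true})
	if err != nil {
		return nil, err
	}
	return resp.GetInfo(), nil
}
```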
- Graduate the feature gate to Beta and enable it by default.
- Pre-set the default value for UnversionedKubeletConfigMap
to "true" in test/e2e_kubeadm.
- Fix a couple of typos in "tolerate" introduced in the PR that
added the FG in 1.23.
"-bench=PerfScheduling/Preemption/500Nodes" ran both the
PerfScheduling/Preemption/500Nodes and the
PerfScheduling/PreemptionPVs/500Nodes benchmark.
This can be avoided by choosing names where none is the prefix of another.
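A minimal sketch of the overlap, with hypothetical sub-benchmark names:

```go
package benchmarks_test

import "testing"

// Because "Preemption" is a prefix of "PreemptionPVs",
// -bench=PerfScheduling/Preemption/500Nodes selects both sub-benchmarks;
// renaming so that neither name is a prefix of the other keeps the selections
// disjoint.
func BenchmarkPerfScheduling(b *testing.B) {
	for _, tc := range []string{"Preemption", "PreemptionPVs"} {
		b.Run(tc, func(b *testing.B) {
			b.Run("500Nodes", func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					// the workload under test would run here
				}
			})
		})
	}
}
```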
When running an integration test that measures performance, such as
test/integration/scheduler_perf, running etcd with debug-level output is
undesirable because it creates additional load on the system and isn't
realistic.
The default is still "debug", but ETCD_LOGLEVEL=warn can be used to override
that.
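A minimal sketch of the override mechanism, assuming a hypothetical helper that chooses the log level passed to etcd:

```go
package example

import "os"

// etcdLogLevel is a hypothetical helper: keep "debug" as the default, but let
// ETCD_LOGLEVEL (e.g. ETCD_LOGLEVEL=warn) override it for performance-sensitive
// integration tests.
func etcdLogLevel() string {
	if level := os.Getenv("ETCD_LOGLEVEL"); level != "" {
		return level
	}
	return "debug"
}
```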
Use the ARTIFACTS environment variable if KUBE_JUNIT_REPORT_DIR is not set.
Add missing record_command to a couple of tests in legacy-script.sh.
Fix some typos in /test/cmd/README.md.
A cpu/topology manager e2e test wants to require one exclusive CPU
and a share of CPU time; let's round up the allocatable CPU requirement
(from 1 to 2) to reduce the chances of false negatives.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Even though CI machines _usually_ have at least two CPUs,
let's not assume this always holds; instead, actually
check the allocatable CPUs and skip even the simplest
tests if the assumption is broken, to avoid false negatives.
Signed-off-by: Francesco Romani <fromani@redhat.com>
The existing cpu/topology manager tests correctly check for the
node resources and skip if the detected resources are not enough
to run the tests, to avoid false negatives.
Unfortunately they do the check against the node capacity, while
the correct approach is to check the allocatable resources.
The existing check is correct only in a narrow set of cases;
otherwise it can still lead to false negatives.
This PR fixes that.
Signed-off-by: Francesco Romani <fromani@redhat.com>
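A minimal sketch of the allocatable-based check, assuming the e2e framework's skipper package; the helper name is hypothetical:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// requireAllocatableCPUs is a hypothetical helper: gate the test on the node's
// allocatable CPUs rather than its capacity, since reserved CPUs count towards
// capacity but are not available to test pods.
func requireAllocatableCPUs(node *v1.Node, minCPUs int64) {
	allocatable := node.Status.Allocatable[v1.ResourceCPU]
	if allocatable.Value() < minCPUs {
		e2eskipper.Skipf("not enough allocatable CPUs: want at least %d, have %s",
			minCPUs, allocatable.String())
	}
}
```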
Make sure to log the CPU capacity and allocatable for
the node running the tests, to make troubleshooting
test failures easier.
Signed-off-by: Francesco Romani <fromani@redhat.com>
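For example, a small sketch of such logging, assuming the e2e framework's Logf; the helper name is hypothetical:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

// logNodeCPUs is a hypothetical helper: record both capacity and allocatable
// CPU for the node under test, so a skip or failure can be correlated with the
// machine's actual resources.
func logNodeCPUs(node *v1.Node) {
	capacity := node.Status.Capacity[v1.ResourceCPU]
	allocatable := node.Status.Allocatable[v1.ResourceCPU]
	framework.Logf("node %q CPUs: capacity=%s allocatable=%s",
		node.Name, capacity.String(), allocatable.String())
}
```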
This is causing a bug when upgrading from older releases to 1.23 because
of Service's maybe-too-clever default-on-read logic.
Service depends on `Decorator()` to be called upon read, to
back-populate old saved objects which do not have `.clusterIPs[]` set.
This works on read, but the cache saves the pre-decorated type (as it is
documented to do).
In 1.23, this code was refactored and it seems some edge-case handling
was inadvertently removed (I have not confirmed exactly what happened).
Test by aojea
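A schematic sketch of the decorate-on-read pattern described above; the types and fields here are simplified and hypothetical, not the actual registry code:

```go
package example

// service is a simplified stand-in for the stored Service object: old releases
// saved objects with only ClusterIP set, while newer code expects ClusterIPs.
type service struct {
	ClusterIP  string
	ClusterIPs []string
}

// decorateOnRead back-populates the newer field for objects written by older
// releases. If a cache hands out the pre-decorated object instead, consumers
// can still observe ClusterIPs as unset, which is the kind of edge case
// described above.
func decorateOnRead(svc *service) {
	if len(svc.ClusterIPs) == 0 && svc.ClusterIP != "" {
		svc.ClusterIPs = []string{svc.ClusterIP}
	}
}
```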
There is no guarantee that agnhost's `/hostname` endpoint returns
a hostname and not an FQDN, nor that a hostname
gets passed to the execHostnameTest() function for comparison.
So make sure we're comparing hostnames in execHostnameTest().
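A minimal sketch of the normalization, with a hypothetical helper name: trim either value down to the short hostname before comparing.

```go
package example

import "strings"

// shortHostname is a hypothetical helper: either side of the comparison may be
// an FQDN, so compare only the part before the first dot.
func shortHostname(name string) string {
	if idx := strings.Index(name, "."); idx != -1 {
		return name[:idx]
	}
	return name
}
```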
When a CRD is deleted, the local discovery cache is not invalidated.
This results in a `resource not found` error when a new CRD with the same name is created
with different fields (e.g. changing the scope from cluster-wide to namespaced),
because the already deleted CRD still stays in serverresources.json and kubectl tries to use it.
These locally cached files have a 10-minute TTL. If the user waits 10 minutes after the deletion,
the files expire and are removed, and there are no errors. However, 10 minutes is a long time,
and the cache needs to be invalidated after the deletion occurs.
This PR adds a note to the delete command's documentation that it might be necessary to invalidate the
discovery cache when a CRD is deleted. In addition, this PR adds a test to catch this behavior.