This provides a mechanism for overriding the forced increase of the klog
verbosity to 4 when starting the apiserver and uses that for the scheduler_perf
benchmark. Other tests run as before.
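A minimal sketch of the idea, assuming a package-level variable in the integration framework; the variable and helper names here (`MinVerbosity`, `applyMinVerbosity`) are illustrative, not necessarily the ones used in the actual change:

```go
package framework

import (
	"flag"
	"strconv"
	"sync"

	"k8s.io/klog/v2"
)

// MinVerbosity is the klog verbosity that gets enforced when the test
// apiserver is started. scheduler_perf can lower it before starting the
// apiserver; all other tests keep the previous default of 4.
var MinVerbosity = 4

// Guard against registering the klog flags twice; this sketch assumes no
// other code has already registered them on flag.CommandLine.
var initKlogOnce sync.Once

// applyMinVerbosity replaces a hard-coded flag.Set("v", "4") in the
// apiserver start helpers.
func applyMinVerbosity() {
	initKlogOnce.Do(func() { klog.InitFlags(nil) })
	if err := flag.Set("v", strconv.Itoa(MinVerbosity)); err != nil {
		panic(err)
	}
}
```

With something like this, scheduler_perf can lower `framework.MinVerbosity` in its TestMain before any helper starts the apiserver, while tests that don't touch the variable behave exactly as before.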
A global variable was used because adding an explicit parameter to several
helper functions would have caused a lot of code churn (test ->
integration/util.StartApiserver ->
integration/framework.RunAnAPIServerUsingServer ->
integration/framework.startAPIServerOrDie).
"-bench=PerfScheduling/Preemption/500Nodes" ran both the
PerfScheduling/Preemption/500Nodes and the
PerfScheduling/PreemptionPVs/500Nodes benchmark.
This can be avoided by choosing names where none is the prefix of another.
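For illustration, a sketch of sub-benchmark naming where no name is a prefix of another; the names and the runWorkload helper are made up, not the actual scheduler_perf workloads:

```go
package benchmark

import "testing"

// runWorkload stands in for the real per-workload benchmark body.
func runWorkload(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// ... schedule pods ...
	}
}

func BenchmarkPerfScheduling(b *testing.B) {
	// "PreemptionBasic" instead of "Preemption" avoids the prefix overlap
	// with "PreemptionPVs", so -bench=PerfScheduling/PreemptionBasic/500Nodes
	// selects exactly one workload.
	b.Run("PreemptionBasic", func(b *testing.B) {
		b.Run("500Nodes", runWorkload)
	})
	b.Run("PreemptionPVs", func(b *testing.B) {
		b.Run("500Nodes", runWorkload)
	})
}
```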
When running an integration test that measures performance, such as
test/integration/scheduler_perf, etcd's debug-level output is undesirable
because it creates additional load on the system and isn't realistic.
The default is still "debug", but ETCD_LOGLEVEL=warn can be used to override
that.
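A sketch of how the etcd launcher can honor that variable; the actual helper in test/integration/framework passes many more flags and manages the process lifetime, so the names and structure here are illustrative only:

```go
package framework

import (
	"os"
	"os/exec"
)

// etcdLogLevel returns the --log-level for the etcd test instance. The
// default stays "debug"; ETCD_LOGLEVEL=warn reduces the log volume (and the
// load on the system) for performance-sensitive integration tests.
func etcdLogLevel() string {
	if level := os.Getenv("ETCD_LOGLEVEL"); level != "" {
		return level
	}
	return "debug"
}

// startEtcd is a trimmed-down illustration of launching the etcd binary.
func startEtcd(dataDir string) (*exec.Cmd, error) {
	cmd := exec.Command("etcd",
		"--data-dir", dataDir,
		"--log-level", etcdLogLevel(),
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd, cmd.Start()
}
```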
This is causing a bug when upgrading from older releases to 1.23 because
of Service's maybe-too-clever default-on-read logic.
Service depends on `Decorator()` to be called upon read, to
back-populate old saved objects which do not have `.clusterIPs[]` set.
This works on read, but the cache saves the pre-decorated type (as it is
documented).
In 1.23, this code was refactored and it seems some edge-case handling
was inadvertently removed (I have not confirmed exactly what happened).
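A rough sketch of the kind of default-on-read decorator involved; the real one in the Service REST storage handles more fields (e.g. ipFamilies), so this is only meant to show the back-population on read:

```go
package storage

import (
	"k8s.io/apimachinery/pkg/runtime"
	api "k8s.io/kubernetes/pkg/apis/core"
)

// defaultOnReadService back-populates fields that old stored Services do
// not have. It must run on every read path; if a cache hands out the
// pre-decorated object instead, readers see .clusterIPs[] as empty.
func defaultOnReadService(obj runtime.Object) {
	svc, ok := obj.(*api.Service)
	if !ok {
		return
	}
	if len(svc.Spec.ClusterIPs) == 0 && svc.Spec.ClusterIP != "" {
		svc.Spec.ClusterIPs = []string{svc.Spec.ClusterIP}
	}
}
```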
Test by aojea
We don't need to worry about data loss once the data has been written to an
output stream. Calling fsync unnecessarily has been the reason for performance
issues in the past.
The recent regression https://github.com/kubernetes/kubernetes/issues/107033
shows that we need a way to automatically measure different logging
configurations (structured text, JSON with and without split streams) under
realistic conditions (time stamping, caller identification).
System calls may affect the performance and thus writing into actual files is
useful. A temp dir under /tmp (usually a tmpfs) is used, so the actual IO
bandwidth shouldn't affect the outcome. The "normal" json.Factory code is used
to construct the JSON logger when we have actual files that can be set as
os.Stderr and os.Stdout, thus making this as realistic as possible.
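A simplified sketch of the file-backed setup; redirectOutput is an illustrative helper name and the real benchmark code may structure this differently:

```go
package benchmark

import (
	"os"
	"path/filepath"
	"testing"
)

// redirectOutput points os.Stdout and os.Stderr at real files in a temp
// directory (usually a tmpfs under /tmp), so that the JSON logger built by
// the normal factory code writes through actual file descriptors.
func redirectOutput(tb testing.TB) {
	tmpDir := tb.TempDir()
	stdout, err := os.Create(filepath.Join(tmpDir, "stdout.log"))
	if err != nil {
		tb.Fatal(err)
	}
	stderr, err := os.Create(filepath.Join(tmpDir, "stderr.log"))
	if err != nil {
		tb.Fatal(err)
	}
	oldStdout, oldStderr := os.Stdout, os.Stderr
	os.Stdout, os.Stderr = stdout, stderr
	tb.Cleanup(func() {
		os.Stdout, os.Stderr = oldStdout, oldStderr
		stdout.Close()
		stderr.Close()
	})
}
```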
When discarding the output instead of writing it, the focus is more on the rest
of the pipeline and changes there can be investigated more reliably.
The benchmarks automatically gather "log entries per second" and "bytes per
second", which is useful to know when considering requirements like the ones
from https://github.com/kubernetes/kubernetes/issues/107029.
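A sketch of how those two numbers can be derived and reported through the standard testing metrics API; the real benchmark replays recorded log data rather than a single fixed line, so this is a simplification:

```go
package benchmark

import (
	"testing"
	"time"
)

// reportThroughput writes b.N copies of one log line and reports the
// throughput as custom benchmark metrics.
func reportThroughput(b *testing.B, write func(line string)) {
	line := `{"ts":1257894000.0,"caller":"example/example.go:42","msg":"example","v":0}` + "\n"
	start := time.Now()
	for i := 0; i < b.N; i++ {
		write(line)
	}
	elapsed := time.Since(start).Seconds()
	b.ReportMetric(float64(b.N)/elapsed, "entries/s")
	b.ReportMetric(float64(b.N*len(line))/elapsed, "bytes/s")
}
```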
logcheck complains:
Additional arguments to ErrorS should always be Key Value pairs. Please check if there is any key or value missing.
That check is intentional, but not applicable here. The check can be worked
around by calling the functions through variables.
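A sketch of that workaround; errorS/infoS are just local names for the variables:

```go
package benchmark

import (
	"errors"

	"k8s.io/klog/v2"
)

// logcheck only inspects direct calls to klog.ErrorS/InfoS, so calling
// through variables bypasses the key/value check for the replayed log data,
// which intentionally contains such "odd" argument lists.
var (
	errorS = klog.ErrorS
	infoS  = klog.InfoS
)

func replayExample() {
	// Written as a direct klog.ErrorS call, the dangling key below would be
	// flagged by logcheck.
	errorS(errors.New("example failure"), "some message", "keyWithoutValue")
	infoS("another message", "key", "value")
}
```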
The benchmark depends on k8s.io/api (for v1.Container). Such a dependency is
not desirable for k8s.io/component-base/logs, even if it's just for
testing. The solution is to create a separate directory where such a dependency
isn't a problem.
The alternative, a separate package with its own go.mod file under
k8s.io/component-base/logs would have been more complicated to maintain (yet
another go.mod file and different whitelisted dependencies).