The benchmarks and unit tests used to be written against custom APIs for each
log format. This made them less realistic because the setup differed in subtle
ways from what a real Kubernetes component does. Now all logging configuration
is done through the official k8s.io/component-base/logs/api/v1 package.
To make the different test cases more comparable, "messages/s" is now reported
instead of the generic "ns/op".
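As an illustration (not the actual test code), configuring logging through
that package might look roughly like this; the feature gate handling and the
exact field values are assumptions:

    package example

    import (
        "k8s.io/component-base/featuregate"
        logsapi "k8s.io/component-base/logs/api/v1"
    )

    // setupLogging configures logging the way a real component would,
    // instead of calling format-specific constructors directly.
    func setupLogging() error {
        c := logsapi.NewLoggingConfiguration()
        c.Format = "json"                       // could also be "text"
        c.Verbosity = logsapi.VerbosityLevel(2) // arbitrary choice for a benchmark run

        // Register the logging feature gates, then validate and apply the
        // configuration globally, just like a real component does.
        fg := featuregate.NewFeatureGate()
        if err := logsapi.AddFeatureGates(fg); err != nil {
            return err
        }
        return logsapi.ValidateAndApply(c, fg)
    }

The per-second figure itself can be produced with testing.B's ReportMetric,
for example b.ReportMetric(float64(n)/elapsed.Seconds(), "messages/s").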
Making the LoggingConfiguration part of the versioned component-base/config API
had the theoretical advantage that components could have offered different
configuration APIs with experimental features limited to alpha versions (for
example, sanitization offered only in a v1alpha1.KubeletConfiguration). Some
components could have decided to only use stable logging options.
In practice, this wasn't done. Furthermore, we don't want different components
to make different choices regarding which logging features they offer to
users. It should always be the same everywhere, for the sake of consistency.
This can be achieved with a saner Go API by dropping the distinction between
internal and external LoggingConfiguration types. Different stability levels of
individual fields have to be covered by documentation (done) and potentially
feature gates (not currently done).
Advantages:
- everything related to logging is under component-base/logs;
  previously this was scattered across different packages and
  different files under "logs" (why some code was in logs/config.go
  vs. logs/options.go vs. logs/logs.go always confused me again
  and again when coming back to the code):
  - long-term config and command line API are clearly separated
    into the "api" package underneath that
  - logs/logs.go itself only deals with legacy global flags and
    logging configuration
- removal of separate Go APIs like logs.BindLoggingFlags and
  logs.Options (see the sketch after this list)
- LogRegistry becomes an implementation detail, with less code
  and less exported functionality (only registration needs to
  be exported, querying is internal)
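For the command line side, the resulting pattern looks roughly like this (a
sketch, with error handling shortened and the run() wrapper made up):

    package example

    import (
        "github.com/spf13/pflag"

        "k8s.io/component-base/featuregate"
        "k8s.io/component-base/logs"
        logsapi "k8s.io/component-base/logs/api/v1"
    )

    func run(args []string) error {
        c := logsapi.NewLoggingConfiguration()

        // All logging flags (--logging-format, -v, --vmodule, ...) are
        // registered by the "api" package; there is no separate
        // logs.Options or logs.BindLoggingFlags anymore.
        fs := pflag.NewFlagSet("example", pflag.ContinueOnError)
        logsapi.AddFlags(c, fs)
        if err := fs.Parse(args); err != nil {
            return err
        }

        // logs.InitLogs keeps handling the legacy global setup, while the
        // parsed configuration is applied through the api package.
        logs.InitLogs()
        fg := featuregate.NewFeatureGate()
        if err := logsapi.AddFeatureGates(fg); err != nil {
            return err
        }
        return logsapi.ValidateAndApply(c, fg)
    }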
When a Logger gets called directly via contextual logging, it has to do its own
verbosity check and therefore needs to know what the intended verbosity level
is.
Previously this worked because all verbosity checks were done in klog before
invoking the Logger.
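A minimal sketch of the difference, with a hypothetical call site:

    package example

    import (
        "context"

        "k8s.io/klog/v2"
    )

    func handle(ctx context.Context) {
        // Contextual logging: the logr.Logger is called directly, so the
        // V(5) check is delegated to the Logger's LogSink via Enabled(5).
        // The sink therefore has to know the configured verbosity itself.
        logger := klog.FromContext(ctx)
        logger.V(5).Info("processing item", "item", 42)

        // Legacy path: klog checks the verbosity before the message ever
        // reaches the backing Logger, so the Logger never needed to know
        // the level.
        if klog.V(5).Enabled() {
            klog.InfoS("processing item", "item", 42)
        }
    }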
The recent regression https://github.com/kubernetes/kubernetes/issues/107033
shows that we need a way to automatically measure different logging
configurations (structured text, JSON with and without split streams) under
realistic conditions (time stamping, caller identification).
System calls may affect performance, so writing into actual files is useful. A
temp dir under /tmp (usually a tmpfs) is used, so actual IO bandwidth shouldn't
skew the outcome. Because there are real files that can be set as os.Stderr and
os.Stdout, the "normal" json.Factory code is used to construct the JSON logger,
which makes this as realistic as possible.
When discarding the output instead of writing it, the focus is more on the rest
of the pipeline and changes there can be investigated more reliably.
The benchmarks automatically gather "log entries per second" and "bytes per
second", which is useful to know when considering requirements like the ones
from https://github.com/kubernetes/kubernetes/issues/107029.
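A rough sketch of that setup (simplified and not the actual benchmark code;
the file name, message content, and metric names are illustrative):

    package example

    import (
        "os"
        "path/filepath"
        "testing"
        "time"
    )

    // BenchmarkWriteToFile writes one pre-formatted entry per iteration into
    // a file under b.TempDir() (normally on tmpfs) and derives per-second
    // figures from the totals.
    func BenchmarkWriteToFile(b *testing.B) {
        out, err := os.Create(filepath.Join(b.TempDir(), "output.log"))
        if err != nil {
            b.Fatal(err)
        }
        defer out.Close()

        entry := []byte(`{"ts":0.0,"msg":"example","v":0}` + "\n")
        var written int64
        b.ResetTimer()
        start := time.Now()
        for i := 0; i < b.N; i++ {
            n, err := out.Write(entry)
            if err != nil {
                b.Fatal(err)
            }
            written += int64(n)
        }
        elapsed := time.Since(start).Seconds()
        b.ReportMetric(float64(b.N)/elapsed, "entries/s")
        b.ReportMetric(float64(written)/elapsed, "bytes/s")
    }

Swapping the file for io.Discard gives the "discard" variant described above.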
logcheck complains:
Additional arguments to ErrorS should always be Key Value pairs. Please check if there is any key or value missing.
That check is intentional, but not applicable here. The check can be worked
around by calling the functions through variables.
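The workaround looks roughly like this (a sketch with made-up names):

    package example

    import (
        "errors"

        "k8s.io/klog/v2"
    )

    // logcheck inspects direct klog.ErrorS / klog.InfoS calls. Going through
    // a variable hides the call from that static check, which is what the
    // test data generation needs when it intentionally passes an odd
    // key/value list.
    var errorS = klog.ErrorS

    func emit() {
        errorS(errors.New("fake error"), "message", "keyWithoutValue")
    }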
The benchmark depends on k8s.io/api (for v1.Container). Such a dependency is
not desirable for k8s.io/component-base/logs, even if it's just for
testing. The solution is to create a separate directory where such a dependency
isn't a problem.
The alternative, a separate package with its own go.mod file under
k8s.io/component-base/logs, would have been more complicated to maintain (yet
another go.mod file and a different set of whitelisted dependencies).