# Benchmarking logging
Any major changes to the logging code, whether it is in Kubernetes or in klog, must be benchmarked before and after the change.
## Running the benchmark

```
go test -v -bench=. -benchmem -benchtime=10s .
```
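The README leaves the before/after comparison workflow open; one common approach (an assumption here, not part of the original instructions) is to save the benchmark output of both runs and compare them with `benchstat` from `golang.org/x/perf`:

```
# Baseline run (before the logging change); -count=5 gives benchstat
# enough samples to judge statistical significance.
go test -bench=. -benchmem -benchtime=10s -count=5 . > old.txt

# Re-run after applying the change.
go test -bench=. -benchmem -benchtime=10s -count=5 . > new.txt

# Compare the two result sets.
# Install with: go install golang.org/x/perf/cmd/benchstat@latest
benchstat old.txt new.txt
```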
## Real log data

The files under `data` define test cases for specific aspects of formatting. To
test with a log file that represents output under some kind of real load, copy
the log file into `data/<file name>.log` and run benchmarking as described
above. `-bench=BenchmarkLogging/<file name without .log suffix>` can be used to
benchmark just the new file.
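As a concrete sketch (the file name is hypothetical), a kubelet log captured from a real cluster could be added and benchmarked on its own like this:

```
# The file name (without the .log suffix) becomes the benchmark sub-test name.
cp /tmp/kubelet.log data/kind-worker-kubelet.log

# Benchmark only that file, using the -bench pattern described above.
go test -v -bench=BenchmarkLogging/kind-worker-kubelet -benchmem -benchtime=10s .
```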
When using `data/v<some number>/<file name>.log`, formatting will be done at
that log level. Symlinks can be created to simulate writing the same log data
at different levels.
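For example (paths hypothetical), a symlink can expose the same file as a test case that is formatted at verbosity level 3:

```
# Reuse the existing log data as a v3 test case without copying it.
mkdir -p data/v3
ln -s ../kind-worker-kubelet.log data/v3/kind-worker-kubelet.log
```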
No such real data is included in the Kubernetes repo because of its size. It can
be found in the "artifacts" of this
[Prow job](https://testgrid.kubernetes.io/sig-instrumentation-tests#kind-json-logging-master):

- `artifacts/logs/kind-control-plane/containers`
- `artifacts/logs/kind-*/kubelet.log`

With sufficient credentials, `gsutil` can be used to download everything for a
job directly into a directory that then will be used by the benchmarks
automatically:
```
kubernetes$ test/integration/logs/benchmark/get-logs.sh
++ dirname test/integration/logs/benchmark/get-logs.sh
+ cd test/integration/logs/benchmark
++ latest_job
++ gsutil cat gs://kubernetes-jenkins/logs/ci-kubernetes-kind-e2e-json-logging/latest-build.txt
+ job=1618864842834186240
+ rm -rf ci-kubernetes-kind-e2e-json-logging
+ mkdir ci-kubernetes-kind-e2e-json-logging
...
```
This sets up the `data` directory so that additional test cases are available
(`BenchmarkEncoding/v3/kind-worker-kubelet/`, `BenchmarkEncoding/kube-scheduler/`, etc.).
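For example, to run just one of those additional cases (names taken from the list above):

```
go test -v -bench=BenchmarkEncoding/v3/kind-worker-kubelet -benchmem -benchtime=10s .
```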
To clean up, use:

```
git clean -fx test/integration/logs/benchmark
```
## Analyzing log data

While loading a file, some statistics about it are collected. Those are shown when running with:

```
go test -v -bench=BenchmarkEncoding/none -run=none .
```