
The following investigation occurred during development.

Add TimingHistogram impl that shares lock with WeightedHistogram

Benchmarking and profiling show that two layers of locking are noticeably more expensive than one. After adding this new alternative, I now get the following benchmark results.

```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16    22232037    52.79 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    1.404s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16    22190997    54.50 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    1.435s
```

and

```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16    28863244    40.99 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    1.890s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16    27994173    40.37 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    1.384s
```

So the new implementation is roughly 20% faster than the original.

Add overlooked exception, rename timingHistogram to timingHistogramLayered

Use the direct (one mutex) style of TimingHistogram impl

This is about a 20% gain in CPU speed on my development machine, in benchmarks without lock contention. Following are two consecutive trials.

```
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16    21650905    51.91 ns/op    0 B/op    0 allocs/op
BenchmarkTimingHistogramDirect-16     29876860    39.33 ns/op    0 B/op    0 allocs/op
BenchmarkWeightedHistogram-16         49227044    24.13 ns/op    0 B/op    0 allocs/op
BenchmarkHistogram-16                 41063907    28.82 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    5.432s
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16    22483816    51.72 ns/op    0 B/op    0 allocs/op
BenchmarkTimingHistogramDirect-16     29697291    39.39 ns/op    0 B/op    0 allocs/op
BenchmarkWeightedHistogram-16         48919845    24.03 ns/op    0 B/op    0 allocs/op
BenchmarkHistogram-16                 41153044    29.26 ns/op    0 B/op    0 allocs/op
PASS
ok      k8s.io/component-base/metrics/prometheusextension    5.044s
```

Remove layered implementation of TimingHistogram
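The difference between the two locking styles can be sketched as follows. These are hypothetical, simplified types for illustration only, not the actual code in k8s.io/component-base/metrics/prometheusextension: the layered style acquires its own mutex and then the embedded histogram's mutex on every update, while the direct style protects all state with a single mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// weightedHistogram carries its own lock (simplified sketch, not the real type).
type weightedHistogram struct {
	mu      sync.Mutex
	weights map[string]uint64
}

func (w *weightedHistogram) observeWithWeight(bucket string, weight uint64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.weights[bucket] += weight
}

// layeredTimingHistogram takes two locks per update: its own mutex for the
// timing state, then the embedded weightedHistogram's mutex for the buckets.
type layeredTimingHistogram struct {
	mu       sync.Mutex
	lastTime int64
	inner    *weightedHistogram
}

func (t *layeredTimingHistogram) update(now int64, bucket string) {
	t.mu.Lock()
	dt := uint64(now - t.lastTime)
	t.lastTime = now
	t.mu.Unlock()
	t.inner.observeWithWeight(bucket, dt) // second lock acquired here
}

// directTimingHistogram guards both pieces of state with one mutex,
// saving one Lock/Unlock pair per observation.
type directTimingHistogram struct {
	mu       sync.Mutex
	lastTime int64
	weights  map[string]uint64
}

func (t *directTimingHistogram) update(now int64, bucket string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.weights[bucket] += uint64(now - t.lastTime)
	t.lastTime = now
}

func main() {
	d := &directTimingHistogram{weights: map[string]uint64{}}
	d.update(10, "low")
	d.update(25, "low")
	fmt.Println(d.weights["low"]) // prints 25
}
```

Under the assumption that each Lock/Unlock pair has a roughly fixed cost, the direct style removes one such pair per observation, which is consistent with the benchmark difference reported above.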
External Repository Staging Area
This directory is the staging area for packages that have been split to their own repository. The content here will be periodically published to respective top-level k8s.io repositories.
Repositories currently staged here:
k8s.io/api
k8s.io/apiextensions-apiserver
k8s.io/apimachinery
k8s.io/apiserver
k8s.io/cli-runtime
k8s.io/client-go
k8s.io/cloud-provider
k8s.io/cluster-bootstrap
k8s.io/code-generator
k8s.io/component-base
k8s.io/controller-manager
k8s.io/cri-api
k8s.io/csi-api
k8s.io/csi-translation-lib
k8s.io/kube-aggregator
k8s.io/kube-controller-manager
k8s.io/kube-proxy
k8s.io/kube-scheduler
k8s.io/kubectl
k8s.io/kubelet
k8s.io/legacy-cloud-providers
k8s.io/metrics
k8s.io/mount-utils
k8s.io/pod-security-admission
k8s.io/sample-apiserver
k8s.io/sample-cli-plugin
k8s.io/sample-controller
The code in the staging/ directory is authoritative, i.e. the only copy of the code. You can directly modify such code.
Using staged repositories from Kubernetes code
Kubernetes code uses the repositories in this directory via symlinks in the vendor/k8s.io directory into this staging area. For example, when Kubernetes code imports a package from the k8s.io/client-go repository, that import is resolved to staging/src/k8s.io/client-go relative to the project root:

```
// pkg/example/some_code.go
package example

import (
	"k8s.io/client-go/dynamic" // resolves to staging/src/k8s.io/client-go/dynamic
)
```
Once the change-over to external repositories is complete, these repositories will actually be vendored from k8s.io/<package-name>.
Creating a new repository in staging
Adding the staging repository in kubernetes/kubernetes:

- Send an email to the SIG Architecture mailing list and the mailing list of the SIG which would own the repo, requesting approval for creating the staging repository.
- Once approval has been granted, create the new staging repository.
- Add a symlink to the staging repo in vendor/k8s.io.
- Update import-restrictions.yaml to add the list of other staging repos that this new repo can import.
- Add all mandatory template files to the staging repo as mentioned in https://github.com/kubernetes/kubernetes-template-project.
- Make sure that the .github/PULL_REQUEST_TEMPLATE.md and CONTRIBUTING.md files mention that PRs are not directly accepted to the repo.
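The import-restriction step can be illustrated with a much-simplified, hypothetical sketch of the kind of check that import-restrictions.yaml drives. checkImports and the allowed map below are illustrative names, not the real verification tooling:

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strings"
)

// checkImports returns the k8s.io imports in src whose staging repo is not
// in the allowed set. This is a toy approximation of the idea only.
func checkImports(src string, allowed map[string]bool) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "example.go", src, parser.ImportsOnly)
	if err != nil {
		panic(err)
	}
	var bad []string
	for _, imp := range f.Imports {
		path := strings.Trim(imp.Path.Value, `"`)
		if !strings.HasPrefix(path, "k8s.io/") {
			continue // only staging-repo imports are restricted in this sketch
		}
		// Reduce e.g. k8s.io/client-go/dynamic to its repo, k8s.io/client-go.
		repo := path
		if parts := strings.SplitN(path, "/", 3); len(parts) >= 2 {
			repo = parts[0] + "/" + parts[1]
		}
		if !allowed[repo] {
			bad = append(bad, path)
		}
	}
	return bad
}

func main() {
	src := `package example

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/dynamic"
)`
	allowed := map[string]bool{"k8s.io/apimachinery": true}
	fmt.Println(checkImports(src, allowed)) // prints [k8s.io/client-go/dynamic]
}
```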
Creating the published repository
- Create an issue in the kubernetes/org repo to request creation of the respective published repository in the Kubernetes org. The published repository must have an initial empty commit. It also needs specific access rules and branch settings. See kubernetes/org#58 for an example.
- Set up branch protection and enable access for the stage-bots team by adding the repo in prow/config.yaml. See kubernetes/test-infra#9292 for an example.
- Once the repository has been created in the Kubernetes org, update the publishing-bot to publish the staging repository by updating:
  - rules.yaml: Make sure that the list of dependencies reflects the staging repos in the Godeps.json file.
  - fetch-all-latest-and-push.sh: Add the staging repo to the list of repos to be published.
- Add the staging and published repositories as a subproject for the SIG that owns the repos in sigs.yaml.
- Add the repo to the list of staging repos in this README.md file.