Mike Spreitzer b4a40cd43e Draft weighted and timing histograms
The following investigation occurred during development.

Add TimingHistogram impl that shares lock with WeightedHistogram

Benchmarking and profiling show that two layers of locking are
noticeably more expensive than one.
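
For intuition, here is a minimal, hypothetical sketch of the two designs. Every name below (layeredTimingHistogram, directTimingHistogram, the toy weightedHistogram) is invented for this note and simplified; the real types in prometheusextension differ in detail.

```go
package sketch

import (
	"sync"
	"time"
)

// weightedHistogram accumulates weight per bucket behind its own lock.
type weightedHistogram struct {
	mu      sync.Mutex
	bounds  []float64 // bucket upper bounds, sorted ascending
	buckets []uint64  // len(bounds)+1, last is the overflow bucket
}

func (w *weightedHistogram) observeWithWeight(v float64, weight uint64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	i := 0
	for i < len(w.bounds) && v > w.bounds[i] {
		i++
	}
	w.buckets[i] += weight
}

// layeredTimingHistogram takes two locks per Set: its own, and then
// the wrapped histogram's lock inside observeWithWeight.
type layeredTimingHistogram struct {
	mu       sync.Mutex // guards value and lastSet
	value    float64
	lastSet  time.Time
	weighted *weightedHistogram
}

func (h *layeredTimingHistogram) Set(v float64, now time.Time) {
	h.mu.Lock()
	old, dt := h.value, now.Sub(h.lastSet)
	h.value, h.lastSet = v, now
	h.mu.Unlock()
	h.weighted.observeWithWeight(old, uint64(dt)) // second lock acquisition
}

// directTimingHistogram keeps the bucket counts under the same mutex
// as the timing state, so Set takes exactly one lock.
type directTimingHistogram struct {
	mu      sync.Mutex
	value   float64
	lastSet time.Time
	bounds  []float64
	buckets []uint64 // len(bounds)+1, last is the overflow bucket
}

func (h *directTimingHistogram) Set(v float64, now time.Time) {
	h.mu.Lock()
	defer h.mu.Unlock()
	dt := uint64(now.Sub(h.lastSet))
	i := 0
	for i < len(h.bounds) && h.value > h.bounds[i] {
		i++
	}
	h.buckets[i] += dt
	h.value, h.lastSet = v, now
}
```

The layered style pays for two mutex acquisitions per Set; the direct style folds the bucket update under the timing state's mutex and pays for one.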

After adding this new alternative, I now get the following benchmark
results.

```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16    	22232037	        52.79 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	1.404s

(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16    	22190997	        54.50 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	1.435s
```

and

```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16    	28863244	        40.99 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	1.890s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16    	27994173	        40.37 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	1.384s
```

So the direct implementation is roughly 20% faster than the layered original (about 41 ns/op versus 53 ns/op in the runs above).

Add overlooked exception, rename timingHistogram to timingHistogramLayered

Use the direct (one mutex) style of TimingHistogram impl

This is about a 20% gain in CPU speed on my development machine, in
benchmarks without lock contention.  Following are two consecutive
trials.

```
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16    	21650905	        51.91 ns/op	       0 B/op	       0 allocs/op
BenchmarkTimingHistogramDirect-16     	29876860	        39.33 ns/op	       0 B/op	       0 allocs/op
BenchmarkWeightedHistogram-16         	49227044	        24.13 ns/op	       0 B/op	       0 allocs/op
BenchmarkHistogram-16                 	41063907	        28.82 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	5.432s

(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16    	22483816	        51.72 ns/op	       0 B/op	       0 allocs/op
BenchmarkTimingHistogramDirect-16     	29697291	        39.39 ns/op	       0 B/op	       0 allocs/op
BenchmarkWeightedHistogram-16         	48919845	        24.03 ns/op	       0 B/op	       0 allocs/op
BenchmarkHistogram-16                 	41153044	        29.26 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	k8s.io/component-base/metrics/prometheusextension	5.044s
```
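
For reference, a benchmark in this style could look roughly like the following sketch, written against the hypothetical directTimingHistogram from the earlier snippet (it would live in a _test.go file); the actual BenchmarkTimingHistogramDirect in the package may be structured differently.

```go
package sketch

import (
	"testing"
	"time"
)

func BenchmarkDirectTimingHistogramSketch(b *testing.B) {
	h := &directTimingHistogram{
		bounds:  []float64{1, 2, 4, 8, 16},
		buckets: make([]uint64, 6), // len(bounds)+1, last is overflow
		lastSet: time.Now(),
	}
	now := h.lastSet
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Advance a fake clock so each Set records nonzero weight,
		// without paying for time.Now() in the hot loop.
		now = now.Add(time.Nanosecond)
		h.Set(float64(i%20), now)
	}
}
```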

Remove layered implementation of TimingHistogram
2022-04-28 17:36:06 -04:00

External Repository Staging Area

This directory is the staging area for packages that have been split out into their own repositories. The content here is periodically published to the respective top-level k8s.io repositories.

Repositories currently staged here:

The code in the staging/ directory is authoritative, i.e. the only copy of the code. You can directly modify such code.

Using staged repositories from Kubernetes code

Kubernetes code uses the repositories in this directory via symlinks in the vendor/k8s.io directory that point into this staging area. For example, when Kubernetes code imports a package from the k8s.io/client-go repository, that import is resolved to staging/src/k8s.io/client-go relative to the project root:

```go
// pkg/example/some_code.go
package example

import (
  "k8s.io/client-go/dynamic" // resolves to staging/src/k8s.io/client-go/dynamic
)
```

Once the change-over to external repositories is complete, these repositories will actually be vendored from k8s.io/<package-name>.

Creating a new repository in staging

Adding the staging repository in kubernetes/kubernetes:

  1. Send an email to the SIG Architecture mailing list and the mailing list of the SIG which would own the repo requesting approval for creating the staging repository.

  2. Once approval has been granted, create the new staging repository.

  3. Add a symlink to the staging repo in vendor/k8s.io.

  4. Update import-restrictions.yaml to add the list of other staging repos that this new repo can import.

  5. Add all mandatory template files to the staging repo as mentioned in https://github.com/kubernetes/kubernetes-template-project.

  6. Make sure that the .github/PULL_REQUEST_TEMPLATE.md and CONTRIBUTING.md files mention that PRs are not directly accepted to the repo.

Creating the published repository

  1. Create an issue in the kubernetes/org repo to request creation of the respective published repository in the Kubernetes org. The published repository must have an initial empty commit. It also needs specific access rules and branch settings. See kubernetes/org#58 for an example.

  2. Set up branch protection and enable access to the stage-bots team by adding the repo in prow/config.yaml. See kubernetes/test-infra#9292 for an example.

  3. Once the repository has been created in the Kubernetes org, update the publishing-bot to publish the staging repository by updating:

    • rules.yaml: Make sure that the list of dependencies reflects the staging repos in the Godeps.json file.

    • fetch-all-latest-and-push.sh: Add the staging repo in the list of repos to be published.

  4. In sigs.yaml, add the staging and published repositories as a subproject of the SIG that owns them.

  5. Add the repo to the list of staging repos in this README.md file.