For certain volume types (e.g. AWS EBS or GCE PD), only a limited
number of such volumes can be attached to a given node. This commit
introduces a predicate that allows cluster admins to cap the maximum
number of volumes of a particular type attached to a given node.
The volume type is selected by passing a pair of filter functions,
and the maximum number of such volumes is configurable so that node
admins can reserve a certain number of volumes for system use.
By default, the predicate is exposed as MaxEBSVolumeCount and
MaxGCEPDVolumeCount (for AWS ElasticBlockStore and GCE PersistentDisk
volumes, respectively), each of which can be configured using the
`KUBE_MAX_PD_VOLS` environment variable.
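A minimal sketch of the predicate's shape, under simplifying assumptions: `Volume`, `VolumeFilter`, `newMaxVolumeCountPredicate`, and `maxPDVolumes` are illustrative names rather than the actual scheduler API, and a single filter function stands in for the pair described above.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Volume is a simplified stand-in for the API volume type; only the EBS
// volume ID matters for this sketch.
type Volume struct {
	EBSVolumeID string // non-empty for AWS ElasticBlockStore volumes
}

// VolumeFilter reports whether a volume counts against the limit and
// returns a unique ID so a volume used by several pods is counted once.
type VolumeFilter func(v Volume) (id string, matches bool)

// newMaxVolumeCountPredicate builds a predicate that admits a pod only if
// the distinct matching volumes already on the node, plus any new ones the
// pod brings, stay within maxVolumes.
func newMaxVolumeCountPredicate(filter VolumeFilter, maxVolumes int) func(podVols, nodeVols []Volume) bool {
	return func(podVols, nodeVols []Volume) bool {
		attached := map[string]bool{}
		for _, v := range nodeVols {
			if id, ok := filter(v); ok {
				attached[id] = true
			}
		}
		newCount := 0
		for _, v := range podVols {
			if id, ok := filter(v); ok && !attached[id] {
				newCount++
			}
		}
		return len(attached)+newCount <= maxVolumes
	}
}

// maxPDVolumes honours the KUBE_MAX_PD_VOLS override; the default passed in
// here is a placeholder, not the real per-cloud limit.
func maxPDVolumes(defaultMax int) int {
	if s := os.Getenv("KUBE_MAX_PD_VOLS"); s != "" {
		if n, err := strconv.Atoi(s); err == nil && n > 0 {
			return n
		}
	}
	return defaultMax
}

func main() {
	ebsFilter := func(v Volume) (string, bool) { return v.EBSVolumeID, v.EBSVolumeID != "" }
	pred := newMaxVolumeCountPredicate(ebsFilter, maxPDVolumes(4))
	nodeVols := []Volume{{"vol-1"}, {"vol-2"}, {"vol-3"}}
	fmt.Println(pred([]Volume{{"vol-4"}}, nodeVols))             // true: 4 <= cap of 4
	fmt.Println(pred([]Volume{{"vol-4"}, {"vol-5"}}, nodeVols))  // false: 5 > cap of 4
}
```

Counting distinct volume IDs rather than mounts keeps a volume referenced by several pods on the node from being counted more than once.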
Fixes #7835
... and a quick doc on how to run them
```
$ godep go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch
PASS
BenchmarkWatchHTTP-8 20000 95669 ns/op 15053 B/op 196 allocs/op
BenchmarkWatchWebsocket-8 10000 102871 ns/op 18430 B/op 204 allocs/op
```
- Add Godeps/LICENSES.md
- Add verify-godep-licenses to verify that Godeps/LICENSES.md is up to date
- Trigger verify-godep-licenses in the pre-commit hook only if the Godeps dir has changed
- Exclude verify-godep-licenses from verify-all
- Add verify-godep-licenses to make verify (used by Travis)
- Add verify-godep-licenses to shippable
- Update dev docs to mention update-godep-licenses
Add a development guide for measuring performance of node components.
The purpose of this guide is threefold:
1. Document the nuances of measuring kubelet performance so we don't
forget or need to reinvent the wheel.
2. Make it easier for new contributors to analyze performance.
3. Share tips and tricks that current team members might not be aware
of.
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect when a pod references such a
volume, and ensure that the pod is scheduled on a node in the same AZ
as the volume.
So that the scheduler need not query the cloud provider every time,
and to support decoupled operation (e.g. bare metal), we tag the
volume with our placement labels. On AWS this is done automatically by
an admission controller whenever a PersistentVolume backed by an EBS
volume is created.
Support for tagging GCE PVs will follow.
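A rough sketch of the zone check the scheduler can then perform from labels alone, with no cloud-provider call. The label keys follow the well-known Kubernetes failure-domain labels; `volumeZoneMatches` is an illustrative helper, not the actual predicate.

```go
package main

import "fmt"

// Well-known failure-domain label keys; the admission controller stamps
// them on the PV, and the node carries the same keys for its placement.
const (
	zoneLabel   = "failure-domain.beta.kubernetes.io/zone"
	regionLabel = "failure-domain.beta.kubernetes.io/region"
)

// volumeZoneMatches reports whether a node's placement labels satisfy
// those recorded on a PersistentVolume. A PV with no placement labels
// (e.g. bare metal or a network filesystem) matches any node.
func volumeZoneMatches(pvLabels, nodeLabels map[string]string) bool {
	for _, key := range []string{zoneLabel, regionLabel} {
		want, ok := pvLabels[key]
		if !ok {
			continue // volume is not constrained on this axis
		}
		if nodeLabels[key] != want {
			return false
		}
	}
	return true
}

func main() {
	pv := map[string]string{zoneLabel: "us-east-1a", regionLabel: "us-east-1"}
	nodeA := map[string]string{zoneLabel: "us-east-1a", regionLabel: "us-east-1"}
	nodeB := map[string]string{zoneLabel: "us-east-1b", regionLabel: "us-east-1"}
	fmt.Println(volumeZoneMatches(pv, nodeA)) // true: same zone and region
	fmt.Println(volumeZoneMatches(pv, nodeB)) // false: different AZ
}
```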
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly: they
will be placed without zone-awareness.