Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Replace scale down window
**What this PR does / why we need it**:
Replace scale down forbidden window with scale down stabilization window.
This allows the scale down decision to be based on more than one sample, to avoid rapidly resizing up and down for controllers with fluctuating load.
A bit more in https://docs.google.com/document/d/1IdG3sqgCEaRV3urPLA29IDudCufD89RYCohfBPNeWIM
This PR is a copy of #67771 with the review comments resolved.
**Release note**:
```release-note
Replace scale down forbidden window with scale down stabilization window. Rather than waiting a fixed period of time between scale downs, the HPA now scales down to the highest recommendation it has seen during the scale down stabilization window.
```
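For illustration, a minimal sketch of the stabilization logic described above, with hypothetical names and a hard-coded window (the real controller keeps this state inside the HPA controller and makes the window configurable): every recommendation is recorded with a timestamp, and a scale down only goes as low as the highest recommendation seen within the window.
```go
// Sketch of downscale stabilization: instead of enforcing a fixed
// forbidden window between scale downs, keep recent recommendations and
// scale down only to the highest one seen within the window.
package main

import (
	"fmt"
	"time"
)

type timestampedRecommendation struct {
	recommendation int32
	timestamp      time.Time
}

// stabilizeRecommendation returns the replica count to use, given a fresh
// recommendation and the history of recent recommendations.
func stabilizeRecommendation(history []timestampedRecommendation, fresh int32, window time.Duration, now time.Time) int32 {
	max := fresh
	cutoff := now.Add(-window)
	for _, r := range history {
		if r.timestamp.After(cutoff) && r.recommendation > max {
			max = r.recommendation
		}
	}
	return max
}

func main() {
	now := time.Now()
	history := []timestampedRecommendation{
		{recommendation: 5, timestamp: now.Add(-2 * time.Minute)},
		{recommendation: 3, timestamp: now.Add(-1 * time.Minute)},
	}
	// The newest recommendation is 2, but within a 5-minute window the
	// highest recommendation was 5, so we only scale down to 5.
	fmt.Println(stabilizeRecommendation(history, 2, 5*time.Minute, now))
}
```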
Automatic merge from submit-queue (batch tested with PRs 66916, 67252, 67794, 67619, 67328). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix HPA sample sanitization
**What this PR does / why we need it**: @mwielgus pointed out a case where the HPA fails as a result of my changes to the HPA algorithm:
- Have pods that use a lot of CPU during initialization and become ready right after they initialize,
- Trigger a scale up,
- When the new pods become ready, we will count their CPU usage (even though it is not related to any work that needs doing),
- This triggers another scale up, even though the existing pods can handle the work without a problem.
The fix is:
- Use all samples for non-cpu metrics.
- Only use CPU samples if:
- Pod is ready and was started more than 2 minutes ago, or
- Pod is unready and last readiness change happened more than 10s after it was started.
The reasoning behind this is in: https://docs.google.com/document/d/1UdtYedhmCxjaJIQi6hwJMY0eHQQKxlVD8lSHZC1BPOA/edit
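A minimal sketch of the sample filter described above, with hypothetical names and the 2-minute / 10-second values hard-coded (the actual durations are controlled by flags, see below):
```go
// Sketch of the CPU sample sanitization rule: a pod's CPU sample counts
// only once the pod is past its initialization period, or if it became
// unready well after it started (so the unreadiness is not just leftover
// initialization).
package hpa

import "time"

const (
	cpuInitializationPeriod = 2 * time.Minute
	initialReadinessDelay   = 10 * time.Second
)

type podSample struct {
	ready               bool
	startTime           time.Time
	lastReadinessChange time.Time
}

// shouldUseCPUSample reports whether the pod's CPU sample should be used
// when computing the desired replica count; non-CPU metrics use all samples.
func shouldUseCPUSample(p podSample, now time.Time) bool {
	if p.ready {
		// Ready pods count only after the initialization period has passed.
		return now.Sub(p.startTime) > cpuInitializationPeriod
	}
	// Unready pods count only if they became unready well after startup.
	return p.lastReadinessChange.Sub(p.startTime) > initialReadinessDelay
}
```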
**Release note**:
```release-note
Replace the scale up forbidden window by disregarding CPU samples collected while the pod was initializing.
```
The duration of the initialization taint on CPU samples and the initial readiness window are controlled by flags.
Adding API violation exceptions following example of e50340ee23
Automatic merge from submit-queue (batch tested with PRs 59592, 62308, 62523, 62635, 62243). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Separate pod priority from preemption
**What this PR does / why we need it**:
Users requested splitting the priority and preemption feature gates so that they can use priority separately.
**Which issue(s) this PR fixes**:
Fixes #62068
**Special notes for your reviewer**:
~~I kept `ENABLE_POD_PRIORITY` as the ENV name for the GCE cluster scripts for backward compatibility reasons. Please let me know if another approach is preferred.~~
~~This is a potentially **breaking** change as existing clusters will be affected; we may need to include this in 1.11.~~
TODO: update this doc https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
[Update] Usage: in the scheduler's config file:
```yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
...
disablePreemption: true
```
**Release note**:
```release-note
Split PodPriority and PodPreemption feature gate
```
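A rough sketch of how such an option can short-circuit preemption in the scheduling loop; the function names are illustrative, not the scheduler's actual code:
```go
// Sketch: when preemption is disabled in the scheduler configuration, a
// failed scheduling attempt leaves the pod pending instead of evicting
// lower-priority pods to make room.
package scheduler

type Config struct {
	// DisablePreemption mirrors the disablePreemption field shown in the
	// KubeSchedulerConfiguration example above.
	DisablePreemption bool
}

type Pod struct{ Name string }

// scheduleOne is a heavily simplified, hypothetical scheduling step.
func scheduleOne(cfg Config, pod Pod, schedule, preempt func(Pod) error) error {
	if err := schedule(pod); err != nil {
		if cfg.DisablePreemption {
			// Pod priority still affects queue ordering, but no pods are
			// preempted; the pod stays pending until room frees up.
			return err
		}
		return preempt(pod)
	}
	return nil
}
```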
Thanks to some great sleuthing by ikruglov!
kube-controller-manager defaults --leader-elect to true. We should
do the same for kube-scheduler. kube-scheduler used to have this
set to true, but it got lost during refactoring in:
efb2bb71cd
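A minimal sketch of the defaulting, using the standard flag package rather than the component's actual options structs:
```go
// Sketch: default --leader-elect to true so kube-scheduler matches
// kube-controller-manager; users can still opt out with --leader-elect=false.
package main

import (
	"flag"
	"fmt"
)

func main() {
	leaderElect := flag.Bool("leader-elect", true,
		"Start a leader election client and gain leadership before executing the main loop.")
	flag.Parse()
	fmt.Println("leader election enabled:", *leaderElect)
}
```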
* Deprecate the old experimental-fail-swap-on
* Add a new flag fail-swap-on and set it to true
Before this change, we would not fail when swap was on. With this change, we fail for everyone when swap is on, unless --fail-swap-on is explicitly set to false.
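A minimal sketch of the resulting startup check; the flag name matches the PR, but the swap detection helper is an illustrative stand-in for what the kubelet actually does:
```go
// Sketch: with --fail-swap-on defaulting to true, the process refuses to
// start when swap is enabled unless the user explicitly opts out.
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

// swapIsEnabled reports whether any swap device is active by checking
// whether /proc/swaps lists anything beyond its header line.
func swapIsEnabled() (bool, error) {
	data, err := os.ReadFile("/proc/swaps")
	if err != nil {
		return false, err
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	return len(lines) > 1, nil
}

func main() {
	failSwapOn := flag.Bool("fail-swap-on", true,
		"Makes the program fail to start if swap is enabled on the node.")
	flag.Parse()

	enabled, err := swapIsEnabled()
	if err == nil && enabled && *failSwapOn {
		fmt.Fprintln(os.Stderr, "running with swap on is not supported, please disable swap or set --fail-swap-on=false")
		os.Exit(1)
	}
}
```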
Automatic merge from submit-queue (batch tested with PRs 44606, 46038)
Fix serialization of EnforceNodeAllocatable
EnforceNodeAllocatable being `nil` and `[]` are treated in different
ways by kubelet. Namely, `nil` is replaced with `[]string{"pods"}` by
the defaulting mechanism.
E.g. if you run the kubelet in a Docker-in-Docker environment,
you may need to run it with the following options:
`--cgroups-per-qos=false --enforce-node-allocatable=`
(this corresponds to EnforceNodeAllocatable being an empty array, not
null). If you then grab the kubelet configuration via /configz and try to
reuse it for dynamic kubelet config, the kubelet will think that
EnforceNodeAllocatable is null, failing to run in the
Docker-in-Docker environment.
Encountered this while updating Virtlet for Kubernetes 1.6
(the dev environment is based on kubeadm-dind-cluster).
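A minimal sketch of why nil and an empty slice must round-trip differently, using a hypothetical struct rather than the real KubeletConfiguration type: with `omitempty`, an explicitly empty list disappears on serialization and comes back as nil, at which point the defaulting rule turns it back into `["pods"]`.
```go
// Sketch: nil EnforceNodeAllocatable means "not set" and is defaulted to
// ["pods"], while an explicitly empty list means "enforce nothing". If
// serialization collapses the empty list (e.g. via omitempty), the
// distinction is lost on the /configz round trip.
package main

import (
	"encoding/json"
	"fmt"
)

type configWithOmitempty struct {
	EnforceNodeAllocatable []string `json:"enforceNodeAllocatable,omitempty"`
}

type configPlain struct {
	EnforceNodeAllocatable []string `json:"enforceNodeAllocatable"`
}

// applyDefaults mimics the defaulting rule: nil is replaced with the
// default ["pods"]; an empty slice is left alone.
func applyDefaults(v []string) []string {
	if v == nil {
		return []string{"pods"}
	}
	return v
}

func main() {
	empty := []string{}

	a, _ := json.Marshal(configWithOmitempty{EnforceNodeAllocatable: empty})
	b, _ := json.Marshal(configPlain{EnforceNodeAllocatable: empty})
	fmt.Println(string(a)) // {} -- the empty list is dropped
	fmt.Println(string(b)) // {"enforceNodeAllocatable":[]} -- preserved

	fmt.Println(applyDefaults(nil))   // [pods]
	fmt.Println(applyDefaults(empty)) // []
}
```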
Automatic merge from submit-queue (batch tested with PRs 45953, 45889)
Add /metrics and profiling handlers to kube-proxy
Also expose "syncProxyRules latency" as a prometheus metrics.
Fix https://github.com/kubernetes/kubernetes/issues/45876
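A rough sketch of what wiring these handlers up can look like, using the standard Prometheus Go client and net/http/pprof; the metric and function names here are illustrative, not the proxy's actual identifiers:
```go
// Sketch: serve /metrics via the Prometheus handler, /debug/pprof via
// net/http/pprof, and record sync latency in a summary metric.
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var syncProxyRulesLatency = prometheus.NewSummary(prometheus.SummaryOpts{
	Name: "sync_proxy_rules_latency_microseconds",
	Help: "SyncProxyRules latency",
})

func init() {
	prometheus.MustRegister(syncProxyRulesLatency)
}

func syncProxyRules() {
	start := time.Now()
	defer func() {
		syncProxyRulesLatency.Observe(float64(time.Since(start) / time.Microsecond))
	}()
	// ... program the proxy rules here ...
}

func main() {
	http.Handle("/metrics", promhttp.Handler())
	syncProxyRules()
	if err := http.ListenAndServe("127.0.0.1:10249", nil); err != nil {
		panic(err)
	}
}
```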
Automatic merge from submit-queue
Enable shared PID namespace by default for docker pods
**What this PR does / why we need it**: This PR enables PID namespace sharing for docker pods by default, bringing the behavior of docker in line with the other CRI runtimes when used with docker >= 1.13.1.
**Which issue this PR fixes**: ref #1615
**Special notes for your reviewer**: cc @dchen1107 @yujuhong
**Release note**:
```release-note
Kubernetes now shares a single PID namespace among all containers in a pod when running with docker >= 1.13.1. This means processes can now signal processes in other containers in a pod, but it also means that the `kubectl exec {pod} kill 1` pattern will cause the pod to be restarted rather than a single container.
```
Exported (public) functions require a doc comment to pass golint.
This commit contains the changes to the generated conversion code. The actual doc
changes are added in a separate commit for a clean review.
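For reference, golint expects every exported identifier to carry a comment that starts with the identifier's name; the names below are illustrative, not taken from this commit:
```go
// Package conversion illustrates the doc-comment style golint expects on
// exported identifiers.
package conversion

// Foo is an example exported type; golint requires this comment to start
// with "Foo".
type Foo struct {
	Name string
}

// ConvertFoo is an example exported function; golint requires this
// comment to start with "ConvertFoo".
func ConvertFoo(in, out *Foo) error {
	out.Name = in.Name
	return nil
}
```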
Automatic merge from submit-queue
Add dockershim only mode
This PR adds an `experimental-dockershim` hidden flag to the kubelet to run only dockershim.
We introduce this flag mainly for the CRI validation tests. In the future we should compile dockershim into a separate binary.
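A minimal sketch of how a hidden boolean flag can be registered with spf13/pflag (which the kubelet uses for its command-line flags); the wiring around it is illustrative:
```go
// Sketch: register a boolean flag and mark it hidden so it does not show
// up in --help output, the usual way experimental flags are exposed.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
	dockershimOnly := fs.Bool("experimental-dockershim", false,
		"Run in dockershim-only mode (intended for CRI validation testing).")
	// Hidden flags still parse normally; they are just omitted from --help.
	if err := fs.MarkHidden("experimental-dockershim"); err != nil {
		panic(err)
	}
	_ = fs.Parse([]string{"--experimental-dockershim=true"})
	fmt.Println("dockershim-only mode:", *dockershimOnly)
}
```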
@yujuhong @feiskyer @xlgao-zju
/cc @kubernetes/sig-node-pr-reviews