Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Do not recycle volumes that are used by pods
**What this PR does / why we need it**:
Recycler should wait until all pods that use a volume are finished.
Consider this scenario:
1. User creates a PVC that's bound to an NFS PV.
2. User creates a pod that uses the PVC.
3. User deletes the PVC.
Now the PV gets `Released` (the PVC no longer exists) and recycled, even though the PV is still mounted in a running pod. PVC protection won't help us, because it puts finalizers on the PVC, which is under the user's control and can be removed by the user.
This PR makes the recycler check that no pod is using a PV before recycling it.
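For illustration, a minimal sketch of such a check using client-go types; the helper and the plain pod-list scan are assumptions, not the PR's actual code:

```go
package recycle

import v1 "k8s.io/api/core/v1"

// isVolumeInUse reports whether any pod still mounts the claim that the
// released PV was bound to (taken from the PV's claimRef). Hypothetical
// helper for illustration only.
func isVolumeInUse(pods []*v1.Pod, claimNamespace, claimName string) bool {
	for _, pod := range pods {
		if pod.Namespace != claimNamespace {
			continue
		}
		for _, vol := range pod.Spec.Volumes {
			if pvc := vol.PersistentVolumeClaim; pvc != nil && pvc.ClaimName == claimName {
				return true
			}
		}
	}
	return false
}
```

The recycler would list pods and proceed with recycling only when this returns false.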
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue (batch tested with PRs 59394, 58769, 59423, 59363, 59245). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
kube-scheduler: Use default predicates/prioritizers if they are unspecified in the policy config
**What this PR does / why we need it**:
The scheduler has built-in default sets of predicate/prioritizer that are applied on pod scheduling. It can also take a policy config file where predicate/prioritizer and extender settings can be specified. The current behavior is that if we want to configure an extender using the policy config, we have to also provide the default predicate/prioritizer settings. Otherwise, the empty predicate/prioritizer sets will be used.
This is inconvenient, and it's hard to keep the policy config up to date with the scheduler's defaults. This PR changes the scheduler to use the default predicate/prioritizer sets if they are unspecified in the policy config. An explicitly empty list, however, still bypasses the non-mandatory predicates/prioritizers.
This changes the behavior of a policy config that leaves predicates/prioritizers unspecified (as opposed to setting them to an empty list), but it's unlikely that anyone is using such a config in practice.
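A minimal sketch of the nil-versus-empty distinction this relies on; the `Policy` type and helper below are illustrative, not the scheduler's actual code:

```go
package scheduler

// Policy is an illustrative stand-in for the parsed policy config; the
// point is the nil-versus-empty distinction.
type Policy struct {
	Predicates []string // nil means the section was absent from the config
	Priorities []string
}

// effectivePredicates falls back to the built-in defaults only when the
// policy left the section unspecified; an explicitly empty list is honored
// as-is, bypassing the non-mandatory default predicates.
func effectivePredicates(p *Policy, defaults []string) []string {
	if p.Predicates == nil {
		return defaults
	}
	return p.Predicates
}
```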
**Special notes for your reviewer**:
I think it makes sense to have this in 1.9 as well because
- It's safe, given the scope of this change and the fact that it's very unlikely that someone is relying on a policy config that leaves predicates/prioritizers unspecified.
- Compared with the risk, asking users to provide the default predicate/prioritizer sets themselves is error-prone and may cause other issues.
**Release note**:
```release-note
kube-scheduler: Use default predicates/prioritizers if they are unspecified in the policy config
```
/sig scheduling
/assign @bsalamat
/cc @vishh
Automatic merge from submit-queue (batch tested with PRs 55439, 58564, 59028, 59169, 59259). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
add integration tests for basic deployment functionality
**What this PR does / why we need it**:
This PR adds basic deployment integration tests.
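As a sketch of the shape of such a test, assuming current client-go method signatures (the real framework's setup and assertions differ):

```go
package deployment

import (
	"context"
	"testing"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// testBasicDeployment is an illustrative sketch, not the PR's actual test:
// create a two-replica Deployment and poll until all replicas are available.
func testBasicDeployment(t *testing.T, client kubernetes.Interface, ns string) {
	replicas := int32(2)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "test"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "test"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "busybox", Image: "busybox"}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		t.Fatal(err)
	}
	err := wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		got, err := client.AppsV1().Deployments(ns).Get(context.TODO(), d.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return got.Status.AvailableReplicas == replicas, nil
	})
	if err != nil {
		t.Fatalf("deployment never became available: %v", err)
	}
}
```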
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #52113
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Make predicate errors more human readable
**What this PR does / why we need it**:
Make predicate errors more human readable
Thanks.
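As an illustration of the direction, a predicate failure can carry the numbers needed to render a readable reason; this is a sketch in the spirit of the change, with illustrative field names:

```go
package predicates

import "fmt"

// InsufficientResourceError sketches how a predicate failure can carry the
// numbers needed for a readable message; field names are illustrative.
type InsufficientResourceError struct {
	Resource                  string
	Requested, Used, Capacity int64
}

func (e *InsufficientResourceError) Error() string {
	// Renders e.g. "Insufficient cpu (requested: 500, used: 3800, capacity: 4000)"
	return fmt.Sprintf("Insufficient %s (requested: %d, used: %d, capacity: %d)",
		e.Resource, e.Requested, e.Used, e.Capacity)
}
```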
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #58546
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add V1beta1 VolumeAttachment API
**What this PR does / why we need it**:
Add V1beta1 VolumeAttachment API, co-existing with Alpha API object
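For illustration, creating an object through the new v1beta1 group with client-go, assuming current method signatures; the attacher, PV, and node names are placeholders:

```go
package storage

import (
	"context"
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAttachment sketches use of the new v1beta1 VolumeAttachment API.
func createAttachment(client kubernetes.Interface) error {
	pvName := "pv0001"
	va := &storagev1beta1.VolumeAttachment{
		ObjectMeta: metav1.ObjectMeta{Name: "example-attachment"},
		Spec: storagev1beta1.VolumeAttachmentSpec{
			Attacher: "example.csi.driver",
			NodeName: "node-1",
			Source:   storagev1beta1.VolumeAttachmentSource{PersistentVolumeName: &pvName},
		},
	}
	created, err := client.StorageV1beta1().VolumeAttachments().Create(context.TODO(), va, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	fmt.Println("created", created.Name)
	return nil
}
```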
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58461
**Special notes for your reviewer**:
**Release note**:
```release-note
Add V1beta1 VolumeAttachment API, co-existing with Alpha API object
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
code cleanup in integration test framework
**What this PR does / why we need it**:
code cleanup
**Special notes for your reviewer**:
/kind cleanup
**Release note**:
```release-note
NONE
```
Right now, if a JWT for an unknown issuer, for any subject, hits the
serviceaccount token authenticator, we return an error as if the token
was meant for us but we couldn't find a key to verify it. We should
instead return nil, false, nil.
This change helps us support multiple service account token
authenticators with different issuers.
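A sketch of the intended behavior; `jwtAuthenticator` and its helpers are illustrative stand-ins for the real serviceaccount authenticator:

```go
package serviceaccount

import (
	"encoding/base64"
	"encoding/json"
	"strings"

	"k8s.io/apiserver/pkg/authentication/user"
)

// jwtAuthenticator is an illustrative stand-in; verify holds the real
// signature check, elided here.
type jwtAuthenticator struct {
	issuer string
	verify func(token string) (user.Info, bool, error)
}

// unverifiedIssuer reads the "iss" claim without checking the signature.
func unverifiedIssuer(token string) (string, bool) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return "", false
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return "", false
	}
	var claims struct {
		Issuer string `json:"iss"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return "", false
	}
	return claims.Issuer, true
}

func (a *jwtAuthenticator) AuthenticateToken(token string) (user.Info, bool, error) {
	iss, ok := unverifiedIssuer(token)
	if !ok || iss != a.issuer {
		// Not a JWT we understand, or minted by another issuer: return
		// (nil, false, nil) so the next authenticator in the union can
		// try it, instead of failing the whole chain.
		return nil, false, nil
	}
	// The token claims our issuer; from here a bad signature or missing
	// key is a genuine error.
	return a.verify(token)
}
```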
Automatic merge from submit-queue (batch tested with PRs 58488, 58360). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add get volumeattachment to the node authorizer
Fixes #58355
Adds `get volumeattachment` authorization for nodes to the node authorizer when the CSI feature is enabled
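Roughly the gating being added, as a sketch; the real node authorizer also walks its node graph to confirm the attachment references the requesting node:

```go
package node

import "k8s.io/apiserver/pkg/authorization/authorizer"

// authorizeVolumeAttachment sketches the new rule: with the CSI feature gate
// on, a node may "get" volumeattachments; everything else is left to other
// checks. Illustrative only, not the actual authorizer code.
func authorizeVolumeAttachment(attrs authorizer.Attributes, csiEnabled bool) authorizer.Decision {
	if !csiEnabled || attrs.GetResource() != "volumeattachments" || attrs.GetVerb() != "get" {
		return authorizer.DecisionNoOpinion
	}
	// The real code additionally verifies, via the graph, that the
	// attachment is for the requesting node before allowing.
	return authorizer.DecisionAllow
}
```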
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Support for custom TLS cipher suites in the API server and kubelet
**What this PR does / why we need it**:
This pull request aims to solve the problem of users not being able to set custom cipher suites in the API server.
Several users have requested this given that some default ciphers are vulnerable.
There is a discussion in #41038 of how to implement this. The options are:
- Setting a fixed list of ciphers; but users will have different requirements, so a fixed list would be problematic.
- Letting the user set them by parameter; this requires adding a new parameter whose value could get pretty long, given the list of all the ciphers.
I implemented the second option: if the ciphers are not passed by parameter, the Go default ones are used (same behavior as now).
**Which issue this PR fixes**
fixes #41038
**Special notes for your reviewer**:
The ciphers in Go's tls config are constants, while the ones passed by parameter are a comma-separated list. I needed to create the `type CipherSuitesFlag` to support that conversion/mapping, because I couldn't find any way to do this type of reflection in Go.
If you think there is another way to implement this, let me know.
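A condensed sketch of that mapping, with only a small subset of the suite table and the flag plumbing omitted:

```go
package flag

import (
	"crypto/tls"
	"fmt"
	"strings"
)

// cipherSuites maps flag names to Go's crypto/tls constants; illustrative
// subset of the table the CipherSuitesFlag type needs.
var cipherSuites = map[string]uint16{
	"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":   tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
	"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
	"TLS_RSA_WITH_AES_128_CBC_SHA256":         tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
}

// TLSCipherSuites converts the comma-separated flag value into the uint16
// IDs that crypto/tls expects; an empty value keeps Go's defaults.
func TLSCipherSuites(flagValue string) ([]uint16, error) {
	if flagValue == "" {
		return nil, nil
	}
	var ids []uint16
	for _, name := range strings.Split(flagValue, ",") {
		id, ok := cipherSuites[strings.TrimSpace(name)]
		if !ok {
			return nil, fmt.Errorf("unknown cipher suite %q", name)
		}
		ids = append(ids, id)
	}
	return ids, nil
}
```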
If you want to test it out, this is a cipher combination I tested, without the weak ones:
```
TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
```
If this is merged, I will implement the same for the kubelet.
**Release note**:
```release-note
kube-apiserver and kubelet now support customizing TLS ciphers via a `--tls-cipher-suites` flag
```
Note that we don't change the behaviour of kubectl; for example, it won't scale new resources. That's the end goal.
The first step is to retrofit the existing code to use the generic scaler.
This moves plugin/pkg/scheduler to pkg/scheduler and
plugin/cmd/kube-scheduler to cmd/kube-scheduler.
The bulk of the work was done with gomvpkg, except for the kube-scheduler main
package.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Split the NodeController into lifecycle and ipam pieces.
**What this PR does / why we need it**: Separates node controller into ipam and lifecycle components.
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. This splits the node
controller into its two major pieces: lifecycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle controller's dependency on the cloud will be fixed in a follow-up PR.
Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #52369
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Bump pause container used by kubelet and tests to 3.1
This updates the version of the pause container used by the kubelet and
various test utilities to 3.1.
**What this PR does / why we need it**: The pause container hasn't been rebuilt in quite a while and needs an update to reap zombies (#50865) and for schema2 manifest (#56253).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #50865, Fixes #56253
**Special notes for your reviewer**:
**Release note**:
```release-note
The kubelet uses a new release 3.1 of the pause container with the Docker runtime. This version will clean up orphaned zombie processes that it inherits.
```
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. This splits the node
controller into its two major pieces: lifecycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle controller's dependency on the cloud will be fixed in a follow-up PR.
Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
This adds new benchmark tests that measure scheduler latency of pods
that use affinity rules. Specifically, this tests affinity rules with
topologyKey="kubernetes.io/hostname".
Under the default behavior of Go benchmark tests, all our scheduler_perf
benchmark tests run with b.N=1, which is lower than we would like. This
adds a lower bound to b.N so that the results are more meaningful.
The alternative to this change is to always run these tests with the
-benchtime flag set to a duration which will force b.N to increase. That
would cause any test setup to be executed repeatedly as b.N ramps up,
and -timeout would probably also need to be set higher.
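A minimal sketch of the lower-bound approach, with the actual scheduling step stubbed out:

```go
package benchmark

import "testing"

// schedulePod stands in for scheduling one test pod; stub for the sketch.
func schedulePod() {}

// benchmarkScheduling sketches the lower-bound idea from this change: Go may
// call a benchmark with b.N == 1, so run at least minPods iterations so the
// measured per-pod latency is meaningful.
func benchmarkScheduling(b *testing.B, minPods int) {
	n := b.N
	if n < minPods {
		n = minPods
	}
	b.ResetTimer()
	for i := 0; i < n; i++ {
		schedulePod()
	}
}
```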
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Version bump to etcd v3.2.11, grpc v1.7.5
Fix https://github.com/kubernetes/kubernetes/issues/56114: Update to etcd client 3.2.11
Version bumps:
- etcd from 3.1.10 to 3.2.11
- grpc from 1.3.0 to 1.7.5
- grpc-gateway from v1.1.0-25-g84398b9 to v1.3.0
TODO:
- [x] Apply etcd [3.2 client upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md)
- [x] Apply grpc API changes in 1.6.0 and 1.7.0 [release notes](https://github.com/grpc/grpc-go/releases)
- [x] bbolt was pulled in transitively, why? We have tests that embed etcd, so we must vendor the etcd server and all its dependencies.
- [x] Upgrade to containerd v1.0.0? Currently kubernetes depends on containerd v1.0.0-beta.2-159-g27d450a0 which depends on grpc v1.3.0, but containerd v1.0.0 depends on grpc 1.7.2. Not needed. The containerd grpc upgrade required [no code changes](ce3e32680d).
- [x] Fix all failing tests
- [x] Ensure we can safely upgrade grpc to 1.7.5 given that docker and cAdvisor still depend on grpc 1.3.0 (both in the versions we vendor and on master for both projects). Should we hold off on this change until we have a docker release that uses grpc 1.7.x?
- [x] Wait for grpc 1.7.5 to be released (it will include https://github.com/grpc/grpc-go/pull/1747). Once released, bump grpc version in this PR and remove workarounds in `hack/godep-save.sh`.
Transitive dependencies on grpc:
- docker depends on grpc, but according to the package dependency graph (`go list -f '{{ .Deps }}'`) there are no dependencies from kubernetes to grpc via docker packages.
- containerd v1.0.0 depends on grpc 1.7.2; we should upgrade to containerd v1.0.0 soon, which can be done in a separate PR
- cadvisor depends on grpc 1.3.0 on master; it should upgrade to grpc 1.7.5, which can be done in a separate PR
**Release note**:
```release-note
Upgrade to etcd client 3.2.11 and grpc 1.7.5 to improve HA etcd cluster stability.
```