Automatic merge from submit-queue (batch tested with PRs 59190, 59360). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding benchmarks to envelope encryption integration tests
**What this PR does / why we need it**:
Adds benchmarks for the envelope encryption integration tests.
These allow us to estimate how envelope encryption may impact the performance of the kube-apiserver.
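For context, here is a minimal, self-contained sketch of what the envelope pattern costs per write (a fresh per-object data-encryption key plus AES-GCM). It is not the actual integration benchmark, which drives the full kube-apiserver storage path against a KMS provider:
```go
package integration

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"testing"
)

// BenchmarkEnvelopeEncrypt is a stand-in sketch, not the real integration
// benchmark: it measures generating a fresh DEK and sealing a secret-sized
// payload with AES-GCM, which is the core of the envelope scheme.
func BenchmarkEnvelopeEncrypt(b *testing.B) {
	payload := make([]byte, 4096)
	for i := 0; i < b.N; i++ {
		dek := make([]byte, 32)
		if _, err := rand.Read(dek); err != nil {
			b.Fatal(err)
		}
		block, err := aes.NewCipher(dek)
		if err != nil {
			b.Fatal(err)
		}
		gcm, err := cipher.NewGCM(block)
		if err != nil {
			b.Fatal(err)
		}
		nonce := make([]byte, gcm.NonceSize())
		if _, err := rand.Read(nonce); err != nil {
			b.Fatal(err)
		}
		_ = gcm.Seal(nil, nonce, payload, nil)
	}
}
```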
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 57824, 58806, 59410, 59280). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
2nd try at using a vanity GCR name
The 2nd commit here contains the changes relative to the reverted PR. Please focus review attention on that.
This is the 2nd attempt. The previous try (#57573) was reverted while we
figured out the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest mirror. To publish
an image, push to staging-k8s.gcr.io and it will be synced to the regional
repos automatically (similar to today). For now the staging repo is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of Google-owned projects (work in progress), we just do a
one-time sync and change the Google-internal config; nobody outside should
notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
xref https://github.com/kubernetes/release/issues/281
TL;DR:
* The new `staging-k8s.gcr.io` is where we push images. It is literally an alias to `gcr.io/google_containers` (the existing repo) and is hosted in the US.
* The contents of `staging-k8s.gcr.io` are automatically synced to `{asia,eu,us}-k8s.gcr.io`.
* The new `k8s.gcr.io` will be a read-only alias to whichever regional repo is closest to you.
* In the future, images will be promoted from `staging` to regional "prod" more explicitly and auditably.
```release-note
Use "k8s.gcr.io" for pulling container images rather than "gcr.io/google_containers". Images are already synced, so this should not impact anyone materially.
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
Automatic merge from submit-queue (batch tested with PRs 59010, 59212, 59281, 59014, 59297). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Replace nominateNodeName annotation with PodStatus.NominatedNodeName
**What this PR does / why we need it**:
Replaces the nominateNodeName annotation with PodStatus.NominatedNodeName in the scheduler's logic. We don't expect any logic/behavior changes.
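For illustration (this is a sketch of the consumer-facing difference, not the scheduler's code), callers now read a typed status field instead of parsing an annotation:
```go
import corev1 "k8s.io/api/core/v1"

// nominatedNode is an illustrative helper: with this change the nominated
// node is a first-class field on PodStatus rather than an annotation value.
func nominatedNode(pod *corev1.Pod) string {
	return pod.Status.NominatedNodeName
}
```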
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
ref #57471
/sig scheduling
cc: @k82cn @aveshagarwal @resouer
Automatic merge from submit-queue (batch tested with PRs 59010, 59212, 59281, 59014, 59297). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix deployment's collision avoidance mechanism
**What this PR does / why we need it**:
This PR modifies the deployment collision avoidance mechanism to take a ReplicaSet's `.Labels` field into account. This ensures the mechanism is triggered when a user updates only the `.Labels` field of the ReplicaSet without modifying its PodTemplateSpec.
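A rough sketch of the kind of comparison involved (the helper below is illustrative, not the controller's actual code): if the "same ReplicaSet" check only compares pod templates, a label-only change can never trigger collision avoidance, so labels need to be compared too.
```go
import (
	appsv1 "k8s.io/api/apps/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// rsMatchesDesired is illustrative only: comparing just the pod template
// misses a change to the ReplicaSet's own .Labels, so labels are compared too.
func rsMatchesDesired(existing, desired *appsv1.ReplicaSet) bool {
	return apiequality.Semantic.DeepEqual(existing.Spec.Template, desired.Spec.Template) &&
		apiequality.Semantic.DeepEqual(existing.Labels, desired.Labels)
}
```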
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #59213
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59276, 51042, 58973, 59377, 59472). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Allow passing request-timeout from NewRequest all the way down
**What this PR does / why we need it**:
Currently, if you pass `--request-timeout`, it's not passed all the way down to the actual request object. There's a separate field on the `Request` object for setting that timeout, but it is not populated from the flag.
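A small sketch of where the flag's value should land (the helper is illustrative; `Timeout` is the existing per-`Request` setter the value needs to reach):
```go
import (
	"time"

	"k8s.io/client-go/rest"
)

// buildRequest is illustrative only: it shows the per-Request timeout setter
// that the --request-timeout value should ultimately be plumbed into, instead
// of every call site having to remember to set it.
func buildRequest(c *rest.RESTClient, timeout time.Duration) *rest.Request {
	return c.Get().
		Resource("pods").
		Namespace("default").
		Timeout(timeout)
}
```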
@smarterclayton @deads2k ptal, this is coming from https://github.com/openshift/origin/pull/13701
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Do not recycle volumes that are used by pods
**What this PR does / why we need it**:
Recycler should wait until all pods that use a volume are finished.
Consider this scenario:
1. User creates a PVC that's bound to a NFS PV.
2. User creates a pod that uses the PVC
3. User deletes the PVC.
Now the PV gets `Released` (the PVC no longer exists) and recycled, even though the PV is still mounted by a running pod. PVC protection won't help us here, because it puts finalizers on the PVC, which is under the user's control and can be removed by the user.
This PR makes the recycler check that no pod is using a PV before recycling it.
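A hedged sketch of the kind of check this adds (the helper is illustrative, not the PV controller's actual code): before recycling, make sure no pod still mounts the claim the released PV was bound to.
```go
import corev1 "k8s.io/api/core/v1"

// isClaimInUse is illustrative only: it reports whether any pod still mounts
// the PVC that the released PV was bound to.
func isClaimInUse(pods []*corev1.Pod, claimNamespace, claimName string) bool {
	for _, pod := range pods {
		if pod.Namespace != claimNamespace {
			continue
		}
		for _, vol := range pod.Spec.Volumes {
			if pvc := vol.PersistentVolumeClaim; pvc != nil && pvc.ClaimName == claimName {
				return true
			}
		}
	}
	return false
}
```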
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue (batch tested with PRs 59394, 58769, 59423, 59363, 59245). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kube-scheduler: Use default predicates/prioritizers if they are unspecified in the policy config
**What this PR does / why we need it**:
The scheduler has built-in default sets of predicates/prioritizers that are applied during pod scheduling. It can also take a policy config file where predicate/prioritizer and extender settings can be specified. The current behavior is that if we want to configure an extender using the policy config, we also have to provide the default predicate/prioritizer settings; otherwise, empty predicate/prioritizer sets are used.
This is inconvenient, and it's hard to keep the policy config up to date with the scheduler's defaults. This PR changes the scheduler to use the default predicate/prioritizer sets if they are unspecified in the policy config. An explicitly empty list still bypasses the non-mandatory predicates/prioritizers.
This changes the behavior for a policy config that leaves predicates/prioritizers unspecified (as opposed to setting them to an empty list), but it is unlikely anyone is using such a config in practice.
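A minimal sketch of the nil-vs-empty distinction described above (names are illustrative, not the scheduler's actual code):
```go
// applyPolicyDefaults is illustrative only: a nil slice in the decoded policy
// means "unspecified" and falls back to the built-in defaults, while an
// explicitly empty (non-nil) slice means "use none" and bypasses the
// non-mandatory predicates/prioritizers.
func applyPolicyDefaults(fromPolicy, defaults []string) []string {
	if fromPolicy == nil {
		return defaults
	}
	return fromPolicy
}
```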
**Special notes for your reviewer**:
I think it makes sense to have this in 1.9 as well because
- It's safe, given the scope of this change and the fact that it's very unlikely that someone is using a policy config with empty predicates/prioritizers.
- Compared with the risk, asking users to provide the default predicate/prioritizer sets themselves is error-prone and may cause other issues.
**Release note**:
```release-note
kube-scheduler: Use default predicates/prioritizers if they are unspecified in the policy config
```
/sig scheduling
/assign @bsalamat
/cc @vishh
Automatic merge from submit-queue (batch tested with PRs 55439, 58564, 59028, 59169, 59259). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add basic functionality deployment integration tests
**What this PR does / why we need it**:
This PR adds basic deployment integration tests.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #52113
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make predicate errors more human readable
**What this PR does / why we need it**:
Make predicate errors more human readable
Thanks.
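For illustration only (the exact strings live in the scheduler's predicate error types), the idea is to surface messages a user can act on rather than internal predicate names:
```go
// humanReadableReason is an illustrative mapping from an internal predicate
// failure to an actionable message; it is not the scheduler's actual table.
func humanReadableReason(predicate string) string {
	switch predicate {
	case "PodToleratesNodeTaints":
		return "node(s) had taints that the pod didn't tolerate"
	case "MatchNodeSelector":
		return "node(s) didn't match node selector"
	default:
		return predicate
	}
}
```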
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #58546
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add V1beta1 VolumeAttachment API
**What this PR does / why we need it**:
Add V1beta1 VolumeAttachment API, co-existing with Alpha API object
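For reference, a sketch of what a v1beta1 object looks like (the attacher, node, and PV names below are placeholders, not real values):
```go
import (
	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleVolumeAttachment builds a v1beta1 VolumeAttachment; the attacher,
// node, and PV names are placeholders for illustration.
func exampleVolumeAttachment() *storagev1beta1.VolumeAttachment {
	pvName := "pv-0001"
	return &storagev1beta1.VolumeAttachment{
		ObjectMeta: metav1.ObjectMeta{Name: "example-attachment"},
		Spec: storagev1beta1.VolumeAttachmentSpec{
			Attacher: "example.csi.driver",
			NodeName: "node-1",
			Source: storagev1beta1.VolumeAttachmentSource{
				PersistentVolumeName: &pvName,
			},
		},
	}
}
```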
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58461
**Special notes for your reviewer**:
**Release note**:
```release-note
Add V1beta1 VolumeAttachment API, co-existing with Alpha API object
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
code cleanup in integration test framework
**What this PR does / why we need it**:
code cleanup
**Special notes for your reviewer**:
/kind cleanup
**Release note**:
```release-note
NONE
```
Right now, if a JWT for an unknown issuer (for any subject) hits the
service account token authenticator, we return an error as if the token
was meant for us but we couldn't find a key to verify it. We should
instead return `nil, false, nil`.
This change helps us support multiple service account token
authenticators with different issuers.
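A minimal sketch of the intended control flow (the types and helpers here are illustrative stubs, not the actual service account authenticator; the signature mirrors the usual token-authenticator pattern of returning user info, an "authenticated" bool, and an error):
```go
// jwtAuthenticator and the helpers below are illustrative stubs.
type jwtAuthenticator struct {
	issuer string
}

type userInfo struct {
	Name string
}

func (a *jwtAuthenticator) AuthenticateToken(token string) (*userInfo, bool, error) {
	issuer, err := unverifiedIssuer(token)
	if err != nil {
		return nil, false, err // malformed token: this is our error to report
	}
	if issuer != a.issuer {
		// Unknown issuer: the token was not meant for us, so decline without
		// an error and let the next authenticator in the chain try it.
		return nil, false, nil
	}
	return a.verifyAndExtractUser(token)
}

// unverifiedIssuer stands in for decoding the JWT payload without verifying it.
func unverifiedIssuer(token string) (string, error) { return "", nil }

// verifyAndExtractUser stands in for signature verification and claim mapping.
func (a *jwtAuthenticator) verifyAndExtractUser(token string) (*userInfo, bool, error) {
	return &userInfo{Name: "system:serviceaccount:default:example"}, true, nil
}
```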
Automatic merge from submit-queue (batch tested with PRs 58488, 58360). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add get volumeattachment to the node authorizer
Fixes #58355
Adds `get volumeattachment` authorization for nodes to the node authorizer when the CSI feature is enabled
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support for custom tls cipher suites in api server and kubelet
**What this PR does / why we need it**:
This pull request aims to solve the problem of users not being able to set custom cipher suites in the API server.
Several users have requested this given that some default ciphers are vulnerable.
There is a discussion in #41038 of how to implement this. The options are:
- Setting a fixed list of ciphers, but users will have different requirements, so a fixed list would be problematic.
- Letting the user set them by parameter; this requires adding a new parameter whose value could be pretty long with the list of all the ciphers.
I implemented the second option: if the ciphers are not passed by parameter, the Go default ones are used (same behavior as now).
**Which issue this PR fixes**
fixes #41038
**Special notes for your reviewer**:
The ciphers in the Go tls config are constants, while the ones passed by parameter are a comma-separated list. I needed to create the `type CipherSuitesFlag` to support that conversion/mapping, because I couldn't find any way to do this type of reflection in Go.
If you think there is another way to implement this, let me know.
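A minimal sketch of that conversion (the table is truncated and the names are illustrative; the real flag type covers every suite exported by crypto/tls):
```go
import (
	"crypto/tls"
	"fmt"
	"strings"
)

// cipherSuites maps suite names to crypto/tls constants; only a few entries
// are shown here for illustration.
var cipherSuites = map[string]uint16{
	"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":   tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
	"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
	"TLS_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
}

// parseCipherSuites converts the flag's comma-separated suite names into the
// uint16 IDs that go into tls.Config.CipherSuites.
func parseCipherSuites(flagValue string) ([]uint16, error) {
	var ids []uint16
	for _, name := range strings.Split(flagValue, ",") {
		id, ok := cipherSuites[strings.TrimSpace(name)]
		if !ok {
			return nil, fmt.Errorf("unknown cipher suite %q", name)
		}
		ids = append(ids, id)
	}
	return ids, nil
}
```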
If you want to test it out, this is a ciphers combination i tested without the weak ones:
```
TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
```
If this is merged I will implement the same for the kubelet.
**Release note**:
```release-note
kube-apiserver and kubelet now support customizing TLS ciphers via a `--tls-cipher-suites` flag
```
Note that we don't change the behaviour of kubectl; for example, it won't scale new resources yet. That's the end goal.
The first step is to retrofit existing code to use the generic scaler.
This moves plugin/pkg/scheduler to pkg/scheduler and
plugin/cmd/kube-scheduler to cmd/kube-scheduler.
The bulk of the work was done with gomvpkg, except for the kube-scheduler main
package.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Split the NodeController into lifecycle and ipam pieces.
**What this PR does / why we need it**: Separates node controller into ipam and lifecycle components.
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. This splits the node
controller into its two major pieces: lifecycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle piece's dependency on the cloud provider will be addressed in a follow-up PR.
* Moved node scheduler code to live with the node lifecycle controller.
* Got the IPAM/lifecycle split completed. Still need to rename pieces.
* Made changes to the utils and tests so they would be in the appropriate package.
* Moved the node-based IPAM code to nodeipam.
* Made the relevant tests pass.
* Moved common node controller util code to nodeutil.
* Removed unneeded pod informer sync from the node IPAM controller.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #52369
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bump pause container used by kubelet and tests to 3.1
This updates the version of the pause container used by the kubelet and
various test utilities to 3.1.
**What this PR does / why we need it**: The pause container hasn't been rebuilt in quite a while and needs an update to reap zombies (#50865) and for schema2 manifest (#56253).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #50865, Fixes #56253
**Special notes for your reviewer**:
**Release note**:
```release-note
The kubelet uses a new release 3.1 of the pause container with the Docker runtime. This version will clean up orphaned zombie processes that it inherits.
```