Automatic merge from submit-queue (batch tested with PRs 61790, 61808, 60339, 61615, 61757). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Deployment to stop adding pod-template-hash labels/selector on adoption
**What this PR does / why we need it**: This is a blocker for #55714, because the ReplicaSet selector becomes immutable in `apps/v1`. With controller refs, a Deployment's ReplicaSets and Pods can avoid fighting with each other even without the unique pod-template-hash label/selector, so it is safe to stop adding the hash label/selector on adoption.
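As a rough illustration of the behavioral change (a simplified sketch with a made-up helper, not the Deployment controller's actual code), adoption now only establishes the controller reference and leaves the adopted ReplicaSet's labels and selector untouched:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adoptReplicaSet sketches adoption after this change: only the controller
// reference is set; the pod-template-hash label is no longer injected into
// the adopted ReplicaSet's labels, selector, or pod template.
func adoptReplicaSet(d *appsv1.Deployment, rs *appsv1.ReplicaSet) {
	isController := true
	rs.OwnerReferences = append(rs.OwnerReferences, metav1.OwnerReference{
		APIVersion: "apps/v1",
		Kind:       "Deployment",
		Name:       d.Name,
		UID:        d.UID,
		Controller: &isController,
	})
	// ReplicaSets *created* by the Deployment still receive the
	// pod-template-hash label and selector as before.
}
```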
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61433
**Special notes for your reviewer**: This is a behavioral change to Deployment controller that will affect all versions of Deployment APIs (`apps/v1`, `extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2`).
**Release note**:
```release-note
Deployment will stop adding pod-template-hash labels/selector to ReplicaSets and Pods it adopts. Resources created by Deployments are not affected (will still have pod-template-hash labels/selector).
```
Automatic merge from submit-queue (batch tested with PRs 60455, 61365, 61375, 61597, 61491). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix #61363, Bounded retries for cloud allocator.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61363
**Special notes for your reviewer**:
Changed the tracking of `nodesInProcessing` from a set to a `map[string]int` so that we can count how many
times a node has been re-processed and avoid re-queuing it once `updateMaxRetries` is exceeded.
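A minimal sketch of that bookkeeping, under illustrative names (this is not the actual cloud allocator code):

```go
package sketch

import "time"

const (
	updateMaxRetries = 10                     // give up after 10 attempts
	retryDelay       = 100 * time.Millisecond // wait between attempts
)

// cloudAllocator tracks how many update attempts each node has seen,
// replacing the previous set-based "in processing" tracking.
type cloudAllocator struct {
	nodesInProcessing map[string]int // node name -> attempts so far
}

// shouldRetry records another attempt for nodeName and reports whether the
// node is still within its retry budget; once the budget is spent the node
// is not re-queued.
func (a *cloudAllocator) shouldRetry(nodeName string) bool {
	a.nodesInProcessing[nodeName]++
	return a.nodesInProcessing[nodeName] <= updateMaxRetries
}
```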
**Release note**:
```release-note
Bound cloud allocator to 10 retries with 100 ms delay between retries.
```
Automatic merge from submit-queue (batch tested with PRs 60980, 61273, 60811, 61021, 61367). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use apps/v1 ReplicaSet in controller and tests.
This updates the RS/RC controller and RS integration/e2e tests to use apps/v1 ReplicaSet, as part of #55714.
It does *not* update the Deployment controller, nor its integration/e2e tests, to use apps/v1 ReplicaSet. That will be done in a separate PR (#61419) because Deployment has many more tendrils embedded throughout the system.
```release-note
Conformance: ReplicaSet must be supported in the `apps/v1` version.
```
/assign @janetkuo
Automatic merge from submit-queue (batch tested with PRs 60373, 61098, 61352, 61359, 61362). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add HPA test for FailedGetExternalMetric
**What this PR does / why we need it**:
Add an HPA test for missing external metrics.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60632, 60806, 59471, 61251, 61013). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
cronjob_remove_getNextStartTimeAfter
**What this PR does / why we need it**:
`getNextStartTimeAfter` is not used anywhere in Kubernetes, and since it is an unexported, package-internal method, it is safe to remove.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60632, 60806, 59471, 61251, 61013). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove method NewCronJobControllerFromClient
**What this PR does / why we need it**:
This method was originally introduced when cronjob was still called scheduledjob: 7a34347f7f
Back then, both init methods had different signatures.
Since the rename to cronjob (41d88d30dd), this method is an alias for the normal initializer, has the same signature, and is not used anywhere in the codebase.
Since this method was never actually used for cronjobs, removing it should not require any deprecation notice.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Remove the never-used NewCronJobControllerFromClient method
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix condition for using network unavailable taint in cloud_cidr_allocator
Ref. #61481
The 'networkUnavailable' condition has, in a sense, reverse logic: we should be trying to allocate a CIDR when the condition is "true", i.e. when the taint exists.
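A hedged sketch of the corrected check, using the well-known taint key (the helper name is made up, not the allocator's actual code):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// networkUnavailableTaintKey is the well-known taint key present while a
// node's network is not yet ready.
const networkUnavailableTaintKey = "node.kubernetes.io/network-unavailable"

// needsCIDR sketches the corrected logic: a node that still carries the
// network-unavailable taint is exactly the node we should be allocating a
// CIDR for, not the one to skip.
func needsCIDR(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == networkUnavailableTaintKey {
			return true
		}
	}
	return false
}
```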
```release-note
NONE
```
@shyamjvs @agabet @bowei
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove the YEAR field from all generated files and fix the Kubernetes boilerplate checker
**What this PR does / why we need it**:
Remove the YEAR field from all generated files and fix the Kubernetes boilerplate checker.
xref: [remove YEAR fields in gengo #91](https://github.com/kubernetes/gengo/pull/91)
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [kubernetes/gengo#24](https://github.com/kubernetes/gengo/issues/24)
**Special notes for your reviewer**:
/cc @thockin @lavalamp @sttts
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Issue #61123, call syncer.Update on add event.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61123
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixed #61123 by triggering syncer.Update in all cases, including when a syncer is created on a new add event.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Ignore unready pods when calculating desired replicas
**What this PR does / why we need it**:
This PR causes `GetExternalMetricReplicas` and `GetObjectMetricReplicas` to ignore unready pods when computing the number of desired replicas. If we don't ignore unready pods, there is a risk of overscaling. See the commit messages for examples and implementation info.
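Roughly, the rule can be pictured with the following sketch (illustrative only, not the HPA's actual replica calculator; the helper name is made up):

```go
package sketch

import "math"

// desiredReplicas multiplies the usage ratio by the number of *ready* pods
// only, so unready pods cannot inflate the target replica count.
func desiredReplicas(usageRatio float64, readyPods int32) int32 {
	return int32(math.Ceil(usageRatio * float64(readyPods)))
}
```

With the worked example from the commit messages below (usage ratio 5.0 and one ready pod out of three), this yields 5 desired replicas instead of 15.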
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59975
**Special notes for your reviewer**:
@MaciekPytel and I consciously chose to save `GetExternalPerPodMetricReplicas` for a separate PR, as we aren't yet settled on what the preferred behavior is.
**Release note**:
```release-note
Unready pods will no longer impact the number of desired replicas when using horizontal auto-scaling with external metrics or object metrics.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
disable DaemonSet scheduling feature for 1.10
The DaemonSet scheduling feature has blocked the alpha CI job from going green and is preventing good CI signal for v1.10.
It still contains pod scheduling races (#61050) and fundamental issues with the affinity terms it creates (#61410).
As such, there is no significant value in having the feature available in 1.10 in its current state.
This PR disables the feature in order to regain green signal on the alpha CI job (reverting commits is likely to be more disruptive at this point).
related to https://github.com/kubernetes/kubernetes/issues/61050
```release-note
DaemonSet scheduling associated with the alpha ScheduleDaemonSetPods feature flag has been removed from the 1.10 release. See https://github.com/kubernetes/features/issues/548 for feature status.
```
Automatic merge from submit-queue (batch tested with PRs 60574, 60666, 60831, 60877, 60357). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix data race in node lifecycle controller
**What this PR does / why we need it**:
Encountered this bug while fixing https://github.com/kubernetes/kubernetes/pull/60753
There's a data race on `zoneNoExecuteTainter`.
```
--- PASS: TestTaintNodeByCondition (5.72s)
PASS
==================
WARNING: DATA RACE
Write at 0x00c421a8d2f0 by goroutine 1472:
runtime.mapassign_faststr()
/usr/local/go/src/runtime/hashmap_fast.go:598 +0x0
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).addPodEvictorForNewZone()
/root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:1053 +0x37d
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).monitorNodeStatus()
Previous read at 0x00c421a8d2f0 by goroutine 1471:
runtime.mapiterinit()
/usr/local/go/src/runtime/hashmap.go:709 +0x0
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).doNoExecuteTaintingPass()
/root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:459 +0xec
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).(k8s.io/kubernetes/pkg/controller/nodelifecycle.doNoExecuteTaintingPass)-fm()
```
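One common fix for this kind of race is to guard the map with a mutex. The sketch below borrows the names from the trace above but is illustrative only, not the actual node lifecycle controller code:

```go
package sketch

import "sync"

// rateLimitedTimedQueue stands in for the real per-zone evictor queue type.
type rateLimitedTimedQueue struct{}

// Controller sketches the relevant fields: the zone map is only touched while
// holding evictorLock, so addPodEvictorForNewZone (called from
// monitorNodeStatus) and doNoExecuteTaintingPass no longer race on it.
type Controller struct {
	evictorLock          sync.Mutex
	zoneNoExecuteTainter map[string]*rateLimitedTimedQueue
}

func (c *Controller) addPodEvictorForNewZone(zone string) {
	c.evictorLock.Lock()
	defer c.evictorLock.Unlock()
	c.zoneNoExecuteTainter[zone] = &rateLimitedTimedQueue{}
}

func (c *Controller) doNoExecuteTaintingPass() {
	c.evictorLock.Lock()
	defer c.evictorLock.Unlock()
	for zone, q := range c.zoneNoExecuteTainter {
		_, _ = zone, q // drain the zone's queue here
	}
}
```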
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix data race in node lifecycle controller
```
Automatic merge from submit-queue (batch tested with PRs 60363, 59208, 59465, 60581, 60702). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
check taints when allocating CIDR for the cloud
Check the taint when allocating a CIDR for the cloud (for the shared informer cache).
**What this PR does / why we need it**:
Following issue #58406, this adds a check of the taint when allocating a CIDR for the cloud.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58406
**Special notes for your reviewer**:
/assign @yastij @gmarek
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 60189, 59542, 59931, 60621, 60353). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove todo to consider adding the cronjob name as a label
**What this PR does / why we need it**:
It seems this label shouldn't be added automatically. If so, we should remove the comment.
See #59473
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #41633
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60457, 60331, 54970, 58731, 60562). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
clean up unused const in node_lifecycle_controller.go
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59740, 59728, 60080, 60086, 58714). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Merge the slices more concisely
**What this PR does / why we need it**:
Merge the slices more concisely.
**Special notes for your reviewer**:
Automatic merge from submit-queue (batch tested with PRs 61284, 61119, 61201). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Prevent garbage collector from attempting to sync with 0 resources
**What this PR does / why we need it**:
As of #55259 we enabled garbagecollector.GetDeletableResources to return partial discovery results (including an empty set of discovery results).
This had the unintended consequence of allowing the garbage collector to enter a blocked state that can only be fixed by restarting.
From [this comment](https://github.com/kubernetes/kubernetes/issues/60037#issuecomment-372801088):
> 1. The Sync function periodically calls GetDeletableResources
>
> 2. According to the comment above GetDeletableResources ("All discovery errors are considered temporary. Upon encountering any error, GetDeletableResources will log and return any discovered resources it was able to process (which may be none)."), an error in discovery causes the discovery client to no longer discover resources in the cluster. But instead of failing and returning an error, it simply logs the error as `garbagecollector.go:601] failed to discover preferred resources: the server was unable to return a response in the time allotted, but may still be processing the request` and returns an empty list of resources.
>
> 3. The Sync function, upon receiving an empty resource list from discovery, detects that the resources have changed and calls resyncMonitors, which calls dependencyGraphBuilder.syncMonitors with map[] as the argument (as shown in the log: `garbagecollector.go:189] syncing garbage collector with updated resources from discovery: map[]`), which sets the list of monitors to an empty list because it thinks there are no resources to monitor.
>
> 4. Lastly, the Sync function calls controller.WaitForCacheSync, which calls cache.WaitForCacheSync, which will continually retry the garbagecollector.IsSynced function until it returns true; but it will always return false because len(gb.monitors) is 0.
This PR prevents that specific race condition from arising.
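A minimal sketch of the kind of guard described above, using hypothetical names rather than the actual garbage collector code:

```go
package sketch

import "log"

// groupVersionResource stands in for the real schema.GroupVersionResource type.
type groupVersionResource struct{ Group, Version, Resource string }

// shouldSync reports whether the collector should rebuild its monitors.
// Skipping the rebuild when discovery returned nothing prevents the collector
// from replacing all monitors with an empty set and then waiting forever in
// WaitForCacheSync.
func shouldSync(deletableResources map[groupVersionResource]struct{}) bool {
	if len(deletableResources) == 0 {
		log.Print("no resources reported by discovery, skipping garbage collector sync")
		return false
	}
	return true
}
```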
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60037
**Release note**:
```release-note
Fix bug allowing garbage collector to enter a broken state that could only be fixed by restarting the controller-manager.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added unschedulable taint
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194; fixes #61050
**Release note**:
```release-note
When `TaintNodesByCondition` is enabled, a `node.kubernetes.io/unschedulable:NoSchedule`
taint is added to the node if `spec.Unschedulable` is true.
When `ScheduleDaemonSetPods` is enabled, the `node.kubernetes.io/unschedulable:NoSchedule`
toleration is added automatically to DaemonSet Pods, so the `unschedulable` field of
a node is not respected by the DaemonSet controller.
```
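Expressed with the core/v1 API types, the taint and the toleration described in the note look roughly like this (illustrative values only):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// unschedulableTaint is the taint described above, applied to a node whose
// spec.Unschedulable is true when TaintNodesByCondition is enabled.
var unschedulableTaint = v1.Taint{
	Key:    "node.kubernetes.io/unschedulable",
	Effect: v1.TaintEffectNoSchedule,
}

// daemonSetToleration is the toleration described above, added automatically
// to DaemonSet Pods when ScheduleDaemonSetPods is enabled so they can still
// land on unschedulable nodes.
var daemonSetToleration = v1.Toleration{
	Key:      "node.kubernetes.io/unschedulable",
	Operator: v1.TolerationOpExists,
	Effect:   v1.TaintEffectNoSchedule,
}
```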
Automatic merge from submit-queue (batch tested with PRs 60978, 60985). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Backoff only when failed pod shows up
**What this PR does / why we need it**:
Upon introducing the backoff policy, we started delaying sync runs for a Job when it had failed several times before. This leads to previously failed jobs not reporting status right away in cases that are not related to failed pods, e.g. a successful run. This PR ensures the backoff is applied only when `updatePod` receives a failed pod.
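A minimal sketch of that rule, with a made-up helper name (not the job controller's actual code):

```go
package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// syncDelay returns how long to wait before re-syncing the Job after a pod
// update: the backoff delay applies only when the updated pod has failed;
// any other update (for example a successful completion) syncs immediately
// so status is reported right away.
func syncDelay(updatedPod *v1.Pod, backoff time.Duration) time.Duration {
	if updatedPod.Status.Phase == v1.PodFailed {
		return backoff
	}
	return 0
}
```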
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59918, fixes #59527
/assign @janetkuo @kow3ns
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 61118, 60579). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Increase logging verbosity for deleting StatefulSet pods
We should always log reasons for deleting StatefulSet Pods.
@jdumars - what's the current process for putting such changes into the release? It's literally a zero-risk change that helps with debugging.
cc @ttz21
```release-note
NONE
```
Similar to the change we made for `GetObjectMetricReplicas` in the
previous commit. Ensure that `GetExternalMetricReplicas` does not
include unready pods when it is determining how many replicas it desires.
Including unready pods can lead to over-scaling.
We did not change the behavior of `GetExternalPerPodMetricReplicas`, as
it is slightly less clear what the desired behavior is. We did make some
small naming refactorings to this method, which will make it easier to
ignore unready pods if we decide we want to.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Task 2: Schedule DaemonSet Pods by default scheduler.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194, https://github.com/kubernetes/features/issues/548
**Release note**:
```release-note
When ScheduleDaemonSetPods is enabled, the DaemonSet controller will delegate Pod scheduling to the default scheduler.
```
Previously, when `GetObjectMetricReplicas` calculated the desired
replica count, it multiplied the usage ratio by the current number of replicas.
This method caused over-scaling when there were pods that were not ready
for a long period of time. For example, if there were pods A, B, and C,
and only pod A was ready, and the usage ratio was 500%, we would
previously specify 15 pods as the desired replicas (even though really
only one pod was handling the load).
After this change, we now multiply the usage
ratio by the number of ready pods for `GetObjectMetricReplicas`.
In the example above, we'd only desire 5 replica pods.
This change gives `GetObjectMetricReplicas` the same behavior as the
other replica calculator methods. Only `GetExternalMetricReplicas` and
`GetExternalPerPodMetricReplicas` still allow unready pods to impact the
number of desired replicas. I will fix this issue in the following
commit.
Automatic merge from submit-queue (batch tested with PRs 60732, 60689, 60648, 60704). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Do not count failed pods as unready in HPA controller
**What this PR does / why we need it**:
Currently, when performing a scale up, any failed pods (which can be present, for example, in case of evictions performed by the kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization, which will slow down or even block scale up.
After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.
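A rough sketch of that grouping, with hypothetical helpers rather than the real replica calculator:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// isReady reports whether the pod's Ready condition is true.
func isReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// groupPods sketches the change described above: failed pods are dropped
// entirely rather than being treated as unready (0% utilization), so they no
// longer influence scale-up or scale-down calculations.
func groupPods(pods []v1.Pod) (ready, unready []v1.Pod) {
	for _, p := range pods {
		if p.Status.Phase == v1.PodFailed {
			continue // failed pods are ignored entirely
		}
		if isReady(&p) {
			ready = append(ready, p)
		} else {
			unready = append(unready, p)
		}
	}
	return ready, unready
}
```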
@MaciekPytel @DirectXMan12
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55630
**Special notes for your reviewer**:
**Release note**:
```
Stop counting failed pods as unready in HPA controller to avoid failed pods incorrectly affecting scale up replica count calculation.
```
Automatic merge from submit-queue (batch tested with PRs 60683, 60386). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added unschedulable predicate.
Signed-off-by: Da K. Ma <madaxa@cn.ibm.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60163
**Release note**:
```release-note
None
```
Currently, when performing a scale up, any failed pods (which can be present, for example, in case of evictions performed by the kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization, which will slow down or even block scale up.
After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.
Automatic merge from submit-queue (batch tested with PRs 60342, 60505, 59218, 52900, 60486). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix nodenames slices comparison parameter.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```