Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
disable DaemonSet scheduling feature for 1.10
The DaemonSet scheduling feature has blocked the alpha CI job from going green and is preventing good CI signal for v1.10.
It still contains pod scheduling races (#61050) and fundamental issues with the affinity terms it creates (#61410).
As such, there is no significant value in having the feature available in 1.10 in its current state.
This PR disables the feature in order to regain green signal on the alpha CI job (reverting the commits is likely to be more disruptive at this point).
related to https://github.com/kubernetes/kubernetes/issues/61050
```release-note
DaemonSet scheduling associated with the alpha ScheduleDaemonSetPods feature flag has been removed from the 1.10 release. See https://github.com/kubernetes/features/issues/548 for feature status.
```
Automatic merge from submit-queue (batch tested with PRs 60574, 60666, 60831, 60877, 60357). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix data race in node lifecycle controller
**What this PR does / why we need it**:
Encountered this bug while fixing https://github.com/kubernetes/kubernetes/pull/60753.
There's a data race on `zoneNoExecuteTainter`.
```
--- PASS: TestTaintNodeByCondition (5.72s)
PASS
==================
WARNING: DATA RACE
Write at 0x00c421a8d2f0 by goroutine 1472:
runtime.mapassign_faststr()
/usr/local/go/src/runtime/hashmap_fast.go:598 +0x0
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).addPodEvictorForNewZone()
/root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:1053 +0x37d
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).monitorNodeStatus()
Previous read at 0x00c421a8d2f0 by goroutine 1471:
runtime.mapiterinit()
/usr/local/go/src/runtime/hashmap.go:709 +0x0
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).doNoExecuteTaintingPass()
/root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:459 +0xec
k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).(k8s.io/kubernetes/pkg/controller/nodelifecycle.doNoExecuteTaintingPass)-fm()
```
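The usual fix for this kind of race is to have the writer (`addPodEvictorForNewZone`) and the reader (`doNoExecuteTaintingPass`) take the same lock around the shared map. A minimal, self-contained sketch of that pattern, with names simplified from the controller (not the exact diff):

```go
package main

import (
	"fmt"
	"sync"
)

// controller mimics the shape of the node lifecycle controller: a map
// shared between the monitor goroutine (writer) and the tainting pass
// goroutine (reader), guarded by one mutex.
type controller struct {
	mu                   sync.Mutex
	zoneNoExecuteTainter map[string][]string
}

// addZone is the writer path (cf. addPodEvictorForNewZone).
func (c *controller) addZone(zone string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.zoneNoExecuteTainter[zone]; !ok {
		c.zoneNoExecuteTainter[zone] = nil
	}
}

// taintingPass is the reader path (cf. doNoExecuteTaintingPass);
// iterating the map without the lock is what the race detector flagged.
func (c *controller) taintingPass() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for zone := range c.zoneNoExecuteTainter {
		_ = zone // process the zone's queue here
	}
}

func main() {
	c := &controller{zoneNoExecuteTainter: map[string][]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.addZone(fmt.Sprintf("zone-%d", i))
			c.taintingPass()
		}(i)
	}
	wg.Wait()
	fmt.Println("zones:", len(c.zoneNoExecuteTainter))
}
```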
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix data race in node lifecycle controller
```
Automatic merge from submit-queue (batch tested with PRs 60363, 59208, 59465, 60581, 60702). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
check taints when allocating CIDRs for the cloud
Check taints when allocating a CIDR for the cloud (using the shared informer cache).
**What this PR does / why we need it**:
Following issue #58406, this adds a taint check when allocating a CIDR for the cloud.
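For illustration only, a minimal sketch of inspecting a node's taints before acting on it; the taint key is an assumption here, and the real check reads the node from the shared informer cache:

```go
package main

import "fmt"

// Assumed taint key for illustration; the real check lives in the
// cloud CIDR allocator and reads the node's taints from the shared
// informer cache.
const uninitializedTaint = "node.cloudprovider.kubernetes.io/uninitialized"

// hasTaint reports whether the node carries the given taint key.
func hasTaint(taintKeys []string, key string) bool {
	for _, k := range taintKeys {
		if k == key {
			return true
		}
	}
	return false
}

func main() {
	nodeTaints := []string{uninitializedTaint}
	fmt.Println(hasTaint(nodeTaints, uninitializedTaint)) // true
}
```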
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58406
**Special notes for your reviewer**:
/assign @yastij @gmarek
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 60189, 59542, 59931, 60621, 60353). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove todo to consider adding the cronjob name as a label
**What this PR does / why we need it**:
It seems this label shouldn't be added automatically. If so, we should remove the comment.
See #59473
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #41633
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60457, 60331, 54970, 58731, 60562). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
clean up unused const in node_lifecycle_controller.go
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59740, 59728, 60080, 60086, 58714). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
merge the slices more concisely
**What this PR does / why we need it**:
It is more concise to merge the slices with a single `append`.
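For reference, a minimal sketch of the concise one-line merge idiom in Go (an illustration of the pattern, not the PR's exact diff):

```go
package main

import "fmt"

func main() {
	a := []int{1, 2}
	b := []int{3, 4}
	// One line: append b's elements onto a fresh copy of a, leaving
	// both inputs untouched.
	merged := append(append([]int{}, a...), b...)
	fmt.Println(merged) // [1 2 3 4]
}
```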
**Special notes for your reviewer**:
Automatic merge from submit-queue (batch tested with PRs 61284, 61119, 61201). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Prevent garbage collector from attempting to sync with 0 resources
**What this PR does / why we need it**:
As of #55259 we enabled `garbagecollector.GetDeletableResources` to return partial discovery results (including an empty set of discovery results).
This had the unintended consequence of allowing the garbage collector to enter a blocked state that can only be fixed by restarting.
From [this comment](https://github.com/kubernetes/kubernetes/issues/60037#issuecomment-372801088):
> 1. The `Sync` function periodically calls `GetDeletableResources`.
>
> 2. According to the comment above `GetDeletableResources` ("All discovery errors are considered temporary. Upon encountering any error, GetDeletableResources will log and return any discovered resources it was able to process (which may be none)."), an error in discovery causes the discovery client to no longer discover resources in the cluster; but instead of failing and returning an error, it simply logs `garbagecollector.go:601] failed to discover preferred resources: the server was unable to return a response in the time allotted, but may still be processing the request` and returns an empty list of resources.
>
> 3. The `Sync` function, upon receiving an empty resource list from discovery, detects that the resources have changed and calls `resyncMonitors`, which calls `dependencyGraphBuilder.syncMonitors` with `map[]` as the argument (as shown in the log `garbagecollector.go:189] syncing garbage collector with updated resources from discovery: map[]`). This sets the list of monitors to an empty list because it thinks there are no resources to monitor.
>
> 4. Lastly, the `Sync` function calls `controller.WaitForCacheSync`, which calls `cache.WaitForCacheSync`, which continually retries the `garbagecollector.IsSynced` function until it returns true; but it always returns false because `len(gb.monitors)` is 0.
This PR prevents that specific race condition from arising.
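A minimal sketch of the kind of guard this implies in the periodic sync (a simplified toy, not the exact diff):

```go
package main

import "log"

// getDeletableResources stands in for the GC's discovery call; it may
// legitimately return a partial or empty set on discovery errors.
func getDeletableResources() map[string]struct{} {
	return nil // simulate a total discovery failure
}

// sync sketches the guard this PR implies: never replace the current
// monitors with an empty set, since WaitForCacheSync would then block
// forever on zero monitors.
func sync(monitors map[string]struct{}) map[string]struct{} {
	newResources := getDeletableResources()
	if len(newResources) == 0 {
		log.Println("no resources reported by discovery, skipping garbage collector sync")
		return monitors // keep the existing monitors untouched
	}
	return newResources
}

func main() {
	current := map[string]struct{}{"pods": {}}
	current = sync(current)
	log.Printf("monitors after sync: %d", len(current)) // still 1
}
```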
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60037
**Release note**:
```release-note
Fix bug allowing garbage collector to enter a broken state that could only be fixed by restarting the controller-manager.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Added unschedulable taint
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Part of #59194; fixes #61050
**Release note**:
```release-note
When `TaintNodesByCondition` is enabled, a `node.kubernetes.io/unschedulable:NoSchedule`
taint is added to the node if `spec.Unschedulable` is true.
When `ScheduleDaemonSetPods` is enabled, the `node.kubernetes.io/unschedulable:NoSchedule`
toleration is added automatically to DaemonSet Pods, so the `unschedulable` field of
a node is not respected by the DaemonSet controller.
```
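For illustration, roughly what the automatically added toleration looks like when built with the core API types (a sketch, not the controller's exact code):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch of the toleration the DaemonSet controller adds when
	// ScheduleDaemonSetPods is enabled (illustration only).
	toleration := v1.Toleration{
		Key:      "node.kubernetes.io/unschedulable",
		Operator: v1.TolerationOpExists,
		Effect:   v1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", toleration)
}
```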
Automatic merge from submit-queue (batch tested with PRs 60978, 60985). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Backoff only when failed pod shows up
**What this PR does / why we need it**:
Upon introducing the backoff policy we started to delay sync runs for a Job that had failed several times before. This leads to failed Jobs not reporting status right away in cases that are not related to failed pods, e.g. a successful run. This PR ensures the backoff is applied only when `updatePod` receives a failed pod.
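A minimal sketch of that rule (a simplified toy, not the controller's exact code):

```go
package main

import "fmt"

type podPhase string

const podFailed podPhase = "Failed"

// syncDelay sketches the rule this PR describes: apply the backoff
// delay only when the pod that triggered the sync has failed;
// everything else (e.g. a successful run) syncs immediately.
func syncDelay(phase podPhase, backoff int) int {
	if phase == podFailed {
		return backoff
	}
	return 0
}

func main() {
	fmt.Println(syncDelay(podFailed, 10))   // delayed by backoff
	fmt.Println(syncDelay("Succeeded", 10)) // immediate
}
```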
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59918, fixes #59527
/assign @janetkuo @kow3ns
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 61118, 60579). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Increase logging verbosity for deleting StatefulSet pods
We should always log reasons for deleting StatefulSet Pods.
@jdumars - what's the current process for putting such changes into the release? It's literally a zero-risk change that helps with debugging.
cc @ttz21
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Task 2: Schedule DaemonSet Pods by the default scheduler.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Part of #59194; see https://github.com/kubernetes/features/issues/548
**Release note**:
```release-note
When `ScheduleDaemonSetPods` is enabled, the DaemonSet controller delegates Pod scheduling to the default scheduler.
```
Automatic merge from submit-queue (batch tested with PRs 60732, 60689, 60648, 60704). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Do not count failed pods as unready in HPA controller
**What this PR does / why we need it**:
Currently, when performing a scale-up, any failed pods (which can be present, for example, after evictions performed by the kubelet) are treated as unready. Unready pods are treated as if they had 0% utilization, which slows down or even blocks scale-up.
After this change, failed pods are ignored in all calculations, so they influence neither scale-up nor scale-down replica calculations.
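A minimal sketch of the filtering this describes (a simplified toy, not the HPA's exact code):

```go
package main

import "fmt"

type pod struct {
	name  string
	phase string // "Running", "Failed", ...
	ready bool
}

// filterFailed sketches the change this PR describes: failed pods are
// dropped up front, so they are neither treated as 0%-utilization
// unready pods (blocking scale-up) nor counted against scale-down.
func filterFailed(pods []pod) (usable []pod) {
	for _, p := range pods {
		if p.phase == "Failed" {
			continue
		}
		usable = append(usable, p)
	}
	return usable
}

func main() {
	pods := []pod{
		{"a", "Running", true},
		{"b", "Failed", false}, // evicted by kubelet; ignored entirely
	}
	fmt.Println(len(filterFailed(pods))) // 1
}
```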
@MaciekPytel @DirectXMan12
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55630
**Special notes for your reviewer**:
**Release note**:
```release-note
Stop counting failed pods as unready in HPA controller to avoid failed pods incorrectly affecting scale up replica count calculation.
```
Automatic merge from submit-queue (batch tested with PRs 60683, 60386). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Added unschedulable predicate.
Signed-off-by: Da K. Ma <madaxa@cn.ibm.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60163
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 60342, 60505, 59218, 52900, 60486). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
fix nodeNames slices comparison parameters
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53689, 56880, 55856, 59289, 60249). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add test/typecheck, a fast typecheck for all build platforms.
Most of the time spent compiling is spent optimizing and linking
binary code. Most errors occur at the syntax or semantic (type) layers.
Go's compiler is importable as a normal package, so we can do fast
syntax and type checking for the 10 platforms we build on.
This currently takes ~6 minutes of CPU time (parallelized).
This makes presubmit cross builds superfluous, since it should catch
most cross-build breaks (generally Unix and 64-bit assumptions).
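For illustration, the core idea (parsing and type-checking source with the standard `go/parser` and `go/types` packages) fits in a few lines; a minimal sketch, while the real tool does this per build platform:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
)

func main() {
	// A deliberately broken source file: it parses, but fails the
	// type check, which is exactly the class of error this catches.
	src := `package main
func main() { var x int = "oops"; _ = x }`

	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	conf := types.Config{
		Importer: importer.Default(),
		// Collect type errors instead of stopping at the first one.
		Error: func(err error) { fmt.Println("type error:", err) },
	}
	_, _ = conf.Check("demo", fset, []*ast.File{f}, nil)
}
```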
Example output:
```
$ time go run test/typecheck/main.go
type-checking: linux/amd64, windows/386, darwin/amd64, linux/arm,
linux/386, windows/amd64, linux/arm64, linux/ppc64le, linux/s390x, darwin/386
ERROR(windows/amd64) pkg/proxy/ipvs/proxier.go:1708:27: ENXIO not declared by package unix
ERROR(windows/386) pkg/proxy/ipvs/proxier.go:1708:27: ENXIO not declared by package unix
real 0m45.083s
user 6m15.504s
sys 1m14.000s
```
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix StatefulSet set-based selector bug
**What this PR does / why we need it**:
ControllerRevisions were using selectors as their labels; in the case of set-based selectors, the helper function that converts selectors to labels would break. This PR uses the pod labels for ControllerRevision labels instead of the selector.
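For illustration, this is the kind of set-based selector that has no faithful flat `key=value` form (a sketch using the apimachinery types, not the PR's diff):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A set-based selector matches a *set* of values, so there is no
	// single label value to convert it to; using the pod's own labels
	// for the ControllerRevision sidesteps the conversion entirely.
	selector := metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{{
			Key:      "app",
			Operator: metav1.LabelSelectorOpIn,
			Values:   []string{"db", "cache"}, // no single label value exists
		}},
	}
	fmt.Printf("%+v\n", selector)
}
```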
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59266
**Special notes for your reviewer**:
I'm trying to learn Kubernetes codebase and would be happy to make changes if anything is off.
**Release note**:
```release-note
Fix StatefulSet to work with set-based selectors.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix Deployment with Recreate strategy not to wait on Pods in terminal phase
**What this PR does / why we need it**:
Fixes Deployment with the Recreate strategy so it no longer waits on Pods in a terminal phase. Such Pods can remain after an eviction or after failing to match the selector, and the ReplicaSet currently leaves them around. (Hopefully RC gets fixed separately.)
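A minimal sketch of the rule this implies (a simplified toy, not the controller's exact code):

```go
package main

import "fmt"

type phase string

const (
	succeeded phase = "Succeeded"
	failed    phase = "Failed"
	running   phase = "Running"
)

// countActive sketches the rule this fix implies: pods in a terminal
// phase are not "old pods still running", so Recreate should not wait
// for them before bringing up the new ReplicaSet. (Illustration only.)
func countActive(phases []phase) int {
	n := 0
	for _, p := range phases {
		if p != succeeded && p != failed {
			n++
		}
	}
	return n
}

func main() {
	// Only the running pod should block the rollout.
	fmt.Println(countActive([]phase{running, failed})) // 1
}
```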
**Which issue(s) this PR fixes**:
Fixes https://github.com/kubernetes/kubernetes/issues/60162
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes a case where a Deployment with the Recreate strategy could get stuck on an old failed Pod.
```
/cc @janetkuo
Automatic merge from submit-queue (batch tested with PRs 57326, 60076, 60293, 59756, 60370). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix grammar issues and improve log in volume cache code
**What this PR does / why we need it**:
Fix grammar issues and improve log in volume cache code
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59882, 59434, 57722, 60320, 51249). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
[garbage collector] fix log info
typo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60054, 60202, 60219, 58090, 60275). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Improves backoff policy in JobController
**What this PR does / why we need it**:
This PR fixes issue #56853. It improves the Job backoff policy when a Job is configured to allow parallelism and a few of its pods fail while others succeed.
It now checks whether the number of succeeded pods has increased since the last check; if so, the backoff delay is cleared.
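A minimal sketch of that policy (a simplified toy, not the JobController's exact code):

```go
package main

import "fmt"

// jobBackoff sketches the policy this PR describes: track the number
// of succeeded pods between syncs, and clear the backoff delay as soon
// as that number increases.
type jobBackoff struct {
	lastSucceeded int
	retries       int
}

func (b *jobBackoff) delay(succeeded, base int) int {
	if succeeded > b.lastSucceeded {
		b.retries = 0 // progress was made: clear the backoff
	}
	b.lastSucceeded = succeeded
	d := base << uint(b.retries) // exponential backoff otherwise
	b.retries++
	return d
}

func main() {
	b := &jobBackoff{}
	fmt.Println(b.delay(0, 10)) // 10
	fmt.Println(b.delay(0, 10)) // 20
	fmt.Println(b.delay(1, 10)) // back to 10: a pod succeeded
}
```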
**Which issue(s) this PR fixes**:
Fixes #56853
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```