Automatic merge from submit-queue (batch tested with PRs 46926, 48468)
Added helper funcs to schedulercache.Resource.
**What this PR does / why we need it**:
Avoid duplicated code by adding helper funcs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46924
**Release note**:
```release-note-none
```
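As a rough illustration of the idea (toy types and a hypothetical `Add` helper, not the actual schedulercache code), a helper like this lets callers accumulate requests without repeating the per-field arithmetic:
```go
package main

import "fmt"

// Toy stand-in for schedulercache.Resource; the real type tracks more fields.
type Resource struct {
	MilliCPU int64
	Memory   int64
}

// Add folds a request into the accumulated resource so callers don't repeat
// the per-field arithmetic at every call site.
func (r *Resource) Add(req Resource) {
	r.MilliCPU += req.MilliCPU
	r.Memory += req.Memory
}

func main() {
	var total Resource
	for _, req := range []Resource{{MilliCPU: 100, Memory: 200 << 20}, {MilliCPU: 250, Memory: 512 << 20}} {
		total.Add(req)
	}
	fmt.Printf("total: %+v\n", total)
}
```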
Automatic merge from submit-queue
Cleanup predicates.go
**What this PR does / why we need it**:
Clean up some comments and uses of errors.New().
**Special notes for your reviewer**:
/cc @jayunit100
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Removes alpha feature gate for affinity annotations.
**What this PR does / why we need it**:
In 1.5 we added a backstop to support alpha affinity annotations. This PR removes that support in favor of the Beta fields per discussions.
It also serves as a precursor to some of the component config work that @ncdc has done around @mikedanese's design proposal.
xref: https://github.com/kubernetes/kubernetes/pull/41617
**Special notes for your reviewer**:
**Release note**:
```
Removes alpha feature gate for pod affinity annotations.
```
/cc @kubernetes/sig-scheduling-pr-reviews @kubernetes/sig-cluster-lifecycle-misc
Automatic merge from submit-queue (batch tested with PRs 47883, 47179, 46966, 47982, 47945)
Fix local isolation for pod requesting only overlay or scratch
**What this PR does / why we need it**:
Fix the overlay resource predicate for pods that request only overlay or scratch storage.
E.g. the following pod can pass the predicate even if the node's available overlay storage is only 512Gi.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        storage.kubernetes.io/overlay: 1024Gi
```
Similarly, the following pod will also pass the predicate:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir:
      sizeLimit: 1024Gi
```
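In both of the examples above the pod requests only local storage. A hedged sketch of the accounting the fixed predicate needs (toy types, not the scheduler's actual code): a pod's local storage footprint should include both overlay requests and emptyDir size limits, even when nothing else is requested.
```go
package main

import "fmt"

// Toy model of the accounting: a pod's local storage footprint is the sum of
// its containers' overlay requests plus its emptyDir size limits.
type pod struct {
	overlayRequests []int64 // bytes, one per container
	emptyDirLimits  []int64 // bytes, one per emptyDir volume
}

func localStorageRequest(p pod) int64 {
	var total int64
	for _, r := range p.overlayRequests {
		total += r
	}
	for _, l := range p.emptyDirLimits {
		total += l
	}
	return total
}

func fitsLocalStorage(p pod, nodeFreeBytes int64) bool {
	return localStorageRequest(p) <= nodeFreeBytes
}

func main() {
	const gi = int64(1) << 30
	overlayOnly := pod{overlayRequests: []int64{1024 * gi}}
	emptyDirOnly := pod{emptyDirLimits: []int64{1024 * gi}}
	nodeFree := 512 * gi
	fmt.Println(fitsLocalStorage(overlayOnly, nodeFree))  // false once overlay requests are counted
	fmt.Println(fitsLocalStorage(emptyDirOnly, nodeFree)) // false once emptyDir limits are counted
}
```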
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/47798
**Special notes for your reviewer**:
**Release note**:
```release-note
```
@jingxu97 @vishh @dashpole
Automatic merge from submit-queue (batch tested with PRs 45877, 46846, 46630, 46087, 47003)
Remove duplicate errors from an aggregate error input.
This PR, in general, removes duplicate errors from an aggregate error input and returns unique errors with their occurrence count. Specifically, this PR helps with some scheduler errors that fill the log enormously. For example, see the following `truncated` output from a 300-plus node cluster, where the same error came from almost all nodes.
[SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected., SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found.........
After this PR, the output looks like (on a 2-node cluster):
SchedulerPredicates failed due to persistentvolumeclaims "mongodb" not found, which is unexpected.(Count=2)
@derekwaynecarr @smarterclayton @kubernetes/sig-scheduling-pr-reviews
Fixes https://github.com/kubernetes/kubernetes/issues/47145
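A minimal sketch of the deduplication idea (a hypothetical helper, not the actual aggregate-error change): collapse repeated messages and report each unique one with its count.
```go
package main

import (
	"errors"
	"fmt"
)

// dedupErrors collapses repeated error strings from an aggregate into unique
// messages annotated with an occurrence count, preserving first-seen order.
func dedupErrors(errs []error) []string {
	counts := map[string]int{}
	order := []string{}
	for _, err := range errs {
		msg := err.Error()
		if counts[msg] == 0 {
			order = append(order, msg)
		}
		counts[msg]++
	}
	out := make([]string, 0, len(order))
	for _, msg := range order {
		out = append(out, fmt.Sprintf("%s (Count=%d)", msg, counts[msg]))
	}
	return out
}

func main() {
	err := errors.New(`persistentvolumeclaims "mongodb" not found`)
	for _, line := range dedupErrors([]error{err, err, err}) {
		fmt.Println(line)
	}
}
```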
Automatic merge from submit-queue (batch tested with PRs 47083, 44115, 46881, 47082, 46577)
Scheduler should not log an error when there is no fit
**What this PR does / why we need it**:
The scheduler should not log an error when it is unable to find a fit for a pod, as this is an expected situation when no resources in the cluster satisfy the pod's requirements.
Automatic merge from submit-queue (batch tested with PRs 46787, 46876, 46621, 46907, 46819)
Highlight nodeSelector when checking nodeSelector for Pod.
**What this PR does / why we need it**:
Currently, the function `PodSelectorMatches` checks whether the `nodeSelector` matches for a Pod; it is better to update the function name to reflect that it checks the `nodeSelector` for a Pod.
The proposal is to rename `PodSelectorMatches` to `PodMatchNodeSelector`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add local storage (scratch space) allocatable support
This PR adds support for allocatable local storage (scratch space).
This feature only covers the root file system, which is shared by Kubernetes
components, users' containers, and/or images. Users can use the
--kube-reserved flag to reserve storage for kube system components.
If the allocatable storage for users' pods is used up, some pods will be
evicted to free the storage resource.
This feature is part of local storage capacity isolation and described in the proposal https://github.com/kubernetes/community/pull/306
**Release note**:
```release-note
This feature exposes local storage capacity for the primary partitions, and supports & enforces storage reservation in Node Allocatable
```
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)
move labels to components which own the APIs
During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery. They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet. This pull moves the labels into their owning components and out of API machinery.
@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers
@derekwaynecarr since most of these are related to the kubelet
This PR adds a check on the local storage request when admitting pods. If
the local storage request exceeds the available resource, the pod will be
rejected.
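A toy illustration of these two pieces together (simplified byte accounting, not the kubelet's real types): allocatable storage is capacity minus the kube-reserved amount, and a pod whose request no longer fits is rejected.
```go
package main

import "fmt"

type nodeStorage struct {
	capacity     int64 // bytes on the primary partition
	kubeReserved int64 // reserved for system components (e.g. via --kube-reserved)
	used         int64 // already consumed by running pods
}

// allocatable is what remains for user pods after the system reservation.
func (n nodeStorage) allocatable() int64 { return n.capacity - n.kubeReserved }

// admit rejects a pod whose local storage request exceeds what is still free.
func admit(n nodeStorage, podRequest int64) error {
	if n.used+podRequest > n.allocatable() {
		return fmt.Errorf("insufficient local storage: request %d, available %d",
			podRequest, n.allocatable()-n.used)
	}
	return nil
}

func main() {
	const gi = int64(1) << 30
	n := nodeStorage{capacity: 100 * gi, kubeReserved: 10 * gi, used: 80 * gi}
	fmt.Println(admit(n, 5*gi))  // fits
	fmt.Println(admit(n, 20*gi)) // rejected
}
```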
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)
Local storage plugin
**What this PR does / why we need it**:
Volume plugin implementation for local persistent volumes. Scheduler predicate will direct already-bound PVCs to the node that the local PV is at. PVC binding still happens independently.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part of #43640
**Release note**:
```
Alpha feature: Local volume plugin allows local directories to be created and consumed as a Persistent Volume. These volumes have node affinity and pods will only be scheduled to the node that the volume is at.
```
Automatic merge from submit-queue
Move hardPodAffinitySymmetricWeight to scheduler policy config
**What this PR does / why we need it**:
Move hardPodAffinitySymmetricWeight to scheduler policy config
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43845
**Special notes for your reviewer**:
If you like this approach, I will add tests later.
**Release note**:
```
Move hardPodAffinitySymmetricWeight from KubeSchedulerConfiguration to scheduler Policy config
```
Automatic merge from submit-queue
removing generic_scheduler todo after discussion (#46027)
**What this PR does / why we need it**:
**Which issue this PR fixes** #46027
**Special notes for your reviewer**: just a quick cleanup, cc @wojtek-t
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 45766, 46223)
Scheduler should use a shared informer, and fix broken watch behavior for cached watches
Can be used either from a true shared informer or a local shared
informer created just for the scheduler.
Fixes a bug in the cache watcher where we were returning the "current" object from a watch event instead of the historic event, which means we broke behavior when introducing the watch cache. This may have API implications for filtering watch consumers - but on the other hand, the old behavior prevented filtering clients from correctly seeing objects leave their watch, which can lead to other subtle bugs.
```release-note
The behavior of some watch calls to the server when filtering on fields was incorrect. If watching objects with a filter, when an update was made that no longer matched the filter a DELETE event was correctly sent. However, the object that was returned by that delete was not the (correct) version before the update, but instead, the newer version. That meant the new object was not matched by the filter. This was a regression from behavior between cached watches on the server side and uncached watches, and thus broke downstream API clients.
```
Automatic merge from submit-queue
Moved qos to api.helpers.
**What this PR does / why we need it**:
`GetPodQoS` is also used by other components, e.g. kube-scheduler, and is not bound to the kubelet; moved it to the API helpers so it can also be consumed via client-go.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #N/A
**Release note**:
```release-note-none
```
Automatic merge from submit-queue
Disambiguate schedule, assume, and bind in functions as well as in
Addresses my comments in #45972 about how these metrics need to be disambiguated.
- separates schedule, assume, and bind.
- renames variables like `dest` to be explicit.
- moves the logging statement so it occurs outside of the timed portion of the metric measurement.
Generally makes `scheduleOne` a happy function to read :)
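A generic sketch of that separation (illustrative only, not the scheduler's metrics code): each phase is timed on its own, and logging happens after the measurement so it is not counted in the latency.
```go
package main

import (
	"fmt"
	"log"
	"time"
)

// timedPhase measures fn and logs only after the measurement ends, so the log
// write itself is never part of the reported latency.
func timedPhase(name string, fn func()) time.Duration {
	start := time.Now()
	fn()
	elapsed := time.Since(start)
	// Logging happens outside the measured region.
	log.Printf("%s took %v", name, elapsed)
	return elapsed
}

func main() {
	scheduleLatency := timedPhase("schedule", func() { time.Sleep(2 * time.Millisecond) })
	bindLatency := timedPhase("bind", func() { time.Sleep(3 * time.Millisecond) })
	fmt.Println("e2e ~ schedule + bind:", scheduleLatency+bindLatency)
}
```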
Previously, the scheduler created two separate list watchers. This
changes the scheduler to be able to leverage a shared informer, whether
passed in externally or spawned using the new in place method. This
removes the last use of a "special" informer in the codebase.
Allows someone wrapping the scheduler to use a shared informer if they
have more information available.
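For reference, a minimal client-go sketch of the shared-informer pattern described above (generic wiring, not the scheduler's actual setup; the kubeconfig path is assumed to come from a KUBECONFIG variable):
```go
package main

import (
	"fmt"
	"os"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path assumed for illustration).
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One factory can be shared by every component that needs pod/node data.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if pod, ok := obj.(*v1.Pod); ok {
				fmt.Println("observed pod:", pod.Namespace+"/"+pod.Name)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block; a real consumer would do work with the shared caches
}
```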
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)
Removed old scheduler constructor.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note-none
```
Automatic merge from submit-queue (batch tested with PRs 44727, 45409, 44968, 45122, 45493)
Total priority overflow check
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #24720
**Special notes for your reviewer**:
@adohe: I have borrowed some parts of your code from the closed PR and created this one.
**Release note**:
```release-note
This fixes the overflow for the priority config; the valid weight range is {1, 9223372036854775806}.
```
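A hedged sketch of the kind of check involved (toy code, not the actual validation): reject non-positive weights and refuse additions that would wrap past MaxInt64, which is where the {1, 9223372036854775806} range in the release note comes from.
```go
package main

import (
	"fmt"
	"math"
)

// validateWeight rejects non-positive weights and additions that would
// overflow the running int64 total.
func validateWeight(total int64, weight int64) (int64, error) {
	if weight <= 0 {
		return total, fmt.Errorf("priority weight %d is not positive", weight)
	}
	if total > math.MaxInt64-weight {
		return total, fmt.Errorf("adding weight %d overflows the total priority", weight)
	}
	return total + weight, nil
}

func main() {
	var total int64
	for _, w := range []int64{1, 9223372036854775806} {
		var err error
		if total, err = validateWeight(total, w); err != nil {
			fmt.Println("rejected:", err)
			return
		}
	}
	fmt.Println("total weight:", total)
}
```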
Automatic merge from submit-queue (batch tested with PRs 45100, 45152, 42513, 44796, 45222)
Added InterPodAffinity unit test case with Namespace.
**What this PR does / why we need it**:
Added InterPodAffinity unit test case with Namespace: unit test case for #45098
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note-none
```
Automatic merge from submit-queue
Align Extender's validation with prioritizers.
**What this PR does / why we need it**:
Align Extender's validation with prioritizers.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note-none
```
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
No need to check for nil, because it has already been checked before
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Improved code coverage for plugin/pkg/scheduler/algorithm
**What this PR does / why we need it**:
Part of #39559; code coverage improved from 0% to 100%.
**Special notes for your reviewer**:
Improved coverage for scheduler/algorithm to 100%
Test cover output:
```
make test WHAT=./plugin/pkg/scheduler/algorithm KUBE_COVER=y
Running tests for APIVersion: v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1
+++ [0302 10:43:05] Saving coverage output in '/tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170302-104305'
skipped k8s.io/kubernetes/cmd/libs/go2idl/generator
skipped k8s.io/kubernetes/vendor/k8s.io/client-go/1.4/rest
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm 0.061s coverage: 100.0% of statements
+++ [0302 10:43:07] Combined coverage report: /tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170302-104305/combined-coverage.html
```
Automatic merge from submit-queue
Prepare for move zz_generated_deepcopy.go to k8s.io/api
This is in preparation for moving the deep copies, along with the types, to the types repo (see https://github.com/kubernetes/gengo/pull/47#issuecomment-296855818). The init() function refers to the `SchemeBuilder` defined in register.go in the same package, so we need to revert the dependency.
This PR depends on https://github.com/kubernetes/gengo/pull/49, otherwise verification will fail.
Automatic merge from submit-queue (batch tested with PRs 43900, 44152, 44324)
Fix: check "ok" first to avoid panic
Check "ok" and then check if "currState.pod.Spec.NodeName != pod.Spec.NodeName", here if currState is nil, it will panic.
**Release note**:
```release-note
NONE
```
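A toy version of the comma-ok pattern being fixed here (illustrative types, not the scheduler cache's real ones):
```go
package main

import "fmt"

type podState struct {
	nodeName string
}

func main() {
	states := map[string]*podState{"default/web": {nodeName: "node-1"}}

	key := "default/db" // not in the cache
	// Checking ok first guarantees currState is non-nil before it is dereferenced.
	if currState, ok := states[key]; ok && currState.nodeName != "node-2" {
		fmt.Println("pod is assigned to a different node than expected")
	} else if !ok {
		// Without the ok guard, currState would be nil here and dereferencing
		// it (as the real code does with currState.pod.Spec.NodeName) would panic.
		fmt.Println("pod not cached; skipping the node comparison")
	}
}
```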
Automatic merge from submit-queue
Scheduler can receive its policy configuration from a ConfigMap
**What this PR does / why we need it**: This PR adds the ability for the scheduler to receive its policy configuration from a ConfigMap. Before this, the scheduler could receive its policy config only from a file. The logic to watch the ConfigMap object will be added in a subsequent PR.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Add the ability to the default scheduler to receive its policy configuration from a ConfigMap object.
```
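As a rough sketch of what consuming the policy from a ConfigMap can look like (the policy struct and the `policy.cfg` key are made up for illustration; a ConfigMap's `Data` field is a `map[string]string`):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, trimmed-down policy shape for illustration only; the real
// scheduler policy type lives in the scheduler's API package.
type policy struct {
	Kind       string `json:"kind"`
	Predicates []struct {
		Name string `json:"name"`
	} `json:"predicates"`
}

func main() {
	// Assume the policy JSON is stored under a well-known ConfigMap data key.
	configMapData := map[string]string{
		"policy.cfg": `{"kind":"Policy","predicates":[{"name":"PodFitsResources"}]}`,
	}

	var p policy
	if err := json.Unmarshal([]byte(configMapData["policy.cfg"]), &p); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %s with %d predicate(s)\n", p.Kind, len(p.Predicates))
}
```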
Automatic merge from submit-queue (batch tested with PRs 41775, 39678, 42629, 42524, 43028)
Aggregated used ports at the NodeInfo level.
fixes #42523
```release-note
Aggregated used ports at the NodeInfo level for `PodFitsHostPorts` predicate.
```
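A toy illustration of the aggregation (simplified to port numbers only; the real accounting also considers IP and protocol): once used host ports are kept on the node-level cache entry, the `PodFitsHostPorts`-style check becomes a set lookup instead of re-walking every pod.
```go
package main

import "fmt"

type nodeInfo struct {
	usedPorts map[int32]bool
}

// addPod records the host ports a pod occupies on this node.
func (n *nodeInfo) addPod(hostPorts []int32) {
	for _, p := range hostPorts {
		n.usedPorts[p] = true
	}
}

// fitsHostPorts reports whether a new pod's host ports are all free.
func (n *nodeInfo) fitsHostPorts(wanted []int32) bool {
	for _, p := range wanted {
		if p != 0 && n.usedPorts[p] {
			return false
		}
	}
	return true
}

func main() {
	n := &nodeInfo{usedPorts: map[int32]bool{}}
	n.addPod([]int32{8080})
	fmt.Println(n.fitsHostPorts([]int32{8080})) // false: port already taken
	fmt.Println(n.fitsHostPorts([]int32{9090})) // true
}
```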
If we look at the scheduling metrics (SchedulingAlgorithmLatency, E2eSchedulingLatency, BindingLatency), E2eSchedulingLatency should be the sum of SchedulingAlgorithmLatency and BindingLatency, yet we found that E2eSchedulingLatency is almost the same as SchedulingAlgorithmLatency because of some optimization.
Automatic merge from submit-queue
Fix a typo
Fix a typo
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 42662, 43035, 42578, 43682)
Selector spreading - improving code readability.
**What this PR does / why we need it**:
To improve code readability in selector spreading.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42577
```release-note
```
Automatic merge from submit-queue
Add plugin/pkg/scheduler to linted packages
**What this PR does / why we need it**:
Adds plugin/pkg/scheduler to linted packages to improve style correctness.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #41868
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)
Changes for removing deadcode in taint_tolerations
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43007
Code cleanup with some modifications and a test case in taints and tolerations
Removed unnecessary code from my last commit
Suggested changes for taints_tolerations
Changes for removing dead code in taint_tolerations
Small changes again
Small changes for clearer documentation
Automatic merge from submit-queue (batch tested with PRs 40404, 43134, 43117)
Improve code coverage for scheduler/api/validation
**What this PR does / why we need it**:
Improve code coverage for scheduler/api/validation from #39559
Thanks
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 42762, 42739, 42425, 42778)
Fixed potential OutOfSync of nodeInfo.
The cloned NodeInfo still shares the same resource objects as the cache; this may make `requestedResource` and Pods go out of sync. For example, if a pod was deleted, `requestedResource` is updated but the Pods in the cloned info are not. Found this when investigating #32531, but it seems not to be the root cause, as nodeInfo is read-only in predicates & priorities.
Sample code for `&(*)`:
```go
package main

import (
	"fmt"
)

type Resource struct {
	A int
}

type Node struct {
	Res *Resource
}

func main() {
	r1 := &Resource{A: 10}
	n1 := &Node{Res: r1}
	r2 := &(*n1.Res)
	r2.A = 11
	fmt.Printf("%t, %d %d\n", r1 == r2, r1, r2)
}
```
Output:
```
true, &{11} &{11}
```
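Reusing the toy types from the sample above, a hedged sketch of the value-copy fix (not the actual schedulercache clone code): copying the pointed-to value gives the clone its own Resource, so mutations no longer leak back into the cache.
```go
package main

import "fmt"

type Resource struct{ A int }

type Node struct{ Res *Resource }

// Clone copies the pointed-to value, so mutating the clone no longer reaches
// the original cache entry.
func (n *Node) Clone() *Node {
	res := *n.Res
	return &Node{Res: &res}
}

func main() {
	r1 := &Resource{A: 10}
	n1 := &Node{Res: r1}
	n2 := n1.Clone()
	n2.Res.A = 11
	fmt.Println(r1.A, n2.Res.A) // 10 11: the original is untouched
}
```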
- Added schedulercache.Resource.SetOpaque helper.
- Amend kubelet allocatable sync so that when OIRs are removed from capacity
they are also removed from allocatable.
- Fixes #41861.
Automatic merge from submit-queue (batch tested with PRs 42200, 39535, 41708, 41487, 41335)
Add support for statefulset spreading to the scheduler
**What this PR does / why we need it**:
The scheduler SelectorSpread priority function didn't have the code to spread pods of StatefulSets. This PR adds StatefulSets to the list of controllers that SelectorSpread supports.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #41513
**Special notes for your reviewer**:
**Release note**:
```release-note
Add the support to the scheduler for spreading pods of StatefulSets.
```
Automatic merge from submit-queue (batch tested with PRs 41937, 41151, 42092, 40269, 42135)
Improve code coverage for scheduler/algorithmprovider/defaults
**What this PR does / why we need it**:
Improve code coverage for scheduler/algorithmprovider/defaults from #39559
Thanks for your review.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 41954, 40528, 41875, 41165, 41877)
Move the scheduler Fake* to testing folder
Address this issue: https://github.com/kubernetes/kubernetes/issues/41662
@timothysc I moved the interface to types.go and the Fake* to the testing package; not sure whether you like it or not.
Automatic merge from submit-queue (batch tested with PRs 41701, 41818, 41897, 41119, 41562)
Optimization of on-wire information sent to scheduler extender interfaces that are stateful
The changes are to address the issue described in https://github.com/kubernetes/kubernetes/issues/40991
```release-note
Added support to minimize sending verbose node information to scheduler extender by sending only node names and expecting extenders to cache the rest of the node information
```
cc @ravigadde
Automatic merge from submit-queue (batch tested with PRs 41701, 41818, 41897, 41119, 41562)
Allow updates to pod tolerations.
Opening this PR to continue discussion for pod spec tolerations updates when a pod has been scheduled already. This PR is built on top of https://github.com/kubernetes/kubernetes/pull/38957.
@kubernetes/sig-scheduling-pr-reviews @liggitt @davidopp @derekwaynecarr @kubernetes/rh-cluster-infra
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)
Guaranteed admission for Critical Pods
This is the first step in implementing node-level preemption for critical pods.
It defines the AdmissionFailureHandler interface, which allows callers, like the kubelet, to define how failed predicates are handled, and take steps to correct failures if necessary.
In the kubelet's implementation, it triggers preemption if the pod being admitted is critical: if the only failed predicates are InsufficientResourceErrors, it preempts (not yet implemented) other pods to allow admission of the critical pod.
cc: @vishh
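A purely illustrative sketch of the handler idea (hypothetical names and signature; the real AdmissionFailureHandler interface in the kubelet may differ): only resource-shortage failures are treated as recoverable, and recovery would preempt other pods.
```go
package main

import "fmt"

type predicateFailure struct{ reason string }

// admissionFailureHandler is a toy shape for the extension point described
// above: callers decide what to do when a pod fails admission predicates.
type admissionFailureHandler interface {
	HandleAdmissionFailure(podName string, failures []predicateFailure) (admit bool, err error)
}

// criticalPodHandler treats only resource shortages as recoverable (by
// preempting); any other failure keeps the pod rejected.
type criticalPodHandler struct{}

func (criticalPodHandler) HandleAdmissionFailure(podName string, failures []predicateFailure) (bool, error) {
	for _, f := range failures {
		if f.reason != "InsufficientResources" {
			return false, nil
		}
	}
	fmt.Printf("would preempt other pods to admit critical pod %s\n", podName)
	return true, nil
}

func main() {
	var h admissionFailureHandler = criticalPodHandler{}
	ok, _ := h.HandleAdmissionFailure("kube-dns", []predicateFailure{{reason: "InsufficientResources"}})
	fmt.Println("admitted after preemption:", ok)
}
```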
Extenders that maintain their own node cache don't need to get all the
information about every candidate node. For huge clusters, sending full
node information for all nodes in the cluster on the wire every time a pod
is scheduled is expensive. If the extender is capable of caching node
information along with its capabilities, sending the node name alone is
sufficient. These changes provide that optimization in a backward-compatible way
- removed the inadvertent signature change of Prioritize() function
- added defensive checks as suggested
- added annotation specific test case
- updated the comments in the scheduler types
- got rid of apiVersion that's unused
- using *v1.NodeList as suggested
- backing out pod annotation update related changes made in the
1st commit
- Adjusted the comments in types.go and v1/types.go as suggested
in the code review
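A toy rendering of the on-wire trade-off described above (field and type names are illustrative, not the real extender API): a stateless extender needs full node objects, while a caching extender can be sent node names alone.
```go
package main

import (
	"encoding/json"
	"fmt"
)

type toyNode struct {
	Name     string            `json:"name"`
	Labels   map[string]string `json:"labels"`
	Capacity map[string]string `json:"capacity"`
}

// extenderArgs mimics the payload choice: either full node objects for a
// stateless extender, or just node names for one that caches node data.
type extenderArgs struct {
	PodName   string    `json:"podName"`
	Nodes     []toyNode `json:"nodes,omitempty"`
	NodeNames []string  `json:"nodeNames,omitempty"`
}

func main() {
	compact := extenderArgs{PodName: "web-0", NodeNames: []string{"node-1", "node-2"}}
	payload, err := json.Marshal(compact)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload)) // far smaller than shipping every node object
}
```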
Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
Automatic merge from submit-queue
Add scheduler predicate to filter for max Azure disks attached
**What this PR does / why we need it**: This PR adds a scheduler predicate for the maximum Azure Disk count. This allows the environment variable KUBE_MAX_PD_VOLS to be used with the scheduler, the same as is already possible with GCE and AWS.
This is needed because we need a way to specify the maximum number of attachable disks on Azure, to avoid permanently failing disk attachments when k8s schedules too many pods with AzureDisk volumes onto the same node.
I've chosen 16 as the default value for DefaultMaxAzureDiskVolumes, even though it may be too high for many smaller VM types and too low for the larger VM types. This means the default behavior may change for clusters with large VM types. For smaller VM types, the behavior will not change (it will keep failing attaching).
In the future, the value should be determined at run time on a per node basis, depending on the VM size. I know that this is already implemented in the ongoing Azure Managed Disks work, but I don't remember where to find this anymore and also forgot who was working on this. Maybe @colemickens can help here.
**Release note**:
```release-note
Support KUBE_MAX_PD_VOLS on Azure
```
CC @colemickens @brendandburns
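A hedged sketch of the limit logic (toy code; only KUBE_MAX_PD_VOLS and the default of 16 come from the description above):
```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

const defaultMaxAzureDiskVolumes = 16

// maxAzureDisks reads an operator-supplied limit, falling back to the default.
func maxAzureDisks() int {
	if v := os.Getenv("KUBE_MAX_PD_VOLS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return defaultMaxAzureDiskVolumes
}

// fitsAzureDiskLimit refuses placement when the pod's disks would push the
// node past the attach limit.
func fitsAzureDiskLimit(attachedOnNode, requestedByPod int) bool {
	return attachedOnNode+requestedByPod <= maxAzureDisks()
}

func main() {
	fmt.Println(fitsAzureDiskLimit(15, 1)) // true with the default limit of 16
	fmt.Println(fitsAzureDiskLimit(16, 1)) // false: node is already at the limit
}
```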
Automatic merge from submit-queue
Adjust nodiskconflict support based on iscsi multipath.
With multipath support in place, to determine whether two iSCSI disks are the same, we only need to depend on the IQN.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Automatic merge from submit-queue
Improved code coverage for plugin/pkg/scheduler/algorithm/priorities/most_requested.go
**What this PR does / why we need it**:
Part of #39559; code coverage improved from 70+% to 80+%.
Automatic merge from submit-queue (batch tested with PRs 41378, 41413, 40743, 41155, 41385)
'core' package to prevent dependency creep and isolate core functiona…
**What this PR does / why we need it**:
Solves these two problems:
- The top-level scheduler root directory has several files in it that are really needed by the factory and algorithm implementations, so they should be subpackages of scheduler.
- In addition, scheduler.go and generic_scheduler.go don't naturally differentiate themselves when they are in the same package; scheduler.go is essentially the daemon entry point, so it should be isolated from the core.
*No release note needed*
Automatic merge from submit-queue
Removed a space in portforward.go.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 40696, 39914, 40374)
Forgiveness library changes
**What this PR does / why we need it**:
Split from #34825, this contains library changes that are needed to implement forgiveness:
1. ~~make taints-tolerations matching respect timestamps, so that one toleration can just tolerate a taint for only a period of time.~~ As TaintManager is caching taints and observing taint changes, time-based checking is now outside the library (in TaintManager). see #40355.
2. make tolerations respect wildcard key.
3. add/refresh some related functions to wrap taints-tolerations operation.
**Which issue this PR fixes**:
Related issue: #1574
Related PR: #34825, #39469
~~Please note that the first 2 commits in this PR come from #39469 .~~
**Special notes for your reviewer**:
~~Since currently we have `pkg/api/helpers.go` and `pkg/api/v1/helpers.go`, there are some duplicated pieces of code lying in these two files.~~
~~Ideally we should move taints-tolerations related functions into a separate package (pkg/util/taints) and make it a unified set of implementations. But I'd suggest doing that in a follow-up PR after the Forgiveness ones are done, to avoid blocking the Forgiveness feature for too long.~~
**Release note**:
```release-note
make tolerations respect wildcard key
```
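A toy model of the wildcard rule in the release note (simplified; effects and the full toleration semantics are ignored): an empty key with operator `Exists` tolerates every taint.
```go
package main

import "fmt"

type taint struct{ key, value string }

type toleration struct{ key, operator, value string }

// tolerates applies the simplified rule: with "Exists", an empty key acts as
// a wildcard; with "Equal", key and value must both match.
func tolerates(tol toleration, t taint) bool {
	if tol.operator == "Exists" {
		return tol.key == "" || tol.key == t.key
	}
	return tol.key == t.key && tol.value == t.value
}

func main() {
	wildcard := toleration{operator: "Exists"} // empty key
	exact := toleration{key: "dedicated", operator: "Equal", value: "gpu"}

	fmt.Println(tolerates(wildcard, taint{key: "node.alpha/unreachable"})) // true
	fmt.Println(tolerates(exact, taint{key: "dedicated", value: "gpu"}))   // true
	fmt.Println(tolerates(exact, taint{key: "dedicated", value: "db"}))    // false
}
```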
Automatic merge from submit-queue (batch tested with PRs 40405, 38601, 40083, 40730)
fix typo
**What this PR does / why we need it**:
fix typo.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 34543, 40606)
sync client-go and move util/workqueue
The vision of client-go is that it provides enough utilities to build a reasonable controller. It has been copying `util/workqueue`. This makes it authoritative.
@liggitt I'm getting really close to making client-go authoritative ptal.
approved based on https://github.com/kubernetes/kubernetes/issues/40363
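For context, a minimal sketch of the workqueue building block this makes authoritative (illustrative usage of `k8s.io/client-go/util/workqueue`, not tied to any particular controller):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// Producers Add keys, a worker Gets them one at a time, and Done marks them
// finished so the queue can coalesce duplicate adds for a pending key.
func main() {
	queue := workqueue.New()
	queue.Add("default/web")
	queue.Add("default/web") // coalesced while the first add is still pending
	queue.Add("default/db")

	finished := make(chan struct{})
	go func() {
		defer close(finished)
		for {
			key, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", key)
			queue.Done(key)
		}
	}()

	queue.ShutDown() // queued keys are still handed out before Get reports shutdown
	<-finished
}
```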