The node cache doesn't need all the information about every
candidate node. For huge clusters, sending full node information
for every node in the cluster over the wire each time a pod is scheduled
is expensive. If the scheduler is capable of caching node information
along with its capabilities, sending the node name alone is sufficient.
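For illustration, a minimal sketch of the argument shape under this scheme; the field names follow the upstream scheduler extender API but are simplified here, and the `extender` package name is assumed:

```go
package extender

import v1 "k8s.io/api/core/v1"

// ExtenderArgs carries the pod plus either full node objects or, for
// extenders that cache node information, just the node names.
type ExtenderArgs struct {
	// Pod is the pod being scheduled.
	Pod v1.Pod
	// Nodes is the full list of candidate nodes; left nil when the
	// extender has declared that it caches node information.
	Nodes *v1.NodeList
	// NodeNames lists candidate nodes by name only, which is all a
	// caching extender needs on the wire.
	NodeNames *[]string
}
```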
These changes provide that optimization in a backward-compatible way.
- removed the inadvertent signature change of the Prioritize() function
- added defensive checks as suggested
- added an annotation-specific test case
- updated the comments in the scheduler types
- removed the unused apiVersion
- using *v1.NodeList as suggested
- backed out the pod annotation update related changes made in the
first commit
- adjusted the comments in types.go and v1/types.go as suggested
in the code review
Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
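As a rough sketch of the generated-lister pattern (using present-day client-go import paths, which are an assumption; the tree at the time vendored its own generated informers):

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One shared factory; every component gets listers backed by the
	// same watch-driven cache.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodeLister := factory.Core().V1().Nodes().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// List from the local cache instead of hitting the API server on
	// every scheduling cycle.
	nodes, err := nodeLister.List(labels.Everything())
	if err != nil {
		panic(err)
	}
	_ = nodes
}
```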
Automatic merge from submit-queue
Add scheduler predicate to filter for max Azure disks attached
**What this PR does / why we need it**: This PR adds a scheduler predicate for the maximum Azure Disk count. This allows the environment variable KUBE_MAX_PD_VOLS to be used with the scheduler, just as is already possible with GCE and AWS.
This is needed because we need a way to specify the maximum number of attachable disks on Azure, to avoid permanently failing disk attachments when Kubernetes schedules too many pods with AzureDisk volumes onto the same node.
I've chosen 16 as the default value for DefaultMaxAzureDiskVolumes, even though it may be too high for many smaller VM types and too low for the larger ones. This means the default behavior may change for clusters with large VM types; for smaller VM types the behavior will not change (attachment will keep failing).
In the future, the value should be determined at run time on a per-node basis, depending on the VM size. I know this is already implemented in the ongoing Azure Managed Disks work, but I no longer remember where to find it and also forgot who was working on it. Maybe @colemickens can help here.
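A minimal sketch of the counting logic, mirroring the existing GCE/AWS MaxPDVolumeCountChecker pattern; the helper names `maxAzureDiskVolumes` and `podFitsAzureDiskCount` are illustrative, not the exact upstream identifiers:

```go
package predicates

import (
	"os"
	"strconv"
)

// DefaultMaxAzureDiskVolumes mirrors the default chosen in this PR.
const DefaultMaxAzureDiskVolumes = 16

// maxAzureDiskVolumes reads KUBE_MAX_PD_VOLS, falling back to the default.
func maxAzureDiskVolumes() int {
	if v := os.Getenv("KUBE_MAX_PD_VOLS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return DefaultMaxAzureDiskVolumes
}

// podFitsAzureDiskCount reports whether scheduling a pod that carries
// podVolumes AzureDisk volumes onto a node that already has nodeVolumes
// attached stays within the limit.
func podFitsAzureDiskCount(podVolumes, nodeVolumes int) bool {
	return podVolumes+nodeVolumes <= maxAzureDiskVolumes()
}
```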
**Release note**:
```release-note
Support KUBE_MAX_PD_VOLS on Azure
```
CC @colemickens @brendandburns
Automatic merge from submit-queue
Adjust nodiskconflict support based on iscsi multipath.
With multipath support in place, we only need to depend on the IQN to decide whether two iSCSI disks are the same.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
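A minimal sketch of the relaxed comparison, assuming the `v1.ISCSIVolumeSource` type; the helper name is illustrative:

```go
package predicates

import v1 "k8s.io/api/core/v1"

// iscsiDisksAreSame reports whether two iSCSI volume sources refer to
// the same disk. TargetPortal is deliberately excluded: with multipath,
// one disk is reachable through several portals, so the IQN alone
// identifies it.
func iscsiDisksAreSame(a, b *v1.ISCSIVolumeSource) bool {
	return a.IQN == b.IQN
}
```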
Automatic merge from submit-queue
Improved code coverage for plugin/pkg/scheduler/algorithm/priorities/most_requested.go
**What this PR does / why we need it**:
Part of #39559; code coverage improved from 70+% to 80+%.
Automatic merge from submit-queue
Removed a space in portforward.go.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 40405, 38601, 40083, 40730)
fix typo
**What this PR does / why we need it**:
fix typo.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 34543, 40606)
sync client-go and move util/workqueue
The vision of client-go is that it provides enough utilities to build a reasonable controller. It has been carrying a copy of `util/workqueue`; this change makes client-go's copy authoritative.
@liggitt I'm getting really close to making client-go authoritative, ptal.
approved based on https://github.com/kubernetes/kubernetes/issues/40363
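For reference, a minimal controller-style sketch against the client-go copy of the package (the rate-limiting queue constructor shown here is the standard client-go one):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()

	// A controller enqueues object keys from informer event handlers.
	queue.Add("default/my-pod")

	key, shutdown := queue.Get()
	if shutdown {
		return
	}
	defer queue.Done(key)

	fmt.Printf("processing %v\n", key)
	// On success, clear the rate-limiting history for this key.
	queue.Forget(key)
}
```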
Automatic merge from submit-queue
Don't require failureDomains in PodAffinityChecker
`failureDomains` are only used for `PreferredDuringScheduling` pod
anti-affinity, which is ignored by `PodAffinityChecker`.
This unnecessary requirement was making it hard to move
`PodAffinityChecker` to `GeneralPredicates` because that would require
passing `--failure-domains` to both `kubelet` and `kube-controller-manager`.
Automatic merge from submit-queue (batch tested with PRs 40543, 39999)
Improve code coverage for scheduler/algorithm/priorities
**What this PR does / why we need it**:
Improve code coverage for scheduler/algorithm/priorities from #39559
This is my first unit test for Kubernetes; thanks for your review.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 36693, 40154, 40170, 39033)
Minor hygiene in scheduler.
**What this PR does / why we need it**:
Minor cleanups in scheduler, related to PR #31652.
- Unified lazy opaque resource caching.
- Deleted a commented-out line of code.
**Release note**:
```release-note
N/A
```
Automatic merge from submit-queue (batch tested with PRs 39945, 39601)
bugfix for PodToleratesNodeTaints
The `PodToleratesNodeTaints` predicate function should return true if the pod has no toleration annotations and the node's taint effect is `PreferNoSchedule`.
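A sketch of the fixed behavior; `podToleratesNodeTaints` and `anyTolerationTolerates` are illustrative names, not the upstream identifiers:

```go
package predicates

import v1 "k8s.io/api/core/v1"

// podToleratesNodeTaints returns true when every hard taint on the node
// is tolerated. PreferNoSchedule is a soft effect handled by the
// priority functions, so it must never fail this predicate; that is
// exactly the case the bugfix covers for pods with no tolerations.
func podToleratesNodeTaints(tolerations []v1.Toleration, taints []v1.Taint) bool {
	for i := range taints {
		if taints[i].Effect == v1.TaintEffectPreferNoSchedule {
			continue // soft taint: never blocks scheduling here
		}
		if !anyTolerationTolerates(tolerations, &taints[i]) {
			return false
		}
	}
	return true
}

func anyTolerationTolerates(tolerations []v1.Toleration, taint *v1.Taint) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}
```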
Automatic merge from submit-queue
Fix comment and optimize code
**What this PR does / why we need it**:
Fix comment and optimize code.
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Avoid unnecessary memory allocations
Low-hanging fruit in saving memory allocations. During our 5000-node kubemark runs I've seen this:
ControllerManager:
- 40.17% k8s.io/kubernetes/pkg/util/system.IsMasterNode
- 19.04% k8s.io/kubernetes/pkg/controller.(*PodControllerRefManager).Classify
Scheduler:
- 42.74% k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates.(*MaxPDVolumeCountChecker).filterVolumes
This PR eliminates all of those.
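The general shape of these fixes is hoisting per-call allocations out of the hot path. An illustrative before/after (the regexp here is a stand-in, not the actual IsMasterNode implementation):

```go
package system

import "regexp"

// Before: compiled on every call, allocating in the scheduling hot path.
func isMasterNodeSlow(name string) bool {
	return regexp.MustCompile(`-master(-\d+)?$`).MatchString(name)
}

// After: compiled once at package init, zero allocations per call.
var masterNameRE = regexp.MustCompile(`-master(-\d+)?$`)

func isMasterNode(name string) bool {
	return masterNameRE.MatchString(name)
}
```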
Automatic merge from submit-queue
Optimize pod affinity predicate
Optimize by returning as early as possible to avoid invoking priorityutil.PodMatchesTermsNamespaceAndSelector.
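The shape of the change, abbreviated and with simplified signatures (a sketch, not the upstream code):

```go
package predicates

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/util/sets"
)

// podMatchesTerm checks the cheap namespace membership first so the
// expensive label-selector evaluation runs only when still necessary.
func podMatchesTerm(pod *v1.Pod, namespaces sets.String, selector labels.Selector) bool {
	if !namespaces.Has(pod.Namespace) {
		return false // early return: skip the selector match entirely
	}
	return selector.Matches(labels.Set(pod.Labels))
}
```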
Automatic merge from submit-queue
Add use case to service affinity
Also part of the nits in refactoring predicates: I found the explanation of `serviceaffinity` in its comment very hard to understand, so I added an example to help users and developers digest it.
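For illustration, a scheduler policy entry of the documented shape; the `region` label is an example. Pods belonging to the same service are then scheduled onto nodes that share the same value of that label:

```json
{
  "name": "ServiceAffinity",
  "argument": {
    "serviceAffinity": {
      "labels": ["region"]
    }
  }
}
```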
Automatic merge from submit-queue (batch tested with PRs 35436, 37090, 38700)
Make iscsi pv/pvc aware of nodiskconflict feature
Since iSCSI is a `RWO, ROX` volume, we should report a conflict if more than one pod is using the same iSCSI LUN.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)
Do not create selector and namespaces in a loop where possible
With 1000 nodes and 5000 pods (5 pods per node) with anti-affinity configured, a lot of CPU is wasted on creating LabelSelector and sets.String (map) objects. With this change we are able to deploy that number of pods in ~25 minutes. Without it, it takes 30 minutes to deploy just 500 pods with anti-affinity configured.
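The shape of the fix, abbreviated from the anti-affinity code path (a sketch with simplified signatures):

```go
package predicates

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/util/sets"
)

// matchingPods builds the selector and namespace set once per affinity
// term instead of once per candidate pod inside the loop.
func matchingPods(term *v1.PodAffinityTerm, candidates []*v1.Pod) ([]*v1.Pod, error) {
	// Hoisted: one allocation per term, not one per candidate.
	selector, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
	if err != nil {
		return nil, err
	}
	namespaces := sets.NewString(term.Namespaces...)

	var matched []*v1.Pod
	for _, pod := range candidates {
		if namespaces.Has(pod.Namespace) && selector.Matches(labels.Set(pod.Labels)) {
			matched = append(matched, pod)
		}
	}
	return matched, nil
}
```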
Automatic merge from submit-queue (batch tested with PRs 38076, 38137, 36882, 37634, 37558)
[scheduler] Use V(10) for anything which may be O(N*P) logging
Fixes #37014
This PR makes sure that logging statements which can be invoked on a per-node / per-pod basis (i.e. non-essential ones that would just clog up logs at large scale) are at the V(10) level.
I dreamt of a Levenshtein filter that built a weak map of word frequencies and alerted once log throughput increased without varying information content... but then I woke up and realized this is probably all we really need for now :)
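The pattern this PR applies, using glog as the tree did at the time; wrapping the call in the `V(10)` guard also skips argument formatting entirely when the level is off:

```go
package predicates

import "github.com/golang/glog"

// logFit demonstrates the guard pattern: per-(node, pod) output is
// emitted only at verbosity 10, keeping large-cluster logs readable.
func logFit(podNamespace, podName, nodeName string) {
	if glog.V(10) {
		glog.Infof("pod %s/%s fits on node %s", podNamespace, podName, nodeName)
	}
}
```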
Automatic merge from submit-queue
split scheduler priorities into separate files
In the current state it's really hard to find the thing one is looking for without already knowing where to look. cc @davidopp
- Prevents kubelet from overwriting capacity during sync.
- Handles opaque integer resources in the scheduler.
- Adds scheduler predicate tests for opaque resources.
- Validates opaque integer resources (see the sketch after this list):
  - Ensures supplied opaque integer quantities in node capacity,
    node allocatable, pod request, and pod limit are whole numbers.
  - Adds tests for the new validation logic (node update and pod spec).
- Adds e2e tests for opaque integer resources.
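A sketch of the integer check, assuming `resource.Quantity` from the Kubernetes API machinery; the validation helper name is illustrative:

```go
package validation

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validateOpaqueIntQuantity rejects fractional quantities such as
// "500m" for opaque integer resources: the milli-value of a whole
// number is always a multiple of 1000.
func validateOpaqueIntQuantity(q resource.Quantity, field string) error {
	if q.MilliValue()%1000 != 0 {
		return fmt.Errorf("%s: opaque integer resources must be whole numbers, got %s", field, q.String())
	}
	return nil
}
```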
Automatic merge from submit-queue
Predicate caching and cleanup
Fix to #31795
First pass at cleanup and caching of the CheckServiceAffinity function.
The cleanup IMO is necessary because of the convoluted logic around the pod listing and the use of the "implicit selector" (which is reverse-engineered to enable homogeneous pod groups).
Should still pass the E2Es.
@timothysc @wojtek-t
Comments addressed: made emptyMetadataProducer a func to avoid casting;
FakeSvcLister: removed the error return for len(svc)=0. Added a new test
for predicatePrecomp so the method semantics are explicitly enforced when
meta is missing. Added the precompute wrapper (see the schematic below).
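A schematic of the precompute pattern described above; types and names are simplified from the upstream code:

```go
package predicates

import v1 "k8s.io/api/core/v1"

// predicateMetadata holds data computed once per scheduling cycle so
// that predicates don't re-list on every node they evaluate.
type predicateMetadata struct {
	pod          *v1.Pod
	matchingSvcs []*v1.Service // services selecting the pod, listed once
}

// serviceLister is a minimal stand-in for the scheduler's service lister.
type serviceLister interface {
	GetPodServices(pod *v1.Pod) ([]*v1.Service, error)
}

// emptyMetadataProducer is a real func rather than a casted nil, so
// callers can always invoke it; predicates must then tolerate nil meta,
// which is the semantics the new predicatePrecomp test enforces.
func emptyMetadataProducer(pod *v1.Pod) *predicateMetadata { return nil }

// servicesForPod prefers the precomputed result and falls back to
// listing when meta is missing.
func servicesForPod(meta *predicateMetadata, pod *v1.Pod, lister serviceLister) ([]*v1.Service, error) {
	if meta != nil {
		return meta.matchingSvcs, nil
	}
	return lister.GetPodServices(pod)
}
```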
Automatic merge from submit-queue
Refactor scheduler to enable switch on equivalence cache
Part 2 of #30844
Refactoring to make it easier to switch on the equivalence cache.
Automatic merge from submit-queue
Fix typos and englishify plugin/pkg
**What this PR does / why we need it**: Just typos
**Which issue this PR fixes**: `None`
**Special notes for your reviewer**: Just typos
**Release note**: `NONE`
Automatic merge from submit-queue
[Part 1] Implementation of equivalence pod
Part 1 of #30844
This PR:
- Refactored `predicate.go` so that `GetResourceRequest` can be reused in other places, such as `GetEquivalencePod`.
- Implemented `equivalence_cache.go` to hold all the information we need to determine an equivalent pod.
- Defined and registered a `RegisterGetEquivalencePodFunction`.
Work in next PR:
- The update of `equivalence_cache.go`
- Unit test
- Integration/e2e test
I think we can begin with `equivalence_cache.go`? Thanks. cc @wojtek-t @davidopp
If I missed any other necessary part, please point it out.
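A schematic of the equivalence-cache idea behind this series: pods whose scheduling-relevant fields collapse to the same key can reuse each other's predicate results. The key function below is illustrative, not the upstream GetEquivalencePod:

```go
package equivalence

import (
	"fmt"
	"hash/fnv"

	v1 "k8s.io/api/core/v1"
)

// equivalenceKey collapses a pod to the fields the predicates actually
// look at; OwnerReferences is a convenient proxy because replicas of
// one controller share resource requests, labels, and volumes.
func equivalenceKey(pod *v1.Pod) uint64 {
	h := fnv.New64a()
	for _, ref := range pod.OwnerReferences {
		fmt.Fprintf(h, "%s/%s/%s", ref.Kind, ref.Name, ref.UID)
	}
	return h.Sum64()
}

// cache maps (equivalence key, node name, predicate name) -> fit result,
// so the second replica of a controller skips recomputation.
type cache map[uint64]map[string]map[string]bool
```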
Automatic merge from submit-queue
Add kubelet awareness to the taint-toleration match calculator.
Ref: #25320
This is required by `TaintEffectNoScheduleNoAdmit` & `TaintEffectNoScheduleNoAdmitNoExecute`, so that the node will know whether it should enforce the taint/toleration match itself.
Automatic merge from submit-queue
Turn down hot-loop logs in priorities.
Excessive spam is output once we near cluster capacity; sometimes a panic is triggered, but that data is clipped in the logs. See #33935 for more details.