- snapshot equivalence cache generation numbers before snapshotting the
scheduler cache
- skip the update when the snapshotted generation does not match the live
generation
- to invalidate a node, keep it and increment its generation instead of
deleting it
- use the predicate's order ID as the cache key to improve performance (see
the sketch below)
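Roughly, the scheme works like this (a minimal Go sketch under stated assumptions; `Cache`, `nodeEntry`, and the method names are illustrative, not the scheduler's actual types):
```go
package equivalence

import "sync"

// nodeEntry pairs cached predicate results with a generation number.
// Invalidation bumps the generation instead of deleting the entry, so
// updates based on an older snapshot can detect that they are stale.
type nodeEntry struct {
	generation uint64
	results    map[int]bool // keyed by predicate order ID, not name
}

// Cache maps node name to its cached results.
type Cache struct {
	mu    sync.RWMutex
	nodes map[string]*nodeEntry
}

func NewCache() *Cache {
	return &Cache{nodes: make(map[string]*nodeEntry)}
}

// AddNode registers a node with a fresh entry.
func (c *Cache) AddNode(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.nodes[node]; !ok {
		c.nodes[node] = &nodeEntry{results: make(map[int]bool)}
	}
}

// Snapshot records the live generation of every node; it is taken
// before the scheduler cache is snapshotted.
func (c *Cache) Snapshot() map[string]uint64 {
	c.mu.RLock()
	defer c.mu.RUnlock()
	gens := make(map[string]uint64, len(c.nodes))
	for name, e := range c.nodes {
		gens[name] = e.generation
	}
	return gens
}

// Update writes a predicate result only if the generation captured at
// snapshot time still matches the live generation.
func (c *Cache) Update(node string, predID int, fit bool, snapshotGen uint64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.nodes[node]
	if !ok || e.generation != snapshotGen {
		return // snapshot is stale: skip the update
	}
	e.results[predID] = fit
}

// Invalidate keeps the node but increments its generation, which
// implicitly invalidates any in-flight update based on an older snapshot.
func (c *Cache) Invalidate(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.nodes[node]; ok {
		e.generation++
	}
}
```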
Automatic merge from submit-queue (batch tested with PRs 65570, 65616). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Retry scheduling on StorageClass events
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56163
**Special notes for your reviewer**:
I have taken over #60006.
It's hard to test in e2e, because we cannot tell which event triggered the rescheduling of a pod (periodic service/node events also move pods to the active queue). ~~I'll add integration tests for this functionality after [this PR](https://github.com/kubernetes/kubernetes/pull/65296) gets merged.~~ (already added)
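The mechanism, roughly: register a StorageClass informer handler that flushes unschedulable pods back to the active queue. Below is a hedged sketch using client-go conventions; `SchedulingQueue` and `MoveAllToActiveQueue` are illustrative stand-ins, not the scheduler's real queue API.
```go
package factory

import (
	storagev1 "k8s.io/api/storage/v1"
	storageinformers "k8s.io/client-go/informers/storage/v1"
	"k8s.io/client-go/tools/cache"
)

// SchedulingQueue is an illustrative stand-in for the scheduler's
// internal queue; MoveAllToActiveQueue is a hypothetical method name.
type SchedulingQueue interface {
	MoveAllToActiveQueue()
}

// registerStorageClassHandler sketches the retry mechanism: when a
// StorageClass with volumeBindingMode=WaitForFirstConsumer is added,
// pods that previously failed scheduling for lack of bindable volumes
// may now fit, so unschedulable pods are moved back to the active
// queue for another attempt.
func registerStorageClassHandler(informer storageinformers.StorageClassInformer, queue SchedulingQueue) {
	informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			sc, ok := obj.(*storagev1.StorageClass)
			if !ok {
				return
			}
			// Only WaitForFirstConsumer classes affect scheduling
			// decisions; Immediate binding happens before scheduling.
			if sc.VolumeBindingMode != nil &&
				*sc.VolumeBindingMode == storagev1.VolumeBindingWaitForFirstConsumer {
				queue.MoveAllToActiveQueue()
			}
		},
	})
}
```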
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66540, 66599). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Invalidate CheckVolumeBinding predicate only when VolumeScheduling feature is enabled
**What this PR does / why we need it**:
Invalidate CheckVolumeBinding predicate only when VolumeScheduling feature is enabled.
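A minimal sketch of the guard, assuming the feature-gate check style of `k8s.io/apiserver/pkg/util/feature` and the `VolumeScheduling` gate from `pkg/features` as of this era; the `ecacheInvalidator` interface and its method are illustrative, not the scheduler's real API.
```go
package factory

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

// ecacheInvalidator abstracts the equivalence cache; the method name
// is illustrative.
type ecacheInvalidator interface {
	InvalidatePredicate(name string)
}

// invalidateCheckVolumeBinding guards invalidation behind the feature
// gate: CheckVolumeBinding only runs when VolumeScheduling is enabled,
// so with the gate off there are no cached results to drop, and
// invalidating unconditionally would only evict valid entries.
func invalidateCheckVolumeBinding(ec ecacheInvalidator) {
	if !utilfeature.DefaultFeatureGate.Enabled(features.VolumeScheduling) {
		return
	}
	ec.InvalidatePredicate("CheckVolumeBinding")
}
```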
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Re-design equivalence class cache as a two-level cache
**What this PR does / why we need it**:
The current ecache introduces a global lock across all nodes; this patch assigns an ecache per node to eliminate that global lock. The resulting improvements in scheduling performance and throughput are both significant.
**CPU Profile Result**
Machine: 32-core 60GB GCE VM
1k-node, 10k-pod bench test (the critical function is highlighted in each profile):
1. Current default scheduler with ecache enabled *(CPU profile image omitted)*
2. Current default scheduler with ecache disabled *(CPU profile image omitted)*
3. Current default scheduler with this patch and ecache enabled *(CPU profile image omitted)*
**Throughput Test Result**
1k-node, 3k-pod `scheduler_perf` test:
Current default scheduler, ecache is disabled:
```bash
Minimal observed throughput for 3k pod test: 200
PASS
ok k8s.io/kubernetes/test/integration/scheduler_perf 30.091s
```
With this patch, ecache is enabled:
```bash
Minimal observed throughput for 3k pod test: 556
PASS
ok k8s.io/kubernetes/test/integration/scheduler_perf 11.119s
```
**Design and implementation:**
The idea: we re-designed the ecache as a "two-level cache".
The first level holds the global lock across nodes; it needs to be synced only when a node is added or deleted, which happens at a much lower frequency.
The second level is allocated per node, and its lock is scoped to that node, so the global lock never needs to be taken during the predicate evaluation cycle. For more detail, please check [the original discussion](https://github.com/kubernetes/kubernetes/issues/63784#issuecomment-399848349).
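A condensed Go sketch of that layout, with illustrative names rather than the exact types in `pkg/scheduler/core/equivalence`:
```go
package equivalence

import "sync"

// Cache is the first level: one entry per node. Its lock is taken only
// when a node is added or removed, which is comparatively rare.
type Cache struct {
	mu        sync.RWMutex
	nodeCache map[string]*NodeCache
}

func NewCache() *Cache {
	return &Cache{nodeCache: make(map[string]*NodeCache)}
}

// NodeCache is the second level: per-node predicate results guarded by
// a per-node lock, so predicate runs against different nodes never
// contend on a shared lock.
type NodeCache struct {
	mu      sync.RWMutex
	results map[string]bool // predicate name -> cached fit result
}

// GetNodeCache holds the global lock only long enough to look up (or
// lazily create) the per-node cache.
func (c *Cache) GetNodeCache(name string) *NodeCache {
	c.mu.Lock()
	defer c.mu.Unlock()
	n, ok := c.nodeCache[name]
	if !ok {
		n = &NodeCache{results: make(map[string]bool)}
		c.nodeCache[name] = n
	}
	return n
}

// RunPredicate touches only the per-node lock on the hot path.
func (n *NodeCache) RunPredicate(pred string, run func() bool) bool {
	n.mu.RLock()
	fit, ok := n.results[pred]
	n.mu.RUnlock()
	if ok {
		return fit
	}
	fit = run()
	n.mu.Lock()
	n.results[pred] = fit
	n.mu.Unlock()
	return fit
}
```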
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #63784
**Special notes for your reviewer**:
~~Tagged as WIP to make sure this does not break existing code and tests, we can start review after CI is happy.~~
**Release note**:
```release-note
Re-design equivalence class cache as a two-level cache
```
Automatic merge from submit-queue (batch tested with PRs 58487, 63666). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
use subtest for table units (pkg/scheduler/factory)
**What this PR does / why we need it**: Update the scheduler's table-driven unit tests to use subtests
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
breaks up PR: https://github.com/kubernetes/kubernetes/pull/63281
/ref #63267
**Release note**:
```release-note
Use subtests for the existing table-driven tests in the scheduler units.
Includes some refactoring of error/status messages and functions to align with the new approach.
```
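For illustration, the subtest pattern looks like this (a generic example, not code from this PR; `validatePriority` is a hypothetical stand-in for the unit under test):
```go
package factory

import (
	"fmt"
	"testing"
)

// validatePriority is a stand-in for whatever a table-driven test
// exercises; the real tests cover scheduler factory behavior.
func validatePriority(name string) error {
	if name != "LeastRequestedPriority" {
		return fmt.Errorf("unknown priority %q", name)
	}
	return nil
}

func TestValidatePriority(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		wantErr bool
	}{
		{name: "known priority", input: "LeastRequestedPriority", wantErr: false},
		{name: "unknown priority", input: "NoSuchPriority", wantErr: true},
	}
	for _, tc := range tests {
		// t.Run gives each table row its own named subtest, so failures
		// report the row name and individual rows can be selected with
		// -run 'TestValidatePriority/unknown_priority'.
		t.Run(tc.name, func(t *testing.T) {
			err := validatePriority(tc.input)
			if (err != nil) != tc.wantErr {
				t.Errorf("validatePriority(%q) error = %v, wantErr %v", tc.input, err, tc.wantErr)
			}
		})
	}
}
```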
This moves the equivalence cache implementation out of the 'core'
package and into k8s.io/kubernetes/pkg/scheduler/core/equivalence.
Separating the equiv. cache from the genericScheduler implementation
makes their interaction points easier to follow, and prevents us from
accidentally accessing unexported fields.
Automatic merge from submit-queue (batch tested with PRs 64142, 64426, 62910, 63942, 64548). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
scheduler: further cleanup of equivalence cache
**What this PR does / why we need it**:
This improves comments and simplifies some names/logic in equivalence_cache.go, as well as changing the order of some items in the file.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/kind cleanup
Part of https://github.com/kubernetes/kubernetes/pull/63040 is the
assumption that scheduler cache updates must happen before equivalence
cache updates for any given informer event.
The reason for this is that the equivalence cache implementation checks
the main cache for staleness while holding the equiv. cache write lock.
Case 1: if an informer invalidates an equiv. cache entry before the
staleness check, then we know the main cache update has already
completed, so the result being written was computed against fresh data.
Case 2: if an informer blocks trying to grab the equiv. cache lock, then
the invalidation will occur right after the potentially stale update is
written, erasing it immediately.
This patch adds a note to places where we invalidate the equivalence
cache so that hopefully nobody violates this invariant.
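A sketch of what that invariant looks like at an invalidation site (illustrative types and method names, not the actual factory code):
```go
package factory

// Stand-ins for the real caches; method names are illustrative only.
type schedulerCache interface{ RemovePod(podUID string) }
type equivCache interface{ InvalidateForPod(podUID string) }

type configFactory struct {
	schedCache schedulerCache
	ecache     equivCache
}

// onPodDelete sketches the ordering invariant for one informer event.
// The scheduler cache update MUST precede the equivalence cache
// invalidation: the equiv. cache checks the main cache for staleness
// while holding its write lock, so with this ordering a stale predicate
// result is either never written (the staleness check sees the fresh
// main cache, case 1) or erased immediately after being written (the
// blocked invalidation runs as soon as the lock is released, case 2).
func (f *configFactory) onPodDelete(podUID string) {
	f.schedCache.RemovePod(podUID)    // 1. update the main cache first
	f.ecache.InvalidateForPod(podUID) // 2. then invalidate the equiv. cache
}
```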