Commit Graph

19 Commits

Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed while
exclusively holding the cacheline (read or write) lock. So for
a cacheline on the LRU list we acquire the cacheline lock, safely
resolve its hash and consequently write-lock the hash bucket.
Since the cacheline lock is normally acquired under the hash bucket
lock (everywhere except for the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Concurrent eviction threads are excluded by holding the eviction
list lock for the duration of the critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
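
A minimal sketch in C of the locking order this commit describes. All types
and helpers here (eviction_list_lock, cacheline_trylock_wr, hash_bucket_lock_wr,
remap_cacheline and so on) are hypothetical stand-ins, not the actual OCF API:

```c
#include <stdbool.h>

struct cache;
struct cacheline;

/* Hypothetical OCF-like primitives (declarations only, for the sketch). */
void eviction_list_lock(struct cache *c);
void eviction_list_unlock(struct cache *c);
struct cacheline *lru_tail(struct cache *c);
bool cacheline_trylock_wr(struct cacheline *cl);
void cacheline_unlock_wr(struct cacheline *cl);
unsigned cacheline_hash(struct cacheline *cl);
void hash_bucket_lock_wr(struct cache *c, unsigned hash);
void hash_bucket_unlock_wr(struct cache *c, unsigned hash);
void remap_cacheline(struct cache *c, struct cacheline *cl);

/* Lock order sketched from the commit message: eviction list lock ->
 * exclusive cacheline lock -> hash resolve -> hash bucket write lock. */
static bool evict_one_cacheline(struct cache *cache)
{
        struct cacheline *cl;

        eviction_list_lock(cache);          /* serialize concurrent evictors */
        cl = lru_tail(cache);
        if (!cl || !cacheline_trylock_wr(cl)) {
                eviction_list_unlock(cache);
                return false;               /* cacheline busy: try another */
        }
        eviction_list_unlock(cache);

        /* With the cacheline lock held exclusively, its hash mapping is
         * stable, so the hash can be resolved and the bucket write-locked
         * without taking the global metadata write lock. */
        hash_bucket_lock_wr(cache, cacheline_hash(cl));
        remap_cacheline(cache, cl);         /* the actual eviction/remap */
        hash_bucket_unlock_wr(cache, cacheline_hash(cl));

        cacheline_unlock_wr(cl);
        return true;
}
```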
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Michal Mielewczyk
7f3f2ad115 Evict from overflown pinned ioclass
If an ioclass is pinned but has exceeded its occupancy limit, it should
be evicted from anyway.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 04:06:07 -05:00
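
A hedged sketch of that rule in C; the struct layout and field names are
assumptions for illustration, not OCF's actual definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed ioclass bookkeeping, for illustration only. */
struct ioclass {
        bool pinned;
        uint64_t occupancy;      /* cachelines currently used */
        uint64_t max_occupancy;  /* configured occupancy limit */
};

/* A pinned ioclass is normally exempt from eviction, but once it
 * overflows its occupancy limit it becomes evictable anyway. */
static bool ioclass_evictable(const struct ioclass *part)
{
        if (part->occupancy > part->max_occupancy)
                return true;     /* overflown: evict even if pinned */
        return !part->pinned;
}
```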
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Adam Rutkowski
746b32c47d Evict from overflown partitions first
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
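
An illustrative two-pass victim selection matching this commit's description.
The partition table layout and priority ordering are assumptions, not the
real OCF structures:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative partition table; real OCF structures differ. */
struct ioclass {
        uint64_t occupancy;
        uint64_t max_occupancy;
};

struct cache_parts {
        struct ioclass *part;    /* assumed: indexed by eviction priority, */
        size_t count;            /* highest priority first */
};

/* Overflown partitions take precedence over IO class priorities. */
static struct ioclass *select_victim_part(struct cache_parts *parts)
{
        size_t i;

        /* Pass 1: any partition above its occupancy limit wins. */
        for (i = 0; i < parts->count; i++) {
                if (parts->part[i].occupancy > parts->part[i].max_occupancy)
                        return &parts->part[i];
        }

        /* Pass 2: fall back to regular priority order. */
        for (i = 0; i < parts->count; i++) {
                if (parts->part[i].occupancy > 0)
                        return &parts->part[i];
        }
        return NULL;
}
```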
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage value
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches
after reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
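
A sketch of the dynamic calculation implied here: the limits are stored as
percentages and translated to cacheline counts against whatever cache is
currently attached, so they survive a reattach at a different size. Names
are assumptions:

```c
#include <stdint.h>

/* Assumed per-ioclass configuration, for illustration. */
struct ioclass_config {
        uint32_t min_size_pct;   /* 0..100 */
        uint32_t max_size_pct;   /* 0..100 */
};

static uint64_t part_min_cachelines(uint64_t cache_cachelines,
                                    const struct ioclass_config *cfg)
{
        return cache_cachelines * cfg->min_size_pct / 100;
}

static uint64_t part_max_cachelines(uint64_t cache_cachelines,
                                    const struct ioclass_config *cfg)
{
        return cache_cachelines * cfg->max_size_pct / 100;
}
```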
Michal Mielewczyk
60680b15b2 Accessors for req->info.mapping_error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
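
A plausible shape for the accessors this commit adds; the struct layout is
assumed and the actual OCF definitions may differ:

```c
#include <stdbool.h>

/* Assumed request layout, for illustration. */
struct ocf_req_info {
        bool mapping_error;
};

struct ocf_request {
        struct ocf_req_info info;
};

/* Accessors so callers never poke req->info.mapping_error directly. */
static inline void ocf_req_set_mapping_error(struct ocf_request *req)
{
        req->info.mapping_error = true;
}

static inline bool ocf_req_test_mapping_error(const struct ocf_request *req)
{
        return req->info.mapping_error;
}
```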
Michal Mielewczyk
21e98a6dbc Evict request's target partition in regular order
Instead of evicting the target partition last, respect eviction
priorities.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e999de7232 Don't round up when evicting a single part
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
718dc743c8 Enable particular ioclass eviction
If a partition's occupancy limit is reached, cachelines should be evicted
from the request's target partition.

Information on whether eviction from a particular partition should be
triggered is carried as a flag by the request which triggered the eviction.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:23 -05:00
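
A sketch of that flag mechanism; the field and type names are assumptions,
not the actual OCF request layout:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed shapes, for illustration only. */
struct ioclass {
        uint64_t occupancy;
        uint64_t max_occupancy;
};

struct ocf_request {
        uint32_t unmapped;       /* cachelines this request must map */
        bool part_evict;         /* evict from the target partition */
};

/* Set on the request that hits the occupancy limit; the eviction path
 * then evicts from the request's target partition instead of following
 * the regular priority order. */
static void mark_part_eviction(struct ocf_request *req,
                               const struct ioclass *target)
{
        req->part_evict =
                (target->occupancy + req->unmapped > target->max_occupancy);
}
```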
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Adam Rutkowski
13cf871a13 Per-execution-context freelists
The global free cacheline list is divided into a set of freelists, one
per execution context. When attempting to map an address to the cache,
the freelist for the current execution context is considered first (fast
path). If the current execution context's freelist is empty (fast path
failure), the mapping function attempts to get a cacheline from another
execution context's freelist (slow path).

The purpose of this change is to improve concurrency of freelist access.
It is part of the fine-granularity metadata lock implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-09 16:19:52 -04:00
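
A sketch of the fast/slow path described above. The structures and the two
helpers (freelist_pop, current_execution_context) are hypothetical, not the
actual OCF implementation:

```c
#include <stddef.h>

struct cacheline;

struct freelist {
        struct cacheline *head;
};

struct cache {
        struct freelist *freelist;  /* one freelist per execution context */
        size_t num_contexts;
};

/* Assumed helpers (declarations only, for the sketch). */
struct cacheline *freelist_pop(struct freelist *list);
size_t current_execution_context(void);

/* Fast path: the current context's own freelist; slow path: scan the
 * other contexts' freelists. */
static struct cacheline *freelist_get(struct cache *cache)
{
        size_t ctx = current_execution_context();
        struct cacheline *cl;
        size_t i;

        cl = freelist_pop(&cache->freelist[ctx]);       /* fast path */
        if (cl)
                return cl;

        for (i = 0; i < cache->num_contexts; i++) {     /* slow path */
                if (i == ctx)
                        continue;
                cl = freelist_pop(&cache->freelist[i]);
                if (cl)
                        return cl;
        }
        return NULL;  /* no free cachelines: caller falls back to eviction */
}
```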
Jan Musial
917cbd859a Add promotion policy API and use it in I/O path
The promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-07-19 13:52:00 +02:00
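
An illustrative promotion filter in the spirit of this commit: an address
is admitted to the cache only after enough misses, so one-hit wonders never
pollute it. The real OCF policy is pluggable; these names and the threshold
mechanism are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

struct promotion_policy {
        uint32_t insertion_threshold;  /* misses required before insert */
};

/* Assumed helper: bump and return the recent miss count for an address. */
uint32_t miss_counter_bump(struct promotion_policy *p, uint64_t addr);

static bool should_promote(struct promotion_policy *policy, uint64_t addr)
{
        uint32_t misses = miss_counter_bump(policy, addr);

        return misses >= policy->insertion_threshold;
}
```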
Robert Baldyga
0490dd8bd4 ocf_request: Store core handle instead of core_id
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-21 12:30:29 +02:00
Michal Mielewczyk
e53944d472 Dynamic I/O queue management
- Queue allocation is now separated from starting the cache.
- Queues can be created and destroyed at runtime.
- All queue ops accept a queue handle instead of a queue id.
- Cache stores queues as a list instead of an array.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-02-26 17:36:19 +01:00
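
A hedged usage sketch of the queue lifecycle this commit enables. The
signatures are modeled on the description and may differ from the actual
OCF headers:

```c
/* Assumed opaque handles and ops table: queues are allocated
 * independently of cache start and addressed by handle, not by id. */
typedef struct ocf_cache *ocf_cache_t;
typedef struct ocf_queue *ocf_queue_t;
struct ocf_queue_ops;

int ocf_queue_create(ocf_cache_t cache, ocf_queue_t *queue,
                     const struct ocf_queue_ops *ops);
void ocf_queue_put(ocf_queue_t queue);

/* Create a queue at runtime, long after the cache has started, and
 * destroy it when no longer needed. */
static int run_with_temporary_queue(ocf_cache_t cache,
                                    const struct ocf_queue_ops *ops)
{
        ocf_queue_t queue;
        int ret = ocf_queue_create(cache, &queue, ops);

        if (ret)
                return ret;

        /* ... submit I/O against the queue handle ... */

        ocf_queue_put(queue);  /* runtime teardown */
        return 0;
}
```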
Robert Baldyga
8d127e6351 Move eviction stuff to eviction/ directory
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-12-31 14:51:51 +01:00
Robert Baldyga
a8e1ce8cc5 Initial commit
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-11-29 15:14:21 +01:00