Commit Graph

625 Commits

Author SHA1 Message Date
Adam Rutkowski
0476511c00 probe: return dirty and shutdown status despite metadata mismatch
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Adam Rutkowski
ff4842482e Fix setting cache dirty flag during stop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Jan Musial
67f80d813c Avoid nullptr dereference in ocf_io_put
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-06 13:38:34 +02:00
Robert Baldyga
9a3f64df28 seq_cutoff: Ignore invalid streams
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 18:46:28 +02:00
Adam Rutkowski
2fadd5a22a Fix eviction occupancy stats decrement
Eviction should decrement occupancy statistics for the
core from which a cacheline is being evicted rather than
from the I/O target core.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-01 18:01:28 -05:00
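
A minimal sketch of the distinction described above, with purely illustrative names (occupancy_stats, account_eviction) rather than the actual OCF statistics API:

    #define MAX_CORES 64

    struct occupancy_stats {
        unsigned long cached_clines[MAX_CORES];
    };

    /* Charge the occupancy decrement to the core that owned the evicted
     * cacheline, not to the core targeted by the current I/O. */
    static void account_eviction(struct occupancy_stats *stats,
                                 int evicted_core_id, int io_target_core_id)
    {
        (void)io_target_core_id; /* not the core to charge */
        stats->cached_clines[evicted_core_id]--;
    }
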
Robert Baldyga
49b9b36d13 cleaner: Don't check for valid if cache line is not dirty
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 13:28:19 +02:00
Michał Mielewczyk
642794dcd7
Merge pull request #483 from arutk/repart_fix
Fix repartitioning in request refresh path
2021-03-31 11:32:28 +02:00
Adam Rutkowski
719676c444 Fix repartitioning in request refresh path
update_req_info() should include REMAPPED cachelines
in the repart stats (the number of cachelines within the request
belonging to a partition other than the target one).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-31 12:13:48 -05:00
Adam Rutkowski
521258bcc8 Remove dirty check from LRU cleaner getter callback
This check is incorrect, as the cacheline status may change
from dirty to clean at any point during cleaning, except
when the hash bucket is locked.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-30 13:10:28 -05:00
Michał Mielewczyk
c2e588be9d
Merge pull request #476 from mmkayPL/cacheline-alignment
Cacheline alignment
2021-03-26 12:01:55 +01:00
Michał Mielewczyk
2aa8922fea
Merge pull request #478 from Open-CAS/fix-freeing-discard-reqs
Fix freeing oversized discard requests
2021-03-26 11:33:40 +01:00
Michał Mielewczyk
a6c8cbb1ac
Merge pull request #479 from arutk/lru_fix3
Always call LRU_set_hot() under hash bucket lock
2021-03-26 11:04:59 +01:00
Michał Mielewczyk
78d7e5294f
Merge pull request #480 from arutk/lru_fix4
Clear hot flag when removing node from LRU list
2021-03-26 11:04:47 +01:00
Adam Rutkowski
b87008dc67 Clear hot flag when removing node from LRU list
This isn't strictly required in the current implementation, as
nodes are always re-initialized before being inserted into the LRU list.
However, it seems to make sense to zero the flag anyway to
make the code easier to reason about.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 10:25:03 -05:00
Adam Rutkowski
9486b7796f Remove early return from engine_map()
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:

1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common,
rather than being a corner case dependent on racy timing between
threads

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 07:40:24 -05:00
Kozlowski Mateusz
e054949cbb Metadata updater mutex alignment
Avoids thrashing of (mostly) static and often-used entries

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e391fc2c13 Queue alignment
Metadata reshuffling

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
642527d72a ref count alignment
Move ref counts to their own cacheline - otherwise they cause
false sharing with nearby fields and a lot of cacheline bouncing
between physical CPUs.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
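
A small C11 sketch of the general technique (names and layout are illustrative, not OCF's actual structures): placing frequently written reference counters on their own 64-byte cachelines keeps writes to them from invalidating the cacheline that holds read-mostly fields.

    #include <stdalign.h>
    #include <stdatomic.h>

    struct cache_ctx {
        /* read-mostly configuration, grouped together */
        unsigned int cache_line_size;
        unsigned int core_count;

        /* hot counters isolated on separate cachelines to avoid
         * false sharing and cross-CPU cacheline bouncing */
        alignas(64) atomic_long ref_count;
        alignas(64) atomic_long io_in_flight;
    };
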
Kozlowski Mateusz
fd2fd335a0 ocf_cache alignment
Grouping static fields together, while frequently changing fields
get their own cacheline, as do some rarely used or less important fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
33f29e43bc Aligned ocf_volume
Force cacheline alignment to avoid cacheline thrashing on static fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Adam Rutkowski
a3f2a214b6 Always call LRU_set_hot() under hash bucket lock
set_hot() depends on cacheline metadata status to determine
on which list the element is located (dirty vs clean list).
Thus at least the hash bucket lock is required when calling
set_hot().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-25 18:50:13 -05:00
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512KiB, OCF would trim the request and
overwrite req->core_line_count, which would then cause the request to be
freed from the wrong mpool.

This is now fixed by saving the core_line_count that was set when allocating
the request in a field that is never overwritten. This alloc_core_line_count
is then used to free the request from the correct mpool.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
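
A hedged sketch of the pattern the fix describes, with illustrative names (mpool selection is reduced to a toy size_class() and malloc/free stand in for the real pools):

    #include <stdlib.h>

    struct request {
        unsigned int core_line_count;       /* may be overwritten when the
                                             * request is trimmed */
        unsigned int alloc_core_line_count; /* snapshot taken at allocation,
                                             * never overwritten */
    };

    static unsigned int size_class(unsigned int line_count)
    {
        return line_count > 128 ? 1 : 0; /* toy two-pool classification */
    }

    static struct request *request_alloc(unsigned int line_count)
    {
        struct request *req = malloc(sizeof(*req)); /* stand-in for mpool_alloc */
        if (!req)
            return NULL;
        req->core_line_count = line_count;
        req->alloc_core_line_count = line_count;
        return req;
    }

    static void request_put(struct request *req)
    {
        /* pick the pool from the allocation-time snapshot, not from the
         * possibly-trimmed core_line_count */
        unsigned int class = size_class(req->alloc_core_line_count);
        (void)class; /* the real code would return req to pool[class] */
        free(req);
    }
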
Kozlowski Mateusz
365f4f0d19 Use read/write locks in queue sequential cutoffs
If a user thread is preempted during a tree/list update and another IO
is issued on the same CPU, the structure will be in an undefined state.
This may result in hung tasks if the tree stops being a tree and a loop exists
(tree search functions won't be able to terminate), or in panics if a NULL value
suddenly appears in the preempted thread after a null-check was already done.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-23 09:54:56 +01:00
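
A minimal sketch of the locking discipline described above, assuming a pthread read/write lock and illustrative names (the per-queue stream structures in OCF differ):

    #include <pthread.h>

    struct stream;  /* opaque tree/list node type for this sketch */

    struct seq_cutoff {
        pthread_rwlock_t lock;
        struct stream *tree_root;
    };

    static void seq_cutoff_lookup(struct seq_cutoff *sc)
    {
        pthread_rwlock_rdlock(&sc->lock);   /* many concurrent readers */
        /* ... walk the tree/list, safe against concurrent relinking ... */
        pthread_rwlock_unlock(&sc->lock);
    }

    static void seq_cutoff_update(struct seq_cutoff *sc)
    {
        pthread_rwlock_wrlock(&sc->lock);   /* exclusive while relinking nodes,
                                             * so a preempted updater can never
                                             * be observed mid-update */
        /* ... relink tree/list nodes ... */
        pthread_rwlock_unlock(&sc->lock);
    }
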
Michal Mielewczyk
92a5ddd524 ut framework: don't mock env functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:08:27 +01:00
Michal Mielewczyk
0d3f3cde14 Return error when modifying default ioclass rule
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:06:23 +01:00
Robert Baldyga
5f77db5c85
Merge pull request #473 from robertbaldyga/deallocate-request-properly
ocf_request: Deallocate request with separately allocated map properly
2021-03-19 10:31:07 +01:00
Robert Baldyga
87244c04d7
Merge pull request #472 from mmichal10/lock-on-setting-hot
Update cleaning lru under metadata lock
2021-03-19 09:54:32 +01:00
Robert Baldyga
74d61785e9 ocf_request: Deallocate request with separately allocated map properly
When allocation of a request with a map fails, we fall back to allocating
a request with no map and then allocate the map separately. During request
put we need to distinguish between those two cases in order to deallocate
the request properly.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 09:49:29 +01:00
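
A self-contained sketch of the two allocation paths and the matching deallocation, with illustrative names (the real ocf_request layout is more involved):

    #include <stdlib.h>

    struct map_entry { unsigned long core_line; };

    struct request {
        struct map_entry *map;
        int map_allocated_separately; /* 0: map lives in the same buffer */
    };

    static struct request *request_alloc(size_t n_entries)
    {
        /* preferred path: one allocation holding both request and map */
        struct request *req = malloc(sizeof(*req) +
                                     n_entries * sizeof(struct map_entry));
        if (req) {
            req->map = (struct map_entry *)(req + 1);
            req->map_allocated_separately = 0;
            return req;
        }

        /* fallback path: small request, map allocated separately */
        req = malloc(sizeof(*req));
        if (!req)
            return NULL;
        req->map = calloc(n_entries, sizeof(struct map_entry));
        if (!req->map) {
            free(req);
            return NULL;
        }
        req->map_allocated_separately = 1;
        return req;
    }

    static void request_put(struct request *req)
    {
        /* the two cases must be distinguished here */
        if (req->map_allocated_separately)
            free(req->map);
        free(req);
    }
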
Robert Baldyga
296e98e39c
Merge pull request #471 from arutk/lru_fix_2
Prevent remapping cachelines within single request
2021-03-18 20:33:11 +01:00
Adam Rutkowski
a232488c7a Prevent remapping cachelines within single request
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-18 17:11:45 -05:00
Robert Baldyga
8020e7fd67
Merge pull request #457 from Ostrokrzew/false_stats
Fix broken 'dirty_for' stats
2021-03-18 10:24:02 +01:00
Michal Mielewczyk
841f8122d7 Update cleaning lru under metadata lock
This prevents deinitializing cleaning policy structures during IO.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-18 09:55:21 +01:00
Michał Mielewczyk
df969cde16
Merge pull request #470 from arutk/lru_fix
Parallel eviction fixes
2021-03-17 11:41:07 +01:00
Adam Rutkowski
c565c5c3f5 Add comments warning about stale request map info
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:28:02 -05:00
Adam Rutkowski
98124aa13d Add missing lookup in engine_map()
Early return from engine_map() in case of insufficient free
cachelines on the freelist is opportunistic, as neither the request
map info nor the freelist count is accurate. The map info is stale,
as it is only refreshed in engine_map() after the hash bucket
lock has been upgraded. The freelist count, on the other hand, is
subject to change asynchronously.

The implementation assumption, however, is that after engine_map()
the request is fully traversed (engine_map() is equivalent to
engine_lookup() followed by an attempt to map missing cachelines).
So in case of early return we must take care of repeating the
lookup.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:23:24 -05:00
Adam Rutkowski
e5fa15bdb2 Remove early return from engine_map() in case of hit
At this point cacheline status in request map is stale,
as lookup was performed before upgrading hash bucket lock.
If indeed all cachelines are mapped, this will be determined
in the main loop of engine_map().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:21:03 -05:00
Michał Mielewczyk
e3d5439d9f
Merge pull request #469 from mmichal10/fix-unmapped
Fix `ocf_engine_unmapped_count()`
2021-03-17 10:30:23 +01:00
Adam Rutkowski
736fb2efc0 Call LRU set_hot() immediately after cache insert
This ensures that a cacheline with LOOKUP_INSERTED status
is always present on the LRU list.

This fixes an ENV_BUG() caused by an attempt to remove
from the LRU list a cacheline which was not there. This
happened when a cacheline was mapped from the freelist
(LOOKUP_INSERTED) but the entire request mapping failed,
and generic cleanup routines attempted to invalidate the cacheline,
including removing it from the LRU list. As engine_set_hot()
is called only after successful mapping, the inserted cacheline was
not yet present on the LRU list.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-16 19:59:09 -05:00
Michal Mielewczyk
71ec08c158 Assert number of cachelines to evict
The number of cachelines to evict can't be greater than the number of unmapped
entries in the request.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 16:29:05 +01:00
Michal Mielewczyk
4e8c037d7b Fix ocf_engine_unmapped_count()
Inserted entries should be considered mapped.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 15:36:47 +01:00
Robert Baldyga
415a778c03 ocf_request: Fix use after free bug
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-15 19:41:14 +01:00
Robert Baldyga
b25ea7c8ec seq_cutoff: Fix stream promotion fastpath
Now req_count starts from 1.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-11 14:41:13 +01:00
Jan Musial
2dc36657bf Use mpool to allocate metadata_io requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
8e21aa6441 Remove not needed req allocator size table
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
b47ef2c386 Change vmalloc in metadata asynch io to kmalloc
Vmalloc is very slow in comparison to kmalloc

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
9f8802e833 Decrease memory requirements for metadata io
The magic child metadata request count (33) was deduced
experimentally.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split traversal into two distinct phases: lookup()
and LRU set_hot(). prepare_cachelines() now only calls
set_hot() once after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines(), at the very beginning of the
procedure. If the request is a miss, then map()
performs operations equivalent to lookup() supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1c6168d82b Do not unmap inserted cachelines before eviction
Unmapping cachelines previously mapped from the freelist before
eviction is a waste of resources. Also, if map does not early
exit upon the first mapping error, we can have the request fully
traversed (and partially mapped) after mapping, and thus
skip the lookup in eviction.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are certain that
no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
0e699fc982 Refactor ocf_engine_remap
.. so that the main part, responsible strictly for mapping
given LBA to given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential after needed cachelines are evicted
and mapped to the request in a sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
c40e36456b Add missing hash bucket lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
69c0f20b6e Remove global metadata lock from cleaner metadata update step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a80eea454f Add function to determine hash collisions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allowing request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where a request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or freelist-mapped
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
The function returns true if the cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds the
hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively (instead of locking
and checking for an exclusive lock) is not possible.

More specifically this is the flow for synchronizing
cacheline remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
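
A toy model of the predicate's meaning (this is not the OCF implementation, just an illustration of "exactly one holder, read or write, and no waiters"):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct cl_lock_state {
        atomic_int readers;  /* number of shared holders */
        atomic_int writer;   /* 0 or 1 */
        atomic_int waiters;  /* threads queued on this lock */
    };

    static bool cl_is_locked_exclusively(struct cl_lock_state *l)
    {
        int r = atomic_load(&l->readers);
        int w = atomic_load(&l->writer);
        int q = atomic_load(&l->waiters);

        /* exactly one holder (read or write), nobody waiting */
        return q == 0 && ((r == 1 && w == 0) || (r == 0 && w == 1));
    }
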
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
95d756de91 Remove ioclass min_size from public API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
f61472c3f4 Validate seq cutoff threshold value
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Slawomir Jankowski
eeda1f3f0f Unify type of dirty_for in info structs
Reformat function that calculates how long cache/core is dirty
Update `dirty_for` types in functional tests

Values stored in the info struct fields (both in cache and core structs)
are unsigned 64-bit ints, but the `dirty_for` fields were unsigned 32-bit ints.

Use existing function to transform returned value to seconds.
Replace seconds stored in metadata with seconds.
Replacement was done if old value of replaced field was equal to zero.
Acquiring a monotonic high-precision timestamp is potentially
slow, so it makes sense to compare the field's value
to zero before calling the atomic function.

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-25 14:51:53 +01:00
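
A sketch of the two points being made (64-bit seconds everywhere, and checking for zero before taking a timestamp), using clock_gettime() and illustrative names:

    #include <stdint.h>
    #include <time.h>

    struct dirty_info {
        uint64_t dirty_since; /* seconds; 0 means "not marked dirty yet" */
    };

    static uint64_t monotonic_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec;
    }

    static void mark_dirty(struct dirty_info *info)
    {
        /* compare to zero first: taking a high-precision timestamp is
         * potentially slow, so skip it once the field is already set */
        if (info->dirty_since == 0)
            info->dirty_since = monotonic_seconds();
    }

    static uint64_t dirty_for_seconds(const struct dirty_info *info)
    {
        return info->dirty_since ? monotonic_seconds() - info->dirty_since : 0;
    }
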
Adam Rutkowski
c7fc4fff39 Change cacheline concurrency constructor params
Provide the number of cachelines as the cacheline concurrency
constructor param instead of reading it from the cache.

The purpose of this change is to improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:45 -06:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

Cache pointer is no longer provided as a parameter to these
functions. Cacheline concurrency context now has a pointer
to cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
Adam Rutkowski
0f34e46375 Fix error handling in cacheline concurrency init
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:36 -06:00
Adam Rutkowski
d8f25d2742 Skip cacheline concurrency global lock in fast path
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.

The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
 * a concurrent fast path will also only try_lock
   cachelines, releasing all acquired locks if it fails
   to immediately acquire the lock for any cacheline
 * a concurrent slow path is guaranteed to have
   precedence in lock acquisition when the conditions
   for deadlock occur (both slowpath and fastpath
   have acquired some locks required by the other
   thread). This is because the fastpath thread will
   back off (release acquired locks) if any one of the
   cacheline locks is not acquired.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:28 -06:00
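
A compact sketch of the back-off argument above, with plain pthread mutexes standing in for OCF's per-cacheline locks:

    #include <pthread.h>
    #include <stdbool.h>

    /* Fast path: opportunistically try-lock every cacheline of the request;
     * release everything and report failure on the first miss, so no global
     * lock is needed to rule out deadlock. */
    static bool fastpath_lock_all(pthread_mutex_t *cl_locks, int n)
    {
        for (int i = 0; i < n; i++) {
            if (pthread_mutex_trylock(&cl_locks[i]) != 0) {
                while (--i >= 0)
                    pthread_mutex_unlock(&cl_locks[i]);
                return false; /* caller falls back to the slow path */
            }
        }
        return true; /* all cachelines locked without waiting */
    }
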
Adam Rutkowski
c95f6358ab Get rid of status bits lock
All the status bits operations are now protected by
hash bucket locks

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
Adam Rutkowski
cd9e42f987 Properly lock hash bucket for status bits operations
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:02:50 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Michal Mielewczyk
7f3f2ad115 Evict from overflown pinned ioclass
If an ioclass is pinned but it exceeded its occupancy limit, it should be
evicted anyway.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 04:06:07 -05:00
Adam Rutkowski
0748f33a9d Align each global metadata lock to 64B
.. in order to place primitives intended to be accessed
concurrently in separate CPU cache lines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
Adam Rutkowski
05780c98ed Split global metadata lock
Divide single global lock instance into 4 to reduce contention
in multiple read-locks scenario.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
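
One common way to stripe a read/write lock across 4 instances, sketched with pthreads and illustrative names; whether OCF spreads readers by CPU or by another key is not stated here, so treat the hashing choice as an assumption (each instance sits on its own cacheline, cf. the "Align each global metadata lock to 64B" commit above):

    #include <pthread.h>
    #include <stdalign.h>

    #define GLOBAL_LOCK_INSTANCES 4

    struct padded_rwlock {
        alignas(64) pthread_rwlock_t lock;
    };

    /* each element initialized with pthread_rwlock_init() at startup */
    static struct padded_rwlock global_md_lock[GLOBAL_LOCK_INSTANCES];

    static void global_md_rdlock(unsigned reader_id)
    {
        /* readers spread over the 4 instances, reducing contention */
        pthread_rwlock_rdlock(&global_md_lock[reader_id % GLOBAL_LOCK_INSTANCES].lock);
    }

    static void global_md_rdunlock(unsigned reader_id)
    {
        pthread_rwlock_unlock(&global_md_lock[reader_id % GLOBAL_LOCK_INSTANCES].lock);
    }

    static void global_md_wrlock(void)
    {
        /* a writer must hold all instances to exclude every reader */
        for (int i = 0; i < GLOBAL_LOCK_INSTANCES; i++)
            pthread_rwlock_wrlock(&global_md_lock[i].lock);
    }

    static void global_md_wrunlock(void)
    {
        for (int i = GLOBAL_LOCK_INSTANCES - 1; i >= 0; i--)
            pthread_rwlock_unlock(&global_md_lock[i].lock);
    }
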
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring caller to
   hold metadata shared global lock ("naked") vs the ones
   which acquire global lock on its own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting hash bucket id ("id"), core line and lba ("cline")
   and entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
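
Examples of names produced by this scheme (hypothetical; the exact set of functions in the tree may differ):

    ocf_hb_id_naked_lock_wr()     /* by bucket id, caller already holds the global lock */
    ocf_hb_cline_prot_lock_rd()   /* by core line + LBA, acquires the global lock itself */
    ocf_hb_req_prot_trylock_wr()  /* for all hash buckets touched by a request */
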
Adam Rutkowski
c822c953ed Fix return status from hash bucket trylock wr
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-11 15:02:06 -06:00
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update the cacheline's timestamp each time it is written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
9e98eec361 Only acquire read lock to verify lru elem hotness
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
c04bfa3962 Add macros to read lock eviction list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
b4daac11c2 Track hot items on LRU list
Cachelines in the top (most recently used) 50% of the LRU list are not
promoted to the list head upon access. Only after a cacheline drops into
the bottom 50% is it considered a candidate for promotion to
the list head.

The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:22:55 -06:00
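
A hedged sketch of the idea (illustrative structures, not OCF's eviction code): an access only relinks a node if it has fallen out of the hot half of the list.

    struct lru_node {
        struct lru_node *prev, *next;
        int hot; /* set while the node sits in the top ~50% of the list */
    };

    struct lru_list {
        struct lru_node *head, *tail;
        unsigned count;     /* hot_count is kept near count/2 by rebalancing */
        unsigned hot_count;
    };

    static void lru_unlink(struct lru_list *l, struct lru_node *n)
    {
        if (n->prev) n->prev->next = n->next; else l->head = n->next;
        if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
        n->prev = n->next = NULL;
    }

    static void lru_push_head(struct lru_list *l, struct lru_node *n)
    {
        n->prev = NULL;
        n->next = l->head;
        if (l->head) l->head->prev = n; else l->tail = n;
        l->head = n;
    }

    static void lru_touch(struct lru_list *l, struct lru_node *n)
    {
        if (n->hot)
            return; /* hot nodes skip the list update entirely */

        /* node dropped into the bottom 50%: promote it to the head */
        lru_unlink(l, n);
        lru_push_head(l, n);
        n->hot = 1;
        l->hot_count++;
        /* demoting nodes past the 50% boundary (clearing their hot flag,
         * as in the "Clear hot flag" commit above) is omitted for brevity */
    }
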
Adam Rutkowski
746b32c47d Evict from overflown partitions first
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Michal Mielewczyk
93eccc862a Reset per-partition counters when adding core
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00
Michal Mielewczyk
3a7b55c4c2 Don't evict on hit
If a request is a hit, simply try to acquire the cachelines instead of verifying
that the target partition's size is not exceeded.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-29 17:15:32 -05:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Adam Rutkowski
012438c279 Add missing collision page lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and then the recovery procedure following a dirty
shutdown assigns cache lines from the removed core to the new one, effectively
leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
f206c64ff6 Fine granularity lock in cache_mngt_core_deinit_attached_meta
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7 Add functions to lock specific hash bucket
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00