Commit Graph

135 Commits

Author SHA1 Message Date
Michal Mielewczyk
26194fc536 Use cleaning ops wrapper functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Kozlowski Mateusz
bd7a89c819 Single map alloc location
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-21 08:27:27 +02:00
Robert Baldyga
a2b300d465 Avoid stack overflow when pending read misses list is blocked
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-14 13:04:03 +02:00
Michal Mielewczyk
a394dd06a8 Unlock cachelines after failed remap
All remapped cachelines are write locked. If the operation fails, the cachelines
have to be unlocked during rollback.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-06 15:09:50 +02:00
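A minimal sketch of the rollback described above, assuming hypothetical helper names (this is not the actual OCF API):

#include <stdbool.h>

/* Illustrative types and helpers -- not the actual OCF API. */
struct cacheline { bool write_locked; };

static void cacheline_write_unlock(struct cacheline *cl)
{
    cl->write_locked = false;
}

/* Roll back a partially remapped request: every cacheline that was
 * write-locked during remapping must be unlocked again on failure. */
static void remap_rollback(struct cacheline *lines, int count)
{
    for (int i = 0; i < count; i++) {
        if (lines[i].write_locked)
            cacheline_write_unlock(&lines[i]);
    }
}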
Robert Baldyga
aa3677da10
Merge pull request #530 from arutk/remove_eviction
Remove remaining stale references to "eviction" and "evp"
2021-06-30 09:47:35 +02:00
Adam Rutkowski
a9ab5fbafd Fix comments in ocf_engine_common.h
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:09 +02:00
Robert Baldyga
059b845df8 Unlock request after invalidating cache lines
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-06-25 17:50:38 +02:00
Michal Mielewczyk
f0564dcf75 Avoid unnecessary metadata flushes in WT
Flushing metadata in WT is required only if at least one of the request's
cachelines changed its state to clean.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-23 09:51:16 +02:00
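A rough sketch of the condition described above, with an invented request model (the was_dirty/is_dirty fields and all names are purely illustrative, not the OCF API):

#include <stdbool.h>
#include <stddef.h>

struct req_cacheline { bool was_dirty; bool is_dirty; };
struct request { struct req_cacheline *lines; size_t count; };

/* Flush metadata for a WT request only when at least one of its
 * cachelines actually transitioned from dirty to clean. */
static bool wt_needs_metadata_flush(const struct request *req)
{
    for (size_t i = 0; i < req->count; i++) {
        if (req->lines[i].was_dirty && !req->lines[i].is_dirty)
            return true;
    }
    return false;
}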
Michal Mielewczyk
c9294d1f06 Reorder metadata updating pattern in WT mode
There's a possibility that a WT write is performed to a dirty cache line (i.e.
after switching WB->WT without flush) and the status bits change from dirty to
clean. If a power failure occurs, recovery might ignore the recent data in
cache and assume the data is clean while the backend storage data is out of
date.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-22 15:35:50 +02:00
Michal Mielewczyk
0192ce23dd Reorder metadata updating pattern in WB mode
In WB mode metadata should be updated only after the actual data has been saved
on disk. Otherwise metadata might be flushed too early and, consequently, data
corruption might occur.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-22 09:04:56 +02:00
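A toy sketch of the ordering constraint described above, assuming an invented io/callback model (not the OCF I/O path): the metadata update is driven only from the data-write completion.

#include <stdio.h>

struct io { void (*on_complete)(struct io *io); };

/* Runs only after the data write has completed. */
static void update_metadata(struct io *io)
{
    (void)io;
    printf("metadata updated after data reached the cache device\n");
}

static void submit_data_write(struct io *io)
{
    /* ... submit the data write to the cache device ... */
    io->on_complete(io);    /* metadata update is driven from completion */
}

int main(void)
{
    struct io io = { .on_complete = update_metadata };
    submit_data_write(&io);
    return 0;
}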
Adam Rutkowski
33e2beac24 Rename "evp_lru*" functions to "ocf_lru*"
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
b1143374a8 Move eviction files to new locations
src/eviction/lru.c -> src/ocf_lru.c
src/eviction/lru.h -> src/ocf_lru.h
src/eviction/lru_structs.h -> src/ocf_lru_structs.h
src/eviction/eviction.c -> src/ocf_space.c
src/eviction/eviction.h -> src/ocf_space.h

.. as well as corresponding UT files.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>

... in UT as well

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
88e04a4204 Remove eviction policy abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
fac35d34a2 Rename "evict" to "remap" across the entire repo
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
4f217b91a5 Remove partition list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
87f834c793 Move common user and freelist partition data to a new struct
New structure ocf_part is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains the ocf_part structure as well as a pointer to
cleaning partition runtime metadata (moved out from part_runtime) and
user partition config (no change here).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:07:10 +02:00
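A rough sketch of the layout described above; the field types and the cleaning/config structs are placeholders, not the real OCF definitions:

typedef int ocf_part_id_t;

struct ocf_part_runtime { int placeholder; };
struct cleaning_policy_meta { int placeholder; };
struct ocf_user_part_config { int placeholder; };

/* Data common to user partitions and the freelist partition. */
struct ocf_part {
    struct ocf_part_runtime *runtime;
    ocf_part_id_t id;
};

/* User partition: the common part plus cleaning runtime and config. */
struct ocf_user_part {
    struct ocf_part part;
    struct cleaning_policy_meta *clean_pol; /* moved out of part_runtime */
    struct ocf_user_part_config *config;    /* unchanged */
};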
Robert Baldyga
c0b76f9e01
Merge pull request #517 from arutk/hit_shortcut
Check for hit after upgrading hash bucket lock
2021-06-17 12:16:18 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Adam Rutkowski
d5b16c273e Check for hit after upgrading hash bucket lock
Lookup is repeated after the request is identified as a miss and the hash
bucket lock is upgraded (in order to map the missing cachelines). At this
point cacheline status might have changed and the request might turn out
to be a hit after all. Adding a check for this condition removes
unnecessary calls to the remap logic.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 23:11:02 +02:00
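An illustrative flow of the re-check described above; every identifier below is hypothetical:

#include <stdbool.h>

struct request { int unmapped; };

static void hb_upgrade_to_write(struct request *req) { (void)req; }
static void remap_missing(struct request *req) { req->unmapped = 0; }

/* Refreshes mapping info; returns true if the request is now a full hit. */
static bool lookup(struct request *req) { return req->unmapped == 0; }

static void map_on_miss(struct request *req)
{
    hb_upgrade_to_write(req);

    /* Cacheline status may have changed before the write lock was
     * taken: repeat the lookup and skip the remap logic if the
     * request turned out to be a hit after all. */
    if (lookup(req))
        return;

    remap_missing(req);
}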
Adam Rutkowski
d22a3ad0e0 Rename cacheline concurrency struct to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:15:43 +02:00
Adam Rutkowski
719676c444 Fix repartitioning in request refresh path
update_req_info() should include REMAPPED cachelines
in repart stats (number of cachelines within the request
belonging to a partition other than the target partition).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-31 12:13:48 -05:00
Michał Mielewczyk
a6c8cbb1ac
Merge pull request #479 from arutk/lru_fix3
Always call LRU_set_hot() under hash bucket lock
2021-03-26 11:04:59 +01:00
Adam Rutkowski
9486b7796f Remove early return from engine_map()
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:

1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping the request from the freelist becomes more common
rather than being a corner case dependent on racy timings between
threads.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 07:40:24 -05:00
Adam Rutkowski
a3f2a214b6 Always call LRU_set_hot() under hash bucket lock
set_hot() depends on the cacheline metadata status to determine
on which list the element is located (dirty vs clean list).
Thus at least the hash bucket lock is required when calling
set_hot().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-25 18:50:13 -05:00
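A hypothetical illustration of the locking rule above: set_hot() inspects the dirty/clean status to pick the right LRU list, so it must run under at least the hash bucket lock that protects that status.

struct hb_lock { int locked; };
struct cacheline { int dirty; };

static void hb_write_lock(struct hb_lock *l)   { l->locked = 1; }
static void hb_write_unlock(struct hb_lock *l) { l->locked = 0; }

/* Picks the dirty or clean LRU list based on cl->dirty. */
static void lru_set_hot(struct cacheline *cl) { (void)cl; }

static void mark_hot(struct hb_lock *hb, struct cacheline *cl)
{
    hb_write_lock(hb);  /* status cannot change under our feet */
    lru_set_hot(cl);
    hb_write_unlock(hb);
}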
Robert Baldyga
87244c04d7
Merge pull request #472 from mmichal10/lock-on-setting-hot
Update cleaning lru under metadata lock
2021-03-19 09:54:32 +01:00
Michal Mielewczyk
841f8122d7 Update cleaning lru under metadata lock
This prevents deinitializing cleaning policy structures during IO.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-18 09:55:21 +01:00
Michał Mielewczyk
df969cde16
Merge pull request #470 from arutk/lru_fix
Parallel eviction fixes
2021-03-17 11:41:07 +01:00
Adam Rutkowski
c565c5c3f5 Add comments warning about stale request map info
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:28:02 -05:00
Adam Rutkowski
98124aa13d Add missing lookup in engine_map()
Early return from engine_map() in case of insufficient free
cachelines on the freelist is opportunistic, as both the request
map info and the freelist count are inaccurate. Map info is stale,
as it is refreshed in engine_map() only after the hash bucket
lock has been upgraded. The freelist count, on the other hand, is
subject to change asynchronously.

The implementation assumption however is that after engine_map()
the request is fully traversed (engine_map() is equivalent to
engine_lookup() followed by an attempt to map missing cachelines).
So in case of early return we must take care of repeating the
lookup.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:23:24 -05:00
Adam Rutkowski
e5fa15bdb2 Remove early return from engine_map() in case of hit
At this point cacheline status in request map is stale,
as lookup was performed before upgrading hash bucket lock.
If indeed all cachelines are mapped, this will be determined
in the main loop of engine_map().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:21:03 -05:00
Adam Rutkowski
736fb2efc0 Call LRU set_hot() immediately after cache insert
This assures that a cacheline with LOOKUP_INSERTED status
is always present on the LRU list.

This fixes an ENV_BUG() caused by an attempt to remove
a cacheline from the LRU list which was not there. This
happened when a cacheline was mapped from the freelist
(LOOKUP_INSERTED) but the entire request mapping failed
and generic cleanup routines attempted to invalidate the cacheline,
including removing it from the LRU list. As engine_set_hot()
is called only after successful mapping, the inserted cacheline was
not yet present on the LRU list.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-16 19:59:09 -05:00
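A sketch of the invariant described above: pairing the freelist insert with an immediate set_hot() guarantees the cacheline is on the LRU list, so a later rollback can remove it unconditionally (all names are illustrative):

struct cacheline { int on_lru; };

static void lru_set_hot(struct cacheline *cl)       { cl->on_lru = 1; }
static void lru_remove(struct cacheline *cl)        { cl->on_lru = 0; }
static void map_from_freelist(struct cacheline *cl) { (void)cl; }

static void insert_cacheline(struct cacheline *cl)
{
    map_from_freelist(cl);
    lru_set_hot(cl);    /* LOOKUP_INSERTED => always on the LRU list */
}

static void rollback_invalidate(struct cacheline *cl)
{
    lru_remove(cl);     /* safe: insert_cacheline() put it on the list */
}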
Michal Mielewczyk
4e8c037d7b Fix ocf_engine_unmapped_count()
Inserted entries should be considered mapped.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 15:36:47 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split the traversal into two distinct phases: lookup()
and lru set_hot(). prepare_cachelines() now only calls
set_hot() once after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines() at the very beginning of the
procedure. If the request is a miss, then map()
performs operations equivalent to lookup() supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
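A rough control-flow sketch of the split described above, with hypothetical names:

#include <stdbool.h>

struct request { int unmapped; };

static bool lookup(struct request *req)  { return req->unmapped == 0; }
static void map(struct request *req)     { req->unmapped = 0; }
static void set_hot(struct request *req) { (void)req; }

static void prepare_cachelines(struct request *req)
{
    bool hit = lookup(req); /* explicit lookup, once, at the start */

    if (!hit)
        map(req);           /* lookup-equivalent plus mapping attempt */

    set_hot(req);           /* single set_hot pass at the very end */
}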
Adam Rutkowski
1c6168d82b Do not unmap inserted cachelines before eviction
Unmapping cachelines previously mapped from the freelist before
eviction is a waste of resources. Also, if map does not exit early
upon the first mapping error, the request can be fully traversed
(and partially mapped) after mapping, and thus the lookup in
eviction can be skipped.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Concurrent eviction threads are eliminated by holding the eviction
list lock for the duration of the critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
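A hypothetical sketch of the lock ordering described above: take the cacheline lock first, then resolve the hash and write-lock the hash bucket, with the LRU (eviction) list lock serializing concurrent evictors. All names are illustrative:

struct cache;
struct cacheline;

static void lru_list_lock(struct cache *c)               { (void)c; }
static void lru_list_unlock(struct cache *c)             { (void)c; }
static int  cacheline_trylock_wr(struct cacheline *cl)   { (void)cl; return 1; }
static unsigned resolve_hash(struct cacheline *cl)       { (void)cl; return 0; }
static void hb_write_lock(struct cache *c, unsigned h)   { (void)c; (void)h; }
static void hb_write_unlock(struct cache *c, unsigned h) { (void)c; (void)h; }
static void remap(struct cacheline *cl)                  { (void)cl; }

static void evict_one(struct cache *cache, struct cacheline *victim)
{
    lru_list_lock(cache);       /* keep other evictors away */
    if (cacheline_trylock_wr(victim)) {
        unsigned hash = resolve_hash(victim); /* safe: line is locked */
        hb_write_lock(cache, hash);
        remap(victim);
        hb_write_unlock(cache, hash);
        /* cacheline lock stays held beyond this point */
    }
    lru_list_unlock(cache);
}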
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
0e699fc982 Refactor ocf_engine_remap
.. so that the main part, responsible strictly for mapping
given LBA to given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential after needed cachelines are evicted
and mapped to the request in a sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already being held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allowing the request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where the request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by the standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or freelist-mapped
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
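An illustrative sketch of the lookup states mentioned above (the actual OCF enum may differ):

enum lookup_status {
    LOOKUP_MISS = 0,
    LOOKUP_HIT,         /* was already mapped before this request */
    LOOKUP_INSERTED,    /* mapped from the freelist by this request */
    LOOKUP_REMAPPED,    /* evicted and assigned, already locked */
};

/* With LOOKUP_REMAPPED entries pre-locked during eviction, the request
 * cacheline lock call only needs to lock the remaining entries. */
static int needs_lock(enum lookup_status s)
{
    return s == LOOKUP_HIT || s == LOOKUP_INSERTED;
}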
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

Cache pointer is no longer provided as a parameter to these
functions. Cacheline concurrency context now has a pointer
to cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
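A hypothetical before/after sketch of the interface change (names and prototypes are illustrative, not the real OCF declarations):

struct ocf_cache;
struct ocf_alock;   /* cacheline concurrency context */

/* Before: the context was dug out of cache->device->concurrency
 * inside the call. */
int cl_trylock_wr_old(struct ocf_cache *cache, unsigned line);

/* After: the concurrency context is an explicit parameter, so unit
 * tests can construct and pass a standalone context. */
int cl_trylock_wr_new(struct ocf_alock *alock, unsigned line);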
Adam Rutkowski
cd9e42f987 Properly lock hash bucket for status bits operations
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:02:50 -06:00
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
   hold the metadata shared global lock ("naked") vs the ones
   which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
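A few example identifiers the scheme above can produce (illustrative combinations, not a verified list of the real functions):

void ocf_hb_id_naked_lock_wr();   /* by hash bucket id, caller holds the global lock */
void ocf_hb_cline_prot_lock_rd(); /* by core line + lba, manages the global lock itself */
void ocf_hb_req_prot_lock_wr();   /* whole request, manages the global lock itself */
void ocf_hb_req_prot_unlock_wr(); /* whole request, manages the global lock itself */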
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update the cacheline's timestamp each time it is written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00