Commit Graph

50 Commits

Author SHA1 Message Date
Michal Mielewczyk
1d9776481c Shorten allocator name
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-27 08:40:51 +02:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split traversal into two distinct phases: lookup()
and lru set_hot(). prepare_cachelines() now calls
set_hot() only once, after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines(), at the very beginning of the
procedure. If the request is a miss, then map()
performs operations equivalent to lookup(), supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
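
A minimal sketch of the resulting control flow in prepare_cachelines(), assuming hypothetical helper names (engine_lookup, engine_map, lru_set_hot and the request type are placeholders for illustration, not the actual OCF interfaces):

    #include <stdbool.h>

    struct req;   /* illustrative request type, not the OCF struct */

    /* hypothetical stand-ins for the real lookup()/map()/set_hot() */
    static bool engine_lookup(struct req *req) { (void)req; return false; }
    static bool engine_map(struct req *req)    { (void)req; return true;  }
    static void lru_set_hot(struct req *req)   { (void)req; }

    static int prepare_cachelines(struct req *req)
    {
        bool hit = engine_lookup(req);  /* explicit lookup, once, at the start */

        if (!hit && !engine_map(req))   /* map(): lookup-equivalent work + mapping */
            return -1;

        lru_set_hot(req);               /* set_hot() called exactly once, at the end */
        return 0;
    }

Engines that do not attempt mapping would instead reach lookup() and set_hot() through traverse(), bypassing prepare_cachelines() entirely.
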
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
a80eea454f Add function to determine hash collisions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allow the request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where the request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by the standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or mapped-from-freelist
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
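
A rough sketch of the intended two-step locking sequence; remap_evicted_locked() and req_cacheline_lock() are hypothetical placeholders, not the actual OCF API:

    #include <stdbool.h>

    struct req;   /* illustrative request type */

    /* step 1 (hypothetical): evicted lines are assigned to the request
     * already locked and marked LOOKUP_REMAPPED */
    static void remap_evicted_locked(struct req *req) { (void)req; }

    /* step 2 (hypothetical): the standard request cacheline lock now
     * tolerates a partially locked request and only locks the remaining
     * LOOKUP_HIT / LOOKUP_INSERTED lines */
    static bool req_cacheline_lock(struct req *req) { (void)req; return true; }

    static bool lock_request(struct req *req)
    {
        remap_evicted_locked(req);      /* LOOKUP_REMAPPED lines: already locked */
        return req_cacheline_lock(req); /* lock the rest, skip already locked lines */
    }
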
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
The function returns true if the cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds
the hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively (instead of locking
and checking for an exclusive lock) is not possible.

More specifically, this is the flow for synchronizing a cacheline
remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
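
A compact sketch of the four-step flow above. Apart from ocf_cache_line_is_locked_exclusively, the helper names, types and stub bodies here are assumptions for illustration only:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t line_t;
    struct cache;   /* illustrative types */

    /* hypothetical helpers standing in for the real primitives */
    static bool cacheline_trylock_rd(struct cache *c, line_t l)  { (void)c; (void)l; return true; }
    static void cacheline_unlock_rd(struct cache *c, line_t l)   { (void)c; (void)l; }
    static line_t resolve_hash(struct cache *c, line_t l)        { (void)c; return l; }
    static void hash_bucket_lock_wr(struct cache *c, line_t h)   { (void)c; (void)h; }
    static void hash_bucket_unlock_wr(struct cache *c, line_t h) { (void)c; (void)h; }
    static bool ocf_cache_line_is_locked_exclusively(struct cache *c, line_t l)
    { (void)c; (void)l; return true; }

    static bool try_remap(struct cache *c, line_t line)
    {
        if (!cacheline_trylock_rd(c, line))              /* 1. cacheline (read) lock */
            return false;
        line_t hash = resolve_hash(c, line);             /* 2. resolve hash bucket   */
        hash_bucket_lock_wr(c, hash);                    /* 3. write-lock the bucket */
        bool ok = ocf_cache_line_is_locked_exclusively(c, line);  /* 4. verify */
        if (!ok) {
            hash_bucket_unlock_wr(c, hash);
            cacheline_unlock_rd(c, line);
        }
        return ok;   /* true: safe to remap under the held locks */
    }
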
Adam Rutkowski
c7fc4fff39 Change cacheline concurrency constructor params
Provide the number of cachelines as the cacheline concurrency
constructor param instead of reading it from cache.

The purpose of this change is to improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:45 -06:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is passed
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now has a pointer
to the cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
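
An illustrative before/after of the kind of signature change described; the function names and parameter lists below are assumptions, not the exact OCF prototypes:

    #include <stdbool.h>

    struct ocf_cache;
    struct ocf_cache_line_concurrency;

    /* before (illustrative): context fetched internally from
     * cache->device->concurrency.cache_line */
    bool cacheline_try_lock_rd_old(struct ocf_cache *cache, unsigned line);

    /* after (illustrative): the private context is passed explicitly;
     * the cache pointer lives inside the context, for logging only */
    bool cacheline_try_lock_rd_new(struct ocf_cache_line_concurrency *c,
                                   unsigned line);
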
Adam Rutkowski
0f34e46375 Fix error handling in cacheline concurrency init
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:36 -06:00
Adam Rutkowski
d8f25d2742 Skip cacheline concurrency global lock in fast path
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.

The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
 * a concurrent fast path will also only try_lock
   cachelines, releasing all acquired locks if it fails
   to immediately acquire the lock for any cacheline
 * a concurrent slow path is guaranteed to have
   precedence in lock acquisition when conditions
   for deadlock occur (both the slowpath and the fastpath
   have acquired some locks required by the other
   thread). This is because the fastpath thread will
   back off (release acquired locks) if any one of the
   cacheline locks is not acquired.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:28 -06:00
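
A self-contained illustration of the back-off rule that makes the fast path deadlock-safe, using generic pthread mutexes rather than the OCF primitives: the fast path only ever try-locks, and releases everything it has taken as soon as a single trylock fails.

    #include <pthread.h>
    #include <stdbool.h>

    /* try to lock every cacheline lock needed by a request; on the first
     * failure, release all locks acquired so far and report failure so
     * the caller can fall back to the slow path (which may wait) */
    static bool fastpath_trylock_all(pthread_mutex_t *locks[], int n)
    {
        for (int i = 0; i < n; i++) {
            if (pthread_mutex_trylock(locks[i]) != 0) {
                while (--i >= 0)
                    pthread_mutex_unlock(locks[i]);  /* back off */
                return false;   /* never waits, so it can never deadlock */
            }
        }
        return true;    /* all locks held */
    }
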
Adam Rutkowski
c95f6358ab Get rid of status bits lock
All the status bits operations are now protected by
hash bucket locks.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Adam Rutkowski
0748f33a9d Align each global metadata lock to 64B
.. in order to place primitives intended to be accessed
concurrently into separate CPU cache lines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
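
A minimal sketch of this kind of alignment, using C11 _Alignas on a generic lock wrapper; the actual OCF types and field layout are not shown here:

    #include <pthread.h>
    #include <assert.h>

    /* each lock occupies its own 64-byte CPU cache line, so threads
     * contending on different locks do not false-share a line */
    struct padded_lock {
        _Alignas(64) pthread_rwlock_t lock;
    };

    static struct padded_lock global_locks[4];

    static_assert(sizeof(struct padded_lock) % 64 == 0,
                  "each lock must fill whole cache lines");
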
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in the multiple read-locks scenario.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
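
One common way to realize such a split, sketched here as an assumption rather than the actual OCF selection policy: readers take only one of the four instances, while writers take all of them in a fixed order.

    #include <pthread.h>

    #define GLOBAL_LOCK_COUNT 4

    static pthread_rwlock_t global_md[GLOBAL_LOCK_COUNT] = {
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
    };

    /* readers spread over the 4 instances (e.g. keyed by a per-thread id),
     * so read-lock traffic no longer piles up on a single lock */
    static void metadata_read_lock(unsigned key)
    {
        pthread_rwlock_rdlock(&global_md[key % GLOBAL_LOCK_COUNT]);
    }

    static void metadata_read_unlock(unsigned key)
    {
        pthread_rwlock_unlock(&global_md[key % GLOBAL_LOCK_COUNT]);
    }

    /* a writer must exclude all readers, so it takes every instance,
     * always in the same order to avoid deadlock */
    static void metadata_write_lock(void)
    {
        for (int i = 0; i < GLOBAL_LOCK_COUNT; i++)
            pthread_rwlock_wrlock(&global_md[i]);
    }

    static void metadata_write_unlock(void)
    {
        for (int i = GLOBAL_LOCK_COUNT - 1; i >= 0; i--)
            pthread_rwlock_unlock(&global_md[i]);
    }
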
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
   hold the metadata shared global lock ("naked") vs the ones
   which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
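
For example, the scheme expands to names such as the following; the prototypes are illustrative and the parameter lists are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_request;
    struct ocf_metadata_lock;
    typedef uint32_t ocf_cache_line_t;   /* assumption for illustration */

    /* "req" + "prot": write-lock the hash buckets of an entire request,
     * taking the shared global metadata lock internally */
    void ocf_hb_req_prot_lock_wr(struct ocf_request *req);

    /* "id" + "naked": try to read-lock a single hash bucket by id; the
     * caller already holds the shared global metadata lock */
    bool ocf_hb_id_naked_trylock_rd(struct ocf_metadata_lock *ml,
                                    ocf_cache_line_t hash);
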
Adam Rutkowski
c822c953ed Fix return status from hash bucket trylock wr
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-11 15:02:06 -06:00
Adam Rutkowski
c04bfa3962 Add macros to read lock eviction list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7 Add functions to lock specific hash bucket
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Adam Rutkowski
6423c48dfe cacheline concurrency: move allocation outside critical section
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:41:01 -04:00
Adam Rutkowski
07b1f0c064 Replace global concurrency rw spinlock with rw semaphore
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:19:05 -04:00
Adam Rutkowski
94a0b5392b Fix hash bucket iterator
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-02 19:03:52 -04:00
Adam Rutkowski
09b68297b2 Revert "Optimize cacheline locking in ocf_engine_prepare_clines"
This change introduced a race condition. In some code paths, after
a cacheline trylock failed, the hash bucket lock needed to be upgraded
in order to obtain the asynchronous lock. During the hash bucket lock
upgrade, hash read locks were released, followed by obtaining
hash write locks. After the read locks were released, a concurrent
thread could obtain the hash bucket locks and modify cacheline
state. The thread upgrading the hash bucket lock would need to
repeat the traversal in order to safely continue.

This reverts commit 30f22d4f47.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-30 23:53:10 -04:00
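
The problematic window, reduced to a generic read-to-write lock upgrade with pthread rwlocks (not the OCF hash bucket locks): between the unlock and the write-lock another thread may change the state, so the upgrading thread must re-validate (repeat the traversal) before relying on anything it observed under the read lock.

    #include <pthread.h>
    #include <stdbool.h>

    /* caller holds the read lock on entry and the write lock on return;
     * returns true only if the state seen under the read lock still holds */
    static bool upgrade_and_revalidate(pthread_rwlock_t *bucket,
                                       bool (*still_valid)(void *), void *ctx)
    {
        pthread_rwlock_unlock(bucket);   /* drop the read lock ...            */
        pthread_rwlock_wrlock(bucket);   /* ... window: other threads may run */
        if (!still_valid(ctx))           /* cacheline state may have changed  */
            return false;                /* caller must repeat the traversal  */
        return true;
    }
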
Michal Rakowski
2575be83fa Error handling for env_rwsem_init added
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:16:37 +02:00
Michal Rakowski
b78557a2cc Change env_spinlock_init to non-void function
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Adam Rutkowski
a934b43aec Add missing error handling in hash bucket locks initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
6de280283a Fix hash_table_entries param type in ocf_metadata_concurrency_attached_init
The number of hash buckets is a 32-bit integer.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
5e28474322 Adding partition locks
Adding locks to ensure partition list consistency. The partition
list is typically modified under a cacheline or hash bucket lock.
This is not sufficient synchronization, as adding/removing a cacheline
to/from the partition list affects the state of neighbouring cachelines as well.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
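
Why a per-cacheline (or per-hash-bucket) lock is not enough is the usual linked-list property, sketched generically below: unlinking one element writes into its neighbours, which belong to other cachelines.

    #include <stddef.h>

    struct part_node {
        struct part_node *prev, *next;
    };

    /* removing 'node' from a doubly linked partition list also modifies
     * node->prev and node->next, i.e. the state of neighbouring
     * cachelines, so a lock covering only 'node' does not protect them */
    static void part_list_remove(struct part_node *node)
    {
        if (node->prev)
            node->prev->next = node->next;
        if (node->next)
            node->next->prev = node->prev;
        node->prev = node->next = NULL;
    }
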
Adam Rutkowski
5684b53d9b Adding collision table locks
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-24 22:59:35 -04:00
Adam Rutkowski
30f22d4f47 Optimize cacheline locking in ocf_engine_prepare_clines
The hash bucket read/write lock is sufficient to safely attempt
cacheline trylock/lock. This change removes the cacheline lock
global RW semaphore and moves cacheline trylock/lock under the
hash bucket read/write lock respectively.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
d2bd807e49 Remove calls to OCF_METADATA_(UN)LOCK_WR(RD)
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
2333d837fb Add single hash bucket lock interface
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
d91012f4b4 Introduce hash bucket locks
There is one RW lock per hash bucket. A write lock is required
to map a cacheline; a read lock is sufficient for traversal.
Hash bucket locks are always acquired under the global metadata
read lock. This assures mutual exclusion with the eviction and
management paths, where the global metadata write lock is held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
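
A generic sketch of the described lock ordering, with pthread rwlocks standing in for the OCF primitives: per-bucket locks are only ever taken while the global metadata lock is held for read, so a management or eviction path holding the global lock for write excludes all bucket users at once.

    #include <pthread.h>

    #define HASH_BUCKETS 1024

    static pthread_rwlock_t global_metadata = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_rwlock_t bucket_lock[HASH_BUCKETS];

    static void buckets_init(void)
    {
        for (int i = 0; i < HASH_BUCKETS; i++)
            pthread_rwlock_init(&bucket_lock[i], NULL);
    }

    /* traversing: global read lock, then bucket read lock */
    static void bucket_lock_rd(unsigned hash)
    {
        pthread_rwlock_rdlock(&global_metadata);
        pthread_rwlock_rdlock(&bucket_lock[hash % HASH_BUCKETS]);
    }

    /* mapping a cacheline: global read lock, then bucket write lock */
    static void bucket_lock_wr(unsigned hash)
    {
        pthread_rwlock_rdlock(&global_metadata);
        pthread_rwlock_wrlock(&bucket_lock[hash % HASH_BUCKETS]);
    }

    static void bucket_unlock(unsigned hash)
    {
        pthread_rwlock_unlock(&bucket_lock[hash % HASH_BUCKETS]);
        pthread_rwlock_unlock(&global_metadata);
    }
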
Adam Rutkowski
f34cacf150 Move resume callback to async lock function params (refactoring)
This is a step towards a common async lock interface in OCF.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Jan Musiał
0a9f197b10
Merge pull request #238 from arutk/split_cacheline_lock_trylock
Split cacheline locking implementation into trylock and lock
2019-08-29 09:49:53 +02:00
Adam Rutkowski
6dd1af7df7 Split cacheline locking implementation into trylock and lock
This change refactors the code in order to prepare for removing
the global concurrency lock, which won't be needed after per-bucket
metadata locking is in place.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-08-28 18:54:19 -04:00
Firas Medini
1f979f630b Adding synchronization primitives destroyers
The environment should provide calls for destroying primitives (e.g. env_mutex_destroy()), and OCF should call these functions in its cleanup paths.

Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
2019-08-13 05:13:11 -07:00
Adam Rutkowski
494861c994 Rename cache_concurrency to cache_line_concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 15:32:12 -04:00
Adam Rutkowski
e5bed8825c Move metadata concurrency to a separate file
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 15:32:09 -04:00
Robert Baldyga
f240f81641 Make request structure more compressed
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-06-03 19:19:29 +02:00
Robert Baldyga
ab2fc6d3c3 Rename utils_allocator to utils_realloc
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 13:10:17 +02:00
Robert Baldyga
6cd84476f6 Handle metadata asynchronously
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-03-26 12:49:23 +01:00
Michal Mielewczyk
916f46c893 Code refactoring
Removed a few dead assignments and added variable initialization.
2019-01-16 06:46:00 -05:00
Robert Baldyga
db92083432 Unify req naming convention (rq -> req)
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-12-12 13:36:34 +01:00
Robert Baldyga
a8e1ce8cc5 Initial commit
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-11-29 15:14:21 +01:00