Commit Graph

40 Commits

Jan Musial
60a6da7ee6 Extend alock API with entries_count method
Right now alock assumes that the number of locks taken will equal the
number of core lines. This is not the case in pio, where only parts of
the metadata are under locks. If a pio request overlaps locked and
unlocked metadata sections, its core line count and awaited lock count
will differ. To remedy this discrepancy, an additional method that
returns the count of locks to be taken/waited on is added to the
alock API.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-05-16 16:21:08 +02:00
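
A minimal sketch of the idea, with hypothetical names (these are not
the actual OCF declarations): the alock callback table gains a method
that reports how many locks a request will actually take, so lock
accounting no longer assumes one lock per core line.

    #include <stdbool.h>
    #include <stdint.h>

    struct alock;    /* incomplete types, used only through pointers */
    struct request;

    typedef bool (*alock_line_needs_lock_cb)(struct alock *alock,
            struct request *req, uint32_t index);

    /* New callback: the number of locks that will actually be
     * taken/waited on, which for pio can be less than the number
     * of core lines. */
    typedef uint32_t (*alock_entries_count_cb)(struct alock *alock,
            struct request *req);

    struct alock_cbs {
        alock_line_needs_lock_cb line_needs_lock;
        alock_entries_count_cb entries_count;
    };
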
Rafal Stefanowski
3cc0d07197 License change to be approved by contributors
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-27 12:48:20 +02:00
Adam Rutkowski
4f217b91a5 Remove partition list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Kozlowski Mateusz
367fcbfe4e Update debug prints and methods
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:37 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Kozlowski Mateusz
f49e9d2d6a Save alock callbacks during initialization
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
69c3c6761b Add alock ptr to callback params
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
fae620a070 Add entry abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
9a1646c8a1 Move alock implementation to separate file
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 13:22:29 +02:00
Adam Rutkowski
2cb7270f63 Finish separating cacheline conc and alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:17:06 +02:00
Adam Rutkowski
d22a3ad0e0 Rename cacheline concurrency struct to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:15:43 +02:00
Adam Rutkowski
927bc805fe Rename generic cacheline concurrency to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:14:40 +02:00
Adam Rutkowski
3845da1de8 Rename entry_idx to idx
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
9d94c0b213 Make cacheline concurrency lock implementation more generic
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
fdbdcbb4a6 Rename cb to cmpl in cacheline concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
4634885111 Use request instead of opaque ctx in cacheline concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Jan Musial
f25d9a8e40 Use new non-zeroing allocator APIs
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:38:44 +02:00
Michal Mielewczyk
1d9776481c Shorten allocator name
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-27 08:40:51 +02:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
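
A generic illustration of the technique (not the actual OCF structs):
grouping small fields together instead of interleaving them with
pointer-sized members removes alignment padding.

    #include <stdint.h>
    #include <stdio.h>

    struct unpacked {
        uint8_t flag_a;   /* 1 byte + 7 bytes padding */
        void *ptr;        /* 8 bytes on a 64-bit target */
        uint8_t flag_b;   /* 1 byte + 7 bytes padding */
    };                    /* typically 24 bytes */

    struct packed_fields {
        void *ptr;        /* 8 bytes */
        uint8_t flag_a;   /* small fields grouped together */
        uint8_t flag_b;   /* 1 byte + 6 bytes tail padding */
    };                    /* typically 16 bytes */

    int main(void)
    {
        printf("%zu vs %zu\n", sizeof(struct unpacked),
                sizeof(struct packed_fields));
        return 0;
    }
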
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
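
A hedged sketch of the pattern (hypothetical struct layout and getter
name, not the real OCF code): wrapping the pointer chain in one getter
lets unit tests stub a single function instead of assembling the whole
cache object graph.

    struct ocf_cache_line_concurrency;

    /* Hypothetical layout, for illustration only. */
    struct device {
        struct {
            struct ocf_cache_line_concurrency *cache_line;
        } concurrency;
    };

    struct cache {
        struct device *device;
    };

    static inline struct ocf_cache_line_concurrency *
    cache_line_concurrency(struct cache *cache)
    {
        return cache->device->concurrency.cache_line;
    }
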
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allow request cacheline lock to be called on a partially locked
request. This is going to be useful for upcoming eviction
improvements, where a request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to lock
previously inserted (LOOKUP_HIT) or freelist-mapped (LOOKUP_INSERTED)
cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
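
A simplified sketch of the iterative locking this enables (toy types;
the LOOKUP_* names come from the message, everything else is
hypothetical): entries already locked as LOOKUP_REMAPPED are skipped,
and the standard lock call picks up the rest.

    #include <stdbool.h>

    enum lookup_status { LOOKUP_HIT, LOOKUP_INSERTED, LOOKUP_REMAPPED };

    struct map_entry {
        enum lookup_status status;
        bool locked;   /* set when the line was locked during remap */
    };

    struct request {
        unsigned count;
        struct map_entry *map;
    };

    static void line_lock(struct map_entry *e)
    {
        e->locked = true;
    }

    /* Standard request lock called on a partially locked request:
     * skip LOOKUP_REMAPPED entries locked earlier, lock the rest. */
    static void request_lock_remaining(struct request *req)
    {
        unsigned i;

        for (i = 0; i < req->count; i++) {
            if (req->map[i].status == LOOKUP_REMAPPED && req->map[i].locked)
                continue;
            line_lock(&req->map[i]);
        }
    }
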
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
Function returns true if the cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds
the hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively up front (instead of
locking and then checking for an exclusive lock) is not
possible.

More specifically, this is the flow for synchronizing a
cacheline remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
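
The four steps above, as a toy C sketch (the lock primitives here are
trivial stand-ins, not the OCF implementations):

    #include <stdbool.h>

    struct cline { int readers; int writers; int waiters; };
    struct hash_bucket { int wlocked; };

    static struct hash_bucket g_bucket;

    static bool cline_read_trylock(struct cline *c) { c->readers++; return true; }
    static void cline_read_unlock(struct cline *c) { c->readers--; }
    static struct hash_bucket *resolve_hash(struct cline *c) { (void)c; return &g_bucket; }
    static void hash_write_lock(struct hash_bucket *h) { h->wlocked = 1; }
    static void hash_write_unlock(struct hash_bucket *h) { h->wlocked = 0; }

    /* True iff exactly one holder (read or write) and nobody waiting. */
    static bool cline_is_locked_exclusively(struct cline *c)
    {
        return (c->readers + c->writers == 1) && c->waiters == 0;
    }

    /* Lock the line first: the hash bucket is unknown until it is
     * resolved under the cacheline lock. */
    static bool try_remap(struct cline *c)
    {
        bool ok;

        if (!cline_read_trylock(c))              /* 1. cacheline lock   */
            return false;
        struct hash_bucket *h = resolve_hash(c); /* 2. resolve bucket   */
        hash_write_lock(h);                      /* 3. write-lock it    */
        ok = cline_is_locked_exclusively(c);     /* 4. verify exclusive */
        /* safe to remap the cacheline here when ok */
        hash_write_unlock(h);
        cline_read_unlock(c);
        return ok;
    }
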
Adam Rutkowski
c7fc4fff39 Change cacheline concurrency constructor params
Provide the number of cachelines as a cacheline concurrency
constructor param instead of reading it from the cache.

The purpose of this change is to improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:45 -06:00
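
Roughly what the signature change looks like (illustrative
declarations only, not the real OCF prototypes):

    struct concurrency;
    struct ocf_cache;

    /* Before (hypothetical): the line count was read from the cache. */
    int concurrency_init_old(struct concurrency **self,
            struct ocf_cache *cache);

    /* After (hypothetical): the caller supplies the number of
     * cachelines, so a unit test can pick any size without a fully
     * built cache object. */
    int concurrency_init(struct concurrency **self, unsigned num_lines,
            struct ocf_cache *cache);
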
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

The cache pointer is no longer provided as a parameter to
these functions. The cacheline concurrency context now holds
a pointer to the cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
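
The shape of the interface change, sketched with hypothetical names:

    #include <stdbool.h>

    struct ocf_cache;
    struct cacheline_concurrency;   /* private context */

    /* Before (hypothetical): context dug out of
     * cache->device->concurrency.cache_line on every call. */
    bool line_trylock_wr_old(struct ocf_cache *cache, unsigned line);

    /* After (hypothetical): context passed explicitly; it keeps its
     * own cache pointer internally, used only for logging. */
    bool line_trylock_wr(struct cacheline_concurrency *c, unsigned line);
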
Adam Rutkowski
0f34e46375 Fix error handling in cacheline concurrency init
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:36 -06:00
Adam Rutkowski
d8f25d2742 Skip cacheline concurrency global lock in fast path
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlock when
locking multiple cachelines.

The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
 * a concurrent fast path will also only try_lock
   cachelines, releasing all acquired locks if it fails
   to immediately acquire the lock for any cacheline
 * a concurrent slow path is guaranteed to have
   precedence in lock acquisition when the conditions
   for deadlock occur (both slowpath and fastpath
   have acquired some locks required by the other
   thread). This is because the fastpath thread will
   back off (release acquired locks) if any one of the
   cacheline locks is not acquired.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:28 -06:00
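
A toy version of the backoff scheme (stand-in try-locks; the real
per-line locks are atomic RW locks): no fast-path thread ever waits
while holding a line, which is why the global lock can be skipped.

    #include <stdbool.h>

    /* Toy try-lock over a flag array standing in for per-line locks. */
    static bool line_trylock(bool *locked, unsigned i)
    {
        if (locked[i])
            return false;
        locked[i] = true;
        return true;
    }

    static void line_unlock(bool *locked, unsigned i)
    {
        locked[i] = false;
    }

    /* Fast path: opportunistically try-lock every line; on any
     * failure, back off by releasing everything acquired so far
     * and let the caller fall back to the slow path. */
    static bool fastpath_lock_all(bool *locked, const unsigned *lines,
            unsigned n)
    {
        for (unsigned i = 0; i < n; i++) {
            if (!line_trylock(locked, lines[i])) {
                while (i--)
                    line_unlock(locked, lines[i]);
                return false;
            }
        }
        return true;
    }
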
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Adam Rutkowski
6423c48dfe cacheline concurrency: move allocation outside critical section
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:41:01 -04:00
Adam Rutkowski
07b1f0c064 Replace global concurrency rw spinlock with rw semaphore
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:19:05 -04:00
Adam Rutkowski
09b68297b2 Revert "Optimize cacheline locking in ocf_engine_prepare_clines"
This change introduced a race condition. In some code paths, after
a cacheline trylock failed, the hash bucket lock needed to be upgraded
in order to obtain an asynchronous lock. During the hash bucket lock
upgrade, hash read locks were released and hash write locks were
then obtained. After the read locks were released, a concurrent
thread could obtain the hash bucket locks and modify cacheline
state. The thread upgrading the hash bucket lock would need to
repeat the traversal in order to safely continue.

This reverts commit 30f22d4f47.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-30 23:53:10 -04:00
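
A sketch of the racy window described above (toy helpers, not the OCF
locks): a read-to-write upgrade is not atomic.

    struct bucket { int state; };

    static void bucket_read_unlock(struct bucket *b) { (void)b; }
    static void bucket_write_lock(struct bucket *b)  { (void)b; }

    /* Between the read unlock and the write lock, a concurrent
     * thread may take the bucket locks and modify cacheline state,
     * so anything learned under the read lock is stale and the
     * traversal must be repeated. */
    static void upgrade_bucket_lock(struct bucket *b)
    {
        bucket_read_unlock(b);
        /* <-- window: another thread can lock and mutate here */
        bucket_write_lock(b);
    }
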
Michal Rakowski
b78557a2cc Change env_spinlock_init to non-void function
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Adam Rutkowski
5684b53d9b Adding collision table locks
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-24 22:59:35 -04:00
Adam Rutkowski
30f22d4f47 Optimize cacheline locking in ocf_engine_prepare_clines
A hash bucket read/write lock is sufficient to safely attempt
a cacheline trylock/lock. This change removes the cacheline lock
global RW semaphore and moves the cacheline trylock/lock under
the hash bucket read/write lock respectively.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
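
A toy model of the idea (hypothetical helpers, not the OCF API): with
the bucket lock held, the per-line trylock can be attempted without
any global semaphore.

    #include <stdbool.h>

    struct bucket { int rlocks; };

    static void bucket_read_lock(struct bucket *b)   { b->rlocks++; }
    static void bucket_read_unlock(struct bucket *b) { b->rlocks--; }

    static bool line_trylock(bool *line_locked)
    {
        if (*line_locked)
            return false;
        *line_locked = true;
        return true;
    }

    /* The bucket lock, not a global one, guards the trylock attempt. */
    static bool map_line(struct bucket *b, bool *line_locked)
    {
        bool ok;

        bucket_read_lock(b);
        ok = line_trylock(line_locked);
        bucket_read_unlock(b);
        return ok;
    }
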
Adam Rutkowski
f34cacf150 Move resume callback to async lock function params (refactoring)
This is a step towards a common async lock interface in OCF.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
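
Roughly the shape of the refactoring (illustrative declarations, not
the real prototypes): the resume callback becomes a parameter of the
async lock call instead of living in request state.

    struct request;
    typedef void (*lock_cmpl_cb)(struct request *req, int error);

    /* Before (hypothetical): the resume callback was stashed in the
     * request itself. */
    int request_lock_async_old(struct request *req);

    /* After (hypothetical): the callback is an explicit parameter,
     * matching the shape of other async lock interfaces in OCF. */
    int request_lock_async(struct request *req, lock_cmpl_cb cmpl);
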
Jan Musiał
0a9f197b10 Merge pull request #238 from arutk/split_cacheline_lock_trylock
Split cacheline locking implementation into trylock and lock
2019-08-29 09:49:53 +02:00
Adam Rutkowski
6dd1af7df7 Split cacheline locking implementation into trylock and lock
This change refactors the code in order to prepare for removing
the global concurrency lock, which won't be needed after per-bucket
metadata locking is in place.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-08-28 18:54:19 -04:00
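
The two halves of the split, as hypothetical signatures (illustrative
only):

    #include <stdbool.h>

    struct request;

    /* Non-blocking: locks all of the request's lines or none of them. */
    bool request_trylock(struct request *req);

    /* Waiting variant: may queue the request and complete
     * asynchronously. */
    int request_lock(struct request *req);
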
Firas Medini
1f979f630b Adding synchronization primitives destroyers
Environment should provide calls for destroying primitives (e.g. env_mutex_destroy()) and OCF should call these functions in its cleanup paths.

Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
2019-08-13 05:13:11 -07:00
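
A toy env layer showing the init/destroy pairing (the real env_mutex_*
functions live in the environment adapter; this is only a sketch):

    typedef struct { int initialized; } env_mutex;

    static int env_mutex_init(env_mutex *m)
    {
        m->initialized = 1;
        return 0;
    }

    static void env_mutex_destroy(env_mutex *m)
    {
        m->initialized = 0;
    }

    int main(void)
    {
        env_mutex lock;

        if (env_mutex_init(&lock))
            return 1;
        /* ... use the mutex ... */
        env_mutex_destroy(&lock);   /* release resources on cleanup */
        return 0;
    }
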
Adam Rutkowski
494861c994 Rename cache_concurrency to cache_line_concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 15:32:12 -04:00