Cleaning policy callbacks are typically called with hash buckets or
cache lines locked. However, cleaning policies maintain structures
which are shared across multiple cache lines (e.g. the ALRU list).
Additional synchronization is required for these structures to
maintain integrity.
ACP already implements hash bucket locks. This patch adds an
ALRU list mutex.
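A rough userspace sketch of the idea, with pthreads standing in
for OCF's env_* abstraction (all names below are hypothetical):

    #include <pthread.h>
    #include <stddef.h>

    struct alru_node {
        struct alru_node *prev, *next;
    };

    struct alru_list {
        struct alru_node *head;
        pthread_mutex_t lock; /* guards the entire list */
    };

    /* Caller already holds the cacheline/hash bucket lock; the
     * list mutex adds the cross-cacheline synchronization. */
    static void alru_add_head(struct alru_list *list,
            struct alru_node *node)
    {
        pthread_mutex_lock(&list->lock);
        node->prev = NULL;
        node->next = list->head;
        if (list->head)
            list->head->prev = node;
        list->head = node;
        pthread_mutex_unlock(&list->lock);
    }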
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Adding locks to ensure partition list consistency. The partition
list is typically modified under a cacheline or hash bucket lock.
This is not sufficient synchronization, as adding/removing a
cacheline to/from the partition list affects the state of
neighbouring cachelines as well.
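A minimal sketch of the problem (hypothetical names; pthreads in
place of OCF's env primitives) - removing a node rewrites its
neighbours' link fields, which per-cacheline locking alone does
not protect:

    #include <pthread.h>
    #include <stddef.h>

    struct part_node {
        struct part_node *prev, *next;
    };

    struct part_list {
        struct part_node *head;
        pthread_mutex_t lock; /* guards all nodes' links */
    };

    /* Unlinking touches n->prev and n->next, i.e. the state of
     * neighbouring cachelines - hence the list-wide lock. */
    static void part_list_remove(struct part_list *p,
            struct part_node *n)
    {
        pthread_mutex_lock(&p->lock);
        if (n->prev)
            n->prev->next = n->next;
        else
            p->head = n->next;
        if (n->next)
            n->next->prev = n->prev;
        n->prev = n->next = NULL;
        pthread_mutex_unlock(&p->lock);
    }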
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Adding synchronization around metadata collision segment pages.
This part of the metadata is modified when a cacheline is
mapped/unmapped and when its dirty status changes.
Synchronization at the page level is required on top of the
cacheline and hash bucket locks to ensure that metadata flush
always reads a consistent state when copying an entire collision
table memory page.
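A hedged sketch of the scheme, assuming one read/write lock per
collision page: updaters take it shared (their cacheline and hash
locks already serialize conflicting updates), while flush takes it
exclusive to copy a consistent snapshot. Names are hypothetical:

    #include <pthread.h>
    #include <string.h>

    #define MD_PAGE_SIZE 4096

    struct md_page {
        unsigned char data[MD_PAGE_SIZE];
        pthread_rwlock_t lock; /* one lock per collision page */
    };

    /* Map/unmap and dirty-bit updates: shared access is enough,
     * since cacheline/hash locks already order these writers. */
    static void md_update_entry(struct md_page *pg, size_t off,
            unsigned char val)
    {
        pthread_rwlock_rdlock(&pg->lock);
        pg->data[off] = val;
        pthread_rwlock_unlock(&pg->lock);
    }

    /* Flush: exclusive access so the copied page is consistent. */
    static void md_flush_page(struct md_page *pg, unsigned char *out)
    {
        pthread_rwlock_wrlock(&pg->lock);
        memcpy(out, pg->data, MD_PAGE_SIZE);
        pthread_rwlock_unlock(&pg->lock);
    }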
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The ocf_async_lock may be used in atomic context, thus we need
to replace the synchronization primitives with non-sleeping
variants.
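As an illustration only (hypothetical structures, not the actual
ocf_async_lock internals): a lock usable in atomic context queues
a completion callback instead of sleeping, and guards its state
with a spinlock rather than a mutex:

    #include <pthread.h>

    struct async_waiter {
        void (*cmpl)(void *priv);
        void *priv;
        struct async_waiter *next;
    };

    struct async_lock {
        pthread_spinlock_t lock; /* non-sleeping primitive */
        int busy;
        struct async_waiter *waiters;
    };

    static void async_lock_acquire(struct async_lock *l,
            struct async_waiter *w)
    {
        pthread_spin_lock(&l->lock);
        if (!l->busy) {
            l->busy = 1;
            pthread_spin_unlock(&l->lock);
            w->cmpl(w->priv); /* lock granted immediately */
            return;
        }
        w->next = l->waiters; /* queue the waiter, do not sleep */
        l->waiters = w;
        pthread_spin_unlock(&l->lock);
    }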
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
The hash bucket read/write lock is sufficient to safely attempt
a cacheline trylock/lock. This change removes the global cacheline
lock RW semaphore and moves the cacheline trylock/lock under the
hash bucket read/write lock respectively.
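A minimal sketch of the resulting locking (hypothetical names):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    struct hash_bucket {
        pthread_rwlock_t lock;
    };

    struct cacheline {
        atomic_flag busy;
    };

    /* Cacheline trylock attempted under the hash bucket read
     * lock - no global RW semaphore involved. */
    static bool cacheline_trylock(struct hash_bucket *hb,
            struct cacheline *cl)
    {
        bool locked;

        pthread_rwlock_rdlock(&hb->lock);
        locked = !atomic_flag_test_and_set(&cl->busy);
        pthread_rwlock_unlock(&hb->lock);

        return locked;
    }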
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This temporarily increases the amount of boilerplate code, but
this will be mitigated in the following commits.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
There is one RW lock per hash bucket. The write lock is required
to map a cacheline; the read lock is sufficient for traversal.
Hash bucket locks are always acquired under the global metadata
read lock. This ensures mutual exclusion with the eviction and
management paths, where the global metadata write lock is held.
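The resulting lock ordering, as a rough sketch (hypothetical
names and bucket count):

    #include <pthread.h>

    #define NR_BUCKETS 1024

    struct metadata {
        pthread_rwlock_t global; /* mgmt/eviction take it as writer */
        pthread_rwlock_t bucket[NR_BUCKETS]; /* one per hash bucket */
    };

    /* Traversal: global read lock, then bucket read lock. */
    static void lookup(struct metadata *md, unsigned hash)
    {
        pthread_rwlock_rdlock(&md->global);
        pthread_rwlock_rdlock(&md->bucket[hash % NR_BUCKETS]);
        /* ... traverse the collision chain ... */
        pthread_rwlock_unlock(&md->bucket[hash % NR_BUCKETS]);
        pthread_rwlock_unlock(&md->global);
    }

    /* Mapping modifies the bucket, so it takes the bucket write
     * lock - still under the global read lock, which excludes
     * paths holding the global lock as writer. */
    static void map_cacheline(struct metadata *md, unsigned hash)
    {
        pthread_rwlock_rdlock(&md->global);
        pthread_rwlock_wrlock(&md->bucket[hash % NR_BUCKETS]);
        /* ... insert into the collision chain ... */
        pthread_rwlock_unlock(&md->bucket[hash % NR_BUCKETS]);
        pthread_rwlock_unlock(&md->global);
    }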
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The core is not assigned to the request in the cleaner, so to
update its stats the core has to be retrieved from the mapping.
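Roughly, with hypothetical structure layouts:

    #include <stdint.h>
    #include <stddef.h>

    struct map_info {
        uint32_t core_id;
    };

    struct request {
        void *core; /* NULL in the cleaner path */
        struct map_info *map;
    };

    struct core_stats {
        uint64_t cache_bytes;
    };

    static struct core_stats stats_per_core[64];

    static void account_cleaner_io(struct request *req, unsigned i,
            uint64_t bytes)
    {
        /* req->core is not set in the cleaner, so resolve the
         * core from the mapping entry instead. */
        stats_per_core[req->map[i].core_id].cache_bytes += bytes;
    }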
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Inactive core stats should be calculated and returned to the
adapter in a unified form, just like all other stats.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Since the stats builder is implemented for retrieving cache, core
and ioclass stats, adapters should use it instead.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
The global free cacheline list is divided into a set of freelists,
one per execution context. When attempting to map an address to
the cache, the freelist for the current execution context is
considered first (fast path). If the current execution context's
freelist is empty (fast path failure), the mapping function
attempts to get a free cacheline from another execution context's
freelist (slow path).
The purpose of this change is to improve concurrency in freelist
access. It is part of the fine granularity metadata lock
implementation.
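A sketch of the fast/slow path split (hypothetical names;
spinlocks stand in for OCF's env primitives):

    #include <pthread.h>
    #include <stddef.h>

    #define NR_CTX 8

    struct line {
        struct line *next;
    };

    struct freelist {
        pthread_spinlock_t lock;
        struct line *head;
    };

    static struct freelist freelists[NR_CTX];

    static struct line *freelist_pop(struct freelist *fl)
    {
        struct line *line;

        pthread_spin_lock(&fl->lock);
        line = fl->head;
        if (line)
            fl->head = line->next;
        pthread_spin_unlock(&fl->lock);

        return line;
    }

    static struct line *get_free_line(unsigned ctx)
    {
        /* Fast path: this execution context's own freelist. */
        struct line *line = freelist_pop(&freelists[ctx]);
        unsigned i;

        /* Slow path: try the other contexts' freelists. */
        for (i = 0; !line && i < NR_CTX; i++) {
            if (i != ctx)
                line = freelist_pop(&freelists[i]);
        }

        return line;
    }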
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>