Provide the number of cachelines as a cacheline concurrency
constructor parameter instead of reading it from the cache.
The purpose of this change is to improve testability.
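A minimal sketch of the idea, using hypothetical names rather than
the actual OCF API: the constructor takes the line count directly,
so a unit test can build the object without a full cache instance.

    #include <stdlib.h>

    struct cl_concurrency {
        unsigned num_clines;    /* number of cachelines covered by the locks */
        /* ... per-line lock state ... */
    };

    /* The line count is a constructor argument instead of being read
     * from the cache, so tests can pick an arbitrary size. */
    static struct cl_concurrency *cl_concurrency_init(unsigned num_clines)
    {
        struct cl_concurrency *c = calloc(1, sizeof(*c));

        if (c)
            c->num_clines = num_clines;
        return c;
    }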
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cacheline concurrency functions have their interfaces changed
so that the cacheline concurrency private context is
passed explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.
The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now holds a pointer
to the cache structure (for logging purposes only).
The purpose of this change is to facilitate unit testing.
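Roughly, the signatures change along these lines (illustrative
declarations only, not the exact OCF prototypes):

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_cache;
    typedef uint32_t ocf_cache_line_t;

    struct ocf_cache_line_concurrency {
        struct ocf_cache *cache;    /* kept only for logging */
        /* ... per-line lock state ... */
    };

    /* old: bool try_lock_rd(struct ocf_cache *cache, ocf_cache_line_t line);
     * new: the private context is the explicit first argument and the
     *      cache pointer is not passed at all */
    bool cacheline_try_lock_rd(struct ocf_cache_line_concurrency *c,
            ocf_cache_line_t line);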
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.
The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
* a concurrent fast path will also only try_lock
cachelines, releasing all acquired locks if it fails
to immediately acquire the lock for any cacheline;
* a concurrent slow path is guaranteed to have
precedence in lock acquisition when the conditions
for deadlock occur (both the slow path and the fast path
have acquired some locks required by the other
thread). This is because the fast path thread will
back off (release acquired locks) if any one of the
cacheline locks is not acquired, as illustrated by
the sketch below.
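A sketch of the back-off rule using generic pthread primitives
(illustrative only, not the OCF implementation):

    #include <pthread.h>
    #include <stdbool.h>

    /* Opportunistically try to lock every cacheline of a request.
     * On the first line that cannot be locked immediately, release
     * everything acquired so far and report failure, so the caller
     * falls back to the slow path. The thread never waits while
     * holding a line lock, so it cannot close a deadlock cycle. */
    static bool fastpath_try_lock_all(pthread_mutex_t *locks,
            const unsigned *lines, unsigned count)
    {
        unsigned i;

        for (i = 0; i < count; i++) {
            if (pthread_mutex_trylock(&locks[lines[i]]) != 0) {
                while (i--)
                    pthread_mutex_unlock(&locks[lines[i]]);
                return false;
            }
        }

        return true;
    }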
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
If an ioclass is pinned but has exceeded its occupancy limit, it should
be evicted anyway.
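The resulting eviction condition, sketched with hypothetical fields:

    #include <stdbool.h>

    struct part_state {
        bool pinned;
        unsigned long occupancy;        /* currently used cachelines */
        unsigned long max_occupancy;    /* configured occupancy limit */
    };

    /* A pinned ioclass is normally exempt from eviction, but once it
     * exceeds its occupancy limit it is treated like any other one. */
    static bool part_is_eviction_candidate(const struct part_state *p)
    {
        return !p->pinned || p->occupancy > p->max_occupancy;
    }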
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
.. in order to move primitives intended to be accessed
concurrently into separate CPU cache lines.
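A common way to express this in C (sketch, assuming a 64-byte cache
line; not the exact OCF layout):

    #include <pthread.h>

    #define CACHE_LINE_SIZE 64

    /* Each lock occupies its own CPU cache line so that threads
     * touching different locks do not false-share the same line. */
    struct padded_lock {
        pthread_mutex_t lock;
    } __attribute__((aligned(CACHE_LINE_SIZE)));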
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Divide the single global lock instance into 4 instances to reduce
contention in the multiple-read-locks scenario.
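Sketch of the idea (generic illustration, not the OCF code): readers
are spread across the instances, while an operation that needs full
exclusion must take all of them in a fixed order.

    #include <pthread.h>

    #define NUM_LOCK_INSTANCES 4

    static pthread_rwlock_t locks[NUM_LOCK_INSTANCES] = {
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
    };

    /* Read-lock traffic is distributed over the instances (here by a
     * caller-provided index, e.g. CPU id), reducing contention on a
     * single rwlock when many readers run concurrently. */
    static void global_lock_rd(unsigned idx)
    {
        pthread_rwlock_rdlock(&locks[idx % NUM_LOCK_INSTANCES]);
    }

    /* A writer needing global exclusion takes every instance, always
     * in the same order to avoid deadlock between writers. */
    static void global_lock_wr(void)
    {
        unsigned i;

        for (i = 0; i < NUM_LOCK_INSTANCES; i++)
            pthread_rwlock_wrlock(&locks[i]);
    }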
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
hold the metadata shared global lock ("naked") vs the ones
which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and lba ("cline"),
or an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
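Example prototypes following the scheme (names derived from the rules
above; parameter lists are illustrative):

    /* whole request, "prot": takes the global metadata lock itself */
    void ocf_hb_req_prot_lock_rd(struct ocf_request *req);
    void ocf_hb_req_prot_unlock_rd(struct ocf_request *req);

    /* single hash bucket id, "naked": caller already holds the shared
     * global metadata lock */
    void ocf_hb_id_naked_lock_wr(struct ocf_metadata_lock *ml,
            ocf_cache_line_t hash);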
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cachelines in the top 50% of the LRU list are not promoted
to the list head upon access. Only after a cacheline drops into
the bottom 50% is it considered a candidate for promotion to
the list head.
The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.
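Sketch of the promotion rule (generic illustration, not the OCF data
structures): each element remembers whether it currently sits in the
hot top half of the list.

    #include <stdbool.h>

    struct lru_elem {
        bool hot;   /* true if the element is in the top 50% of the list */
        /* prev/next pointers omitted */
    };

    /* On access, a hot element is left in place; only an element that
     * has drifted into the bottom half is moved back to the list head
     * (and re-marked hot), so hot cachelines avoid list surgery on
     * every access. */
    static bool should_promote(const struct lru_elem *e)
    {
        return !e->hot;
    }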
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Overflowed partitions now take precedence over others during
eviction, regardless of IO class priorities.
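Illustrative selection order (hypothetical helpers and fields): an
overflowed partition is picked first, and only if none is overflowed
does the usual priority-based selection apply.

    struct partition {
        unsigned long occupancy;
        unsigned long max_occupancy;
        int priority;
    };

    /* hypothetical priority-ordered fallback */
    struct partition *pick_by_priority(struct partition *parts, unsigned count);

    static struct partition *pick_eviction_target(struct partition *parts,
            unsigned count)
    {
        unsigned i;

        /* any partition above its occupancy limit wins, regardless of
         * its IO class priority */
        for (i = 0; i < count; i++) {
            if (parts[i].occupancy > parts[i].max_occupancy)
                return &parts[i];
        }

        return pick_by_priority(parts, count);
    }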
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
If the request is a hit, simply try to acquire the cachelines instead of
verifying that the target partition's size is not exceeded.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
If there is any dirty data on the cache associated with the removed
core, we must flush the collision metadata after removing the core
to make the metadata persistent in case of a dirty shutdown.
This fixes the problem where the recovery procedure erroneously
interprets cache lines that belonged to the removed core as valid
ones.
This also fixes the problem where, after removing a core containing
dirty data, another core is added, and the recovery procedure
following a dirty shutdown assigns cache lines from the removed core
to the new one, effectively leading to data corruption.
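A rough sketch of the resulting removal flow (hypothetical helper
names, not the actual OCF calls):

    #include <stdbool.h>

    struct cache;
    struct core;

    /* hypothetical helpers */
    bool core_has_dirty_data(struct core *core);
    void detach_core(struct cache *cache, struct core *core);
    int flush_collision_metadata(struct cache *cache);

    /* After detaching the core, persist the collision section if the
     * core had dirty cachelines, so that a dirty-shutdown recovery
     * cannot treat the removed core's lines as valid or hand them to
     * a newly added core. */
    static int remove_core(struct cache *cache, struct core *core)
    {
        bool had_dirty = core_has_dirty_data(core);

        detach_core(cache, core);

        return had_dirty ? flush_collision_metadata(cache) : 0;
    }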
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>