.. so that the main part, responsible strictly for mapping
a given LBA to a given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Changing sequential request detection so that a miss request is
recognized as sequential once the needed cachelines have been
evicted and mapped to the request in sequential order.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
.. to make it clear that true means the cleaner must lock
cachelines rather than that the lock is already held.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Allowing the request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where the request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or mapped-from-freelist
(LOOKUP_INSERTED) cachelines.
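A minimal sketch of the intended two-phase flow. Only the LOOKUP_*
names come from this change; the types and helpers below are
illustrative stand-ins, not the actual OCF API:

    #include <stdbool.h>
    #include <stddef.h>

    enum lookup_status {
        LOOKUP_HIT,       /* cacheline was already mapped before this request */
        LOOKUP_INSERTED,  /* cacheline mapped from the freelist */
        LOOKUP_REMAPPED,  /* cacheline obtained by eviction, assigned locked */
    };

    struct map_entry {
        enum lookup_status status;
        bool locked;
    };

    /* Phase 1 (eviction): remapped lines are assigned in a locked state. */
    static void assign_remapped(struct map_entry *e)
    {
        e->status = LOOKUP_REMAPPED;
        e->locked = true;
    }

    /* Phase 2: the request-wide lock call tolerates a partially locked
     * request - it skips entries locked in phase 1 and locks only the
     * LOOKUP_HIT / LOOKUP_INSERTED ones. */
    static void req_lock_remaining(struct map_entry *map, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            if (map[i].locked)
                continue; /* already locked as LOOKUP_REMAPPED */
            map[i].locked = true; /* stand-in for the real cacheline lock */
        }
    }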
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The function returns true if the cacheline is locked (for read
or write) by exactly one entity, with no waiters.
This is useful for eviction. Assuming the caller holds the
hash bucket write lock, holding an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively (instead of locking it
and then checking for an exclusive lock) is not possible.
More specifically, this is the flow for synchronizing a cacheline
remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive
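A hedged sketch of that flow. Only ocf_cache_line_is_locked_exclusively
is named by this commit; the other types and helpers below are trivial
stand-ins so the sequence is self-contained:

    #include <stdbool.h>

    struct cacheline { int holders; };
    struct hash_bucket { int wr_locked; };
    static struct hash_bucket bucket;

    static bool cacheline_trylock_rd(struct cacheline *cl) { cl->holders++; return true; }
    static void cacheline_unlock_rd(struct cacheline *cl) { cl->holders--; }
    static struct hash_bucket *resolve_hash_bucket(struct cacheline *cl) { (void)cl; return &bucket; }
    static void hash_bucket_lock_wr(struct hash_bucket *hb) { hb->wr_locked = 1; }
    static void hash_bucket_unlock_wr(struct hash_bucket *hb) { hb->wr_locked = 0; }
    /* Stand-in for ocf_cache_line_is_locked_exclusively(): true when there
     * is exactly one holder (read or write) and no waiters. */
    static bool cacheline_locked_exclusively(struct cacheline *cl) { return cl->holders == 1; }
    static void remap(struct cacheline *cl) { (void)cl; }

    static bool try_remap(struct cacheline *cl)
    {
        if (!cacheline_trylock_rd(cl))                 /* 1. acquire cacheline lock */
            return false;

        struct hash_bucket *hb = resolve_hash_bucket(cl); /* 2. resolve hash bucket */
        hash_bucket_lock_wr(hb);                       /* 3. write-lock hash bucket */

        bool exclusive = cacheline_locked_exclusively(cl); /* 4. verify exclusivity */
        if (exclusive)
            remap(cl); /* safe: sole lock holder and hash bucket held */

        hash_bucket_unlock_wr(hb);
        cacheline_unlock_rd(cl);
        return exclusive;
    }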
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Reformat the function that calculates how long the cache/core has been dirty.
Update `dirty_for` types in functional tests.
Values stored in the info struct fields (both in cache and core structs)
are unsigned 64-bit ints, but the `dirty_for` values were unsigned 32-bit ints.
Use the existing function to transform the returned value to seconds.
Replace the value stored in metadata with seconds; the replacement
is done only if the old value of the field was zero.
Acquiring a monotonic high-precision timestamp is potentially
slow, so it makes sense to compare the field's value
to zero before calling the atomic function.
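A small sketch of that pattern, assuming a C11 atomic 64-bit field
(names and helpers here are illustrative, not the OCF env API):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical metadata field: seconds when the object first became
     * dirty; 0 means "not dirty". */
    static _Atomic uint64_t dirty_since_sec;

    static uint64_t monotonic_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts); /* potentially slow, call lazily */
        return (uint64_t)ts.tv_sec;
    }

    /* Set the timestamp only when the field is still zero: the cheap load
     * avoids taking the timestamp and doing the atomic update on every write. */
    static void mark_dirty(void)
    {
        if (atomic_load_explicit(&dirty_since_sec, memory_order_relaxed) != 0)
            return;

        uint64_t expected = 0;
        atomic_compare_exchange_strong(&dirty_since_sec, &expected,
                                       monotonic_seconds());
    }

    /* dirty_for = now - dirty_since, in seconds (0 when not dirty). */
    static uint64_t dirty_for_seconds(void)
    {
        uint64_t since = atomic_load(&dirty_since_sec);
        return since ? monotonic_seconds() - since : 0;
    }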
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
Provide the number of cachelines as the cacheline concurrency
constructor param instead of reading it from the cache.
The purpose of this change is to improve testability.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.
The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now has a pointer
to the cache structure (for logging purposes only).
The purpose of this change is to facilitate unit testing.
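An illustrative before/after of the interface shape (the prototypes and
type names below are assumptions for the purpose of the example, not the
exact OCF declarations):

    #include <stdbool.h>

    typedef unsigned int cache_line_t; /* stand-in cacheline index type */
    struct ocf_cache;                  /* opaque; kept for logging only */

    /* Stand-in for the cacheline concurrency private context. */
    struct cl_concurrency_ctx {
        struct ocf_cache *cache; /* used only for logging */
        /* ... lock state ... */
    };

    /* Before (shape): the context was looked up internally via
     *     cache->device->concurrency.cache_line
     *   bool try_lock_rd(struct ocf_cache *cache, cache_line_t line);
     *
     * After (shape): the context is an explicit parameter and the cache
     * pointer is gone, so a unit test can construct the context standalone. */
    bool try_lock_rd(struct cl_concurrency_ctx *ctx, cache_line_t line);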
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The main purpose of cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.
Cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
* a concurrent fast path will also only try_lock
cachelines, releasing all acquired locks if it failed
to immediately acquire the lock for any cacheline
* a concurrent slow path is guaranteed to have
precedence in lock acquisition when the conditions
for deadlock occur (both slowpath and fastpath
have acquired some locks required by the other
thread). This is because the fastpath thread will
back off (release acquired locks) if any one of the
cacheline locks is not acquired.
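A minimal sketch of the fast-path back-off behaviour using generic
try-locks (not the OCF implementation):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Opportunistically try-lock every cacheline of the request. On the
     * first failure release everything already taken and report failure,
     * so this thread never waits while holding locks another thread may
     * need - hence no deadlock and no need for the global lock. */
    static bool fastpath_try_lock_all(pthread_mutex_t **lines, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            if (pthread_mutex_trylock(lines[i]) != 0) {
                /* back off: release the locks acquired so far */
                while (i-- > 0)
                    pthread_mutex_unlock(lines[i]);
                return false; /* caller falls back to the slow path */
            }
        }
        return true;
    }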
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
If an ioclass is pinned but has exceeded its occupancy limit, it should be
evicted anyway.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
.. in order to move primitives intended to be accessed
concurrently into separate CPU cache lines.
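A generic illustration of the technique; the 64-byte line size and the
field names are assumptions, not taken from the OCF code:

    #include <pthread.h>

    #define CACHE_LINE_SIZE 64

    /* Each concurrently-accessed primitive gets its own CPU cache line,
     * so updates to one do not invalidate the line holding the other
     * (no false sharing). */
    struct concurrency_hot_fields {
        pthread_spinlock_t lock_a
            __attribute__((aligned(CACHE_LINE_SIZE)));
        pthread_spinlock_t lock_b
            __attribute__((aligned(CACHE_LINE_SIZE)));
    };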
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Divide the single global lock instance into 4 to reduce contention
in the multiple read-locks scenario.
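A generic sketch of the idea - readers spread over a small array of lock
instances while a writer takes them all (illustrative, not the OCF code):

    #include <pthread.h>

    #define NUM_LOCK_INSTANCES 4

    static pthread_rwlock_t global_lock[NUM_LOCK_INSTANCES] = {
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
        PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
    };

    /* A reader picks one instance (e.g. by CPU id or a key hash), so
     * concurrent readers no longer all bounce a single lock's cacheline. */
    static void read_lock(unsigned key)
    {
        pthread_rwlock_rdlock(&global_lock[key % NUM_LOCK_INSTANCES]);
    }

    static void read_unlock(unsigned key)
    {
        pthread_rwlock_unlock(&global_lock[key % NUM_LOCK_INSTANCES]);
    }

    /* A writer must take every instance to exclude all readers. */
    static void write_lock_all(void)
    {
        for (int i = 0; i < NUM_LOCK_INSTANCES; i++)
            pthread_rwlock_wrlock(&global_lock[i]);
    }

    static void write_unlock_all(void)
    {
        for (int i = NUM_LOCK_INSTANCES - 1; i >= 0; i--)
            pthread_rwlock_unlock(&global_lock[i]);
    }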
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
hold the metadata shared global lock ("naked") vs the ones
which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and lba ("cline"),
and an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
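For example, names expand mechanically from this pattern (illustrative
expansions, not necessarily the complete set of functions):

    ocf_hb_req_prot_lock_wr    - write-lock the hash buckets of a request,
                                 taking the shared global metadata lock itself
    ocf_hb_id_naked_trylock_rd - try to read-lock the bucket with a given id,
                                 with the caller already holding the global lock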
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>