When IO queues are used for parallelized management operations,
e.g. changing the cleaning policy, a deadlock may occur because the
global metadata lock interferes with taking requests from the IO
queue, as both might run on the same thread. As a workaround, using
a management queue dedicated to such operations eliminates this
problem.
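A minimal sketch of the workaround, assuming the adapter provides
its own queue ops (mngt_queue_kick()/mngt_queue_stop() are
hypothetical callbacks that wake and stop a dedicated thread):

    static const struct ocf_queue_ops mngt_queue_ops = {
            .kick = mngt_queue_kick, /* hypothetical helper */
            .stop = mngt_queue_stop, /* hypothetical helper */
    };

    ocf_queue_t mngt_queue;

    /* management requests no longer share a thread with IO */
    if (!ocf_queue_create(cache, &mngt_queue, &mngt_queue_ops))
            ocf_mngt_cache_set_mngt_queue(cache, mngt_queue);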
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
The flush_data is used by ocf_cleaner_do_flush_data_async(), which
means that callers of ocf_cleaner_fire() are now expected to
guarantee that entries are returned by the getter in sorted order.
Currently the only case when ocf_cleaner_fire() is called directly
is request cleaning, and the request map is sorted by definition.
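For illustration, a hypothetical getter that satisfies the new
contract by walking the request map in order (the callback
prototype and field names are assumptions based on the OCF request
structure):

    /* Hypothetical getter: the request map is sorted by definition,
     * so returning entries in map order preserves the required
     * ordering. */
    static int req_map_getter(ocf_cache_t cache, void *getter_context,
                    uint32_t item, ocf_cache_line_t *line)
    {
            struct ocf_request *req = getter_context;

            if (item >= req->core_line_count)
                    return -1;

            *line = req->map[item].coll_idx;
            return 0;
    }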
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
The cache mngt lock cannot be unlocked from IO completion context
(which is potentially an atomic context), as unlocking may involve
sleeping operations. Modify the cleaner utility to support
rescheduling to queue context before calling the completion, and
update cleaning policies to use that option.
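The shape of the change, with hypothetical names
(cleaner_reschedule_to_queue() is illustrative, not the actual OCF
helper):

    static void cleaner_io_end(struct ocf_request *req, int error)
    {
            req->error = error;
            /* Hypothetical helper: put the request back on its IO
             * queue so the completion runs from queue context,
             * where releasing the cache mngt lock can safely
             * sleep. */
            cleaner_reschedule_to_queue(req);
    }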
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Don't populate cleaning policies during the initialization
procedure; instead require the user to call populate explicitly.
Until now cleaning policies could be populated in two ways:
- implicitly during cleaning policy initialization,
- explicitly by calling populate.
The difference was that the former was single threaded. This patch
removes the functionally redundant and less efficient code.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
The function not only recovers cleaning policy metadata but is also
used to initialize data structures, so a more generic name is
actually more accurate.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
It's required because environments other than the Linux kernel may
not define their own DIV_ROUND_UP. Moving it to env would just
generate boilerplate, because its implementation is trivial and
portable.
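The definition in question is the usual one-liner (the exact
spelling in OCF may differ slightly):

    /* rounds up the quotient, e.g. DIV_ROUND_UP(10, 4) == 3 */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))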
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Since the threshold for the first bucket is always zero and the
condition to exit the loop is never met in the first iteration, it
is safe to start iterating from `1`.
This change is meant to avoid confusing static code analyzers.
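The shape of the loop (names are made up; only the control flow
matters):

    /* thresholds[0] == 0 and dirty_pct >= 0, so the break below
     * can never fire for i == 0; starting from 1 is equivalent. */
    for (i = 1; i < bucket_count; i++) {
            if (dirty_pct < thresholds[i])
                    break;
    }
    bucket = i - 1;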
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Eviction is changed to allow evicting (remapping) cachelines while
holding the hash bucket write lock instead of the global metadata
write lock.
As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.
Evicting without the global metadata write lock is possible thanks
to the fact that remapping is always performed while exclusively
holding the cacheline (read or write) lock. So for a cacheline on
the LRU list we acquire the cacheline lock, safely resolve the hash
and consequently write-lock the hash bucket. Since the cacheline
lock is acquired under the hash bucket lock (everywhere except in
the new eviction implementation), we are certain that no one
acquires the cacheline lock behind our back. Concurrent eviction
threads are eliminated by holding the eviction list lock for the
duration of the critical locking operations.
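The locking order of the new path, sketched with illustrative
helper names:

    /* under the eviction list lock, for an LRU candidate: */
    if (!cacheline_trylock(line))
            return; /* contended - try the next candidate */

    hash = resolve_hash(line); /* safe: cacheline lock is held */
    hash_bucket_lock_wr(hash);

    remap(line); /* exclusive cacheline ownership substitutes for
                  * the global metadata write lock */

    hash_bucket_unlock_wr(hash);
    cacheline_unlock(line);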
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already held.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cacheline concurrency functions have their interface changed so
that the cacheline concurrency private context is passed explicitly
on the parameter list, rather than being taken from
cache->device->concurrency.cache_line.
The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now holds a pointer to
the cache structure (for logging purposes only).
The purpose of this change is to facilitate unit testing.
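An illustrative before/after of one affected signature (exact
prototypes may differ):

    /* before: context taken implicitly from the cache */
    bool ocf_cache_line_try_lock_rd(struct ocf_cache *cache,
                    ocf_cache_line_t line);

    /* after: context passed explicitly, no cache pointer */
    bool ocf_cache_line_try_lock_rd(
                    struct ocf_cache_line_concurrency *c,
                    ocf_cache_line_t line);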
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Divide the single global lock instance into 4 instances to reduce
contention in the multiple read-locks scenario.
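The idea, sketched (the distribution key is illustrative):

    env_rwsem global_lock[4]; /* previously a single instance */

    /* readers spread across the instances, e.g. by CPU: */
    env_rwsem_down_read(&global_lock[cpu % 4]);

    /* a writer must take every instance to exclude all readers: */
    for (i = 0; i < 4; i++)
            env_rwsem_down_write(&global_lock[i]);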
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
hold the shared global metadata lock ("naked") vs the ones that
acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and lba ("cline"),
or an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
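For example, instances generated by the scheme (argument lists are
indicative):

    ocf_hb_req_prot_lock_wr(req); /* takes the global lock itself */
    ocf_hb_id_naked_lock_rd(cache, hash_id); /* caller must already
                                              * hold the shared
                                              * global lock */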
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cleaning policy callbacks are typically called with hash buckets or
cache lines locked. However, cleaning policies maintain structures
which are shared across multiple cache lines (e.g. the ALRU list).
Additional synchronization is required for these structures to
maintain their integrity.
ACP already implements hash bucket locks. This patch adds an ALRU
list mutex.
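A sketch of the addition (struct and field names are illustrative):

    struct alru_context {
            env_mutex list_lock; /* serializes ALRU list access */
    };

    env_mutex_lock(&ctx->list_lock);
    /* add to / remove from the shared ALRU list */
    env_mutex_unlock(&ctx->list_lock);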
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The environment should provide calls for destroying primitives
(e.g. env_mutex_destroy()), and OCF should call these functions in
its cleanup paths.
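E.g. the expected pairing (the error path is simplified):

    env_mutex lock;

    if (env_mutex_init(&lock))
            return -1; /* propagate the error */

    /* ... use the mutex ... */

    env_mutex_destroy(&lock); /* the new cleanup counterpart */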
Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
ocf_request has always been a first-class citizen in OCF, so let's
place it alongside the other essential objects.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
ocf_kick_cleaner() makes it possible to perform cleaning
immediately.
The nop cleaning policy now returns the new 'OCF_CLEANER_DISABLE'
macro, which indicates that cleaning shouldn't be performed. To
enable it back, ocf_kick_cleaner() should be called.
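A sketch of the interplay (the op's prototype and the cleaner field
access follow OCF internals as I understand them; treat the details
as assumptions):

    /* nop policy: signal the cleaner not to reschedule itself */
    static void nop_perform_cleaning(ocf_cache_t cache,
                    ocf_cleaner_end_t cmpl)
    {
            cmpl(&cache->cleaner, OCF_CLEANER_DISABLE);
    }

    /* to run cleaning again, e.g. after changing the policy: */
    ocf_kick_cleaner(cache);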
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
- Queue allocation is now separated from starting the cache.
- Queues can be created and destroyed at runtime (see the sketch
  below).
- All queue ops accept a queue handle instead of a queue id.
- The cache stores queues as a list instead of an array.
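A minimal lifecycle sketch using the public API (queue_ops is the
adapter-defined kick/stop ops structure):

    ocf_queue_t queue;

    /* can be called at any time after the cache is started */
    if (ocf_queue_create(cache, &queue, &queue_ops))
            return -1; /* propagate the error */

    /* ... submit IO through the queue handle ... */

    ocf_queue_put(queue); /* drops the reference; the queue is
                           * destroyed once no longer used */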
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>