Added an error for invalid cache operations attempted while in passive mode
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Error name correction
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
API changes for passive cache mode
Moved the point where the passive cache error is returned to the API
entry points for flush and set_param
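
A minimal sketch of the intended guard, assuming a hypothetical
ocf_cache_is_passive() helper and error name (the actual identifiers
in OCF may differ):

    /* Sketch only: fail the management op early while in passive mode */
    void ocf_mngt_cache_flush(ocf_cache_t cache,
            ocf_mngt_cache_flush_end_t cmpl, void *priv)
    {
            if (ocf_cache_is_passive(cache)) {
                    /* OCF_ERR_CACHE_IN_PASSIVE is an assumed name */
                    cmpl(cache, priv, -OCF_ERR_CACHE_IN_PASSIVE);
                    return;
            }

            /* ... regular flush path ... */
    }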
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Further API changes for passive cache mode
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Passive API - review changes
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
To prevent the cleaner context from being deinitialized (e.g. while
switching the cleaning policy) while requests are being processed,
access to the cleaner should be protected with a reference counter
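
A sketch of the intended pattern using OCF's refcnt utility
(utils_refcnt.h); the counter placement and the cleaner call are
assumptions made for illustration:

    /* Take a reference before using the cleaner context; back off if
     * the counter is frozen, i.e. deinitialization is in progress. */
    if (!ocf_refcnt_inc(&cache->refcnt.cleaning))
            return; /* cleaner is being torn down - do not touch it */

    run_cleaner_iteration(cache);   /* hypothetical cleaner work */

    ocf_refcnt_dec(&cache->refcnt.cleaning);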
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Making the operation asynchronous allows the refcnt utility to be used
as a synchronization mechanism between processing cachelines and
deinitializing the cleaning policy.
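
Under that scheme, deinitialization freezes the counter and defers the
actual teardown to the zero callback. A sketch, assuming the same
refcnt placement as above and an illustrative teardown helper:

    static void _cleaning_deinit_complete(void *priv)
    {
            ocf_cache_t cache = priv;

            /* no cacheline is being processed anymore - safe to free */
            __deinit_cleaning_policy(cache);    /* illustrative name */
    }

    void deinit_cleaning_async(ocf_cache_t cache)
    {
            ocf_refcnt_freeze(&cache->refcnt.cleaning);
            ocf_refcnt_register_zero_cb(&cache->refcnt.cleaning,
                            _cleaning_deinit_complete, cache);
    }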
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
A new structure, ocf_part, is added to contain all the data common to
both user partitions and the freelist partition: part_runtime and
part_id. ocf_user_part now contains an ocf_part structure as well as a
pointer to the cleaning partition runtime metadata (moved out of
part_runtime) and the user partition config (no change here).
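
In outline, the resulting layout looks roughly like this (field types
are illustrative, following the description above):

    struct ocf_part {
            ocf_part_id_t id;                    /* part_id */
            struct ocf_part_runtime *runtime;    /* part_runtime */
    };

    struct ocf_user_part {
            struct ocf_part part;        /* data common with freelist */
            struct cleaning_policy *clean_pol;   /* moved out of
                                                    part_runtime */
            struct ocf_user_part_config *config; /* unchanged */
    };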
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible thanks
to the fact that remapping is always performed while exclusively
holding the cacheline (read or write) lock. So for a cacheline on
the LRU list we acquire the cacheline lock, safely resolve the hash
and consequently write-lock the hash bucket. Since the cacheline
lock is acquired under the hash bucket lock (everywhere except in
the new eviction implementation), we are certain that no one
acquires the cacheline lock behind our back. Concurrent eviction
threads are eliminated by holding the eviction list lock for the
duration of critical locking operations.
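
The locking order described above, as a sketch (all names here are
illustrative, not the actual OCF API):

    lock(&eviction_list_lock);      /* serialize concurrent evictions */
    cline = lru_peek_victim();
    if (!cacheline_write_trylock(cline)) {
            /* victim busy - move on to the next LRU candidate */
    }
    unlock(&eviction_list_lock);

    hash = resolve_hash(cline);     /* stable under cacheline lock */
    hash_bucket_write_lock(hash);   /* safe: no one can take the
                                       cacheline lock behind our back */
    remap(cline);
    hash_bucket_write_unlock(hash);
    cacheline_write_unlock(cline);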
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
.. to make it clear that true means the cleaner must lock cachelines
rather than that the lock is already being held.
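
Assuming the flag lives in the cleaner attributes (the structure and
field names are assumptions), the clarified meaning reads:

    struct ocf_cleaner_attribs attribs = {
            .lock_cacheline = true, /* cleaner takes cacheline locks
                                       itself; false would mean the
                                       caller already holds them */
            /* ... */
    };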
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
There is no need to constantly hold the global metadata lock while
collecting cachelines to clean. Since we are past freezing the dirty
request counter, we know for sure that the number of dirty cache
lines will not increase. So the worst case is that when the loop
relaxes and releases the lock, a concurrent IO to CAS is serviced in
WT mode, possibly inserting and/or evicting cachelines. This does not
interfere with scanning for dirty cachelines. And the lower layer
will handle synchronization with concurrent I/O by acquiring an
asynchronous read lock on each cleaned cacheline.
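
A sketch of the relaxed scan loop; the lock helpers mirror OCF's
metadata lock API, while the relax period and list handling are
illustrative:

    ocf_metadata_start_exclusive_access(&cache->metadata.lock);

    for (i = 0; i < cache_line_count; i++) {
            if (metadata_test_dirty(cache, i))
                    flush_list_add(i);          /* illustrative */

            if (i > 0 && (i % RELAX_PERIOD) == 0) {
                    /* let concurrent WT I/O in; the number of dirty
                     * lines cannot grow past the frozen counter */
                    ocf_metadata_end_exclusive_access(
                                    &cache->metadata.lock);
                    ocf_metadata_start_exclusive_access(
                                    &cache->metadata.lock);
            }
    }

    ocf_metadata_end_exclusive_access(&cache->metadata.lock);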
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
In the current implementation, in the case of fast media, a flushing
container may starve all concurrently flushing containers due to
continuous rescheduling of the offending requests to the front of the
I/O queue. Pushing requests to the back of the I/O queue ensures FIFO
handling and removes the possibility of starvation.
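
In terms of OCF's engine helpers (assuming push-front/push-back
variants as named here), the change amounts to:

    - ocf_engine_push_req_front(req, false); /* starves other
                                                containers */
    + ocf_engine_push_req_back(req, false);  /* FIFO - fair
                                                scheduling */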
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The lower layer is prepared to handle in-use cachelines by acquiring
an asynchronous read lock. It is very likely that by the time the
cacheline is actually cleaned its lock state has changed. So checking
the lock at the moment of constructing the dirty cachelines list
makes little sense.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Moving _ocf_mngt_flush_containers outside the global metadata
critical section. All this function does is sort the core lines and
add a queue request.

This fixes stalls reported by the Linux scheduler due to IO threads
waiting on the global metadata RW semaphore for several minutes.
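
Schematically (the lock helper names are illustrative), the call
moves past the unlock:

    ocf_metadata_lock(cache);
    /* ... snapshot dirty cachelines into flush containers ... */
    ocf_metadata_unlock(cache);

    /* needs no metadata lock: only sorts core lines and queues a
     * request, so IO threads no longer block on the RW semaphore */
    _ocf_mngt_flush_containers(context, fctbl, fcnum, complete);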
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
ocf_request has always been a first-class citizen in OCF, so let's
place it along with the other essential objects.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
As non-interruptible flushes are no longer triggered from OCF
internals, we can get rid of the "interruption" argument and let
adapters handle it themselves.
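
A sketch of the resulting API change (the exact prototype may
differ):

    -void ocf_mngt_cache_flush(ocf_cache_t cache, bool interruption,
    -                ocf_mngt_cache_flush_end_t cmpl, void *priv);
    +void ocf_mngt_cache_flush(ocf_cache_t cache,
    +                ocf_mngt_cache_flush_end_t cmpl, void *priv);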
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies the code by allowing the programmer's intent to be
expressed explicitly and helps to avoid missing return statements
(this patch fixes at least one bug related to this).
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies cases where we want to call a completion callback and
immediately return from a void-returning function, by allowing the
programmer's intent to be expressed explicitly. That way we can avoid
cases where a return statement is missing by mistake (this patch
fixes at least one bug related to this).
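
A sketch of such a helper, assuming the GNU C statement-expression
extension that OCF builds with; note that cmpl is picked up from the
enclosing scope:

    /* call the completion callback and return from the void caller */
    #define OCF_CMPL_RET(args...) ({ \
            cmpl(args); \
            return; \
    })

    /* usage - replaces "cmpl(cache, priv, err); return;" pairs: */
    if (err)
            OCF_CMPL_RET(cache, priv, err);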
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
For the flush/purge entry points to be fully asynchronous we still
need to rework the flush mutex and the waiting for outstanding dirty
requests.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
NOTE: This is still not truly asynchronous - the metadata interfaces
are not fully asynchronous yet.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
NOTE: This patch only changes the API so that it appears
asynchronous. Most of the management operations are still performed
synchronously. The real asynchronism will be introduced in the next
patches.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Instead of switching the write policy to pass-through, the barrier is
raised by incrementing a counter in the ocf_cache_t structure.
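
A sketch of the barrier, with an assumed counter field and an assumed
redirect on the write path:

    /* raising / lowering the barrier around flush */
    env_atomic_inc(&cache->flush_in_progress);  /* assumed field */
    /* ... flush dirty data ... */
    env_atomic_dec(&cache->flush_in_progress);

    /* write path: while the barrier is raised, service writes so
     * that no new dirty data is produced (mechanism illustrative) */
    if (env_atomic_read(&cache->flush_in_progress))
            req->force_wt = true;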
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>