Since the request carries explicit information about the number of cachelines
to be reparted, there is no need to keep a boolean flag indicating whether some
of the request's cachelines are assigned to a wrong partition.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info.
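A minimal sketch of the idea (field and helper names are illustrative, not
the actual OCF identifiers): traversal counts misplaced cachelines once, and
the engine later consumes the counter instead of recomputing it or keeping a
separate boolean:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative request info - not the actual OCF layout. */
    struct req_info {
        uint32_t re_part_no; /* cachelines mapped to a wrong partition;
                                replaces a boolean "re_part" flag */
    };

    /* During traversal, count misplaced cachelines once... */
    static void note_cacheline_partition(struct req_info *info,
            int cline_part, int target_part)
    {
        if (cline_part != target_part)
            info->re_part_no++;
    }

    /* ...so the engine can later test the counter directly. */
    static bool req_needs_repart(const struct req_info *info)
    {
        return info->re_part_no > 0;
    }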
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
WA write must follow the same two-pass pattern as WI does. This change
modifies the WA engine to default to WI in case of any miss (either partial
or full), not only a partial miss.
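A hedged sketch of the resulting dispatch; the entry point names are
placeholders for the actual engine callbacks:

    #include <stdbool.h>

    struct request;

    extern bool engine_is_hit(struct request *req); /* all sectors cached */
    extern int wi_write(struct request *req);       /* write-invalidate path */
    extern int wa_write_hit(struct request *req);   /* plain WA hit path */

    int wa_write(struct request *req)
    {
        /*
         * Previously only a partial miss fell back to WI; now any
         * miss (partial or full) does, so WA follows the same
         * two-pass pattern as WI.
         */
        if (!engine_is_hit(req))
            return wi_write(req);

        return wa_write_hit(req);
    }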
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Add a second pass of write invalidate. It is necessary only if concurrent
I/O inserted the target LBAs into the cache after the WI request performed
its traversal. These LBAs might have been written by the WI request behind
the concurrent I/O's back, effectively making these sectors invalid. In this
case we must update the sectors' metadata to reflect this. However, we won't
know about this until we traverse the request again - hence ocf_write_wi is
called again with req->wi_second_pass set to indicate that this is the
second pass (the core write should be skipped).
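A hedged sketch of the two-pass control flow; wi_second_pass is the flag
named above, while the remaining identifiers (and the simplified recursion
in place of re-queueing the request) are illustrative:

    #include <stdbool.h>

    /* Illustrative request - only the field this sketch needs. */
    struct request {
        bool wi_second_pass;
    };

    extern void traverse(struct request *req);         /* map LBAs to cachelines */
    extern void invalidate_lines(struct request *req); /* update metadata */
    extern void core_write(struct request *req);

    void write_wi(struct request *req)
    {
        traverse(req);
        invalidate_lines(req);

        if (!req->wi_second_pass) {
            core_write(req);
            /*
             * Concurrent I/O may have inserted the target LBAs after
             * our traversal and then been overwritten by the core
             * write above. Traverse again with wi_second_pass set to
             * invalidate such lines; the core write is skipped the
             * second time around.
             */
            req->wi_second_pass = true;
            write_wi(req);
        }
    }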
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
core_id should be set in this function. The fact that it is missing might
lead to incorrect behaviour, e.g. in case of promotion policy.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
In case of a partial hit the WO engine first reads data for the entire
request address range from the core device. Then it patches this data by
fetching dirty sectors from the cache device.
For an unidentified reason this leads to data corruption in YCSB workload A.
After flushing dirty data and re-loading the cache, the data is correct.
This change modifies the WO read handler to read clean data from the cache
as well. This is not optimal, as the clean sectors are now read twice in
case of a partial hit. For now it seems to be a good enough workaround for
the data corruption problem.
The symptoms, combined with the fact that this change seems to make the
problem go away, indicate that at some point the WB write handler (and/or
special I/O request handlers like discard) puts CAS in a state where the
in-memory metadata wrongly indicates that a sector is clean while in fact
it is dirty, as marked in the on-disk metadata.
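A sketch of the workaround's read pattern, assuming a simplified per-sector
state array; none of these names are the actual OCF API:

    #include <stdbool.h>

    enum sector_state { SEC_MISS, SEC_CLEAN, SEC_DIRTY };

    extern void core_read_range(int first, int last);
    extern void cache_read_range(int first, int last);

    /*
     * After reading the whole range from core, fetch every cached
     * sector (clean as well as dirty) from the cache - previously
     * only SEC_DIRTY runs were fetched.
     */
    void wo_read(const enum sector_state *state, int sec_count)
    {
        int first = -1;

        core_read_range(0, sec_count - 1);

        for (int i = 0; i < sec_count; i++) {
            bool cached = (state[i] != SEC_MISS); /* was: == SEC_DIRTY */

            if (cached && first < 0)
                first = i; /* open a contiguous run */

            if (first >= 0 && (!cached || i == sec_count - 1)) {
                /* close the run and overwrite it with cache data */
                cache_read_range(first, cached ? i : i - 1);
                first = -1;
            }
        }
    }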
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
ocf_engine_push_req_(front|back) must not dereference the req pointer after
putting the request on the queue list and unlocking the queue. At this point
the handler interface may asynchronously pick up the request, handle it and
deallocate it.
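A minimal sketch of the safe pattern, with pthread primitives standing in
for the OCF queue lock:

    #include <pthread.h>

    /* Illustrative queue/request - not the actual OCF structures. */
    struct queue {
        pthread_mutex_t lock;
        struct request *head;
    };

    struct request {
        struct queue *io_queue;
        struct request *next;
    };

    extern void queue_kick(struct queue *q);

    void push_req_front(struct request *req)
    {
        /* Capture everything needed from req BEFORE publishing it. */
        struct queue *q = req->io_queue;

        pthread_mutex_lock(&q->lock);
        req->next = q->head;
        q->head = req;
        pthread_mutex_unlock(&q->lock);

        /*
         * From this point the handler may already have picked the
         * request up, handled it and deallocated it - req must not
         * be dereferenced. Only the local queue pointer is safe.
         */
        queue_kick(q);
    }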
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
During error handling in WB write insert we didn't invalidate the affected
cache lines. Because of that the cache stopped properly (as it's supposed
to), but the cache lines were marked as inserted, which caused occupancy
stats to increase even though nothing was successfully inserted.
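A hedged sketch of the corrected error path; the hook names are
placeholders, not the OCF engine interface:

    struct request;

    extern int map_cachelines(struct request *req);         /* may fail */
    extern int submit_cache_write(struct request *req);     /* may fail */
    extern void invalidate_cachelines(struct request *req); /* drops occupancy */
    extern void complete_req(struct request *req, int error);

    void wb_write_insert(struct request *req)
    {
        int err = map_cachelines(req);

        if (!err)
            err = submit_cache_write(req);

        if (err) {
            /*
             * Without this the lines stay marked as inserted and
             * occupancy statistics grow although nothing was
             * successfully inserted.
             */
            invalidate_cachelines(req);
        }

        complete_req(req, err);
    }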
Signed-off-by: Jan Musial <jan.musial@intel.com>
This change introduced a race condition. In some code paths, after a
cacheline trylock failed, the hash bucket lock needed to be upgraded in
order to obtain an asynchronous lock. During the hash bucket lock upgrade,
the hash read locks were released and then the hash write locks were
obtained. After the read locks were released, a concurrent thread could
obtain the hash bucket locks and modify cacheline state. The thread
upgrading the hash bucket lock would need to repeat the traversal in order
to safely continue.
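A sketch of the problematic upgrade, with a pthread rwlock standing in for
the OCF hash bucket lock; the comment marks the unprotected window:

    #include <pthread.h>
    #include <stdbool.h>

    extern pthread_rwlock_t hash_lock; /* stands in for bucket locks */

    struct request;

    extern bool cacheline_trylock(struct request *req);
    extern void cacheline_lock_async(struct request *req);
    extern void traverse(struct request *req);

    void lock_with_upgrade(struct request *req)
    {
        pthread_rwlock_rdlock(&hash_lock);
        bool locked = cacheline_trylock(req);
        pthread_rwlock_unlock(&hash_lock);

        if (locked)
            return;

        /*
         * The "upgrade" drops the read lock before taking the write
         * lock. In this window a concurrent thread may take the
         * bucket lock and change cacheline state, so the earlier
         * traversal results are stale and must be recomputed under
         * the write lock before it is safe to continue.
         */
        pthread_rwlock_wrlock(&hash_lock);
        traverse(req); /* the re-traversal the reverted code lacked */
        cacheline_lock_async(req);
        pthread_rwlock_unlock(&hash_lock);
    }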
This reverts commit 30f22d4f47.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Adding synchronization around metadata collision segment pages. This part of
the metadata is modified when a cacheline is mapped/unmapped and when its
dirty status changes. Page-level synchronization is required on top of the
cacheline and hash bucket locks to assure that metadata flush always reads a
consistent state when copying an entire collision table memory page.
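One possible arrangement, sketched with pthread rwlocks; names and layout
are illustrative, not the actual OCF metadata code:

    #include <pthread.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define NUM_PAGES 1024

    static unsigned char collision_table[NUM_PAGES][PAGE_SIZE];
    static pthread_rwlock_t page_lock[NUM_PAGES]; /* init at cache start */

    /* Map/unmap and dirty-bit updates lock the page they touch (on
     * top of the cacheline and hash bucket locks they already hold). */
    void collision_entry_update(unsigned page, unsigned off,
            unsigned char value)
    {
        pthread_rwlock_wrlock(&page_lock[page]);
        collision_table[page][off] = value;
        pthread_rwlock_unlock(&page_lock[page]);
    }

    /* Metadata flush copies a whole page; the page lock guarantees
     * the copy is a consistent snapshot. */
    void collision_page_snapshot(unsigned page, unsigned char *dst)
    {
        pthread_rwlock_rdlock(&page_lock[page]);
        memcpy(dst, collision_table[page], PAGE_SIZE);
        pthread_rwlock_unlock(&page_lock[page]);
    }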
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The hash bucket read/write lock is sufficient to safely attempt a cacheline
trylock/lock. This change removes the global cacheline lock RW semaphore and
moves the cacheline trylock/lock under the hash bucket read/write lock
respectively.
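A minimal sketch of the resulting locking order, with pthread primitives
standing in for the OCF locks:

    #include <pthread.h>
    #include <stdbool.h>

    extern pthread_rwlock_t *hash_bucket_lock(unsigned hash);
    extern bool cacheline_trylock_wr(unsigned cline);

    /*
     * Before: a global cacheline RW semaphore was additionally taken
     * around the trylock. After: the hash bucket lock alone protects
     * the attempt.
     */
    bool try_lock_cacheline(unsigned hash, unsigned cline)
    {
        pthread_rwlock_t *l = hash_bucket_lock(hash);
        bool locked;

        pthread_rwlock_rdlock(l); /* read lock suffices for trylock */
        locked = cacheline_trylock_wr(cline);
        pthread_rwlock_unlock(l);

        return locked;
    }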
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This temporarily increases the amount of boilerplate code, but this is going
to be mitigated in the following commits.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The global free cacheline list is divided into a set of freelists, one per
execution context. When attempting to map an address to the cache, the
freelist for the current execution context is considered first (fast path).
If the current execution context's freelist is empty (fast path failure),
the mapping function attempts to get a free cacheline from another execution
context's list (slow path).
The purpose of this change is to improve concurrency of freelist access. It
is part of the fine-granularity metadata lock implementation.
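A hedged sketch of the fast/slow path split; structure and names are
illustrative, not the actual OCF freelist code:

    #include <pthread.h>
    #include <stdbool.h>

    #define NUM_CTX 16

    struct freelist {
        pthread_spinlock_t lock; /* init with pthread_spin_init() */
        unsigned *lines;
        unsigned count;
    };

    static struct freelist ctx_freelist[NUM_CTX];

    extern unsigned current_execution_ctx(void); /* e.g. CPU id % NUM_CTX */

    static bool freelist_pop(struct freelist *fl, unsigned *cline)
    {
        bool ok = false;

        pthread_spin_lock(&fl->lock);
        if (fl->count) {
            *cline = fl->lines[--fl->count];
            ok = true;
        }
        pthread_spin_unlock(&fl->lock);

        return ok;
    }

    bool get_free_cacheline(unsigned *cline)
    {
        unsigned ctx = current_execution_ctx();

        /* Fast path: this context's own freelist - no contention
         * with other execution contexts in the common case. */
        if (freelist_pop(&ctx_freelist[ctx], cline))
            return true;

        /* Slow path: steal from another context's freelist. */
        for (unsigned i = 1; i < NUM_CTX; i++) {
            if (freelist_pop(&ctx_freelist[(ctx + i) % NUM_CTX], cline))
                return true;
        }

        return false; /* all freelists empty */
    }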
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.
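A hedged sketch of the filtering idea only; the actual policy, its data
structures and thresholds may differ:

    #include <stdbool.h>
    #include <stdint.h>

    #define TRACKER_SIZE (1u << 20)
    #define PROMOTE_AFTER 2 /* illustrative threshold */

    static uint8_t req_count[TRACKER_SIZE];

    /* Only admit an address to the cache once it has been requested
     * repeatedly; a one-hit wonder never passes the threshold. */
    bool should_promote(uint64_t core_lba)
    {
        uint32_t slot = (uint32_t)(core_lba % TRACKER_SIZE);

        if (req_count[slot] < PROMOTE_AFTER) {
            req_count[slot]++;
            return false; /* keep it out of the cache for now */
        }

        return true; /* seen repeatedly - worth caching */
    }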
Signed-off-by: Jan Musial <jan.musial@intel.com>
WO cache mode should not repartition cachelines nor affect cacheline status
in any way when servicing a read. Reading data from the cache is just an
internal optimization. Also, WO cache mode is designed to be used with
partitioning based on write life-time hints, and read requests do not carry
a write life-time hint by definition.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Static code analyzers fail to understand that this variable is always
assigned before use.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>