The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.
The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all cachelines without waiting. There is no risk
of deadlock, as:
* a concurrent fast path will also only try_lock
cachelines, releasing all acquired locks if it fails
to immediately acquire the lock for any cacheline
* a concurrent slow path is guaranteed to have
precedence in lock acquisition when conditions
for deadlock occur (both the slow path and the fast
path have acquired some locks required by the other
thread). This is because the fast path thread will
back off (release its acquired locks) if any one of
the cacheline locks is not acquired - see the sketch below.
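Below is a minimal illustrative sketch of this scheme - the names are
hypothetical and real OCF uses its own cacheline lock primitives rather
than pthread mutexes:

    #include <stdbool.h>
    #include <pthread.h>

    struct cline_lock {
        pthread_mutex_t mutex;
    };

    /* Fast path: opportunistically try-lock every cacheline; on the
     * first failure release everything already taken and report
     * failure. It never blocks while holding a lock, so it cannot
     * close a deadlock cycle and needs no global lock. */
    static bool fastpath_lock(struct cline_lock *locks, int count)
    {
        int i;

        for (i = 0; i < count; i++) {
            if (pthread_mutex_trylock(&locks[i].mutex) != 0) {
                /* back off: release all locks acquired so far */
                while (--i >= 0)
                    pthread_mutex_unlock(&locks[i].mutex);
                return false;
            }
        }
        return true;
    }

    /* Slow path: serialized by the global lock, so at most one thread
     * ever waits for cacheline locks, and any competing fast path
     * backs off instead of waiting. */
    static void slowpath_lock(pthread_mutex_t *global,
            struct cline_lock *locks, int count)
    {
        int i;

        pthread_mutex_lock(global);
        for (i = 0; i < count; i++)
            pthread_mutex_lock(&locks[i].mutex);
        pthread_mutex_unlock(global);
    }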
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
If an ioclass is pinned but has exceeded its occupancy limit, it should be
evicted anyway.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
.. in order to move primitives intended to be accessed
concurrently into separate CPU cache lines.
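For illustration, the general technique looks like this (field names are
hypothetical, not the actual OCF layout):

    /* Fields accessed concurrently by different threads are aligned to
     * separate 64-byte CPU cache lines to avoid false sharing. */
    #define CACHE_LINE_SIZE 64

    struct example_concurrency_state {
        unsigned long reader_count
            __attribute__((aligned(CACHE_LINE_SIZE)));
        unsigned long writer_flag
            __attribute__((aligned(CACHE_LINE_SIZE)));
    };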
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Divide the single global lock instance into 4 to reduce contention
in multiple read-lock scenarios.
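A minimal sketch of the striping idea (names are illustrative; OCF's
actual lock type and index selection differ):

    #include <pthread.h>

    #define LOCK_INSTANCES 4

    struct striped_rwlock {
        pthread_rwlock_t lock[LOCK_INSTANCES];
    };

    /* Readers pick one instance (e.g. by CPU id or hash), so
     * concurrent read-locks spread across four locks instead of
     * contending on a single one. */
    static void striped_read_lock(struct striped_rwlock *s, unsigned idx)
    {
        pthread_rwlock_rdlock(&s->lock[idx % LOCK_INSTANCES]);
    }

    /* Writers must take every instance to exclude all readers. */
    static void striped_write_lock_all(struct striped_rwlock *s)
    {
        int i;

        for (i = 0; i < LOCK_INSTANCES; i++)
            pthread_rwlock_wrlock(&s->lock[i]);
    }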
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
hold the metadata shared global lock ("naked") vs the ones
which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and lba ("cline"),
or an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
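A few example declarations following this scheme (parameter lists are
assumed for illustration, not copied from the OCF headers):

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_metadata_lock;
    struct ocf_request;

    /* "naked": the caller already holds the shared global metadata lock */
    void ocf_hb_id_naked_lock_wr(struct ocf_metadata_lock *lock,
            uint32_t hash);

    /* "prot": the function acquires the global lock on its own */
    void ocf_hb_cline_prot_lock_rd(struct ocf_metadata_lock *lock,
            uint32_t core_id, uint64_t core_line);

    /* request-based variants lock all hash buckets touched by a request */
    bool ocf_hb_req_prot_trylock_rd(struct ocf_request *req);
    void ocf_hb_req_prot_unlock_rd(struct ocf_request *req);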
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Cachelines in the hottest 50% of the LRU list are not
promoted to the list head upon access. Only after a
cacheline drops into the bottom (least recently used)
50% is it considered a candidate for promotion to the
list head.
The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.
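A rough sketch of the policy, assuming a per-element "hot" flag and a
hot-element counter (names and bookkeeping are illustrative):

    #include <stdbool.h>

    struct lru_list {
        unsigned num_elems;
        unsigned num_hot;      /* elements in the hot (~50%) portion */
        /* ... head/tail pointers ... */
    };

    struct lru_elem {
        bool hot;              /* set while in the top 50% of the list */
        /* ... prev/next links ... */
    };

    static void lru_on_access(struct lru_list *list, struct lru_elem *elem)
    {
        if (elem->hot) {
            /* Already in the hottest ~50%: skip the move-to-head so
             * hot cachelines do not touch list links on every hit. */
            return;
        }

        /* Dropped into the bottom 50%: move to the list head
         * (link manipulation omitted) and mark hot again. */
        elem->hot = true;
        list->num_hot++;

        /* Keep the hot portion at roughly half of the list by
         * demoting the coldest hot element when needed (omitted). */
        if (list->num_hot > list->num_elems / 2)
            list->num_hot--;
    }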
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Overflowed partitions now take precedence over others during
eviction, regardless of IO class priorities.
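A minimal sketch of the selection order (structures and the fallback
selection are illustrative):

    #include <stdbool.h>

    struct part {
        unsigned long curr_size;
        unsigned long max_size;
        int priority;
    };

    static bool part_is_overflowed(const struct part *p)
    {
        return p->curr_size > p->max_size;
    }

    static struct part *pick_eviction_target(struct part *parts, int count)
    {
        struct part *best = NULL;
        int i;

        /* Any overflowed partition wins, regardless of priority. */
        for (i = 0; i < count; i++) {
            if (part_is_overflowed(&parts[i]))
                return &parts[i];
        }

        /* Otherwise fall back to priority-based selection. */
        for (i = 0; i < count; i++) {
            if (!best || parts[i].priority > best->priority)
                best = &parts[i];
        }
        return best;
    }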
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
If the request is a hit, simply try to acquire the cachelines instead of
first verifying that the target partition's size is not exceeded.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.
This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.
This also fixes the problem where, after removing a core containing dirty
data, another core is added, and then the recovery procedure following a
dirty shutdown assigns cache lines from the removed core to the new one,
effectively leading to data corruption.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches
after reattaching a cache of a different size.
To prevent those errors, min and max should be calculated dynamically.
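A minimal sketch of the intended calculation, assuming min/max are stored
as percentages of the cache size (names are illustrative):

    #include <stdint.h>

    /* Translate the configured percentages into cachelines against the
     * currently attached cache, so the limits stay valid after
     * reattaching a cache of a different size. */
    static uint64_t part_min_cachelines(uint64_t cache_lines_total,
            uint8_t min_percent)
    {
        return cache_lines_total * min_percent / 100;
    }

    static uint64_t part_max_cachelines(uint64_t cache_lines_total,
            uint8_t max_percent)
    {
        return cache_lines_total * max_percent / 100;
    }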
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep the boolean flag
indicating whether some of the request's cachelines are assigned to a
wrong partition.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info.
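A rough sketch of the idea behind both of the above changes (field and
function names are illustrative, not the actual OCF request layout):

    #include <stdbool.h>

    struct req_info {
        unsigned int hit_no;
        unsigned int re_part_no;   /* cachelines needing repartitioning */
    };

    /* Filled in once during request traversal/mapping; the explicit
     * count replaces the old re-part boolean plus a later recount. */
    static void req_info_update(struct req_info *info, bool hit,
            bool wrong_part)
    {
        if (hit)
            info->hit_no++;
        if (wrong_part)
            info->re_part_no++;
    }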
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
If a partition's occupancy limit is reached, cachelines should be evicted
from the request's target partition.
Information on whether eviction from a particular partition should be
triggered is carried as a flag by the request which triggered the eviction.
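A minimal sketch of how such a flag might drive eviction (names are
illustrative, not the actual OCF structures):

    #include <stdbool.h>

    struct request {
        int part_id;
        bool part_evict;   /* evict from the target partition only */
    };

    static void mark_part_eviction(struct request *req,
            unsigned long part_occupancy, unsigned long part_limit)
    {
        req->part_evict = (part_occupancy >= part_limit);
    }

    static void evict_for_request(struct request *req)
    {
        if (req->part_evict) {
            /* evict cachelines from req->part_id (omitted) */
        } else {
            /* evict according to the global policy (omitted) */
        }
    }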
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Moving the metadata implementation out of the obsolete metadata_hash.c
into .c files corresponding to the function declaration header files.
This requires adding a shared header for the metadata implementation,
metadata_internal.h. Some metadata header files did not have
a corresponding .c file - in those cases it is added in this
commit.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Metadata wrapper functions (calling iface->func) in header
files are changed to be declarations only. Hash interface
implementation functions in metadata_hash.c are given
external linkage and are renamed to drop the "hash" prefix.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Locks acquired in ocf_metadata_flush(/load)_all are
acquired only for the duration of queueing the async
service for flush/load; no actual metadata accesses
are performed there.
Also, flush/load all are always performed with the metadata
marked as deinitialized (metadata reference counter frozen),
so no I/O is reading or writing the metadata. The only source
of potential concurrent metadata accesses is other management
operations, which should be synchronized using the management lock.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Initializing each stream with a unique LBA ensures there are no initial
rbtree collisions, and thus helps avoid clustering all the streams
into one big linked list instead of forming a performance-friendly proper
tree structure.
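A minimal sketch of the initialization, assuming streams keyed in an
rbtree by their last LBA (structure and values are illustrative):

    #include <stdint.h>

    #define STREAMS_NUM 256

    struct stream {
        uint64_t last_lba;   /* rbtree key */
        /* ... rbtree node, hit counter ... */
    };

    static void streams_init(struct stream *streams)
    {
        uint64_t i;

        for (i = 0; i < STREAMS_NUM; i++) {
            /* A unique per-stream LBA instead of a common value
             * such as 0, which would hang every stream off a single
             * rbtree key. */
            streams[i].last_lba = UINT64_MAX - i;
        }
    }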
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
After writing the metadata configuration to disk we must
send a flush request to make sure the configuration sections
are committed to non-volatile storage.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Load properties before checking memory needs, and obtain the cache line size
from the context rather than from the cache state.
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
Rather than passing whole structs, supply
_ocf_mngt_calculate_ram_needed() with just the values it actually uses.
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
During the recovery procedure there is no guarantee that the checksums
of runtime sections were flushed correctly before the dirty shutdown.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
WA writes must follow the same two-pass pattern
as WI does. This change modifies the WA engine to default to
WI in case of any miss (either partial or full), not only a
partial miss.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Add a second pass of write invalidate. It is necessary only
if a concurrent I/O inserted the target LBAs into the cache after
the WI request did its traversal. These LBAs might have been
written by the WI request behind the concurrent I/O's back,
effectively making these sectors invalid.
In this case we must update these sectors' metadata to
reflect this. However, we won't know about this until we
traverse the request again - hence ocf_write_wi is called
again with req->wi_second_pass set to indicate that this
is the second pass (the core write should be skipped).
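A simplified sketch of the two-pass flow; only req->wi_second_pass comes
from this change, the remaining helpers are illustrative placeholders
rather than the actual OCF engine API:

    #include <stdbool.h>

    struct request {
        bool wi_second_pass;
    };

    /* Placeholders standing in for the real traversal, metadata update
     * and core I/O code paths. */
    static bool any_mapped_sectors(struct request *req)
    {
        (void)req;
        return false;
    }

    static void invalidate_metadata(struct request *req)
    {
        (void)req;
    }

    static void submit_core_write(struct request *req)
    {
        (void)req;
    }

    static int write_invalidate(struct request *req)
    {
        /* Traverse the request and invalidate whatever is mapped. */
        if (any_mapped_sectors(req))
            invalidate_metadata(req);

        if (req->wi_second_pass) {
            /* Second pass: only fix up metadata for LBAs inserted
             * by concurrent I/O during the first pass; the core
             * write is skipped. */
            return 0;
        }

        submit_core_write(req);

        /* Repeat the traversal with wi_second_pass set to catch
         * concurrently inserted LBAs. In the real engine the core
         * write is asynchronous and the second pass would run from
         * its completion callback. */
        req->wi_second_pass = true;
        return write_invalidate(req);
    }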
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>