There is one RW lock per hash bucket. A write lock is required
to map a cacheline; a read lock is sufficient for traversal.
Hash bucket locks are always acquired under the global metadata
read lock. This assures mutual exclusion with the eviction and
management paths, where the global metadata write lock is held.
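A minimal sketch of this lock ordering, using pthread rwlocks and illustrative names (struct cache, map_cache_line and traverse_bucket are assumptions, not the actual OCF API):

    #include <pthread.h>

    #define HASH_BUCKETS 1024

    struct cache {
            pthread_rwlock_t global_metadata_lock;
            pthread_rwlock_t hash_bucket_lock[HASH_BUCKETS];
    };

    /* Mapping: global read lock first, then the per-bucket write lock. */
    static void map_cache_line(struct cache *cache, unsigned hash)
    {
            pthread_rwlock_rdlock(&cache->global_metadata_lock);
            pthread_rwlock_wrlock(&cache->hash_bucket_lock[hash]);

            /* ... map cacheline into this hash bucket ... */

            pthread_rwlock_unlock(&cache->hash_bucket_lock[hash]);
            pthread_rwlock_unlock(&cache->global_metadata_lock);
    }

    /* Traversal: the bucket read lock is sufficient, still taken
     * under the global metadata read lock. */
    static void traverse_bucket(struct cache *cache, unsigned hash)
    {
            pthread_rwlock_rdlock(&cache->global_metadata_lock);
            pthread_rwlock_rdlock(&cache->hash_bucket_lock[hash]);

            /* ... read-only traversal of the bucket ... */

            pthread_rwlock_unlock(&cache->hash_bucket_lock[hash]);
            pthread_rwlock_unlock(&cache->global_metadata_lock);
    }

Eviction and management paths take the global metadata lock for writing, which excludes all bucket users at once.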
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The global free cacheline list is divided into a set of freelists, one
per execution context. When attempting to map an address to cache, the
freelist for the current execution context is considered first (fast
path). If the current execution context's freelist is empty (fast path
failure), the mapping function attempts to take a freelist from another
execution context (slow path), as sketched below.
The purpose of this change is to improve concurrency of freelist access.
It is part of the fine-granularity metadata lock implementation.
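A rough sketch of the fast/slow path with illustrative types and names (cache_line_get, freelist_pop etc. are assumptions, not the OCF API; per-list locking is omitted for brevity):

    #include <stddef.h>

    struct cache_line {
            struct cache_line *next;
    };

    struct freelist {
            struct cache_line *head; /* one list per execution context */
    };

    struct cache {
            struct freelist *freelist;
            unsigned ctx_count;
    };

    static struct cache_line *freelist_pop(struct freelist *list)
    {
            struct cache_line *line = list->head;

            if (line)
                    list->head = line->next;

            return line;
    }

    struct cache_line *cache_line_get(struct cache *cache, unsigned ctx)
    {
            struct cache_line *line;
            unsigned i;

            /* Fast path: freelist owned by the current execution context. */
            line = freelist_pop(&cache->freelist[ctx]);
            if (line)
                    return line;

            /* Slow path: steal from another execution context's freelist. */
            for (i = 1; i < cache->ctx_count; i++) {
                    line = freelist_pop(
                                    &cache->freelist[(ctx + i) % cache->ctx_count]);
                    if (line)
                            return line;
            }

            return NULL; /* all freelists empty */
    }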
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Fix a problem introduced by increasing the partition name size to 1024
bytes, which effectively made the superblock bigger than one page. Due
to this, flushing the superblock required more than one I/O, which in
case of a dirty shutdown between these I/Os resulted in a CRC mismatch
and made cache recovery impossible.
Moving parts of the metadata to separate sections makes the superblock
fit in one page, effectively solving the described problem.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Metadata layout (seq / striping) is already encapsulated
in ocf_metadata_layout_iface, so there is no need to duplicate
this logic in freelist initialization.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The environment should provide calls for destroying synchronization primitives (e.g. env_mutex_destroy()), and OCF should call these functions in its cleanup paths.
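For example, assuming a pthread-based environment, the destroy call might simply wrap the platform primitive (a sketch, not the actual env implementation):

    #include <pthread.h>

    typedef pthread_mutex_t env_mutex;

    static inline int env_mutex_init(env_mutex *mutex)
    {
            return pthread_mutex_init(mutex, NULL);
    }

    /* The new cleanup counterpart - called in OCF cleanup paths. */
    static inline void env_mutex_destroy(env_mutex *mutex)
    {
            pthread_mutex_destroy(mutex);
    }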
Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
Don't try to remove invalid cores
If valid cache metadata was read but the environment has changed (e.g. the number of cache lines has changed), OCF, in its error handling path, was trying to close cores that had never been opened. This happened because the cores were marked in cache metadata as added even though no cache insertion operation had taken place.
In this patch the 'added' flag in cache metadata was replaced with the more meaningful 'valid' - it is set if a given core is stored in cache metadata. Moreover, a new 'added' flag was added to core run-time metadata; it is set if a given core is added to the cache.
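The distinction can be pictured roughly like this (illustrative structures and helpers, not the actual OCF metadata layout):

    #include <stdbool.h>

    struct core {
            bool valid;  /* cache metadata: core is stored in metadata */
            bool added;  /* run-time metadata: core was added to cache */
    };

    void close_core(struct core *core); /* assumed helper */

    /* Error handling path: only close cores that were actually opened. */
    static void cleanup_cores(struct core *cores, unsigned count)
    {
            unsigned i;

            for (i = 0; i < count; i++) {
                    if (cores[i].added)
                            close_core(&cores[i]);
            }
    }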
Modify ocf_metadata_hash_func to return consecutive (modulo @hash_table_entries)
values for consecutive @core_line_num. This way it is trivial to sort all
core lines within a single request according to their hash value. This kind
of sorting will be required to assure that future hash bucket metadata locks
are always acquired in a fixed order, eliminating the risk of deadlocks.
This change is part of the fine-granularity metadata lock implementation.
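One possible shape of such a function (a sketch only; the constant and exact formula are assumptions, not the actual OCF implementation):

    #include <stdint.h>

    static unsigned ocf_metadata_hash_func(unsigned core_id,
                    uint64_t core_line_num, unsigned hash_table_entries)
    {
            /* A per-core prime offset separates different cores' lines,
             * while consecutive core_line_num values map to consecutive
             * buckets (modulo hash_table_entries). */
            return (core_line_num + (uint64_t)core_id * 4099) %
                            hash_table_entries;
    }

With this property, sorting the core lines of a request by hash value reduces to handling at most one wrap-around, which makes computing the fixed lock order cheap.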
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.
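Conceptually the filtering boils down to something like this (a toy sketch, not the actual promotion policy):

    #include <stdbool.h>
    #include <stdint.h>

    #define FILTER_SIZE 4096

    static uint8_t hits[FILTER_SIZE]; /* one-bit access history per slot */

    /* Promote only on a repeated access - one-hit wonders never get in. */
    static bool should_promote(uint64_t core_line_num)
    {
            unsigned idx = core_line_num % FILTER_SIZE;

            if (hits[idx]) {
                    hits[idx] = 0;
                    return true;
            }

            hits[idx] = 1;
            return false;
    }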
Signed-off-by: Jan Musial <jan.musial@intel.com>
ocf_request has always been a first-class citizen in OCF,
so let's place it alongside other essential objects.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies the code by allowing programmer intent to be expressed
explicitly, and helps to avoid missing return statements (this patch
fixes at least one bug related to this).
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies cases where we want to call a completion callback
and immediately return from a void-returning function, by allowing
programmer intent to be expressed explicitly. That way we can avoid
cases where a return statement is missing by mistake (this patch
fixes at least one bug related to this).
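The construct could look roughly like this (a sketch; the macro name and shape are assumptions, not necessarily what this patch introduces):

    #include <stdio.h>

    /* Fire the completion callback and leave the current void function
     * in one statement, so the return cannot be forgotten. */
    #define CMPL_RET(cmpl, ...) do { \
            (cmpl)(__VA_ARGS__); \
            return; \
    } while (0)

    static void on_done(int error)
    {
            printf("completed, error=%d\n", error);
    }

    static void do_work(int fail, void (*cmpl)(int))
    {
            if (fail)
                    CMPL_RET(cmpl, -1); /* callback fired, we exit here */

            /* ... actual work ... */
            CMPL_RET(cmpl, 0);
    }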
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Don't reassign the value of cache without any previous use.
It produced warnings when analyzing with scan-build.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
The adapter can opt to take additional steps to securely allocate
memory used by OCF to store cache metadata. Typically this would
involve mlocking pages and zeroing memory before deallocation.
Memory allocated using secure_alloc is not expected to be zeroed
or physically contiguous.
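A possible adapter-side sketch on Linux (the names env_secure_alloc/env_secure_free are illustrative, not a required interface):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    void *env_secure_alloc(size_t size)
    {
            void *ptr = malloc(size);

            /* Pin pages so metadata cannot be swapped out to disk. */
            if (ptr && mlock(ptr, size)) {
                    free(ptr);
                    return NULL;
            }

            return ptr;
    }

    void env_secure_free(void *ptr, size_t size)
    {
            if (!ptr)
                    return;

            /* Wipe before releasing; explicit_bzero() is preferable
             * where available, since a plain memset before free may be
             * optimized away. */
            memset(ptr, 0, size);
            munlock(ptr, size);
            free(ptr);
    }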
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Before async metadata, we used to return 0 when flushing volatile
containers; this way we didn't have to special-case them.
This change brings back that behavior.
Signed-off-by: Jan Musial <jan.musial@intel.com>