Since we do not expect that incrementing the cache's reference counter
during cache init can fail under any condition, the error handling
can be replaced with an assert.
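A minimal sketch of the intent, using standard C and hypothetical helper names rather than the exact OCF refcount API:

    #include <assert.h>
    #include <stdatomic.h>

    struct cache {
        atomic_int refcnt;
    };

    /* Hypothetical increment helper: returns the new counter value. */
    static int cache_refcnt_inc(struct cache *cache)
    {
        return atomic_fetch_add(&cache->refcnt, 1) + 1;
    }

    static void cache_init(struct cache *cache)
    {
        int cnt;

        atomic_init(&cache->refcnt, 0);

        /* During init nothing can legitimately make the increment fail,
         * so a failed increment is a programming error rather than a
         * runtime condition to be propagated to the caller. */
        cnt = cache_refcnt_inc(cache);
        assert(cnt > 0);
        (void)cnt;
    }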
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
pointer type to an array which is 32 characters long (see the sketch below);
**core.py**: Add missing import and modify class' field type
to keep consistency;
**ocf_mngt_core**: Remove local variable 'name';
remove env_vmalloc for 'name' - it is no longer needed;
remove initialization of 'name' - as above;
remove env_vfree for context->cfg.name - the variable is no longer
allocated dynamically;
check if cfg->name exists;
change the goto label from the deleted err_name to the closest one, err_pipeline.
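A simplified sketch of the kind of change described above; the names and the 32-byte size are illustrative stand-ins, not the exact OCF definitions:

    #include <string.h>

    #define CACHE_NAME_SIZE 32   /* illustrative fixed name length */

    /* Before: the config only pointed at caller-owned memory, so the name
     * had to be env_vmalloc'ed and env_vfree'd separately. */
    struct cache_cfg_old {
        const char *name;
    };

    /* After: the name is stored inline, so its lifetime is tied to the
     * config and no separate allocation or free is needed. */
    struct cache_cfg_new {
        char name[CACHE_NAME_SIZE];
    };

    static int cache_cfg_set_name(struct cache_cfg_new *cfg, const char *name)
    {
        if (!name)                             /* "check if cfg->name exists" */
            return -1;
        if (strlen(name) >= sizeof(cfg->name))
            return -1;
        strcpy(cfg->name, name);               /* length validated above */
        return 0;
    }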
Signed-off-by: Slawomir_Jankowski <slawomir.jankowski@intel.com>
It allows modifying and retrieving particular PP params even if the policy
isn't active, and storing the values between cache stop and load.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Cache lock waiters hold a cache refcount. Because of that,
if there were any waiters, deinitialization of the cache
lock on the last put never happened and putting the
cache was effectively impossible.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Cleaning policy callbacks are typically called with hash buckets or
cache lines locked. However, cleaning policies maintain structures
which are shared across multiple cache lines (e.g. the ALRU list).
Additional synchronization is required for these structures to
maintain integrity.
ACP already implements hash bucket locks. This patch adds an
ALRU list mutex.
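A rough sketch of the idea, using pthreads as a stand-in for the env_mutex added by this patch (the names and structures are simplified, not the real ALRU code):

    #include <pthread.h>
    #include <stddef.h>

    /* Illustrative ALRU node; the real OCF structures differ. */
    struct alru_node {
        struct alru_node *prev, *next;
    };

    struct alru_list {
        struct alru_node *head;
        pthread_mutex_t lock;   /* protects the list shared by many cache lines */
    };

    /* The cleaning callback may run with only a hash bucket or a single
     * cache line locked, which does not protect this shared list, so the
     * list takes its own lock. */
    static void alru_add_head(struct alru_list *list, struct alru_node *node)
    {
        pthread_mutex_lock(&list->lock);
        node->prev = NULL;
        node->next = list->head;
        if (list->head)
            list->head->prev = node;
        list->head = node;
        pthread_mutex_unlock(&list->lock);
    }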
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Adding locks to ensure partition list consistency. The partition
list is typically modified under a cacheline or hash bucket lock.
This is not sufficient synchronization, as adding/removing a cacheline
to/from a partition list affects the state of neighbouring cachelines as well.
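A minimal illustration (simplified, hypothetical structures) of why a per-list lock is needed: unlinking a node touches its neighbours' state, which belongs to other cache lines:

    #include <pthread.h>
    #include <stddef.h>

    struct part_node {
        struct part_node *prev, *next;
    };

    struct part_list {
        struct part_node *head;
        pthread_mutex_t lock;   /* stand-in for the partition list lock */
    };

    static void part_list_remove(struct part_list *list, struct part_node *node)
    {
        pthread_mutex_lock(&list->lock);
        /* Rewrites prev/next of the neighbouring nodes, i.e. state of
         * other cache lines, so the removed line's own lock is not enough. */
        if (node->prev)
            node->prev->next = node->next;
        else
            list->head = node->next;
        if (node->next)
            node->next->prev = node->prev;
        node->prev = node->next = NULL;
        pthread_mutex_unlock(&list->lock);
    }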
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Since OCF requires loading a cache with the same name it was stopped with,
it should also allow reading the name from metadata.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
This change allows checking whether the specified cache name is unique. To prevent
adding a cache instance with a duplicate name, the context lock is held until
the name is set.
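A sketch of the uniqueness check under the context lock; the structures and names below are hypothetical simplifications, not the OCF API:

    #include <pthread.h>
    #include <string.h>

    struct cache {
        struct cache *next;
        char name[32];
    };

    struct ctx {
        pthread_mutex_t lock;    /* assumed initialized elsewhere */
        struct cache *caches;
    };

    /* The context lock is held from the uniqueness check until the new
     * cache's name is set, so a racing start cannot slip in a duplicate. */
    static int ctx_set_unique_name(struct ctx *ctx, struct cache *new_cache,
                                   const char *name)
    {
        struct cache *c;
        int ret = 0;

        pthread_mutex_lock(&ctx->lock);
        for (c = ctx->caches; c; c = c->next) {
            if (!strncmp(c->name, name, sizeof(c->name))) {
                ret = -1;   /* name already taken */
                break;
            }
        }
        if (!ret) {
            strncpy(new_cache->name, name, sizeof(new_cache->name) - 1);
            new_cache->name[sizeof(new_cache->name) - 1] = '\0';
        }
        pthread_mutex_unlock(&ctx->lock);

        return ret;
    }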
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Adding synchronization around metadata collision segment pages.
This part of metadata is modified when a cacheline is mapped/unmapped
and when its dirty status changes.
Synchronization at the page level is required on top of the cacheline
and hash bucket locks to ensure that metadata flush always reads a
consistent state when copying an entire collision table memory
page.
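A simplified sketch of page-level synchronization for the collision segment (sizes and layout here are illustrative, not the real OCF metadata format):

    #include <pthread.h>
    #include <stdint.h>
    #include <string.h>

    #define ENTRIES_PER_PAGE 128   /* illustrative */
    #define NUM_PAGES        16

    struct collision_entry { uint64_t core_line; uint8_t dirty; };

    static struct collision_entry collision[NUM_PAGES][ENTRIES_PER_PAGE];
    static pthread_rwlock_t page_lock[NUM_PAGES];  /* one lock per metadata page */

    static void collision_locks_init(void)
    {
        for (int i = 0; i < NUM_PAGES; i++)
            pthread_rwlock_init(&page_lock[i], NULL);
    }

    /* Mapping/unmapping and dirty-bit updates write one entry under the
     * page lock (on top of the cacheline and hash bucket locks)... */
    static void collision_update(unsigned line, uint64_t core_line, int dirty)
    {
        unsigned page = line / ENTRIES_PER_PAGE;
        unsigned idx = line % ENTRIES_PER_PAGE;

        pthread_rwlock_wrlock(&page_lock[page]);
        collision[page][idx].core_line = core_line;
        collision[page][idx].dirty = (uint8_t)dirty;
        pthread_rwlock_unlock(&page_lock[page]);
    }

    /* ...while flush copies the whole page under the same lock, so it never
     * observes a partially updated page. */
    static void collision_flush_page(unsigned page, void *buf)
    {
        pthread_rwlock_rdlock(&page_lock[page]);
        memcpy(buf, collision[page], sizeof(collision[page]));
        pthread_rwlock_unlock(&page_lock[page]);
    }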
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Modifying _ocf_mngt_io_class_configure and _ocf_mngt_io_class_remove
to never return the -OCF_ERR_IO_CLASS_NOT_EXIST error code. This
return code was ignored by the caller anyway. In _ocf_mngt_io_class_remove,
-OCF_ERR_IO_CLASS_NOT_EXIST indicated that the IO class is already
removed, which is not an error. In _ocf_mngt_io_class_configure,
-OCF_ERR_IO_CLASS_NOT_EXIST indicated an empty IO class name, which
is actually invalid input. This change made it possible to remove
the erroneous error handling for the -OCF_ERR_IO_CLASS_NOT_EXIST case in
ocf_mngt_cache_io_classes_configure.
This change fixes IO class configuration.
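A rough sketch of the resulting behaviour, using hypothetical simplified functions rather than the actual _ocf_mngt_io_class_* code:

    #include <stdbool.h>
    #include <string.h>

    #define MAX_IO_CLASSES 33      /* illustrative */
    #define ERR_INVAL      (-22)   /* stand-in for an "invalid input" error code */

    struct io_class { bool configured; char name[32]; };
    static struct io_class io_classes[MAX_IO_CLASSES];

    /* Removing a class that does not exist is success: the state the caller
     * asked for already holds, so no NOT_EXIST error is returned. */
    static int io_class_remove(unsigned id)
    {
        if (id >= MAX_IO_CLASSES)
            return ERR_INVAL;
        io_classes[id].configured = false;
        return 0;
    }

    /* An empty name is invalid input, not "class not found". */
    static int io_class_configure(unsigned id, const char *name)
    {
        if (id >= MAX_IO_CLASSES || !name || !name[0])
            return ERR_INVAL;
        strncpy(io_classes[id].name, name, sizeof(io_classes[id].name) - 1);
        io_classes[id].name[sizeof(io_classes[id].name) - 1] = '\0';
        io_classes[id].configured = true;
        return 0;
    }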
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The ocf_async_lock may be used in atomic context, thus we need
to replace the synchronization primitives with non-sleeping variants.
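A minimal sketch of the idea, with pthread primitives standing in for the env_* ones (which concrete primitives are swapped is an assumption here, not taken from the patch):

    #include <pthread.h>

    struct async_lock {
        pthread_spinlock_t lock;   /* was: a sleeping mutex */
    };

    static int async_lock_init(struct async_lock *alock)
    {
        return pthread_spin_init(&alock->lock, PTHREAD_PROCESS_PRIVATE);
    }

    /* A spinlock never sleeps, so this is safe to call from atomic
     * (non-sleeping) context, unlike a mutex. */
    static void async_lock_waiter_add(struct async_lock *alock)
    {
        pthread_spin_lock(&alock->lock);
        /* ... manipulate the waiter list ... */
        pthread_spin_unlock(&alock->lock);
    }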
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
The hash bucket read/write lock is sufficient to safely attempt
a cacheline trylock/lock. This change removes the global cacheline-lock
RW semaphore and moves cacheline trylock/lock under the
hash bucket read/write lock respectively.
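A simplified sketch of the new locking order, with pthread and C11 atomics standing in for the OCF primitives (the structures are hypothetical):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    struct hash_bucket {
        pthread_rwlock_t lock;   /* assumed initialized elsewhere */
    };

    struct cacheline {
        atomic_flag locked;      /* per-cacheline lock bit, starts cleared */
    };

    /* Holding the hash bucket lock (even for read) pins the bucket's
     * mapping, which is enough to safely attempt the per-cacheline
     * trylock - no global RW semaphore around cacheline locks is needed. */
    static bool cacheline_trylock(struct hash_bucket *bucket, struct cacheline *cl)
    {
        bool ok;

        pthread_rwlock_rdlock(&bucket->lock);
        ok = !atomic_flag_test_and_set(&cl->locked);
        pthread_rwlock_unlock(&bucket->lock);

        return ok;
    }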
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This temporarily increases the amount of boilerplate code, but
this will be mitigated in the following commits.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
There is one RW lock per hash bucket. A write lock is required
to map a cacheline; a read lock is sufficient for traversing.
Hash bucket locks are always acquired under the global metadata
read lock. This ensures mutual exclusion with the eviction and
management paths, where the global metadata write lock is held.
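The lock hierarchy described above, sketched with pthread rwlocks (the bucket count and names are illustrative):

    #include <pthread.h>

    #define HASH_BUCKETS 1024   /* illustrative */

    static pthread_rwlock_t metadata_global_lock = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_rwlock_t hash_bucket_lock[HASH_BUCKETS];

    static void hash_bucket_locks_init(void)
    {
        for (int i = 0; i < HASH_BUCKETS; i++)
            pthread_rwlock_init(&hash_bucket_lock[i], NULL);
    }

    /* Traversing a bucket: global metadata read lock, then bucket read lock. */
    static void hash_bucket_lock_rd(unsigned hash)
    {
        pthread_rwlock_rdlock(&metadata_global_lock);
        pthread_rwlock_rdlock(&hash_bucket_lock[hash]);
    }

    /* Mapping a cacheline: global metadata read lock, then bucket write lock.
     * Eviction and management take the global lock for write, so they are
     * mutually exclusive with any bucket-level activity. */
    static void hash_bucket_lock_wr(unsigned hash)
    {
        pthread_rwlock_rdlock(&metadata_global_lock);
        pthread_rwlock_wrlock(&hash_bucket_lock[hash]);
    }

    static void hash_bucket_unlock(unsigned hash)
    {
        pthread_rwlock_unlock(&hash_bucket_lock[hash]);
        pthread_rwlock_unlock(&metadata_global_lock);
    }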
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The core is not assigned to the request in the cleaner, so to increment its
stats it has to be retrieved from the mapping.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Inactive core stats should be calculated and returned to the adapter in a
unified form, just like all other stats are.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Since the stats builder is implemented for retrieving cache, core and ioclass
stats, adapters should use it instead.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
The global free cacheline list is divided into a set of freelists, one
per execution context. When attempting to map an address to the cache, the
freelist of the current execution context is considered first (fast path).
If the current execution context's freelist is empty (fast path failure),
the mapping function attempts to take a free line from another execution
context's list (slow path).
The purpose of this change is to improve concurrency in freelist access.
It is part of the fine-granularity metadata lock implementation.
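A minimal sketch of the fast/slow path split, with made-up sizes and structures (the real OCF freelists differ):

    #include <pthread.h>

    #define NUM_CTX       4            /* illustrative number of execution contexts */
    #define FREELIST_CAP  64
    #define INVALID_LINE  ((unsigned)-1)

    struct freelist {
        pthread_mutex_t lock;
        unsigned count;
        unsigned lines[FREELIST_CAP];
    };

    static struct freelist freelists[NUM_CTX];

    static void freelists_init(void)
    {
        for (int i = 0; i < NUM_CTX; i++)
            pthread_mutex_init(&freelists[i].lock, NULL);
    }

    static unsigned freelist_pop(struct freelist *fl)
    {
        unsigned line = INVALID_LINE;

        pthread_mutex_lock(&fl->lock);
        if (fl->count)
            line = fl->lines[--fl->count];
        pthread_mutex_unlock(&fl->lock);

        return line;
    }

    /* Fast path: the current execution context's own freelist.
     * Slow path: fall back to the other contexts' freelists. */
    static unsigned get_free_cacheline(unsigned cur_ctx)
    {
        unsigned line = freelist_pop(&freelists[cur_ctx]);

        for (unsigned i = 0; line == INVALID_LINE && i < NUM_CTX; i++) {
            if (i != cur_ctx)
                line = freelist_pop(&freelists[i]);
        }
        return line;
    }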
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Fix a problem introduced by increasing the partition name size to 1024 bytes,
which effectively made the superblock bigger than one page. Due to this,
flushing the superblock required more than one IO, which, in case of a dirty
shutdown between these IOs, resulted in a CRC mismatch and made cache
recovery impossible.
Moving the partition metadata to separate sections makes the superblock fit
in one page, effectively solving the described problem.
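For a rough sense of scale (illustrative numbers, not the exact OCF constants): assuming a 4 KiB metadata page and on the order of 32 partitions, the 1024-byte names alone would take about 32 KiB, so the superblock could no longer be written in a single-page IO.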
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This change refactors the code in order to prepare for removing the
global concurrency lock, which won't be needed once per-bucket
metadata locking is in place.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Metadata layout (seq / striping) is already encapsulated
in ocf_metadata_layout_iface; there is no need to duplicate this logic
in freelist initialization.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The environment should provide calls for destroying synchronization primitives (e.g. env_mutex_destroy()) and OCF should call these functions in its cleanup paths.
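A minimal sketch of the init/destroy pairing, with pthread calls standing in for the env_mutex_init()/env_mutex_destroy() pair:

    #include <pthread.h>

    /* Illustrative object owning a lock. */
    struct object {
        pthread_mutex_t lock;
    };

    static int object_init(struct object *obj)
    {
        return pthread_mutex_init(&obj->lock, NULL);
    }

    static void object_deinit(struct object *obj)
    {
        /* Every primitive created on the init path is destroyed on the
         * matching cleanup path, so the environment can release whatever
         * resources back the primitive. */
        pthread_mutex_destroy(&obj->lock);
    }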
Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
Don't try to remove invalid cores
If valid cache metadata was read but the environment has changed (e.g. the number of cache lines has changed), OCF (in the error handling path) was trying to close cores which were never opened. This happened because cores were marked in cache metadata as added, although no core insert operation had actually taken place.
In this patch the 'added' flag in cache metadata was replaced with the more meaningful 'valid' - it is set if the given core is stored in cache metadata. Moreover, a new 'added' flag was added to the core run-time metadata; it is set if the given core is added to the cache.
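A simplified sketch of the flag split (field and structure names here are illustrative, not the exact OCF metadata layout):

    #include <stdbool.h>

    /* Persistent core metadata: 'valid' means the core entry is stored
     * in cache metadata. */
    struct core_meta_config {
        bool valid;
    };

    /* Run-time core metadata: 'added' means the core was actually added
     * (its volume opened) in this cache instance. */
    struct core_meta_runtime {
        bool added;
    };

    /* The load error path now closes only cores that were really opened,
     * instead of every core that merely appears in metadata. */
    static void close_cores_on_error(struct core_meta_runtime *rt, int count)
    {
        for (int i = 0; i < count; i++) {
            if (rt[i].added) {
                /* close the core volume here */
            }
        }
    }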
If the cleaning policy didn't have an init() function,
'_ocf_mngt_init_instance_load_complete' returned early and the promotion policy
wasn't initialized at all.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
The adapter can provide ops for data type deinitialization if any additional
operations are required during the deinitialization procedure.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>