When the cache is detached we cannot assume that a management
queue has been created. This change introduces a simplified cache
stop path, performing all the necessary deinit without using
IO queues.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This reverts commit 5ad5c521df.
The reverted change broke setting IO classes with allocation. We use
max as a special value to indicate that the partition should use the
cache-global caching mode.
WO cache mode should not repartition cachelines nor affect cacheline
status in any way when servicing reads. Reading data from the cache
is just an internal optimization. Also, WO cache mode is designed to
be used with partitioning based on write life-time hints, and read
requests do not carry a write life-time hint by definition.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Refactoring ocf_submit_cache_reqs to make it clear that
req->map is accessed at an index derived from the offset argument,
not necessarily starting at 0.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Static code analyzers fail to understand that this variable
is always assigned before use.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Write-only cache mode is similar to writeback, however read
operations do not promote data to the cache. Reads are mostly serviced
by the core device; only dirty sectors are fetched from the cache.
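Below is a self-contained sketch of the routing rule in plain C; the per-sector dirty bitmap and the helper names are illustrative, not the actual OCF data structures:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: in WO mode a read sub-request goes to the cache
     * only for sectors that are dirty (newer than the core); everything
     * else is read straight from the core device, and the cacheline and
     * partition state is left untouched. */
    enum source { FROM_CORE, FROM_CACHE };

    static enum source wo_read_source(const bool *dirty, uint64_t sector)
    {
            return dirty[sector] ? FROM_CACHE : FROM_CORE;
    }

    int main(void)
    {
            bool dirty[8] = { false, false, true, true, false, false, false, true };
            uint64_t s;

            for (s = 0; s < 8; s++) {
                    printf("sector %llu -> %s\n", (unsigned long long)s,
                           wo_read_source(dirty, s) == FROM_CACHE ? "cache" : "core");
            }

            return 0;
    }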
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
ocf_request has always been a first-class citizen in OCF,
so let's place it along with the other essential objects.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
ocf_kick_cleaner() allows performing cleaning immediately.
The nop cleaning policy now returns the new 'OCF_CLEANER_DISABLE' macro,
which indicates that cleaning shouldn't be performed. To re-enable
cleaning, ocf_kick_cleaner() should be called.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
If cfg->core_id is OCF_CORE_MAX and the given UUID matches the UUID of
an existing, not yet opened core, then set cfg->core_id to the id of
that core. This is useful when loading a cache from metadata: if the
user does not store the ids of cores but relies on OCF to assign them,
the user does not have to supply them again on load.
The previous behaviour for the case where cfg->core_id != the id of
the core with the matching UUID is maintained.
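A sketch of the resulting id resolution logic; the core table layout and the helper below are assumptions made for illustration, not the actual OCF code:

    #include <stdbool.h>
    #include <string.h>

    #define CORE_MAX 4096   /* illustrative stand-in for OCF_CORE_MAX */

    struct core_slot { bool added; bool opened; char uuid[64]; };

    /* Return the id the core should be added with. */
    static int resolve_core_id(const struct core_slot *cores, int requested_id,
                    const char *uuid)
    {
            int i;

            for (i = 0; i < CORE_MAX; i++) {
                    if (!cores[i].added || cores[i].opened)
                            continue;
                    if (strcmp(cores[i].uuid, uuid))
                            continue;
                    /* UUID matches an existing, not yet opened core */
                    if (requested_id == CORE_MAX)
                            return i;       /* adopt the id OCF assigned before */
                    return requested_id;    /* explicit id: previous behaviour */
            }

            return requested_id;            /* no UUID match: proceed as before */
    }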
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Passing an int constant directly to the OCF_PL_NEXT_ON_SUCCESS_RET()
macro caused the following compilation error (on GCC 7.4.0):
src/ocf/mngt/ocf_mngt_core.c:599:33: error: ?:
using integer constants in boolean context [-Werror=int-in-bool-context]
error ? -OCF_ERR_WRITE_CACHE : 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
src/ocf/mngt/../utils/utils_pipeline.h:145:6: note:
in definition of macro ‘OCF_PL_NEXT_ON_SUCCESS_RET’
if (error) \
^~~~~
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
As non-interruptible flushes are no longer triggered from OCF
internals, we can get rid of the "interruption" argument and let
adapters handle interruption themselves.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies the code by allowing programmer intent to be expressed
explicitly and helps avoid missing return statements (this patch
fixes at least one bug of that kind).
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
This simplifies cases where we want to call a completion callback
and immediately return from a void-returning function, by allowing
programmer intent to be expressed explicitly. That way we can avoid
cases where a return statement is missing by mistake (this patch
fixes at least one bug of that kind).
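A minimal sketch of the pattern; the macro name is hypothetical and only illustrates the intent described above, not the actual OCF macro:

    #include <stdio.h>

    /* Call the completion callback and return from the enclosing
     * void-returning function in a single statement, so the return
     * cannot be forgotten. */
    #define CMPL_AND_RET(cmpl, priv, error) \
            do { \
                    (cmpl)((priv), (error)); \
                    return; \
            } while (0)

    typedef void (*cmpl_fn)(void *priv, int error);

    static void example_cmpl(void *priv, int error)
    {
            (void)priv;
            printf("completed with error=%d\n", error);
    }

    static void do_operation(cmpl_fn cmpl, void *priv)
    {
            int error = -1;         /* pretend the operation failed */

            if (error)
                    CMPL_AND_RET(cmpl, priv, error);

            /* ... happy path ... */
            CMPL_AND_RET(cmpl, priv, 0);
    }

    int main(void)
    {
            do_operation(example_cmpl, NULL);
            return 0;
    }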
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
The _ocf_mngt_cache_unplug context is now provided by the caller.
This way _ocf_mngt_cache_unplug returns only non-critical (cache write)
errors, allowing the stop/detach operation to always proceed and
optionally finish with an error. This eliminates the need for rolling
back previous stop/detach operations, which might turn out to be
impossible, e.g. under memory pressure.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Don't reassign the value of cache without any previous use.
It produced warnings when analyzing with scan-build.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
The adapter can opt to take additional steps to securely allocate
memory used by OCF to store cache metadata. Typically this would
involve mlocking pages and zeroing memory before deallocation.
Memory allocated using secure_alloc is not expected to be zeroed
or physically contiguous.
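A sketch of what a Linux adapter might do in its secure allocator; the function names are illustrative, and the actual env interface is defined by the adapter:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Allocate metadata memory and pin it so it cannot be swapped out. */
    static void *secure_alloc_sketch(size_t size)
    {
            void *ptr = malloc(size);

            if (ptr && mlock(ptr, size)) {
                    free(ptr);
                    return NULL;
            }
            return ptr;
    }

    /* Zero the buffer before handing it back to the allocator. */
    static void secure_free_sketch(void *ptr, size_t size)
    {
            if (!ptr)
                    return;
            /* a real implementation should use a non-optimizable zeroing
             * primitive such as explicit_bzero() */
            memset(ptr, 0, size);
            munlock(ptr, size);
            free(ptr);
    }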
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Volume close should not close the underlying device until all
I/O objects targeting this volume are deallocated. To achieve this
a reference counter is added to the volume. The counter value
matches the number of I/O objects associated with the volume. The
counter is frozen when the volume is closed, blocking allocation
of new I/O objects.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This is useful when the reference counter is initialized in non-zeroed
memory (or assuming the atomic variable is not properly initialized by
memsetting it to zero).
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Due to aggressive security checks in the compiler, 'printf' might be
substituted with '__printf_chk'. However, the substitution does not
differentiate whether the replaced identifier is a library function
call or a field in a structure. By renaming the field we prevent it
from being unintentionally substituted by the preprocessor.
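A minimal illustration of the failure mode; the structure and member names are hypothetical. In some glibc configurations _FORTIFY_SOURCE defines printf as a function-like macro, and macro expansion is purely textual, so a structure field with the same name gets rewritten as well:

    /* With fortification enabled the libc header may contain roughly:
     *
     *     #define printf(...) __printf_chk(__USE_FORTIFY_LEVEL - 1, __VA_ARGS__)
     *
     * The preprocessor does not know that ops->printf(...) is a field
     * access, so it rewrites it to ops->__printf_chk(...), which does
     * not exist as a member. */

    struct logger_ops {
            int (*printf)(const char *fmt, ...);    /* problematic member name */
    };

    static int log_hello(struct logger_ops *ops)
    {
            return ops->printf("hello\n");
    }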
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Errors not related to cache disk I/O failure should force
cache stop to return with an error without deinitializing the cache
instance.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Before async metadata we used to return 0 when flushing volatile
containers; this way we didn't have to special-case them.
This change brings back that behavior.
Signed-off-by: Jan Musial <jan.musial@intel.com>
Core volume I/O must not be queued on the management queue - this would
break the I/O accounting code, resulting in use-after-free errors
after cache detach, core remove, etc.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
There is a risk that, without deinitializing the uuid.data pointer, it
will be freed during the original volume deinit; zeroing the uuid.data
pointer prevents that.
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
This pointer is used to provide optional volume-specific data
from the user down to the bottom volume open callback. volume_params
is provided to OCF in ocf_mngt_cache_device_config.volume_params.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
In order to synchronize management operations with I/O, OCF
maintains in-flight request counters. For example, such ref
counters are used during ocf_mngt_detach to drain requests
accessing cache metadata (cache requests counter) and in
ocf_mngt_flush, where we wait for outstanding requests sent
in write-back mode (dirty requests counter).
Typically I/O threads increment the cache/dirty counter when
creating a request and decrement it on request completion.
The management thread sets an atomic variable to signal the start
of a management operation. I/O threads react to this by changing
the I/O request mode so that the cache/dirty reference counter
is not incremented. As a result the reference counter only keeps
getting decremented. The management thread waits for the counter
to drop to 0 and proceeds with the management operation under the
assumption that no cache/dirty requests are in progress.
This patch introduces a handy utility for this request reference
counting logic. ocf_refcnt_inc / ocf_refcnt_dec are used to
increment/decrement the counter. ocf_refcnt_freeze() makes
subsequent ocf_refcnt_inc() calls return false, indicating that
the counter cannot be incremented at this moment.
ocf_refcnt_register_zero_cb() can be used to asynchronously wait
for the counter to drop to 0.
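Below is a condensed, self-contained sketch of the semantics described above, written with plain C11 atomics; it is an illustration of the idea, not the actual OCF implementation (which also handles freeze/increment races and unfreezing):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct refcnt_sketch {
            atomic_long counter;
            atomic_bool frozen;
            void (*zero_cb)(void *priv);
            void *zero_priv;
    };

    /* I/O path: returns false when frozen, telling the caller to take the
     * path that does not touch the protected resource. */
    static bool refcnt_inc(struct refcnt_sketch *rc)
    {
            if (atomic_load(&rc->frozen))
                    return false;
            atomic_fetch_add(&rc->counter, 1);
            return true;
    }

    /* I/O completion: the last decrement after a freeze fires the zero
     * callback registered by the management thread. */
    static void refcnt_dec(struct refcnt_sketch *rc)
    {
            if (atomic_fetch_sub(&rc->counter, 1) == 1 &&
                            atomic_load(&rc->frozen) && rc->zero_cb)
                    rc->zero_cb(rc->zero_priv);
    }

    /* Management path: stop new users, then wait asynchronously for zero. */
    static void refcnt_freeze(struct refcnt_sketch *rc)
    {
            atomic_store(&rc->frozen, true);
    }

    static void refcnt_register_zero_cb(struct refcnt_sketch *rc,
                    void (*cb)(void *priv), void *priv)
    {
            rc->zero_priv = priv;
            rc->zero_cb = cb;
            if (atomic_load(&rc->counter) == 0)
                    cb(priv);
    }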
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The cache is used during pipeline_destroy, which means that
calling put_cache before destroying the pipeline may result in
accessing memory that was already freed.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
For the flush/purge entry points to be fully asynchronous we still
need to rework the flush mutex and the waiting for outstanding dirty
requests.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Send cleaner IOs with the appropriate queue set.
This solves the issue of the bottom adapter getting NULL in io->io_queue.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
This allows reusing the same step functions, giving them different
parameters on each step.
Additionally, move the pipeline to utils to make it accessible to other
subsystems of OCF (e.g. metadata).
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
NOTE: This is still not real asynchronism. The metadata interfaces
are still not fully asynchronous.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
NOTE: This patch only changes the API so that it pretends to be
asynchronous. Most management operations are still performed
synchronously. The real asynchronism will be introduced in the next
patches.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Instead of calling flush separately for each IO class, it is called after
collecting the number of dirty cache lines defined by the user or after
iterating through all IO classes.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Unlocking the cache and putting the queue are performed in the cleaning
completion, so all cleaning policies have to call the completion.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
PyOCF is a tool written with testing OCF functionality in mind.
It is a Python 3 (version 3.6 required) package which wraps OCF
by providing Python objects in place of OCF objects (volumes, queues,
etc.). A thin layer of translation between OCF objects and PyOCF objects
enables using customized behaviors for OCF primitives by subclassing
PyOCF classes.
This initial version implements only WT and WI modes and a single,
synchronously operating Queue.
TO DO:
- Queues/Cleaner/MetadataUpdater implemented as Python threads
- Loading of caches from PyOCF Volumes (fix bugs in OCF)
- Make sure it works multi-threaded for more sophisticated tests
Co-authored-by: Jan Musial <jan.musial@intel.com>
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
Signed-off-by: Jan Musial <jan.musial@intel.com>
- Add cache trylock and read trylock functions.
- Introduce a new error code -OCF_ERR_NO_LOCK.
- Change trylock functions in env to return this code in case of
lock contention.
[ENV CHANGES REQUIRED]
The following functions should return 0 on success or -OCF_ERR_NO_LOCK
in case of lock contention (a sketch follows the list):
- env_mutex_trylock()
- env_rwsem_up_read_trylock()
- env_rwsem_up_write_trylock()
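As a sketch, a pthreads-based adapter could satisfy the new contract like this; the env_mutex wrapper type is an assumption, and only the return-code convention comes from the list above:

    #include <pthread.h>

    #include "ocf/ocf_err.h"        /* for OCF_ERR_NO_LOCK */

    /* assumed adapter-side wrapper around a pthread mutex */
    typedef struct { pthread_mutex_t m; } env_mutex;

    /* Return 0 on success, -OCF_ERR_NO_LOCK when the lock is contended. */
    static int env_mutex_trylock(env_mutex *mutex)
    {
            return pthread_mutex_trylock(&mutex->m) ? -OCF_ERR_NO_LOCK : 0;
    }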
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
- Queue allocation is now separated from starting the cache.
- Queues can be created and destroyed at runtime (see the usage sketch
  below).
- All queue ops accept a queue handle instead of a queue id.
- The cache stores queues as a list instead of an array.
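A usage sketch under the new API, assuming the ocf_queue_create()/ocf_queue_put() entry points; the exact fields of the ops structure may differ:

    #include "ocf/ocf.h"

    /* Callback bodies left empty for brevity; a real adapter would wake
     * its I/O thread in kick and flush it in stop. */
    static void queue_kick(ocf_queue_t q) { (void)q; }
    static void queue_stop(ocf_queue_t q) { (void)q; }

    static const struct ocf_queue_ops queue_ops = {
            .kick = queue_kick,
            .stop = queue_stop,
    };

    static int create_io_queue(ocf_cache_t cache, ocf_queue_t *queue)
    {
            /* queues are created against a cache handle at runtime,
             * not preallocated by id at cache start */
            return ocf_queue_create(cache, queue, &queue_ops);
    }

    /* ...and dropped when no longer needed: ocf_queue_put(queue); */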
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Instead of switching the write policy to pass-through, the barrier is
raised by incrementing a counter in the ocf_cache_t structure.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
This may be used in logger implementations that need a file
name or descriptor to initialize properly.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Propagate the id of the ocf_queue for discard IO.
This resolves the issue of the bottom adapter always getting an ocf_io
with io_queue = 0, no matter from which queue the call to the bottom
adapter was made.
This is a follow-up on e69894e398.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
It looks like this one occurrence was missed during the change
from DIV_ROUND_UP to OCF_DIV_ROUND_UP.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Propagate the id of the ocf_queue where possible.
This resolves the issue of the bottom adapter always getting an ocf_io
with io_queue = 0, no matter from which queue the call to the bottom
adapter was made.
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Replace assembler code with a C equivalent and slightly simplify
the logic searching for the first free core.
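A sketch of the simplified search in plain C; the core-slot array is an illustrative stand-in for the actual OCF metadata:

    #include <stdbool.h>

    #define CORE_MAX 4096   /* illustrative limit */

    /* Find the lowest unused core slot with a straightforward linear scan;
     * no hand-written assembler bit-scan is needed for this. */
    static int find_first_free_core(const bool *core_added)
    {
            int i;

            for (i = 0; i < CORE_MAX; i++) {
                    if (!core_added[i])
                            return i;
            }

            return -1;      /* no free slot */
    }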
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>