Commit Graph

169 Commits

Author SHA1 Message Date
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
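An illustration of the packing idea from the commit above (the struct layout is invented for the example, not taken from OCF): grouping small fields together lets them share padding instead of scattering it between larger members.

```c
#include <stdint.h>

/* Before: each 1-byte flag sits between 8-byte members, costing
 * 7 bytes of padding per flag (sizeof == 48 on typical ABIs). */
struct before {
	uint64_t a;
	uint8_t flag1;
	uint64_t b;
	uint8_t flag2;
	uint64_t c;
	uint8_t flag3;
};

/* After: flags packed side by side; the struct shrinks to 32 bytes
 * and the hot fields fit in fewer cachelines. */
struct after {
	uint64_t a;
	uint64_t b;
	uint64_t c;
	uint8_t flag1;
	uint8_t flag2;
	uint8_t flag3;
};
```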
Robert Baldyga
8020e7fd67
Merge pull request #457 from Ostrokrzew/false_stats
Fix broken 'dirty_for' stats
2021-03-18 10:24:02 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allow evicting (remapping) cachelines while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve its hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except in the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Concurrent eviction threads are eliminated by holding the eviction
list lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
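A minimal sketch of the locking order the commit above describes. All types and helpers here are simplified stand-ins, not the actual OCF API; the point is only the ordering: eviction list lock, then cacheline lock, then hash resolution, then hash bucket write lock.

```c
#include <stdbool.h>
#include <stddef.h>

struct cache;
struct cacheline;

extern void evict_list_lock(struct cache *c);
extern void evict_list_unlock(struct cache *c);
extern struct cacheline *lru_tail(struct cache *c);
extern bool cacheline_try_lock_wr(struct cacheline *cl);
extern void cacheline_unlock_wr(struct cacheline *cl);
extern unsigned resolve_hash(struct cache *c, struct cacheline *cl);
extern void hash_bucket_lock_wr(struct cache *c, unsigned hash);
extern void hash_bucket_unlock_wr(struct cache *c, unsigned hash);
extern void remap_to_request(struct cache *c, struct cacheline *cl);

/* Evict one cacheline without the global metadata write lock. */
static void evict_one(struct cache *c)
{
	/* Holding the eviction list lock eliminates concurrent evictors. */
	evict_list_lock(c);

	struct cacheline *cl = lru_tail(c);

	/* The cacheline lock is taken before the hash bucket lock - the
	 * reverse of the usual order - which is safe only because every
	 * other path acquires the cacheline lock under the hash bucket
	 * lock, so nobody can take it behind our back. */
	if (cl != NULL && cacheline_try_lock_wr(cl)) {
		/* Hash is stable while we hold the cacheline lock. */
		unsigned hash = resolve_hash(c, cl);

		hash_bucket_lock_wr(c, hash);
		remap_to_request(c, cl);
		hash_bucket_unlock_wr(c, hash);
		cacheline_unlock_wr(cl);
	}

	evict_list_unlock(c);
}
```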
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
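A sketch of what such a getter could look like, with illustrative struct layouts mirroring the cache->device->concurrency.cache_line chain named in the commit title; unit tests can then stub or wrap this one symbol instead of building a full cache object.

```c
/* All layouts here are minimal stand-ins for illustration. */
struct ocf_cache_line_concurrency;

struct ocf_concurrency {
	struct ocf_cache_line_concurrency *cache_line;
};

struct ocf_cache_device {
	struct ocf_concurrency concurrency;
};

struct ocf_cache {
	struct ocf_cache_device *device;
};

/* The getter hides the three-pointer chain behind one symbol. */
struct ocf_cache_line_concurrency *
ocf_cache_line_concurrency(struct ocf_cache *cache)
{
	return cache->device->concurrency.cache_line;
}
```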
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
c40e36456b Add missing hash bucket lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
69c0f20b6e Remove global metadata lock from cleaner metadata update step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Slawomir Jankowski
eeda1f3f0f Unify type of dirty_for in info structs
Reformat the function that calculates how long a cache/core has been dirty
Update `dirty_for` types in functional tests

Values stored in info struct fields (both in cache and core structs)
are unsigned 64-bit ints, but the `dirty_for`s were unsigned 32-bit ints.

Use the existing function to transform the returned value to seconds.
The seconds stored in metadata are replaced only if the old value of
the field was equal to zero: acquiring a monotonic high-precision
timestamp is potentially slow, and it makes sense to compare the
field's value to zero before calling the atomic function.

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-25 14:51:53 +01:00
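The compare-to-zero-before-atomic pattern from the commit above, sketched with C11 atomics and a hypothetical clock helper:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for the monotonic, high-precision (and potentially slow)
 * clock read. */
extern uint64_t monotonic_seconds(void);

/* Stamp the moment an entity first became dirty. The cheap relaxed
 * load filters out the common already-stamped case before paying for
 * the clock read and the atomic compare-exchange. */
static void mark_dirty_since(_Atomic uint64_t *dirty_since)
{
	if (atomic_load_explicit(dirty_since, memory_order_relaxed) != 0)
		return;

	uint64_t expected = 0;
	atomic_compare_exchange_strong(dirty_since, &expected,
			monotonic_seconds());
}
```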
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now has a pointer
to the cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
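An illustrative before/after of the signature change (parameter lists trimmed and names hedged; they are not the exact OCF prototypes):

```c
#include <stdint.h>

struct ocf_cache;
struct ocf_cache_line_concurrency; /* private ctx; keeps a cache
				    * pointer internally, for logs only */

/* Before: the context was dug out of
 * cache->device->concurrency.cache_line inside every call. */
int cacheline_try_lock_rd_old(struct ocf_cache *cache, uint32_t line);

/* After: the private context is an explicit parameter and the cache
 * pointer disappears from the signature, so unit tests can construct
 * a context without a full cache object. */
int cacheline_try_lock_rd(struct ocf_cache_line_concurrency *ctx,
		uint32_t line);
```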
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update the cacheline's timestamp each time it is written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Adam Rutkowski
012438c279 Add missing collision page lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Robert Baldyga
fd88c2c3a4
Merge pull request #436 from mmichal10/metadata-assert
Metadata assert
2021-01-08 10:15:08 +01:00
Michal Mielewczyk
d0225ef1cb Prevent uint32_t overflow
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 02:45:05 -05:00
Robert Baldyga
ea1fc7a6d4 seq-cutoff: Don't modify node list under read lock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-05 19:46:37 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches
after reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
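A sketch of the dynamic calculation with illustrative field names: storing percentages means the cacheline limits are always derived from the current cache size, so they remain valid after reattaching a cache of a different size.

```c
#include <stdint.h>

struct ioclass_config {
	uint32_t min_size; /* percent of cache, 0..100 */
	uint32_t max_size; /* percent of cache, 0..100 */
};

static uint64_t ioclass_min_cachelines(const struct ioclass_config *cfg,
		uint64_t cache_cachelines)
{
	return cache_cachelines * cfg->min_size / 100;
}

static uint64_t ioclass_max_cachelines(const struct ioclass_config *cfg,
		uint64_t cache_cachelines)
{
	return cache_cachelines * cfg->max_size / 100;
}
```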
Michal Mielewczyk
05f3c22dad Occupancy per ioclass utilities
Functions to check space availability and to manage cacheline reservation

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:18:47 -05:00
Michal Mielewczyk
600bd1d859 Access partition's metadata counters via functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
7af386681d
Merge pull request #418 from robertbaldyga/inc-dep-env-headers
Remove dependency to full ocf_env.h from inc/ headers
2020-11-30 17:16:32 +01:00
Robert Baldyga
b8735f6517 rbtree: Fix swapping out-of-tree node with root
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
Robert Baldyga
c8e7e0053c Remove dependency to full ocf_env.h from inc/ headers
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-19 13:07:16 +01:00
Robert Baldyga
8b03271626 rbtree: Introduce list find callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
0ae4f4b5b2 rbtree: Add equal nodes to linked list
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
50c4de0495 rbtree: Make swap resistant to nodes outside the tree
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:20:45 +01:00
Robert Baldyga
694224971c rbtree: Replace spaces with tabs
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-09 17:32:03 +01:00
Robert Baldyga
1c9312842a
Merge pull request #369 from rafalste/copyright_update
Update copyright statements
2020-05-06 12:42:10 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
67577fc1ef Force pass-through for requests bigger than cache
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-04-24 15:34:27 +02:00
Robert Baldyga
935df23c74 Introduce red-black trees utility
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Michal Mielewczyk
d9c987e068 Flush metadata after changing status of each sector
In case of cleaning, metadata used to be flushed only when the status of
a whole cache line changed to clean.

This patch ensures that a metadata flush is triggered after changing the
status of each single sector in a cache line.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Michal Mielewczyk
2f10365086 Flush metadata after setting dirty status of each sector.
After a second dirty write to a cache line which was already dirty, a
metadata flush was not triggered. In case of a dirty shutdown, this led
to data corruption.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Robert Baldyga
d1c2fc0c67 discard: Make max_length aligned to sector size
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-21 12:44:04 +01:00
Robert Baldyga
e06832426d cleaner: Retrieve core object properly
The cleaner doesn't set the core object in the req, as it works in the
domain of cache lines, which may belong to various cores. In this case
the core object should be retrieved not from the req, but from the map
instead.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 14:44:04 +01:00
Michal Mielewczyk
db06783d56 Fix cache stats updating.
When a single request was issued to the cache, the stats updating
function was called with 0 bytes as the value to update. In case of many
requests issued to the cache, stats were updated only in case of error.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-10-08 08:49:24 -04:00
Robert Baldyga
f51f7f7e1e Update stats before calling completion callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-10-02 18:49:59 +02:00
Michal Rakowski
2575be83fa Error handling for env_rwsem_init added
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:16:37 +02:00
Michal Rakowski
b78557a2cc Change env_spinlock_init to non-void function
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Michał Wysoczański
aafe870e44
Merge pull request #280 from arutk/metadata_sync_2
Additional metadata synchronization
2019-09-25 14:11:59 +02:00
Adam Rutkowski
be3b402162 Synchronization of collision table
Adding synchronization around metadata collision segment pages.
This part of metadata is modified when a cacheline is mapped/unmapped
and when its dirty status changes.

Synchronization at page level is required on top of the cacheline
and hash bucket locks to ensure that metadata flush always reads a
consistent state when copying an entire collision table memory
page.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 00:26:29 -04:00
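A sketch of the page-level guard described above (helper names are hypothetical): the flush path copies a whole collision page under the page lock, so concurrent mapping or dirty-status updates to cachelines in that page cannot tear the copied image.

```c
#include <string.h>

#define METADATA_PAGE_SIZE 4096

extern void collision_page_lock_rd(unsigned page);
extern void collision_page_unlock_rd(unsigned page);
extern const void *collision_page_ptr(unsigned page);

/* Copy one collision page for flush under the page lock. */
static void flush_copy_collision_page(unsigned page, void *buf)
{
	collision_page_lock_rd(page);
	memcpy(buf, collision_page_ptr(page), METADATA_PAGE_SIZE);
	collision_page_unlock_rd(page);
}
```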
Robert Baldyga
b8f5f135fe ocf_async_lock: Replace mutex with spinlocks
The ocf_async_lock may be used in atomic context, thus we need
to replace its synchronization primitives with non-sleeping variants.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-09-23 15:54:25 +02:00
Adam Rutkowski
d2bd807e49 Remove calls to OCF_METADATA_(UN)LOCK_WR(RD)
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
f34cacf150 Move resume callback to async lock function params (refactoring)
This is a step towards common async lock interface in OCF.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Michał Mielewczyk
f86287ef06
Merge pull request #261 from micrakow/coverity_19_9
Fixed some bugs found by the coverity tool
2019-09-17 09:25:18 +02:00
Michal Rakowski
83e23c5593 Fixed some bugs found by the coverity tool
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-16 15:49:37 +02:00
Michal Mielewczyk
c5edc60345 Fix stats update in cleaner.
The core is not assigned to the request in the cleaner, so to increase
its stats it has to be retrieved from the mapping.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-16 05:59:28 -04:00
Michal Mielewczyk
494a1ccc79 Extract stats builder utils to separate file.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-12 05:21:17 -04:00
Michal Mielewczyk
01ce586e6a Use API instead of raw variables to update block stats.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-10 08:01:09 -04:00
Michal Mielewczyk
b4c384eb2d Use API instead of raw variables to update error stats.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-10 08:01:09 -04:00
Michal Mielewczyk
2450d3da4b Move block stats counters to ioclass section.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-09 02:46:45 -04:00
Firas Medini
1f979f630b Adding synchronization primitives destroyers
The environment should provide calls for destroying primitives (e.g. env_mutex_destroy()) and OCF should call these functions in its cleanup paths.

Signed-off-by: Firas Medini <mdnfiras@yahoo.com>
2019-08-13 05:13:11 -07:00
Kamil Łepek
05be67a72b
Merge pull request #220 from arutk/metadata_offset_hash
New hash function formula
2019-07-29 13:03:04 +02:00
Adam Rutkowski
494861c994 Rename cache_concurrency to cache_line_concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 15:32:12 -04:00
Adam Rutkowski
7184f7787c Cleanup map_info struct
1. Rename hash_key -> hash
2. Better comments for hash and coll_idx

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 13:12:05 -04:00
Jan Musiał
232f0cd8d7
Merge pull request #216 from robertbaldyga/io-and-req-in-single-allocation
Allocate io and req in single allocation
2019-07-23 11:40:30 +02:00
Jan Musial
917cbd859a Add promotion policy API and use it in I/O path
The promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-07-19 13:52:00 +02:00
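One classic way to filter one-hit wonders, sketched purely as an illustration of the idea (this is not the OCF promotion policy implementation): count accesses per block and promote only blocks seen more than once.

```c
#include <stdbool.h>
#include <stdint.h>

#define FILTER_SLOTS 4096

static uint8_t hits[FILTER_SLOTS];

/* First access returns false (no promotion); the block enters the
 * cache only when it proves it is not a one-hit wonder. */
static bool should_promote(uint64_t block)
{
	uint32_t slot = block % FILTER_SLOTS;

	if (hits[slot] < 255)
		hits[slot]++;

	return hits[slot] > 1; /* second access promotes */
}
```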
Robert Baldyga
e64bb50a4b Allocate io and request in single allocation
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-07-17 10:31:23 +02:00
Robert Baldyga
61414f889e Introduce OCF IO allocator
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-07-17 10:31:23 +02:00
Robert Baldyga
e254c9b587 Merge new_io and configure into one function
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-07-17 10:31:23 +02:00
Adam Rutkowski
9dc1381b77 Refactor ocf_submit_cache_reqs map indexing
Refactoring ocf_submit_cache_reqs to make it clear that
req->map is accessed at an index derived from the offset argument,
not necessarily starting at 0.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-06-12 18:56:40 -04:00
Adam Rutkowski
82e8c55f4a Write-only cache mode
Write-only cache mode is similar to writeback; however, read
operations do not promote data to the cache. Reads are mostly serviced
by the core device; only dirty sectors are fetched from the cache.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-06-12 12:07:02 -04:00
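A sketch of the write-only read dispatch under hypothetical helper names: unlike writeback, a miss is never promoted, so only sectors that are dirty in cache must come from the cache device.

```c
#include <stdbool.h>

struct request;

extern bool sector_dirty_in_cache(const struct request *req, unsigned sec);
extern void read_sector_from_cache(struct request *req, unsigned sec);
extern void read_sector_from_core(struct request *req, unsigned sec);

/* Serve dirty sectors from cache, everything else from the core
 * device; nothing is promoted on the read path. */
static void wo_read_dispatch(struct request *req, unsigned nsec)
{
	for (unsigned sec = 0; sec < nsec; sec++) {
		if (sector_dirty_in_cache(req, sec))
			read_sector_from_cache(req, sec);
		else
			read_sector_from_core(req, sec);
	}
}
```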
Adam Rutkowski
ae6164a49c Helper functions to get request start/end sector in cacheline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-06-12 12:07:02 -04:00
Michal Mielewczyk
6cdbac82bc Check for valid core_id value
2019-06-11 12:12:07 +02:00
Michal Rakowski
b1cf6c4642 Changed always returning 0 to void foo
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-11 09:53:40 +02:00
Michal Rakowski
9f4536c6e3 Error codes in IO path changed to OCF-specific
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-05 09:10:54 +02:00
Robert Baldyga
f240f81641 Make request structure more compressed
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-06-03 19:19:29 +02:00
Robert Baldyga
711de86bff Associate core metadata with core object
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-28 14:49:59 +02:00
Robert Baldyga
8a82be339f Introduce asynchronous cache lock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-28 11:20:48 +02:00
Robert Baldyga
f9447fda75 Remove all the trailing whitespaces
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 17:00:39 +02:00
Robert Baldyga
bdcd4df0ef Remove utils_device.h
Move core mngt related code to ocf_mngt_core.c

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 17:00:39 +02:00
Robert Baldyga
7de56940a4 Move ocf_request from utils
ocf_request has always been a first-class citizen in OCF,
so let's place it along with other essential objects.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 15:51:27 +02:00
Robert Baldyga
ab2fc6d3c3 Rename utils_allocator to utils_realloc
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 13:10:17 +02:00
Robert Baldyga
cda536a14a Remove mpool from OCF utils
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 13:04:40 +02:00
Robert Baldyga
7ff4a349ec Use enum cache line values when validating config
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-23 13:29:49 +02:00
Michal Rakowski
4216ff78f1
Merge pull request #163 from robertbaldyga/put-queue-after-deallocating-req
Put queue after deallocating request
2019-05-23 10:38:13 +02:00
Robert Baldyga
dbef4b721c Put queue after deallocating request
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-21 16:09:58 +02:00
Robert Baldyga
0490dd8bd4 ocf_request: Store core handle instead of core_id
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-21 12:30:29 +02:00
Adam Rutkowski
348b0f9ab8 Async wait for cleaner completion
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:37:49 -04:00
Adam Rutkowski
0e15c693fd Return post modification value from ocf_refcnt_inc/dec
If the counter is frozen, the increment returns 0.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:23:11 -04:00
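A usage sketch of the changed return value (wrapper and type names are hypothetical; the real API lives in OCF's utils_refcnt.h): one call both takes the reference and detects freeze.

```c
#include <stdbool.h>

struct refcnt;

/* Per the commit: inc/dec return the post-modification counter value,
 * and inc returns 0 when the counter is frozen. */
extern int refcnt_inc(struct refcnt *rc);

static bool try_start_request(struct refcnt *rc)
{
	if (!refcnt_inc(rc))
		return false; /* frozen: management operation in progress */

	return true;
}
```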
Adam Rutkowski
962f9d17d1 Remove functions to wait for cache pending requests
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:23:11 -04:00
Adam Rutkowski
dc716d6a08 Use ref counter to track attach state
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:23:10 -04:00
Adam Rutkowski
555f477248 Do not increment attached metadata counter on behalf of mngt requests
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:22:29 -04:00
Robert Baldyga
c2aea209db Introduce pipeline *_RET macros
This simplifies the code by allowing programmer intent to be expressed
explicitly and helps to avoid missing return statements (this patch
fixes at least one bug related to this).

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-02 17:22:43 +02:00
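A plausible shape for such a macro, hedged (see OCF's utils_pipeline.h for the real definitions): finishing the pipeline and returning from the enclosing void function become one statement, so the return cannot be forgotten.

```c
struct pipeline;

extern void pipeline_finish(struct pipeline *pl, int error);

#define PIPELINE_FINISH_RET(pl, error) \
	do { \
		pipeline_finish(pl, error); \
		return; \
	} while (0)
```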
Robert Baldyga
1373471af7 Introduce OCF_CMPL_RET() macro
This simplifies cases when we want to call a completion callback
and immediately return from a void-returning function, by allowing
programmer intent to be expressed explicitly. That way we can avoid
cases where a return statement is missing by mistake (this patch
fixes at least one bug related to this).

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-02 17:22:36 +02:00
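A hedged sketch of the macro's intent (the actual OCF definition may differ): invoke the completion and leave the void-returning caller in a single statement.

```c
#define CMPL_RET(cmpl, ...) \
	do { \
		cmpl(__VA_ARGS__); \
		return; \
	} while (0)

/* Example caller: the error path cannot fall through by accident. */
static void op_finish(void (*cmpl)(void *priv, int error), void *priv,
		int error)
{
	if (error)
		CMPL_RET(cmpl, priv, error);

	/* ... success path ... */
	cmpl(priv, 0);
}
```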
Adam Rutkowski
407470c25d Add reference counter init routine
This is useful when the reference counter is initialized in non-zeroed
memory (or when the atomic variable is not properly initialized by
memsetting to zero).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-04-17 23:12:04 -04:00
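A sketch with an illustrative counter layout: an explicit init routine brings every field to a known state even if the backing memory was not zeroed.

```c
#include <stdatomic.h>

/* Illustrative layout, not the OCF struct. */
struct refcnt {
	atomic_int counter;
	atomic_int freeze;
};

static void refcnt_init(struct refcnt *rc)
{
	atomic_init(&rc->counter, 0);
	atomic_init(&rc->freeze, 0);
}
```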
Michał Wysoczański
03c95d36f0
Merge pull request #82 from robertbaldyga/asynchronous-metadata
Handle metadata asynchronously
2019-03-26 12:54:26 +01:00
Robert Baldyga
6cd84476f6 Handle metadata asynchronously
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-03-26 12:49:23 +01:00
Adam Rutkowski
7003d38b44 Encapsulate request reference counting logic in utils_refcnt
In order to synchronize management operations with I/O, OCF
maintains in-flight request counters. For example, such ref
counters are used during ocf_mngt_detach to drain requests
accessing cache metadata (the cache requests counter) and in
ocf_mngt_flush, where we wait for outstanding requests sent
in write-back mode (the dirty requests counter).

Typically, I/O threads increment the cache/dirty counter when
creating a request and decrement it on request completion.
The management thread sets an atomic variable to signal the start of
a management operation. I/O threads react to this by changing the
I/O request mode so that the cache/dirty reference counter
is not incremented. As a result the reference counter keeps getting
decremented. The management thread waits for the counter to drop to 0
and proceeds with the management operation under the assumption that
no cache/dirty requests are in progress.

This patch introduces a handy utility for request reference
counting logic. ocf_refcnt_inc / dec are used to increment/
decrement the counter. ocf_refcnt_freeze() makes subsequent
ocf_refcnt_inc() calls return false, indicating that the counter
cannot be incremented at this moment. ocf_refcnt_register_zero_cb
can be used to asynchronously wait for the counter to drop to 0.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-03-26 15:22:31 -04:00
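A sketch of the interface described above and its two call sites (signatures hedged; OCF's utils_refcnt.h is authoritative):

```c
#include <stdbool.h>

struct refcnt;

extern bool refcnt_inc(struct refcnt *rc);   /* false when frozen */
extern void refcnt_dec(struct refcnt *rc);
extern void refcnt_freeze(struct refcnt *rc);
extern void refcnt_register_zero_cb(struct refcnt *rc,
		void (*cb)(void *priv), void *priv);

/* I/O path: one reference per request. */
static bool io_begin(struct refcnt *rc)
{
	/* false means frozen: handle the request without touching the
	 * protected resource (e.g. switch the request mode). */
	return refcnt_inc(rc);
}

static void io_complete(struct refcnt *rc)
{
	refcnt_dec(rc);
}

/* Management path: block new references, then wait asynchronously
 * for in-flight requests to drain before proceeding. */
static void mngt_drain(struct refcnt *rc, void (*drained)(void *priv),
		void *priv)
{
	refcnt_freeze(rc);
	refcnt_register_zero_cb(rc, drained, priv);
}
```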
Adam Rutkowski
0d0fd0be75 Asynchronous implementation of flush and purge - part 1
For the flush/purge entry points to be fully asynchronous we still
need to rework the flush mutex and the waiting for outstanding dirty
requests.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-03-22 17:04:33 -04:00
Robert Bałdyga
70df1d80a1
Merge pull request #79 from Donaim/cleaner-set-queue
Set io_queue for cleaner IOs
2019-03-21 10:08:39 +01:00
Vitaliy Mysak
b9cae8b164 Set io_queue for cleaner IOs
Send cleaner IOs with the appropriate queue set.
This solves the issue of the bottom adapter getting NULL in io->io_queue.

Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
2019-03-21 10:03:56 +01:00
Robert Baldyga
23b0a32aec Parametrize pipeline steps
This allows reusing the same step functions by giving them different
parameters on each step.

Additionally, move the pipeline to utils to make it accessible to other
subsystems of OCF (e.g. metadata).

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-03-20 17:06:56 +01:00
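A sketch of a parametrized step under a hypothetical pipeline API modeled on the description above: one handler is registered for several steps and receives a different argument each time.

```c
struct pipeline;

extern int pipeline_arg_int(struct pipeline *pl);
extern void pipeline_next(struct pipeline *pl);
extern void flush_metadata_segment(int segment);

/* One handler reused for every segment; the segment id arrives as
 * the step's parameter instead of being hardcoded per function. */
static void step_flush_segment(struct pipeline *pl)
{
	flush_metadata_segment(pipeline_arg_int(pl));
	pipeline_next(pl);
}

/* Registration idea (pseudo-declarative):
 *   { step_flush_segment, ARG_INT(SEGMENT_COLLISION) },
 *   { step_flush_segment, ARG_INT(SEGMENT_HASH) },
 */
```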
Robert Baldyga
91e0345b78 Implement asynchronous attach, load, detach and stop
NOTE: This is still not truly asynchronous. Metadata interfaces
are still not fully asynchronous.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-03-18 15:06:57 +01:00
Robert Baldyga
c5df82f2cb Make management API asynchronous
NOTE: This patch only changes the API so that it appears to be
asynchronous. Most management operations are still performed
synchronously. Real asynchrony will be introduced in the next patches.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-03-12 10:33:26 +01:00
Robert Baldyga
9c0390a6ce Make metadata probe asynchronous
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-03-01 15:46:16 +01:00