Commit Graph

1023 Commits

Daniel Madej
a15003d43e Split count_pages and update on metadata deinit
This resets the count_pages variable on cache detach, so during the following
cache attach the metadata size is calculated properly.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-06 11:54:53 +01:00
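
Editor's note: the mechanism described above can be illustrated with a minimal, hypothetical sketch (the struct and function names below are illustrative, not OCF's actual API): zeroing the cached page count when metadata is deinitialized forces the next attach to recompute it from the new device geometry.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a cached page count that must not survive a detach. */
struct metadata_ctrl {
	uint64_t count_pages; /* pages reserved for metadata on the cache device */
};

/* Hypothetical attach path: compute metadata size from the device size. */
static void metadata_calculate_size(struct metadata_ctrl *ctrl, uint64_t device_pages)
{
	ctrl->count_pages = device_pages / 64; /* placeholder formula */
}

/* Hypothetical detach path: reset the counter so the next attach starts clean. */
static void metadata_deinit_variable_size(struct metadata_ctrl *ctrl)
{
	ctrl->count_pages = 0;
}

int main(void)
{
	struct metadata_ctrl ctrl = {0};

	metadata_calculate_size(&ctrl, 1 << 20);   /* first attach */
	metadata_deinit_variable_size(&ctrl);      /* detach resets the value */
	metadata_calculate_size(&ctrl, 1 << 22);   /* re-attach recomputes correctly */
	printf("count_pages = %llu\n", (unsigned long long)ctrl.count_pages);
	return 0;
}
```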
Daniel Madej
95c9c8987e Zero metadata superblock on detach
Metadata from a detached cache device is not meant to be loaded.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-06 09:24:40 +01:00
Robert Baldyga
c200c24344
Merge pull request #851 from robertbaldyga/core-stat-fix-inv-waiter
Decrement core stats only if the cache line mapping is cleaned
2025-02-06 06:15:33 +01:00
Robert Baldyga
be068df400
Merge pull request #853 from mmichal10/repart
Repart
2025-02-04 16:39:49 +01:00
Robert Baldyga
08eb00665c
Merge pull request #854 from robertbaldyga/request-cleanup
A little cleanup between ocf_request and ocf_io
2025-02-04 15:20:44 +01:00
Robert Baldyga
c029f78e95 Make alock_rw a bit field
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-02-04 12:05:26 +01:00
Daniel Madej
8ce129de06 ocf_cleaner_refcnt_unfreeze bug fix
During core remove/detach, ocf_cleaner_refcnt_freeze was called only
when the cache was attached, but ocf_cleaner_refcnt_unfreeze was called
regardless of the cache state.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-01-15 18:51:42 +01:00
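
Editor's note: a hedged sketch of the asymmetry being fixed (names are illustrative, not the OCF implementation): the unfreeze call must be guarded by the same attach check as the freeze call, otherwise a detached cache unfreezes a counter that was never frozen.

```c
#include <stdbool.h>
#include <assert.h>

/* Illustrative refcount with a freeze depth; not OCF's ocf_refcnt. */
struct refcnt {
	int freeze_depth;
};

static void refcnt_freeze(struct refcnt *rc)
{
	rc->freeze_depth++;
}

static void refcnt_unfreeze(struct refcnt *rc)
{
	assert(rc->freeze_depth > 0); /* underflow would indicate the bug */
	rc->freeze_depth--;
}

/* Hypothetical core remove/detach path. */
static void core_remove(struct refcnt *cleaner_rc, bool cache_attached)
{
	if (cache_attached)
		refcnt_freeze(cleaner_rc);

	/* ... detach work ... */

	/* The fix: mirror the condition instead of unfreezing unconditionally. */
	if (cache_attached)
		refcnt_unfreeze(cleaner_rc);
}

int main(void)
{
	struct refcnt rc = {0};

	core_remove(&rc, false); /* previously this path would underflow */
	core_remove(&rc, true);
	return 0;
}
```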
Robert Baldyga
0d06b3a597 Fix race condition during cache attach
After attaching a new cache device, handle all the IOs in Pass-Through mode
until all the d2c requests are completed.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-11-21 21:26:00 +01:00
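
Editor's note: a rough sketch of the idea (illustrative names and state, not the actual OCF implementation): while any pre-attach direct-to-core (d2c) request is still in flight, new IOs are served in Pass-Through mode; only once the in-flight counter drains does the new cache device engage.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative state; OCF tracks this differently. */
static atomic_int d2c_in_flight;       /* requests issued before the attach */
static atomic_bool attach_in_progress;

static bool must_use_pass_through(void)
{
	/* Until every old d2c request completes, keep handling IO in PT mode
	 * so it cannot race with the freshly attached cache device. */
	return atomic_load(&attach_in_progress) &&
	       atomic_load(&d2c_in_flight) > 0;
}

static void d2c_request_complete(void)
{
	if (atomic_fetch_sub(&d2c_in_flight, 1) == 1)
		atomic_store(&attach_in_progress, false); /* drained: leave PT mode */
}

int main(void)
{
	atomic_store(&d2c_in_flight, 2);
	atomic_store(&attach_in_progress, true);

	printf("PT? %d\n", must_use_pass_through()); /* 1 */
	d2c_request_complete();
	d2c_request_complete();
	printf("PT? %d\n", must_use_pass_through()); /* 0 */
	return 0;
}
```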
Robert Baldyga
2da05343e2 Avoid UBSAN false positive - continuation
Address issue in two more instances.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-30 11:33:17 +01:00
Michal Mielewczyk
6bfd2122d2 Repart all misclassified cache lines
Remapped cache lines may require repartitioning as well.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-16 18:37:42 +02:00
Michal Mielewczyk
b3fa1fc96a Reset repart flag during refreshing request status
The flag isn't reset before re-traversal, so it might be true even if the
cache line was reparted in the meantime.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-16 18:37:42 +02:00
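
Editor's note: a minimal sketch of the described pitfall (field names are illustrative, not OCF's request structures): if the per-request repart flag is not cleared before the mapping is re-traversed, a value set during a previous traversal leaks into the next one.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative request-status snapshot. */
struct req_info {
	bool re_part; /* any cache line mapped to the wrong partition? */
};

/* Hypothetical refresh: reset accumulated flags, then re-traverse the mapping. */
static void refresh_req_status(struct req_info *info,
		const bool *line_misclassified, int lines)
{
	info->re_part = false; /* the fix: do not carry the flag across traversals */
	for (int i = 0; i < lines; i++)
		info->re_part |= line_misclassified[i];
}

int main(void)
{
	struct req_info info = { .re_part = true }; /* stale value from a previous pass */
	bool lines[2] = { false, false };           /* everything already reparted */

	refresh_req_status(&info, lines, 2);
	printf("re_part = %d\n", info.re_part);     /* 0, not a stale 1 */
	return 0;
}
```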
Robert Baldyga
b3332793bb Remove unused request field
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 16:15:02 +02:00
Robert Baldyga
0cf4e8124b Remove io.ref_count
There is already refcounting on the request. No need for an additional one.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 09:55:56 +02:00
Robert Baldyga
85513332d7 Remove ocf_io_get()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 09:55:56 +02:00
Robert Baldyga
a9718eeab1 Simplify fastpath handling
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 09:55:56 +02:00
Robert Baldyga
1ab882aa61
Merge pull request #852 from robertbaldyga/forward_io-fix-error-accounting
Fix error accounting in forward_io
2024-10-15 08:44:47 +02:00
Robert Baldyga
a1af1809d8 Fix error accounting in forward_io
Resetting cache_error/core_error in the ocf_req_forward_* functions may lead
to overwriting an already reported error if the forward is being done in a
loop.

To avoid this potential problem, introduce a set of forward init functions
intended to be called before the entire forward operation, which reset the
error code and set the forward callback.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-14 16:54:51 +02:00
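
Editor's note: the pattern described above can be sketched as follows (function and field names are illustrative approximations, not the exact forward API): the error code is reset once in an init step, and each forward call in the loop only accumulates errors.

```c
#include <stdio.h>

/* Illustrative request; OCF's ocf_request is far richer. */
struct request {
	int cache_error;
	void (*forward_end)(struct request *req, int error);
};

/* Init step, called once before the whole forward operation:
 * resets the error and sets the completion callback. */
static void req_forward_cache_init(struct request *req,
		void (*cb)(struct request *, int))
{
	req->cache_error = 0;
	req->forward_end = cb;
}

/* Per-chunk forward: only records an error, never clears one that is
 * already reported (which is what resetting inside the loop used to do). */
static void req_forward_cache_io(struct request *req, int io_error)
{
	if (io_error && !req->cache_error)
		req->cache_error = io_error;
}

static void forward_end(struct request *req, int error)
{
	(void)req;
	printf("forward completed, error=%d\n", error);
}

int main(void)
{
	struct request req;

	req_forward_cache_init(&req, forward_end);
	req_forward_cache_io(&req, -5); /* first chunk fails */
	req_forward_cache_io(&req, 0);  /* later chunks must not clear the error */
	req.forward_end(&req, req.cache_error);
	return 0;
}
```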
Robert Baldyga
3959f58f1c Decrement core stats only if the cache line mapping is cleaned
We skip cleaning the mapping and moving the cache line to the free list
when there are waiters on the cache line lock.
Refrain from decrementing core stats in that situation as well, and let
the repart of the waiter request do the updates as needed.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-14 13:28:56 +02:00
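
Editor's note: a hedged sketch of the guarded update (illustrative names, not OCF code): the stats decrement is tied to the same condition that allows the mapping to be cleaned, so a cache line kept alive for lock waiters keeps its stats until the waiter's repart updates them.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative per-core counters and cache line state. */
struct core_stats { int cached_lines; };
struct cache_line { bool has_lock_waiters; };

/* Hypothetical eviction step: the mapping is only cleaned (and the line
 * moved to the free list) when nobody is waiting on its lock. The fix is
 * to decrement the core stats under exactly the same condition. */
static void try_evict_line(struct cache_line *line, struct core_stats *stats)
{
	if (line->has_lock_waiters)
		return; /* the waiter's repart will do the bookkeeping later */

	/* ... clean mapping, move line to the free list ... */
	stats->cached_lines--;
}

int main(void)
{
	struct core_stats stats = { .cached_lines = 1 };
	struct cache_line line = { .has_lock_waiters = true };

	try_evict_line(&line, &stats);
	printf("cached_lines = %d\n", stats.cached_lines); /* still 1 */
	return 0;
}
```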
Michal Mielewczyk
1b2a9e03c3 Add missing cache unlock in init rollback
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-10 07:32:38 +02:00
Michal Mielewczyk
c82fd173c6 Remove redundant list_del(ctx->caches) during init
New caches are added to the list only once they are already initialized, and
no errors are possible at that point, hence the list_del() in error handling
is redundant.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-10 07:32:38 +02:00
Michal Mielewczyk
e8e7a1600c Log errors on cache init
The more (logging) the merrier

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-10 07:32:38 +02:00
Michal Mielewczyk
f6bdd354d0 Don't bug on cache init
Even though locking the new cache should never fail, a failure is not an
unrecoverable state.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Roel Apfelbaum
1d1561649c Remove redundant fallback-PT counter accesses
The fewer (atomic variable accesses on the IO path) the better.

Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Robert Baldyga
6f02a625ad pipeline: Introduce debug logging
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Michal Mielewczyk
40a850cd9c Remove dead code
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Michal Mielewczyk
0bb2621c50 Increment ctx.refcnt before creating a new cache
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Michal Mielewczyk
fae30462b1 Decrement cache.refcnt if locking cache failed
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-08 11:14:20 +02:00
Robert Baldyga
75fb6a59e0 Avoid UBSAN false positive
UBSAN: array-index-out-of-bounds in src/ocf_request.c:230:44
index 1 is out of range for type 'ocf_map_info [*]'

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-02 16:47:17 +02:00
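
Editor's note: a generic illustration of how this class of false positive typically arises and one common way to silence it (not the actual OCF code or fix): indexing past the declared bound of a trailing one-element array is valid when the struct is over-allocated, yet UBSAN reports it; a flexible array member gives the compiler no bogus bound to check.

```c
#include <stdlib.h>
#include <stdio.h>

struct item { int val; };

/* Pattern that UBSAN flags: entries[i] for i >= 1, even over valid memory. */
struct container_old {
	int count;
	struct item entries[1];
};

/* Flexible array member: same layout, no array-index-out-of-bounds report. */
struct container_new {
	int count;
	struct item entries[];
};

int main(void)
{
	int n = 4;
	struct container_new *c = malloc(sizeof(*c) + n * sizeof(struct item));

	if (!c)
		return 1;

	c->count = n;
	for (int i = 0; i < n; i++)  /* valid accesses, no UBSAN report */
		c->entries[i].val = i;

	printf("%d\n", c->entries[3].val);
	free(c);
	return 0;
}
```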
Michal Mielewczyk
237f6c708a Don't modify req->error for IOs outside io engines
`req->error` should be modified by the request's owner only; hence, if
the request has been allocated to handle IO (i.e. it's not an internal OCF
request), only engines can change it.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-01 14:06:26 +02:00
Robert Baldyga
3f2b382a2c Place "inline" before the type declaration
It's needed to make OCF compile with kernel 6.11.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-09-28 09:42:55 +02:00
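
Editor's note: the change is purely about declaration order; a minimal before/after example with a generic function name (stricter kernel build flags warn when "inline" follows the return type, and warnings become errors):

```c
/* Old ordering that newer, stricter kernel builds reject:
 *
 *     static void inline example_helper(void) { }
 *
 * Accepted ordering, with "inline" before the type declaration: */
static inline void example_helper(void)
{
	/* ... */
}

int main(void)
{
	example_helper();
	return 0;
}
```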
Michal Mielewczyk
e42df078c3 Add missing 'static' identifier
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-27 11:25:21 +02:00
Robert Baldyga
34778d4529
Merge pull request #838 from robertbaldyga/fix-wi-double-completion
Fix double completion in engine_wi
2024-09-25 09:20:06 +02:00
Robert Baldyga
3e21b11703 Remove unnecessary references to req->req_remaining
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-09-24 20:06:24 +02:00
Robert Baldyga
fd508435d6 Fix double completion in engine_wi
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-09-24 20:04:09 +02:00
Rafal Stefanowski
97ee3af8f7 Use management queue for parallelized management operations
When IO queues are used for parallelized management operations,
e.g. changing the cleaning policy, a deadlock may occur because the global
metadata lock interferes with taking requests from an IO queue,
as both might run on the same thread. As a workaround, using
a management queue specifically for such operations eliminates
this problem.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2024-09-23 14:05:33 +02:00
Michal Mielewczyk
c6d2436622 volume: add description to 'uuid_copy' field
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-23 12:15:23 +02:00
Michal Mielewczyk
82c8d4f45c composite volume: add subvolume iterator API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 17:59:37 +02:00
Robert Baldyga
3fbb75756e Consolidate ocf_request_io and ocf_request - io properties
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Michal Mielewczyk
f72b92211d Remove struct ocf_io
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
ff5c20b395 Redirect ocf_volume_submit_* operations to forward
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
c5741df0ed Bind ocf_io to ocf_request
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
7fb6b62825 Drop support for submit_* ops in backend volumes
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
39d566c280 Replace submit with forward in ocf_metadata_read_sb()
This one is quite special, because it can be called before the cache is
instantiated, which means we cannot allocate the request using
ocf_req_new_mngt() due to the absence of a mngt_queue. For that reason we
simply allocate the request using env_zalloc() and then release it with
env_free(). The lifecycle of the request is very straightforward and
the only fields used are the forward counter and the callback.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
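
Editor's note: the lifecycle described above can be sketched roughly as follows (using plain calloc/free in place of OCF's env allocators, with illustrative field names): the request exists only long enough to carry a forward counter and a completion callback for the superblock read.

```c
#include <stdlib.h>
#include <stdio.h>

/* Illustrative stand-in for the few ocf_request fields this path needs. */
struct sb_request {
	int forward_remaining;  /* forward counter */
	int error;
	void (*complete)(struct sb_request *req, int error);
};

static void sb_read_complete(struct sb_request *req, int error)
{
	(void)req;
	printf("superblock read finished, error=%d\n", error);
}

int main(void)
{
	/* No mngt_queue exists yet, so no ocf_req_new_mngt(): just zero-alloc. */
	struct sb_request *req = calloc(1, sizeof(*req));

	if (!req)
		return 1;

	req->complete = sb_read_complete;
	req->forward_remaining = 1;            /* one forwarded IO: the superblock */

	/* ... forward the read to the volume; on completion: ... */
	if (--req->forward_remaining == 0)
		req->complete(req, req->error);

	free(req);                             /* straightforward lifecycle */
	return 0;
}
```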
Robert Baldyga
9404716c3c Replace submit with forward in metadata_raw_atomic
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
af3b379bb8 Replace submit with forward in metadata_io
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
29ca8fbbe4 Replace submit with forward in standby
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
322ae2687d Replace submit with forward in cleaner
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
834786866c Replace submit with forward in mngt
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
1c2d5bbcf3 Introduce forward_io_simple
It's intended to be used in a context where the cache is not initialized
and the io_queue is not available yet.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
1f26ceb563 composite: Add forward_io_simple support
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00