Commit Graph

33 Commits

Author SHA1 Message Date
Michal Mielewczyk
237f6c708a Don't modify req->error for IOs outside io engines
`req->error` should be modified only by the request's owner; hence, if
the request has been allocated to handle IO (i.e. it's not an internal OCF
request), only the engines may change it.
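
A minimal sketch of the rule, with purely illustrative names (not the
actual OCF structures):

    struct request {
            int error;    /* owned by whoever allocated the request */
            int internal; /* nonzero for internal OCF requests */
            void (*cb)(struct request *req, int error);
    };

    static void io_done(struct request *req, int error)
    {
            if (req->internal)
                    req->error = error;  /* internal owner may write it */
            else
                    req->cb(req, error); /* IO request: engines own error */
    }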

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-10-01 14:06:26 +02:00
Robert Baldyga
3fbb75756e Consolidate ocf_request_io and ocf_request - io properties
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Michal Mielewczyk
f72b92211d Remove struct ocf_io
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
1c2d5bbcf3 Introduce forward_io_simple
It's intended to be used in a context where the cache is not initialized
and the io_queue is not available yet.
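
A hedged usage sketch; the exact signature is an assumption, not copied
from the header:

    /* Probe the volume during early init: no cache object, no io_queue,
     * just the request and the target volume. */
    ocf_req_forward_io_simple(req, volume, OCF_READ, addr, bytes);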

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
2d303e8d09 Replace ocf_forward_get_io() with more specific ops
struct ocf_io is going to be removed soon (consolidated with ocf_request).

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
6aa141c247 Introduce ocf_forward_get_data()
Skip the ocf_io abstraction and get the data directly from the request.
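
Roughly, this turns a two-step lookup into a direct accessor (sketch;
only ocf_forward_get_data() is named by this commit):

    /* before (roughly): data via the soon-to-be-removed ocf_io */
    ctx_data_t *data = ocf_io_get_data(ocf_forward_get_io(token));

    /* after: straight from the forwarded request */
    ctx_data_t *data = ocf_forward_get_data(token);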

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
54f75ba492 Introduce ocf_req_forward_volume_*()
These are meant to be used in contexts where neither a cache nor a queue
is available (typically at a very early stage of initialization). We
reuse the cache_forward* callback and counter, because they will not be
used in this context anyway.
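
A sketch of the reuse (member and callback names are assumptions):

    /* No cache or queue is attached this early, so the cache_forward*
     * callback and counter are guaranteed idle and can carry the
     * volume-forward completion instead of growing the request. */
    req->cache_forward_end = load_superblock_end; /* assumed callback */
    ocf_req_forward_volume_read(req, volume, addr, bytes);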

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
5859e432c8 Introduce ocf_forward_metadata()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
cd544e8ee5 Introduce ocf_forward_write_zeros()
This is meant to be used in atomic mode to avoid allocating huge buffers
for zeroing data on the drive.
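
Usage sketch (the signature is assumed):

    /* Atomic mode: let the backend volume zero the range itself... */
    ocf_forward_write_zeros(token, addr, bytes);
    /* ...instead of env_zalloc()'ing `bytes` of zeros, writing them
     * out and freeing the buffer afterwards. */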

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
7e73de0d51 volume: Introduce general IO forward mechanism
Allow the core volume IOs to be forwarded directly to backend volumes to
avoid unnecessary allocations.
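
A hedged before/after sketch; ocf_volume_new_io()/ocf_volume_submit_io()
are existing OCF calls, while forward_io() stands in for the entry
points added by this series:

    /* before: clone the IO for the backend volume */
    io = ocf_volume_new_io(backend, queue, addr, bytes, dir, 0, 0);
    ocf_io_set_data(io, data, 0);
    ocf_volume_submit_io(io);

    /* after: forward the original request, no new allocation */
    forward_io(req, backend, dir, addr, bytes); /* stand-in name */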

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-19 15:55:19 +02:00
Robert Baldyga
dc58eeae9b Introduce d2c request
This avoids unnecessary map allocation and initialization of unused
fields of the request structure. It also allows tracking their number
separately from the regular requests.
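
Illustrative only (names assumed): a d2c (pass-through) request skips
the map and the engine-only fields, and is counted on its own.

    struct ocf_request *req = ocf_req_new_d2c(queue, core,
                    addr, bytes, dir);       /* assumed constructor */
    env_atomic_inc(&cache->d2c_req_count);   /* separate statistic */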

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
Robert Baldyga
8b93b699c3 Eliminate queue -> cache mapping
Eliminate the need to resolve the cache based on the queue. This allows
sharing the queue between cache instances. The queue still holds a
pointer to the cache that owns it, but no management or IO path relies
on the queue -> cache mapping.
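
In code terms the change boils down to (sketch):

    /* before: each path resolved the cache through the queue,
     * binding the queue to a single cache instance */
    ocf_cache_t cache = ocf_queue_get_cache(q);

    /* after: ownership comes from the request itself */
    ocf_cache_t cache = req->cache;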

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
Robert Baldyga
460cd461d3 Allocate requests for management path separately
The management path does not benefit much from mpools, as the number of
requests it allocates is very small. It is also less restrictive (the
mngt_queue does not have single-CPU affinity), so avoiding mpool usage
in the management path allows additional restrictions to be introduced
on the mpool, leading to an I/O performance improvement.
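
Sketch of the split (allocator calls shown with assumed signatures):

    /* management path: few requests, no affinity requirement */
    req = env_zalloc(sizeof(*req) + map_size, ENV_MEM_NORMAL);

    /* IO path keeps the mpool, which may now assume single-CPU use */
    req = env_mpool_new(req_pool, core_line_count);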

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
Adam Rutkowski
0cfb8077c5 allocate fixed map status alongside request struct
It is wasteful to allocate a full byte to store one bit of alock status
per cacheline. A fixed allocation of 128 bits seems more reasonable.
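
The shape of the change, as a sketch (field placement assumed):

    struct ocf_request {
            /* ... */
            uint64_t alock_status[2]; /* fixed 128 bits, 1 bit per
                                       * cacheline, allocated inline
                                       * with the request */
    };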

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-08-29 20:02:18 +02:00
Adam Rutkowski
2f3e0b0fd0 more precise req->alock_status size calculations
1. Only 1 bit per cacheline is required for the status
2. ...however, the size must be 8B aligned
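
The resulting calculation, as a self-contained sketch:

    #include <stddef.h>

    static size_t alock_status_size(unsigned int core_line_count)
    {
            /* 1 bit per cacheline, rounded up to whole bytes... */
            size_t bytes = (core_line_count + 7) / 8;

            /* ...then rounded up to an 8-byte boundary */
            return (bytes + 7) & ~(size_t)7;
    }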

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-08-29 20:01:52 +02:00
Rafal Stefanowski
f22da1cde7 Fix license
Change license to BSD-3-Clause

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-28 13:08:50 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Jan Musial
f25d9a8e40 Use new non-zeroing allocator APIs
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:38:44 +02:00
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512 KiB, OCF would trim the request
and overwrite req->core_line_count, which would then cause the request
to be freed from the wrong mpool.

This is now fixed by saving the core_line_count that was set when the
request was allocated in a field that is never overwritten. This
alloc_core_line_count is then used to free the request from the correct
mpool.
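
Sketch of the fix (the mpool call is shown with an assumed signature):

    req->core_line_count = core_lines;       /* may shrink when trimmed */
    req->alloc_core_line_count = core_lines; /* never overwritten */
    /* ... */
    env_mpool_del(req_pool, req, req->alloc_core_line_count);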

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
Robert Baldyga
74d61785e9 ocf_request: Deallocate request with separately allocated map properly
When allocation of a request with a map fails, we fall back to
allocating a request with no map and then allocate the map separately.
During request put we need to distinguish between those two cases in
order to deallocate the request properly.
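
Sketch of the put-path distinction; testing for an embedded map via
req->__map is an assumption about the layout:

    if (req->map == req->__map) {
            /* map was allocated together with the request */
            env_mpool_del(req_pool, req, req->alloc_core_line_count);
    } else {
            /* fallback path: map was allocated separately */
            env_free(req->map);
            env_mpool_del(req_pool, req, 0); /* no-map sized bucket */
    }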

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 09:49:29 +01:00
Robert Baldyga
415a778c03 ocf_request: Fix use after free bug
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-15 19:41:14 +01:00
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
8e21aa6441 Remove not needed req allocator size table
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 instances to reduce
contention in scenarios with multiple concurrent read locks.
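
A sketch of the striping (names illustrative): readers hash onto one
instance, while a writer takes all of them to keep exclusion semantics.

    #define GLOBAL_LOCK_COUNT 4
    static env_rwsem global_lock[GLOBAL_LOCK_COUNT];

    static void global_read_lock(unsigned int hint)
    {
            env_rwsem_down_read(&global_lock[hint % GLOBAL_LOCK_COUNT]);
    }

    static void global_write_lock(void)
    {
            int i;

            for (i = 0; i < GLOBAL_LOCK_COUNT; i++)
                    env_rwsem_down_write(&global_lock[i]);
    }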

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
2edd05c812 Change get_effective_cache_mode to operate on req instead of io
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Adam Rutkowski
ee37391e97 Fix discard request map allocation
Discard handling splits a large request into several steps. However, the
actual size of the request map for discard was determined based on the
original request size, not the step request size, resulting in wasted
memory and allocations larger than 4K.
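
Sketch of the corrected sizing (variable names illustrative):

    /* size the map for this step of the split... */
    map_size = sizeof(struct ocf_map_info) * step_line_count;
    /* ...not for the original, possibly much larger, request */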

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 17:47:11 -05:00
Adam Rutkowski
26fd938ccf Reduce max trim request size to 512K
512K is the maximum request size for which the request map fits into one
page (4K) regardless of cacheline size.
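
The arithmetic, with a per-line map entry size of 32 B taken as an
assumption (4 KiB is OCF's smallest cacheline):

    #define PAGE_SIZE_B     4096
    #define MIN_LINE_B      4096
    #define MAP_ENTRY_B     32           /* assumed entry size */
    #define MAX_TRIM_B      (512 * 1024)

    /* 512 KiB / 4 KiB = 128 lines; 128 * 32 B = 4096 B = one page */
    _Static_assert(MAX_TRIM_B / MIN_LINE_B * MAP_ENTRY_B <= PAGE_SIZE_B,
                    "trim request map must fit in a single page");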

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 15:57:34 -05:00
Adam Rutkowski
d91012f4b4 Introduce hash bucket locks
There is one RW lock per hash bucket. A write lock is required to map a
cacheline; a read lock is sufficient for traversing. Hash bucket locks
are always acquired under the global metadata read lock. This ensures
mutual exclusion with the eviction and management paths, where the
global metadata write lock is held.
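
The resulting lock ordering, sketched with illustrative names:

    global_metadata_read_lock(cache);   /* excludes eviction/mgmt,
                                         * which hold the write lock */
    bucket_write_lock(cache, hash);     /* write: mapping a cacheline */
    /* ... modify the bucket ... */
    bucket_write_unlock(cache, hash);
    global_metadata_read_unlock(cache);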

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Robert Baldyga
e64bb50a4b Allocate io and request in single allocation
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-07-17 10:31:23 +02:00
Michal Rakowski
9f4536c6e3 Error codes in IO path changed to OCF-specific
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-05 09:10:54 +02:00
Robert Baldyga
7de56940a4 Move ocf_request from utils
ocf_request has always been a first-class citizen in OCF, so let's place
it alongside the other essential objects.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 15:51:27 +02:00