Commit Graph

66 Commits

Author SHA1 Message Date
Robert Baldyga
c029f78e95 Make alock_rw a bit field
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-02-04 12:05:26 +01:00
Robert Baldyga
b3332793bb Remove unused request field
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 16:15:02 +02:00
Robert Baldyga
0cf4e8124b Remove io.ref_count
There is already refcounting on the request. No need for an additional one.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-15 09:55:56 +02:00
Robert Baldyga
a1af1809d8 Fix error accounting in forward_io
Resetting cache_error/core_error in the ocf_req_forward_* functions may lead
to overwriting an already reported error if the forward is done in a loop.

To avoid this potential problem, introduce a set of forward init functions
intended to be called before the entire forward operation, which reset
the error code and set a forward callback.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-14 16:54:51 +02:00
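
A minimal sketch of the init-then-forward pattern described above, with hypothetical names (fwd_init, fwd_one, fwd_ctx; not the actual OCF API): the error code is reset exactly once up front, never inside the loop, so the first error reported by any iteration survives.

```c
#include <stddef.h>

struct fwd_ctx {
	int error;                          /* sticky: first error wins */
	void (*cb)(struct fwd_ctx *ctx);    /* forward completion callback */
};

/* Init function: called once, before the entire forward operation. */
static void fwd_init(struct fwd_ctx *ctx, void (*cb)(struct fwd_ctx *ctx))
{
	ctx->error = 0;     /* the only place the error code is reset */
	ctx->cb = cb;
}

/* Per-iteration forward: must not reset ctx->error, otherwise an error
 * reported by an earlier iteration of the loop would be lost. */
static void fwd_one(struct fwd_ctx *ctx, int result)
{
	if (result && !ctx->error)
		ctx->error = result;
}
```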
Robert Baldyga
3fbb75756e Consolidate ocf_request_io and ocf_request - io properties
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Michal Mielewczyk
f72b92211d Remove struct ocf_io
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
1c2d5bbcf3 Introduce forward_io_simple
It's intended to be used in a context where the cache is not initialized
and the io_queue is not available yet.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
54f75ba492 Introduce ocf_req_forward_volume_*()
These are meant to be used in contexts where neither a cache nor a queue is
available (typically at a very early stage of initialization). We reuse the
cache_forward* callback and counter, because they will not be used in this
context anyway.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
5859e432c8 Introduce ocf_forward_metadata()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
Robert Baldyga
cd544e8ee5 Introduce ocf_forward_write_zeros()
This is meant to be used in atomic mode to avoid allocating huge buffers
for zeroing data on the drive.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-20 13:59:46 +02:00
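
A sketch of why such a primitive helps, with stand-in functions (backend_write and backend_write_zeros are hypothetical stubs, not OCF calls): without a write-zeros operation, zeroing a large range means allocating a buffer of the same size just to hold zeros.

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-ins for backend volume ops (stubbed for illustration). */
static int backend_write(uint64_t addr, const void *buf, size_t bytes)
{
	(void)addr; (void)buf; (void)bytes;
	return 0;
}

static int backend_write_zeros(uint64_t addr, size_t bytes)
{
	(void)addr; (void)bytes;
	return 0;
}

/* Buffer-based zeroing: allocates `bytes` of memory just to hold zeros. */
static int zero_with_buffer(uint64_t addr, size_t bytes)
{
	void *buf = calloc(1, bytes);
	if (!buf)
		return -1;
	int ret = backend_write(addr, buf, bytes);
	free(buf);
	return ret;
}

/* Forwarded zeroing: the operation carries only (addr, bytes). */
static int zero_forwarded(uint64_t addr, size_t bytes)
{
	return backend_write_zeros(addr, bytes);
}
```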
Robert Baldyga
7e73de0d51 volume: Introduce general IO forward mechanism
Allow the core volume IOs to be forwarded directly to backend volumes to
avoid unnecessary allocations.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-19 15:55:19 +02:00
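
One way to picture the forward mechanism, sketched under assumed names (fwd_begin, fwd_end; not the OCF signatures): the original request is reference-counted across the forwarded parts instead of allocating a child I/O per backend volume.

```c
#include <stdatomic.h>

struct req {
	atomic_int remaining;               /* forwarded parts still in flight */
	int error;                          /* first error reported, if any */
	void (*complete)(struct req *req);
};

/* Arm the request for nparts forwarded sub-I/Os. */
static void fwd_begin(struct req *req, int nparts,
		void (*complete)(struct req *req))
{
	req->error = 0;
	req->complete = complete;
	atomic_store(&req->remaining, nparts);
}

/* Completion of one forwarded part; the last one completes the request. */
static void fwd_end(struct req *req, int error)
{
	if (error && !req->error)
		req->error = error;   /* real code would make this race-safe */
	if (atomic_fetch_sub(&req->remaining, 1) == 1)
		req->complete(req);
}
```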
Rafal Stefanowski
194e5a9172 Use cache_error and core_error flags only in WT
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-09-18 14:04:08 +02:00
Rafal Stefanowski
2761540326 Report cache and core errors separately
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-18 08:01:08 +02:00
Roel Apfelbaum
73387c8f26 Support set_data() with offset > 0 for core
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-17 16:26:27 +02:00
Robert Baldyga
8b93b699c3 Eliminate queue -> cache mapping
Eliminate the need to resolve the cache based on the queue. This allows
sharing a queue between cache instances. The queue still holds a pointer to
the cache that owns it, but no management or I/O path relies on the
queue -> cache mapping.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
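
A sketch of the ownership change under hypothetical types: the queue keeps its owner pointer, but the I/O path resolves the cache from the request, so a single queue can serve requests belonging to different caches.

```c
struct cache;

struct queue {
	struct cache *owner;   /* still set, but the I/O path ignores it */
};

struct req {
	struct cache *cache;   /* the request knows its own cache */
	struct queue *queue;
};

static struct cache *req_cache(struct req *req)
{
	return req->cache;     /* not req->queue->owner */
}
```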
Robert Baldyga
460cd461d3 Allocate requests for management path separately
The management path does not benefit much from mpools, as the number of
requests it allocates is very small. It is also less restrictive (the
mngt_queue does not have single-CPU affinity), so avoiding mpool usage in
the management path allows introducing additional restrictions on the mpool,
leading to an I/O performance improvement.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
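
A sketch of the split, with the mpool internals stubbed out (mpool_get here is a stand-in, not the OCF helper): the hot I/O path keeps the pooled fast path and its affinity rules, while rare management requests fall back to plain allocation.

```c
#include <stdlib.h>

struct req { int flags; };

/* Stand-in for a per-CPU memory pool fast path. */
static void *mpool_get(size_t size)
{
	return calloc(1, size);
}

/* Hot I/O path: keep the mpool fast path. */
static struct req *io_req_alloc(void)
{
	return mpool_get(sizeof(struct req));
}

/* Management path: a handful of requests with no affinity guarantees,
 * so plain allocation is simpler and lifts restrictions off the mpool. */
static struct req *mngt_req_alloc(void)
{
	return calloc(1, sizeof(struct req));
}
```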
Michael Lyulko
470204ac70 Count deferred requests as full miss
Otherwise, deferred requests may increase the number of hits while the
overall performance has not improved. This way, the hit rate is more
correlated with performance changes.

Signed-off-by: Michael Lyulko <michael.lyulko@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-02 09:21:28 +02:00
Ian Levine
de32a9649a Rename ocf_engine_cb to ocf_req_cb and move it from engine_common.h to ocf_request.h
Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-08-02 12:53:10 +02:00
Robert Baldyga
218e9c5723 Revert "cleaner: Remove complete_queue"
This functionality is used by cleaning policies via cmpl_queue
to reschedule the completion, so that we avoid unlocking a mutex in
the cleaner completion from the interrupt context of the I/O completion.

This reverts commit 1e5eda68a7.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-26 10:35:23 +02:00
Robert Baldyga
1e5eda68a7 cleaner: Remove complete_queue
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-12 17:38:17 +02:00
Robert Baldyga
2398412622 cleaner: Unlock cache mngt lock from queue context
The cache mngt lock cannot be unlocked from the I/O completion context
(which is potentially an atomic context), as unlocking may involve sleeping
operations. Modify the cleaner utility to support rescheduling to the queue
context before calling the completion, and update cleaning policies to use
that option.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-03-21 15:25:18 +01:00
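
A sketch of the rescheduling idea with assumed helpers (queue_push_work is a stand-in, stubbed inline; the real OCF queue API is not shown in the message): the atomic-context I/O completion only records the result and defers the real completion, which may sleep, to queue context.

```c
struct queue;

struct cleaner_cmpl {
	struct queue *queue;
	void (*cmpl)(void *priv, int error);   /* may unlock a mutex / sleep */
	void *priv;
	int error;
};

/* Stand-in: run fn(arg) later from the queue's process context. */
static void queue_push_work(struct queue *queue,
		void (*fn)(void *arg), void *arg)
{
	(void)queue;
	fn(arg);   /* a real queue would defer this, not call it inline */
}

static void cleaner_cmpl_work(void *arg)
{
	struct cleaner_cmpl *c = arg;
	c->cmpl(c->priv, c->error);   /* safe here: not atomic context */
}

/* Called from I/O completion (potentially atomic) context. */
static void cleaner_io_end(struct cleaner_cmpl *c, int error)
{
	c->error = error;
	queue_push_work(c->queue, cleaner_cmpl_work, c);
}
```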
Robert Baldyga
228c5fc891 Get rid of req->io_if
Remove one level of callback indirection. An I/O never changes its
direction, so there is no point in storing both read and write callbacks
for each request.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-09-07 23:07:04 +02:00
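
A sketch of the simplification under hypothetical struct names: because a request's direction is fixed for its whole lifetime, a single callback replaces the read/write pair and the indirection through io_if.

```c
struct req;
typedef int (*req_cb)(struct req *req);

/* Before: two callbacks per request plus an indirection level. */
struct io_if {
	req_cb read;
	req_cb write;
};

struct req_before {
	const struct io_if *io_if;
	int rw;                    /* picks read or write at dispatch time */
};

/* After: the direction never changes, so one callback is enough. */
struct req_after {
	req_cb engine_handler;
};
```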
Krzysztof Majzerowicz-Jaszcz
133ea307c8 Fix for issues #988 and #997
This patch fixes issues #988 and #997, which caused a kernel stack
overflow.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2021-11-24 08:15:07 +01:00
Rafal Stefanowski
f22da1cde7 Fix license
Change license to BSD-3-Clause

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-28 13:08:50 +02:00
Robert Baldyga
c1e9c1fa96 Remove trace API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-08-09 10:00:01 +02:00
Robert Baldyga
a2b300d465 Avoid stack overflow when pending read misses list is blocked
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-14 13:04:03 +02:00
Kozlowski Mateusz
f494448f97 Align structures to cacheline
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-08 12:46:39 +02:00
Adam Rutkowski
fac35d34a2 Rename "evict" to "remap" across the entire repo
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Michał Mielewczyk
c2e588be9d
Merge pull request #476 from mmkayPL/cacheline-alignment
Cacheline alignment
2021-03-26 12:01:55 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512 KiB, OCF would trim the request and
overwrite req->core_line_count, which would then cause the request to be
freed from the wrong mpool.

This is now fixed by saving the core_line_count set when the request was
allocated in a field that is never overwritten. This alloc_core_line_count
is then used to free the request from the correct mpool.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
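
A sketch of the fix described above (field and helper names are hypothetical; mpool_put is a stub): the size class recorded at allocation time is the one used to return the request, regardless of later trimming.

```c
/* Stand-in: return an object to the pool for a given size class. */
static void mpool_put(void *obj, unsigned core_line_count)
{
	(void)obj; (void)core_line_count;
}

struct req {
	unsigned core_line_count;        /* may shrink when a discard is trimmed */
	unsigned alloc_core_line_count;  /* frozen at allocation time */
};

static void req_free(struct req *req)
{
	/* Free into the mpool the request was taken from, not the one
	 * matching the (possibly overwritten) current line count. */
	mpool_put(req, req->alloc_core_line_count);
}
```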
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except in the new eviction implementation), we are certain
that no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
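
A condensed sketch of the locking order described above, using pthread rwlocks as stand-ins for OCF's lock primitives: eviction takes the cacheline lock first, which makes the hash resolution stable, and only then write-locks the resolved hash bucket.

```c
#include <pthread.h>

struct cacheline {
	pthread_rwlock_t lock;
	unsigned hash;          /* stable while the cacheline lock is held */
};

struct bucket {
	pthread_rwlock_t lock;
};

/* Remap one cacheline from the LRU list without the global metadata
 * write lock: cacheline lock first, then the resolved hash bucket. */
static void remap(struct cacheline *cl, struct bucket *buckets)
{
	pthread_rwlock_wrlock(&cl->lock);
	struct bucket *b = &buckets[cl->hash];
	pthread_rwlock_wrlock(&b->lock);
	/* ... update mapping metadata ... */
	pthread_rwlock_unlock(&b->lock);
	pthread_rwlock_unlock(&cl->lock);
}
```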
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential after the needed cachelines are evicted
and mapped to the request in sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in multiple-read-lock scenarios.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
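
One common shape for such a split, sketched with pthread rwlocks (whether OCF stripes exactly this way is not stated in the message): readers spread across the 4 instances, while a writer must take all of them.

```c
#include <pthread.h>

#define LOCK_STRIPES 4

static pthread_rwlock_t stripe[LOCK_STRIPES] = {
	PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
	PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER
};

/* Readers hash onto one stripe, so they rarely contend with each other. */
static void metadata_read_lock(unsigned cpu)
{
	pthread_rwlock_rdlock(&stripe[cpu % LOCK_STRIPES]);
}

static void metadata_read_unlock(unsigned cpu)
{
	pthread_rwlock_unlock(&stripe[cpu % LOCK_STRIPES]);
}

/* A writer must own every stripe to exclude all readers. */
static void metadata_write_lock(void)
{
	for (int i = 0; i < LOCK_STRIPES; i++)
		pthread_rwlock_wrlock(&stripe[i]);
}

static void metadata_write_unlock(void)
{
	for (int i = LOCK_STRIPES - 1; i >= 0; i--)
		pthread_rwlock_unlock(&stripe[i]);
}
```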
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Michal Mielewczyk
60680b15b2 Accessors for req->info.mapping_error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
9d80882b00 Remove re_part field from struct ocf_req_info
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep a boolean flag telling
whether some of the request's cachelines are assigned to the wrong
partition.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e26ca30399 Track explicit number of cachelines to be reparted
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
718dc743c8 Enable particular ioclass eviction
If a partition's occupancy limit is reached, cachelines should be evicted
from the request's target partition.

The information whether a particular partition's eviction should be
triggered is carried as a flag by the request which triggered the eviction.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:23 -05:00
Robert Baldyga
7d889fa1fc
Merge pull request #385 from arutk/pt_write_double_inv
Two pass write invalidate
2020-07-28 07:42:44 +02:00
Adam Rutkowski
91b6098fda Two pass write invalidate
Add a second pass of write invalidate. It is necessary only
if concurrent I/O inserted the target LBAs into the cache after
the WI request performed its traversal. These LBAs might have been
written by the WI request behind the concurrent I/O's back,
effectively making these sectors invalid. In this case we must
update these sectors' metadata to reflect that. However, we can only
detect this by traversing the request again; hence ocf_write_wi is
called again with req->wi_second_pass set to indicate that this
is the second pass (the core write should be skipped).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:35 +02:00
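
A sketch of the two-pass control flow (names hypothetical except ocf_write_wi and wi_second_pass, which the message itself uses; the engine steps are stubbed, and the real code would re-submit rather than recurse): the first pass does the core write and re-runs the engine once with the flag set; the second pass only re-invalidates.

```c
struct req {
	int wi_second_pass;
};

/* Stand-ins for the engine steps (stubbed for illustration). */
static void traverse_and_invalidate(struct req *req) { (void)req; }
static void write_to_core(struct req *req) { (void)req; }

static void write_invalidate(struct req *req)
{
	traverse_and_invalidate(req);

	if (!req->wi_second_pass) {
		write_to_core(req);
		/* Concurrent I/O may have inserted the target LBAs after the
		 * first traversal; run once more to invalidate those too. */
		req->wi_second_pass = 1;
		write_invalidate(req);
	}
	/* Second pass: metadata updated again, core write skipped. */
}
```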
Slawomir Jankowski
da34d5047b Typo fix
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
6f4d02f251 Fix seq_cutoff respecting in pt read
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
2edd05c812 Change get_effective_cache_mode to operate on req instead of io
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00