Commit Graph

69 Commits

Author SHA1 Message Date
Roel Apfelbaum
b02481cf74 A utility to continue pipeline on zero refcnt
Signed-off-by: Roel Apfelbaum <roel.apfelbaum@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-06 12:04:40 +01:00
Adam Rutkowski
53ee7c1d3a Per-cpu refcounters
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Jan Musial <jan.musial@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-06 12:04:34 +01:00
Robert Baldyga
8b93b699c3 Eliminate queue -> cache mapping
Eliminate the need to resolve the cache based on the queue. This allows sharing
the queue between cache instances. The queue still holds a pointer to
the cache that owns it, but no management or I/O path relies on the
queue -> cache mapping.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
Robert Baldyga
460cd461d3 Allocate requests for management path separately
The management path does not benefit much from mpools, as the number of requests
allocated there is very small. It is also less restrictive (mngt_queue does not have
single-CPU affinity), so avoiding mpool usage in the management path allows
introducing additional restrictions on the mpool, leading to I/O performance
improvement.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-09-09 12:45:51 +02:00
Robert Baldyga
5b2f26decf
Merge pull request #800 from robertbaldyga/redesign-queue-api
Redesign queue API
2024-08-02 14:43:52 +02:00
Ian Levine
ac1b6b774a Added a priority queue for requests instead of push front
Now a request can be pushed to a high-priority queue (instead of ocf_queue_push_req_front)
or to a low-priority queue (instead of ocf_queue_push_req_back).
Both functions were merged into one (ocf_queue_push_req), and instead of the
allow_sync parameter there is now a flags parameter that can be an OR combination of
OCF_QUEUE_ALLOW_SYNC and OCF_QUEUE_PRIO_HIGH.

Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-08-02 12:53:16 +02:00
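
A hedged sketch of how the merged call might look after this change; the function and flag names come from the commit message above, while the exact signature and the surrounding engine context are assumptions:

    /* Sketch only: ocf_queue_push_req() and the two flags are named in the
     * commit message; the signature and calling context are assumed. */
    static void requeue_example(struct ocf_request *req, bool urgent)
    {
        /* Previously ocf_queue_push_req_front()/_back(req, allow_sync) */
        ocf_queue_push_req(req, urgent ?
                (OCF_QUEUE_ALLOW_SYNC | OCF_QUEUE_PRIO_HIGH) :
                OCF_QUEUE_ALLOW_SYNC);
    }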
Ian Levine
038126e9ab Move and rename ocf_engine_push_req_* from engine_common to ocf_queue_push_req_* in ocf_queue
Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-08-02 12:53:16 +02:00
Robert Baldyga
40ff7d2dcf
Merge pull request #799 from Open-CAS/cache_detach
Implement cache detach/attach
2024-07-31 06:54:08 +02:00
Robert Baldyga
dfb2e1a8d5 cleaner: Check mapping after taking cache line lock
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-12 17:38:13 +02:00
Michal Mielewczyk
83ec255458 Disable changing cache params for detached cache
The majority of management operations should be blocked for a detached cache,
although adding and removing cores should still be possible.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:19:37 +02:00
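
A minimal sketch of the kind of guard this implies at the top of a parameter-changing entry point; the error code and the exact check are assumptions based on the commit description, not quotes from the patch:

    /* Illustrative guard only; OCF_ERR_CACHE_DETACHED and the placement
     * of the check are assumptions. */
    static int set_param_guard_example(ocf_cache_t cache)
    {
        if (!ocf_cache_is_device_attached(cache))
            return -OCF_ERR_CACHE_DETACHED; /* block param changes */
        return 0; /* core add/remove paths would not take this guard */
    }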
Robert Baldyga
168ecd0075 Add missing "static" to the local function
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:59:39 +02:00
Michal Mielewczyk
7b8093aa34 Refactor cleaning policies initialization
Don't populate cleaning policies during the initialization procedure, so the user
has to call populate explicitly.

Until now cleaning policies could be populated in two ways:
- implicitly during cleaning policy initialization,
- explicitly by calling populate.
The difference was that the former was single-threaded.

This patch removes the functionally redundant and less efficient code.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:14:40 +02:00
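
In effect the caller now drives both steps explicitly; a hedged sketch with hypothetical function names (the real OCF symbols may differ):

    /* Hypothetical names; shows init followed by an explicit populate,
     * which can now be parallelized, as described above. */
    static int cleaning_setup_example(ocf_cache_t cache, ocf_cleaning_t policy)
    {
        int ret = cleaning_policy_init(cache, policy);  /* no implicit populate */
        if (ret)
            return ret;
        return cleaning_policy_populate(cache, policy); /* explicit step */
    }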
Robert Baldyga
228c5fc891 Get rid of req->io_if
Remove one level of callback indirection. An I/O never changes its direction,
so there is no point in storing both read and write callbacks for each
request.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-09-07 23:07:04 +02:00
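
Conceptually, the change replaces a per-request pair of read/write handlers with a single handler chosen when the request is created; a simplified sketch (not the actual OCF structures):

    /* Before (simplified): an io_if with two callbacks, one of which is
     * never used for a given request. */
    struct io_if_example {
        int (*read)(struct ocf_request *req);
        int (*write)(struct ocf_request *req);
    };

    /* After (simplified): the request stores only the handler matching its
     * fixed direction, removing one level of indirection. */
    struct request_example {
        int (*engine_handler)(struct ocf_request *req);
    };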
Robert Baldyga
d4df912f46 Add option to disable cleaner
This allows avoiding allocation of the cleaner metadata section, effectively
saving up to 20% of the metadata memory footprint.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-04-28 13:04:27 +02:00
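
A hedged sketch of how such an option would be expressed at cache-configuration time; the field name is hypothetical:

    static void cache_config_example(struct ocf_mngt_cache_config *cfg)
    {
        ocf_mngt_cache_config_set_default(cfg);
        /* Hypothetical flag name: starts the cache without a cleaner, so
         * the cleaner metadata section is never allocated. */
        cfg->disable_cleaner = true;
    }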
Michal Mielewczyk
92fa8f7e59 Remove redundant standby check
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-08 15:34:14 +02:00
Adam Rutkowski
4a839cd332 Verify standby/active cache state in OCF entry points
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-03-28 09:42:02 +02:00
Michal Mielewczyk
52824adaaf Additional cleaning policy info outside of the SB
Starting a cache in standby mode requires access to a valid cleaning policy
type. If the policy is stored only in the superblock, it may be overridden by
one of the passive metadata updates.

To prevent losing this information, it should also be stored in the cache's
runtime metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Rafal Stefanowski
f22da1cde7 Fix license
Change license to BSD-3-Clause

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-28 13:08:50 +02:00
Krzysztof Majzerowicz-Jaszcz
71262d5097 Cache standby mode API changes
Added an error for an invalid cache operation while in passive mode

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Error name correction

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

API changes for passive cache mode

Moved the passive cache error return source to the API for flush and
set_param

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Further API changes for passive cache mode

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Passive API - review changes

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2021-10-22 15:10:53 +02:00
Michal Mielewczyk
b83da68f85 Cleaner context ref counter
To prevent deinitializing the cleaner context (e.g. while switching the policy)
while requests are being processed, access to the cleaner should be protected
with a reference counter.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:33 +02:00
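
The pattern is the usual get/put around request processing plus a freeze-and-drain before deinitialization; a self-contained sketch of the idea using C11 atomics (not OCF's actual utils_refcnt API):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct cleaner_ref { atomic_int cnt; atomic_bool frozen; };

    /* I/O path: take a reference before touching the cleaner context. */
    static bool cleaner_get(struct cleaner_ref *r)
    {
        atomic_fetch_add(&r->cnt, 1);
        if (atomic_load(&r->frozen)) {
            atomic_fetch_sub(&r->cnt, 1); /* lost the race with freeze */
            return false;
        }
        return true;
    }

    static void cleaner_put(struct cleaner_ref *r)
    {
        atomic_fetch_sub(&r->cnt, 1);
    }

    /* Policy switch: freeze new users, then wait for the count to drain
     * before freeing the cleaner context. */
    static void cleaner_freeze_and_drain(struct cleaner_ref *r)
    {
        atomic_store(&r->frozen, true);
        while (atomic_load(&r->cnt) > 0)
            ; /* OCF does this asynchronously via a zero-refcnt callback */
    }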
Michal Mielewczyk
f33a6e5ce0 Make switching cleaning policy asynchronous
Making the operation asynchronous allows using the refcnt utility as a
synchronization mechanism between processing cachelines and deinitializing the
cleaning policy.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
26194fc536 Use cleaning ops wrapper functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Adam Rutkowski
87f834c793 Move common user and freelist partition data to a new struct
A new structure, ocf_part, is added to contain the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains an ocf_part structure as well as a pointer to the
cleaning partition runtime metadata (moved out from part_runtime) and the
user partition config (no change here).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:07:10 +02:00
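
The resulting layout, as described above, looks roughly like this (field types are simplified/assumed):

    /* Simplified sketch of the described layout; the exact types differ. */
    struct ocf_part {
        struct ocf_part_runtime *runtime; /* part_runtime */
        ocf_part_id_t id;                 /* part_id */
    };

    struct ocf_user_part {
        struct ocf_part part;                /* common data */
        struct cleaning_policy *clean_pol;   /* moved out of part_runtime */
        struct ocf_user_part_config *config; /* unchanged */
    };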
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock (everywhere
except for the new eviction implementation), we are certain that
no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of the critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
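
The lock ordering the commit relies on can be summarized in pseudocode; every helper below is a hypothetical stand-in used only to show the sequence, not an OCF function:

    /* Pseudocode of the remap path described above. */
    static bool try_remap_one(struct cache *cache, struct cline *cl)
    {
        lock_eviction_list(cache);        /* serialize concurrent evictors */
        if (!trylock_cacheline(cl)) {
            unlock_eviction_list(cache);
            return false;                 /* someone else owns this line */
        }
        unlock_eviction_list(cache);

        unsigned hash = resolve_hash(cache, cl); /* safe: cacheline lock held */
        write_lock_hash_bucket(cache, hash);
        remap_cacheline(cache, cl);       /* no global metadata write lock */
        write_unlock_hash_bucket(cache, hash);
        unlock_cacheline(cl);
        return true;
    }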
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Robert Baldyga
a9c36477d2 Fix deadlock on concurrent flush at the same cache
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-03 18:09:35 +02:00
Robert Baldyga
80b410dc2e
Merge pull request #355 from arutk/flush_fixes
Fix stalls and warnings during flush
2020-03-27 14:11:34 +01:00
Adam Rutkowski
b267d5d77d Reduce flush relaxation period by 1 order of magnitude
The loop now relaxes every 2^17 (131K) cycles instead of every 1M.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:37:49 +01:00
Adam Rutkowski
fd328bd0a1 Check relaxation condition in each step of flush loop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:36:43 +01:00
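
Taken together, the two commits above amount to evaluating a cheap relaxation check on every iteration and actually relaxing every 2^17 cycles; a minimal self-contained sketch:

    #define FLUSH_RELAX_PERIOD (1u << 17) /* 131K; previously ~1M */

    static void flush_loop_example(unsigned count)
    {
        for (unsigned i = 0; i < count; i++) {
            /* ... flush one portion of dirty cachelines here ... */

            /* Checked in every step; triggers only every 2^17 iterations. */
            if ((i + 1) % FLUSH_RELAX_PERIOD == 0) {
                /* relax: release the lock / let the scheduler run,
                 * then resume the loop */
            }
        }
    }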
Adam Rutkowski
4d61d56249 Rename flushing functions' local variables for readability
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:29:16 +01:00
Adam Rutkowski
64dcae1490 Split global metadata lock critical path
There is no need to constantly hold the global metadata lock
while collecting cachelines to clean. Since we are past
freezing the dirty request counter, we know for sure that the
number of dirty cache lines will not increase. So the worst
case is that when the loop relaxes and releases the lock,
a concurrent I/O to CAS is serviced in WT mode, possibly
inserting and/or evicting cachelines. This does not interfere
with scanning for dirty cachelines, and the lower layer will
handle synchronization with concurrent I/O by acquiring an
asynchronous read lock on each cleaned cacheline.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:32:15 -04:00
Adam Rutkowski
3b3a49e8ea Queue flush portion requests to the back of IO queue
In the current implementation, in the case of fast media, a flushing
container may starve all concurrently flushing containers
due to continuous rescheduling of the offender's requests to the
front of the I/O queue. Pushing requests to the back of the I/O
queue ensures FIFO handling and removes the possibility of
starvation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:06:14 -04:00
Adam Rutkowski
c17beec7d4 Do not exclude used cachelines from flushing
The lower layer is prepared to handle used cachelines by
acquiring an asynchronous read lock. It is very likely that
by the time a cacheline is actually cleaned its lock
state has changed, so checking the lock at the moment of
constructing the dirty cachelines list makes little sense.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Adam Rutkowski
61983c946c Move flush containers sort & submit outside metadata lock
Moving _ocf_mngt_flush_containers outside the global metadata
critical section. All this function does is sort core lines
and add a queue request.

This fixes stalls reported by the Linux scheduler due to
I/O threads waiting on the global metadata RW semaphore for
several minutes.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Adam Rutkowski
d2bd807e49 Remove calls to OCF_METADATA_(UN)LOCK_WR(RD)
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
42f65c3fbb Change ocf_metadata_(un)lock -> OCF_METADATA_(UN)LOCK
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Robert Baldyga
259df7ace9 Store core name in metadata
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-07-30 09:26:26 +02:00
Michal Rakowski
b1a6c467a0 Introduce core_is_dirty mngt method
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-25 09:12:44 +02:00
Michal Rakowski
2925147395 Remove redundant dirty check
When flush completion is called, there could still be some cache lines marked as dirty, since those could have been in use during flushing.
2019-06-24 14:24:34 +02:00
Michal Rakowski
29199cb5d4 Added missing metadata_unlock
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-11 12:12:07 +02:00
Michal Mielewczyk
e6bedb692c Unified management functions prefix.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-06-05 09:31:59 -04:00
Robert Baldyga
711de86bff Associate core metadata with core object
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-28 14:49:59 +02:00
Robert Baldyga
7de56940a4 Move ocf_request from utils
ocf_request has always been a first-class citizen in OCF,
so let's place it along with the other essential objects.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 15:51:27 +02:00
Robert Bałdyga
1c9fe96663
Merge pull request #133 from arutk/ajrutkow_async_counters
Extended reference counting
2019-05-08 14:23:50 +02:00
Robert Baldyga
7b88aac56f Remove "interruption" argument from flush() functions
As non-interruptible flushes are no longer triggered from OCF
internals, we can get rid of the "interruption" argument and let
adapters handle it themselves.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-07 17:23:19 +02:00
Adam Rutkowski
979f51612f Move dirty ref counter to cache->refcnt aggregate
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-06 19:22:29 -04:00
Robert Baldyga
c2aea209db Introduce pipeline *_RET macros
This simplifies the code by allowing programmer intent to be expressed
explicitly and helps avoid missing return statements (this patch
fixes at least one bug related to this).

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-02 17:22:43 +02:00
Robert Baldyga
1373471af7 Introduce OCF_CMPL_RET() macro
This simplifies cases where we want to call a completion callback
and immediately return from a void-returning function, by allowing
programmer intent to be expressed explicitly. That way we can avoid
cases where a return statement is missing by mistake (this patch
fixes at least one bug related to this).

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-02 17:22:36 +02:00
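
Both macros follow the same "invoke and return in a single statement" pattern; a hedged sketch of what such a macro can look like (the actual OCF definitions may differ):

    /* Sketch only; the real OCF_CMPL_RET()/pipeline *_RET macros may differ. */
    #define CMPL_RET_EXAMPLE(cmpl, ...) \
        do { \
            cmpl(__VA_ARGS__); \
            return; \
        } while (0)

    /* Completing and returning in one statement means a forgotten "return"
     * after the callback cannot slip through. */
    static void on_error_example(void (*cmpl)(void *priv, int err),
            void *priv, int err)
    {
        if (err)
            CMPL_RET_EXAMPLE(cmpl, priv, err);
        cmpl(priv, 0);
    }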