Commit Graph

1056 Commits

Author SHA1 Message Date
Robert Baldyga
2b94a3ab31 cleaner: Move sort functionality to flush_data abstraction
The flush_data is used by ocf_cleaner_do_flush_data_async(), which means
that callers of ocf_cleaner_fire() are now expected to guarantee that
entries are returned by the getter in sorted order. Currently the only
case where ocf_cleaner_fire() is called directly is request cleaning, and
the request map is sorted by definition.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-12 13:23:35 +02:00
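A minimal sketch of the sorted-order contract described above, with hypothetical entry fields (the real flush_data layout may differ): a caller of ocf_cleaner_fire() whose getter cannot guarantee order could sort its table by (core_id, core_line) up front.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical flush entry; the real flush_data fields may differ. */
    struct flush_entry {
        uint32_t core_id;
        uint64_t core_line;
    };

    /* Order entries by (core_id, core_line) - the order the getter is
     * now expected to return them in. */
    static int flush_entry_cmp(const void *a, const void *b)
    {
        const struct flush_entry *x = a;
        const struct flush_entry *y = b;

        if (x->core_id != y->core_id)
            return x->core_id < y->core_id ? -1 : 1;
        if (x->core_line != y->core_line)
            return x->core_line < y->core_line ? -1 : 1;
        return 0;
    }

    static void sort_flush_entries(struct flush_entry *tbl, size_t count)
    {
        qsort(tbl, count, sizeof(*tbl), flush_entry_cmp);
    }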
Robert Baldyga
dd4add45e1 lru: Use common flush_data abstraction for cleaning
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-12 13:23:35 +02:00
Robert Baldyga
43cc487c40 lru: Move partition runtime structures outside of metadata
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-12 13:23:29 +02:00
Michal Mielewczyk
83ec255458 Disable changing cache params for detached cache
The majority of management operations should be blocked for a detached
cache, although adding and removing cores should still be possible.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:19:37 +02:00
Michal Mielewczyk
f1bfd94c98 Enable IO to detached cache instance
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:44 +02:00
Michal Mielewczyk
de07458ff2 Common context for cache stop and cache detach
Stop and cache detach were already sharing contexts implicitly, which
allowed some functions to be reused in both pipelines. However, changing
the context structs could lead to non-obvious bugs.

To prevent such errors, both methods now share the context structure
explicitly.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:33 +02:00
Michal Mielewczyk
09335cd6f2 Update cache's state after detach
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:28 +02:00
Michal Mielewczyk
695d77e3b5 Apply cache state API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:23 +02:00
Michal Mielewczyk
2f0b86f5ca Extend cache state API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:17 +02:00
Michal Mielewczyk
047e07c062 Rename cache "initializing" state to "detached"
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:18:04 +02:00
Michal Mielewczyk
d3c11a983b Update cache state when stopping uninited instance
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:16:31 +02:00
Michal Mielewczyk
3f41a35f30 Patch detached cache API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:14:33 +02:00
Michal Mielewczyk
41224c61c0 Track max number of cores for atomic volume
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 16:13:15 +02:00
Michal Mielewczyk
2a97de8792 Detach finish: destroy stop pipeline before cmpl
The 'stop_pipeline' field may be reused during cache lifetime (e.g. when
the cache is detached and attached again - the pipeline would be freed and
then re-allocated). Calling the completion after detach before freeing the
pipeline may lead to a race condition.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2024-07-10 11:35:42 +02:00
Robert Baldyga
d7fe7c05f1 Add missing ocf_cache_mode_t to ocf_req_cache_mode_t conversions
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-07-05 16:59:05 +02:00
Robert Baldyga
168ecd0075 Add missing "static" to the local function
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:59:39 +02:00
Robert Baldyga
578f4b6591 Add missing headers
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:51:29 +02:00
Robert Baldyga
43608fc812 Remove unused function
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:50:34 +02:00
Robert Baldyga
253734b160 Move misplaced function declaration to the appropriate header
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:49:52 +02:00
Robert Baldyga
dc3b581e38 Move declaration to the right header
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:49:21 +02:00
Robert Baldyga
527e3deb74 Remove accidentally added .swp file
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-05-11 00:35:59 +02:00
Robert Baldyga
5710ca8b4a Fix compilation
Signed-off-by: Robert Baldyga <robert.baldyga@open-cas.com>
2024-04-01 18:27:25 +00:00
Amir Haroush
c85a01473f Fix wrong order call to ocf_alock_waitlist_remove_entry()
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-03-21 21:20:11 +01:00
Robert Baldyga
2398412622 cleaner: Unlock cache mngt lock from queue context
The cache mngt lock cannot be unlocked from io completion context (which
is potentially atomic context) as it may involve sleeping operations.
Modify the cleaner utility to support rescheduling to queue context before
calling the completion. Update cleaning policies to use that option.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-03-21 15:25:18 +01:00
Robert Baldyga
fd489e3a30 Fix potential deadlock in discard
The HB lock takes the inclusive metadata lock, which is also taken by
metadata flush, so calling metadata flush under the HB lock attempts to
take this lock recursively. In that case, if in the meantime some other
thread tried to take the exclusive metadata lock, the inner inclusive lock
would block (because the lock preserves ordering) while the outer
inclusive lock was still held, leading to a deadlock.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-03-20 23:35:46 +01:00
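An illustration (not OCF code) of the hazard described above, assuming a fair reader-writer lock that queues new readers behind a waiting writer:

    #include <pthread.h>

    static pthread_rwlock_t metadata_lock = PTHREAD_RWLOCK_INITIALIZER;

    static void hb_lock_then_flush(void)
    {
        /* Outer inclusive lock (the HB lock). */
        pthread_rwlock_rdlock(&metadata_lock);
        /* Another thread calls pthread_rwlock_wrlock() here and blocks;
         * a fair lock now queues further readers behind that writer. */

        /* Inner inclusive lock taken by metadata flush: it blocks behind
         * the waiting writer while the outer lock is still held, and
         * neither thread can ever proceed - deadlock. */
        pthread_rwlock_rdlock(&metadata_lock);

        pthread_rwlock_unlock(&metadata_lock);
        pthread_rwlock_unlock(&metadata_lock);
    }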
Robert Baldyga
d57c9bb51d Unlock request in PT using ocf_req_unlock()
There are situations when we can end up in engine_pt with cache lines
locked for write. One example is engine_rd falling back to engine_pt after
a failure during cache line preparation, where the write lock has already
been taken. To handle this situation properly, unlock the request using
the more general unlock function.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2023-09-13 17:04:06 +02:00
Robert Baldyga
e09463054d
Merge pull request #771 from robertbaldyga/cache-is-initializing
Add OCF API ocf_cache_is_initializing
2023-04-17 20:38:14 +02:00
Amir Haroush
041df202b8 Fix alignment of private data in parallelize & pipeline
There is an issue when someone calls parallelize/pipeline with a struct
that is aligned (say, to 64B), because these APIs add their own data right
before the user's private data. As a result, the user's data is no longer
aligned, which might cause a segfault in some cases.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2023-04-17 20:35:38 +02:00
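A minimal sketch of the idea behind the fix, with hypothetical names (util_hdr and util_alloc_with_priv are illustrative, not the OCF API): pad the utility's internal header up to the requested alignment so the user's priv area, which starts right after it, stays aligned. Assumes align is a power of two.

    #include <stddef.h>
    #include <stdlib.h>

    struct util_hdr {           /* data the utility keeps for itself */
        void *ctx;
        size_t priv_size;
    };

    static void *util_alloc_with_priv(size_t priv_size, size_t align,
            void **priv_out)
    {
        /* Round the header up so priv lands on an aligned address. */
        size_t hdr = (sizeof(struct util_hdr) + align - 1) & ~(align - 1);
        /* aligned_alloc() wants the size to be a multiple of align. */
        size_t total = (hdr + priv_size + align - 1) & ~(align - 1);
        char *mem = aligned_alloc(align, total);

        if (mem)
            *priv_out = mem + hdr;
        return mem;
    }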
Amir Haroush
6cb1ff71c2 Add OCF API ocf_cache_is_initializing
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2023-03-30 10:34:05 +02:00
Amir Haroush
22a697d09e Fix segfault when copy unaligned struct as aligned
Because the context has one field which is aligned to 64B (struct
ocf_volume cache_volume), the compiler uses vmovdqa (aligned) instead of
vmovdqu (unaligned). In reality the address is not 64B-aligned (it ends
with 0x8), so we get this segfault.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2023-03-28 09:32:33 +02:00
Damian Raczkowski
d2ea41cdbc remove ocf_io_start function
Signed-off-by: Damian Raczkowski <damian.raczkowski@intel.com>
2022-10-28 15:03:36 +02:00
David Lee
004e930a9f update copyright line as requested
Signed-off-by: David Lee <live4thee@gmail.com>
2022-10-09 22:30:36 +08:00
David Lee
6184ad7759 alru: add parameter `max_dirty_ratio'
With a high dirty ratio and occupancy, OCF might be unable to map cache
lines for new requests and thus passes the I/O through to core devices.
IOPS will drop afterwards.  We need to control the dirty ratio.

The existing `alru' policy gives the user the chance to control the stale
buffer time, activity threshold etc.  These can affect the dirty ratio of
the cache device, but more or less in an empirical manner.  Introducing
`max_dirty_ratio' makes it explicit.

At first glance, it might be better to implement a dedicated cleaner
policy directly targeting a dirty ratio goal, so that the `alru'
parameters remain orthogonal.  On the other hand, we still need to flush
dirty cache lines periodically, instead of just keeping a watermark of the
dirty ratio.  That indicates the existing `alru' parameters would still be
required if we developed a new policy, so it seems reasonable to make it a
parameter instead.

To sum up, this patch does the following:
- adds a `max_dirty_ratio' parameter with default value 100;
- with the default value 100, the `alru' cleaner is identical to what it
  was;
- with a value N less than 100, the cleaner (when woken up) will actively
  bring the dirty ratio down to N, regardless of staleness time.

Signed-off-by: David Lee <live4thee@gmail.com>
2022-10-01 17:48:14 +08:00
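A sketch of how a cleaner pass could apply such a parameter (hypothetical names and fields; the actual alru implementation may differ): when woken up, flush just enough dirty lines to bring the dirty ratio down to N percent.

    #include <stdint.h>

    struct alru_params {
        uint32_t max_dirty_ratio;   /* percent; default 100 = no limit */
    };

    /* How many dirty lines to flush so that dirty/total <= N/100. */
    static uint64_t lines_to_clean(uint64_t dirty, uint64_t total,
            const struct alru_params *p)
    {
        uint64_t target = total * p->max_dirty_ratio / 100;

        return dirty > target ? dirty - target : 0;
    }

With the default 100 the target equals the total, so the function never requests extra cleaning, matching the "identical to what it was" note above.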
Michal Mielewczyk
7b8093aa34 Refactor cleaning policies initialization
Don't populate cleaning policies during the initialization procedure, so
the user has to call populate explicitly.

Until now cleaning policies could be populated in two ways:
- implicitly during cleaning policy initialization,
- explicitly by calling populate.
The difference was that the former was single threaded.

This patch removes the functionally redundant and less efficient code.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:14:40 +02:00
Michal Mielewczyk
c0e99e1f79 cleaning: rename recovery to populate
The function not only recovers cleaning policy metadata but is also
utilized to initialize data structures, so a more generic name is actually
more accurate.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:14:40 +02:00
Michal Mielewczyk
8faf74169a Parallelize initializing hash table
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:14:40 +02:00
Michal Mielewczyk
4dbf740f5b Parallelize initializing collision section
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:14:40 +02:00
Michal Mielewczyk
b50bd1b506 Initialize metadata structures in pipelines
Initializing metadata in an asynchronous manner will allow the use of
parallelization utilities in future commits.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:06:40 +02:00
Michal Mielewczyk
da67112b17 load: init_structures as a separate step
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:06:40 +02:00
Michal Mielewczyk
f8e8d74539 attach: setup promotion policy before cleaning
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:06:40 +02:00
Michal Mielewczyk
ca70ea3fff Deinit cleaning policy if attaching cache failed
Normally the cleaning policy would be deinitialized while stopping the
cache, which is one of the steps of error handling, e.g. in case of failed
cache activation. But since `cache_stop()` may be called only for an
attached cache instance, the cleaning policy needs to be deinitialized
explicitly.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:06:40 +02:00
Michal Mielewczyk
21d5da83d9 A utility for counting queues
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-26 14:06:40 +02:00
Michal Mielewczyk
ef997b47fa Fix whitespaces
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-09-23 07:09:41 +02:00
Robert Baldyga
e9a3ebe460
Merge pull request #746 from pdebski21/fix_debug_kernel_stack_overflow
Stack memory reduction for OCF stats
2022-09-09 09:00:03 +02:00
Robert Baldyga
9ad308d84f
Merge pull request #714 from rafalste/copyright_header_check_improvements
Copyright header check improvements
2022-09-09 08:53:13 +02:00
Robert Baldyga
1c701e4101
Merge pull request #750 from robertbaldyga/remove-req-io-if
Get rid of req->io_if
2022-09-08 22:59:57 +02:00
Rafal Stefanowski
9d7f4becb8 copyright/license: Add missing copyright header
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2022-09-08 13:13:18 +02:00
Robert Baldyga
228c5fc891 Get rid of req->io_if
Remove one level of callback indirection. I/O never changes its direction,
so there is no point in storing both read and write callbacks for each
request.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-09-07 23:07:04 +02:00
Robert Baldyga
d0d1db0b8d
Merge pull request #748 from arutk/fas
fix potential out of bound access in req->alock_status manipulation
2022-09-07 17:05:14 +02:00
Robert Baldyga
4d32e4272a
Merge pull request #751 from arutk/cesf
unify cache write error accounting
2022-09-07 11:04:21 +02:00
Piotr Debski
0aed807ac4 Stack memory reduction for OCF stats
Signed-off-by: Piotr Debski <piotr.debski@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-09-06 14:34:35 +02:00
Adam Rutkowski
0a09d05a8b Add missing ocf_metadata_read_sb error handling
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-09-06 13:24:05 +02:00
Adam Rutkowski
83b4455a0e unify cache write error stats accounting
In most (6/9) instances across the engines,
ocf_core_stats_cache_error_update is called upon each cache volume I/O
error, possibly multiple times per user request in case of multi-cacheline
requests. The backfill, fast and read engines are exceptions, incrementing
error stats only once per user request.

This commit unifies ocf_core_stats_cache_error_update usage so that in all
the engines the error statistic is incremented once for every error.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-09-05 21:13:06 +02:00
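A sketch of the unified accounting rule, with illustrative names standing in for the OCF internals: every failing cache-volume sub-I/O accounts exactly one error, in every engine.

    struct ocf_request;

    /* Stand-in for ocf_core_stats_cache_error_update(). */
    static void cache_error_update(struct ocf_request *req)
    {
        (void)req;  /* bump the per-core cache error counter here */
    }

    /* Completion callback of a single cache-volume sub-I/O. */
    static void cache_sub_io_end(struct ocf_request *req, int error)
    {
        if (error)
            cache_error_update(req);    /* once for every error */
        /* ... resume normal completion handling ... */
    }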
Adam Rutkowski
0cfb8077c5 allocate fixed map status alongside request struct
It is wasteful to allocate a full 1B to store 1 bit of alock status per
cacheline. A fixed allocation of 128 bits seems more reasonable.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-08-29 20:02:18 +02:00
Adam Rutkowski
2f3e0b0fd0 more precise req->alock_status size calculations
1. Only 1 bit per cacheline is required for the status
2. ... however the size must be 8B aligned

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-08-29 20:01:52 +02:00
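The two rules above combine into a simple sizing formula; a sketch:

    #include <stdint.h>

    /* 1 bit of alock status per cacheline, rounded up to whole 8-byte
     * words. A fixed 128-bit allocation therefore covers any request
     * of up to 128 cachelines. */
    static inline uint64_t alock_status_bytes(uint32_t core_line_count)
    {
        return ((uint64_t)core_line_count + 63) / 64 * 8;
    }
    /* e.g. 1..64 lines -> 8 B, 65..128 lines -> 16 B */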
Krzysztof Majzerowicz-Jaszcz
e12803f547 Fix for bad metadata capacity reported by dmesg
Metadata capacity reported by dmesg was actually the memory footprint.

The proper size of the metadata is now reported.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-07-06 14:30:39 +02:00
Adam Rutkowski
5a71f7c068 validate uuid->size in ocf_volume_init
The optional uuid parameter to ocf_volume_init() points to a UUID object
initialized by the user. We should verify it is not excessively large, as
we attempt to allocate a buffer to store a copy of the UUID.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-28 08:02:58 +02:00
Adam Rutkowski
364e36ec7e Revert "fix deinitialization of moved composite volume"
The proper way to avoid calling on_deinit() callback on an already
deinitialized volume is to deinitialize type callbacks, as it is done
in the previous commit.

This reverts commit a7f70687a9.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-28 08:02:58 +02:00
Adam Rutkowski
b6587ad622 zero volume->type in ocf_volume_deinit()
After deinitialization of a volume there is no need to call back to the
type ops. Currently we would erroneously call the on_deinit() callback
multiple times if ocf_volume_deinit() is performed more than once, which
we expect to happen and treat as a correct use of the API.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-28 08:02:58 +02:00
Robert Baldyga
f0f6ff219b Set core volume type in metadata on core insert
ocf_metadata_flush_superblock() is called on cache stop, after
deinitialization of the cores (and their volumes), thus accessing the core
volume in the superblock flushing procedure leads to a use-after-free bug.

Fix this by moving the volume type setting to the core insertion code.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-28 07:59:43 +02:00
Robert Baldyga
8822094f14 Fix metadata on disk size calculation when cleaner is disabled
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-21 09:33:42 +02:00
Piotr Debski
c448043b42 Conditional pipeline step for filtering invalid segments
Signed-off-by: Piotr Debski <piotr.debski@intel.com>
2022-06-16 09:33:09 +02:00
Adam Rutkowski
1a27b07f72 Pipeline conditional step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Piotr Debski <piotr.debski@intel.com>
2022-06-16 09:33:09 +02:00
Adam Rutkowski
a7f70687a9 fix deinitialization of moved composite volume
After moving from a volume, its priv is assigned to the new owner.
Destroying the volume after moving from it must not attempt to use the
priv, and especially must not attempt to deinit member volumes in case of
a composite volume.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-13 11:40:08 +02:00
Adam Rutkowski
5a80237e74 expose composite volume type id in API
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-13 11:40:08 +02:00
Adam Rutkowski
02db4de75b Composite volume io calculations fix
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-13 11:40:08 +02:00
Adam Rutkowski
0030ebdecc Handle already opened volume in volume open
Volumes are now exposed in the OCF API, so we should gracefully handle an
attempt to open an already opened volume (instead of ENV_BUG).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-13 11:40:08 +02:00
Adam Rutkowski
b053f7925a
Merge pull request #702 from robertbaldyga/v22.6-composite-volume
Introduce composite volume
2022-06-02 13:36:21 +02:00
Adam Rutkowski
5f767dd618
Merge pull request #726 from arutk/fipm
flush handling fixes and enhanced tests
2022-06-02 10:46:36 +02:00
Robert Baldyga
b847fa9a61 Introduce composite volume
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Robert Baldyga
8858e7344d Replace uuid/type pair with volume object in the device config
It makes it possible to attach/load cache using volume types that have
non-standard constructors.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Robert Baldyga
54b951fcdf Make default io allocators part of internal API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Robert Baldyga
c9ea68f3bf Introduce on_init/on_deinit ops in ocf_volume interface
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Robert Baldyga
af62d14f02 Set priv to NULL on volume deinit
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Robert Baldyga
70a410b2fe Improve error handling in ocf_volume_init()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-06-02 09:49:39 +02:00
Adam Rutkowski
df7ed6920c Fix ops(flush) engine
Flush I/O should be forwarded to the core and cache devices. In case of
the core this is simple - just mirror the I/O from the top volume. Since
cache data is owned by OCF, it makes sense to send a simple flush I/O with
0 address and size.

The current implementation attempts to use the cache data I/O interface
(the ocf_submit_cache_reqs function) instead of submitting an empty flush
to the underlying cache device. This function is designed to read/write
from mapped cachelines, while no traversal/mapping is performed on flush
I/O.

If request map allocation succeeds, this results in sending I/O to
address 0 with size and flags inherited from the top adapter I/O. This
doesn't make any sense, and can even result in invalid I/O if the size is
greater than the cache device size.

Even worse, if flush request map allocation fails (which always happens
in case of large flush requests) then the erroneous call to
ocf_submit_cache_reqs results in a NULL pointer dereference.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-06-01 22:33:35 +02:00
Adam Rutkowski
1992bfc410
Merge pull request #710 from pdebski21/cache_line_size_mismatch
Explicit check for cacheline size mismatch during cache activation
2022-06-01 18:07:36 +02:00
Piotr Debski
0b9104e8d5 Cache metadata and superblock cache line size mismatch check
Signed-off-by: Piotr Debski <piotr.debski@intel.com>
2022-05-23 15:20:35 +02:00
Jan Musial
6016a6f4c7 Mark unlikely branches in pio_concurrency
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-05-18 11:56:06 +02:00
Jan Musial
60a6da7ee6 Extend alock API with entries_count method
Right now alock assumes that the number of locks taken will equal the
number of core lines. This is not the case in pio, where only parts of the
metadata are under locks. If a pio request overlaps locked and not-locked
metadata sections, its core line count and awaited lock count will differ.
To remedy this discrepancy, an additional method which gets the count of
locks that will be taken/waited on is added to the alock API.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-05-16 16:21:08 +02:00
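A sketch of the extended ops table (field names are illustrative): next to the existing locking callbacks, one method reports how many locks the request will actually take or wait on, which for pio may be fewer than its core-line count.

    #include <stdint.h>

    struct alock;
    struct ocf_request;

    struct alock_ops {
        int (*lock_entry)(struct alock *alock, struct ocf_request *req,
                uint32_t idx);
        /* New: count of locks this request will take/wait on. */
        uint32_t (*entries_count)(struct alock *alock,
                struct ocf_request *req);
    };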
Robert Baldyga
3aa12793a1
Merge pull request #713 from robertbaldyga/use-ocf-div-round-up
Use internal implementation of DIV_ROUND_UP
2022-05-13 21:21:26 +02:00
Robert Baldyga
ad7a40feaf Use internal implementation of DIV_ROUND_UP
It's required because environments other than the Linux kernel may not
define their own DIV_ROUND_UP. Moving it to env would just generate
boilerplate, because its implementation is trivial and portable.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-05-10 09:52:17 +02:00
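The internal macro in question is presumably equivalent to the classic portable one-liner:

    /* Round-up integer division; correct for positive integers n, d. */
    #define OCF_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))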
Robert Baldyga
d4df912f46 Add option to disable cleaner
This allows us to avoid allocating the cleaner metadata section and
effectively save up to 20% of the metadata memory footprint.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-04-28 13:04:27 +02:00
Michal Mielewczyk
e8e4e00bb7 alru: explicit upcasting
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-11 15:21:37 +02:00
Michal Mielewczyk
cd4d894348 acp: skip the first bucket on recovering acp
Since the threshold for the first bucket is always zero and the condition
to exit the loop is never met in the first iteration, it is safe to start
iterating from `1`.

This change is meant to avoid confusing static code analyzers.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-11 13:14:25 +02:00
Michal Mielewczyk
edd42fed98 Avoid zero-size memcpy
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-08 16:10:28 +02:00
Michal Mielewczyk
92fa8f7e59 Remove redundant standby check
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-08 15:34:14 +02:00
Michal Mielewczyk
bc30d2665b Prevent sending io to volume if it is not opened
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-08 15:34:14 +02:00
Michal Mielewczyk
9734980be2 Free memory when failed to open core volume
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-04-08 15:34:14 +02:00
Adam Rutkowski
8f24556cec Add missing pio deinitialization in standby stop pipeline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-04-07 12:23:03 +02:00
Adam Rutkowski
550a479cde fix typo in cache mngmt
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-04-07 12:23:03 +02:00
Robert Baldyga
dc9c076ef3 Remove space from names of internal volumes
Those names are used for creating allocators. In the Linux kernel
environment, starting from version 5.12, there is a kernel warning if an
allocator name contains spaces. This patch resolves the problem by
replacing spaces with underscores.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-04-06 13:23:02 +02:00
Robert Baldyga
c677f65212 Avoid double initialization of cleaning policy in standby mode
Cleaning policy is initialized on standby activate, after all the metadata
from the primary cache is flushed and the actual recovery is performed.
Thus initializing it earlier, on standby attach, is incorrect.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-04-04 12:08:27 +02:00
Robert Baldyga
65918344c0
Merge pull request #691 from arutk/fix_core_load_err
Fix core load cleanup loop
2022-04-01 14:57:58 +02:00
Adam Rutkowski
77380d6579 Fix core load cleanup loop
conf_meta->core_count is not modified during load/recovery in the latest
version. Thus in case of an error in core initialization, in order to
iterate over the initialized cores we must depend on core->added only,
regardless of the conf_meta->core_count value. The for_each_core() macro
does exactly this.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-04-01 13:53:25 +02:00
Krzysztof Majzerowicz-Jaszcz
1b3f0d44a8 Fix error code for superblock checksum mismatch
Superblock validation now returns a proper error on checksum check
failure.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-04-01 07:23:49 +00:00
Adam Rutkowski
09b73461b4 Always modify valid_core_map together with core_count
... to assure that the superblock config state on the drive is consistent

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-03-31 13:37:42 +02:00
Robert Baldyga
9ebb0de878 Do not modify core_count on cache load / activate
Increment core_count only on core addition, and decrement it only on core
removal.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-31 10:00:24 +02:00
Robert Baldyga
25434cb8d1 Explicitly validate valid_core_bitmap consistency
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-30 23:46:06 +02:00
Robert Baldyga
9c751dd2b8 Manage valid_core_bitmap properly
Set bit only on core addition and clean it on core removal.

This allows us to avoid conf metadata modification in the load / standby
load paths, which effectively prevents issues with metadata mismatch
during subsequent standby activate attempts after an initial activate
failure. Previously the first attempt changed the metadata, so the
comparison with the metadata on the drive failed on any following attempt,
leading to inability to activate the cache.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-30 23:46:06 +02:00
Robert Baldyga
d550c8f4ef Fix minor coding style issues
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-30 22:15:50 +02:00
Robert Baldyga
ca8531a421
Merge pull request #685 from arutk/stats2
Return error from stats API functions in standby
2022-03-30 11:57:10 +02:00
Jan Musial
d1bd32add9 Fix potential unsigned overflow in calculations
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-03-30 08:24:39 +02:00
Adam Rutkowski
9a1f9d41b8 Return error from stats API functions in standby
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-03-29 22:20:04 +02:00
Robert Baldyga
af43a240d3 Return more specific error on CRC mismatch
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-28 22:42:59 +02:00
Robert Baldyga
84aa968877 Check for load error before accessing metadata content
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-28 22:08:05 +02:00
Robert Baldyga
174f7b5c2b
Merge pull request #682 from jfckm/zero-cache-volume-priv
Zero cache_volume priv on close
2022-03-28 16:02:02 +02:00
Jan Musial
43e643873a Zero cache_volume priv on close
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-03-28 14:50:25 +02:00
Adam Rutkowski
6b6300c646 Add extra data seek before data fill in mio
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-03-28 14:26:51 +02:00
Adam Rutkowski
4a839cd332 Verify standby/active cache state in OCF entry points
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-03-28 09:42:02 +02:00
Robert Baldyga
aa4622cc01 Make core remove error recoverable
First try to clean only the mapping. This operation does not require any
rollback, so even if flushing collision fails, the core object is still
intact. In case of error we inform the user that the core was not removed
by returning a new error code (-OCF_ERR_CORE_NOT_REMOVED).

After flushing collision succeeds, we remove the core from metadata and
flush the superblock at the end. At that point the core is fully removed
from OCF, and even if a superblock flush error occurs there is nothing we
can do about it, so we just return the error code.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-25 21:32:55 +01:00
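An ordering sketch of the two-phase removal described above (names and stub bodies are illustrative; only the -OCF_ERR_CORE_NOT_REMOVED code comes from the message):

    struct core;

    #define OCF_ERR_CORE_NOT_REMOVED 1  /* value illustrative */

    static int flush_collision(struct core *c) { (void)c; return 0; }
    static void remove_core_metadata(struct core *c) { (void)c; }
    static int flush_superblock(void) { return 0; }

    static int core_remove(struct core *core)
    {
        /* Phase 1: clean only the mapping. No rollback needed - on
         * failure the core object is still intact. */
        if (flush_collision(core))
            return -OCF_ERR_CORE_NOT_REMOVED;

        /* Phase 2: point of no return. Drop the core from metadata,
         * then flush the superblock; errors are only reported. */
        remove_core_metadata(core);
        return flush_superblock();
    }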
Robert Baldyga
643e103fe7 Don't attempt to set data for flush/discard on cache volume
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-17 21:18:31 +01:00
Robert Baldyga
4fc3f8f0d1 Remove extra whitespace
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-09 11:44:27 +01:00
Robert Baldyga
d5b2c65a39 Remove "metadata_layout" parameter of the cache
This feature is replaced with LRU list shuffling.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-07 17:48:25 +01:00
Robert Baldyga
9a956f59cd
Merge pull request #654 from Open-CAS/fix-flapping-merge
Porting fix-flapping patches from v21.6.4 by arutk
2022-03-05 01:31:23 +01:00
Adam Rutkowski
689c44c76b Remove ocf_metadata_probe_cores() implementation
This function must be fixed to work with metadata flapping. Until then,
mark it as not supported.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-03-04 19:13:40 +01:00
Adam Rutkowski
866bba72bf Explicitly validate superblock after load
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>

Additional changes - load sb recovery CRC check

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-03-04 19:12:51 +01:00
Robert Baldyga
90ff4afcda Check superblock CRC before it is used
The superblock can be used during the load of other sections, so we need
to check its CRC before the other sections are loaded.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-03-04 19:12:08 +01:00
Krzysztof Majzerowicz-Jaszcz
06f2140090 Removing ocf_metadata_sb_crc_recovery
Remove ocf_metadata_sb_crc_recovery, as it is not used.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2022-03-04 19:10:47 +01:00
Robert Baldyga
1cce6bf24b
Merge pull request #664 from robertbaldyga/improve-bf
Extend BF queue protection to cache device queue
2022-03-04 18:50:43 +01:00
Robert Baldyga
45cc56f40d Extend BF queue protection to cache device queue
So far the only resource protected by backfill queue blocking was the
internal OCF request queue. Move the unblock to backfill io completion to
also protect the queue of the underlying cache device.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-03-02 20:59:51 +01:00
Robert Baldyga
87d71f319e
Merge pull request #662 from jfckm/fix-invalid-message-try-add
Fix message when try-adding already opened core
2022-03-01 14:06:30 +01:00
Jan Musial
e0cd0a4882 Fix message when try-adding already opened core
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-02-18 12:54:13 +01:00
Michal Mielewczyk
116676c18d Verify cache id during the activate
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-02-17 15:02:03 +01:00
Robert Baldyga
49abe816ce
Merge pull request #649 from pdebski21/1023
fix for issue #1023
2022-02-07 16:17:14 +01:00
Robert Baldyga
805ea14529 Remove runtime recovery in standby mode
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-02-01 03:11:50 +01:00
Robert Baldyga
76684ed8a9
Merge pull request #642 from robertbaldyga/parallelize
Parallelize metadata initialization
2022-02-07 13:53:45 +01:00
Robert Baldyga
e30fd48338
Merge pull request #656 from jfckm/extend-metadata-probe
Include cache mode and cache line size in metadata probe
2022-02-04 13:01:10 +01:00
Jan Musial
8522b0b6e6 Include cache mode and cache line size in metadata probe
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-02-04 08:15:05 +01:00
Robert Baldyga
c176daeec1
Merge pull request #640 from pdebski21/superblock_mismatch
added error code for superblock mismatch
2022-02-03 15:30:03 +01:00
Robert Baldyga
6a665ea6b1 Shuffle entries within freelists
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-31 06:59:28 +01:00
Robert Baldyga
481e5b7b9b Introduce bisect generator utility
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-31 06:59:28 +01:00
Robert Baldyga
93391c78d8 Parallelize ACP recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-31 06:59:28 +01:00
Robert Baldyga
b70492ad3d Parallelize ALRU recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-31 06:59:28 +01:00
Robert Baldyga
8cc71cc9cb Remove ocf_cleaning_init_cache_block() from metadata rebuild
Cleaning policy initialization initializes metadata for all cache lines
anyway, so this step is not needed.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:30:41 +01:00
Robert Baldyga
48bed40dd7 Reconstruct freelist during metadata rebuild
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:30:39 +01:00
Robert Baldyga
f3e4f8c2db Parallelize ocf_mngt_rebuild_metadata()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:52 +01:00
Robert Baldyga
036aca41b3 Parallelize ocf_lru_populate()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Robert Baldyga
6611b25d1e Initialize LRU lists in domain of cache lines
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Robert Baldyga
57fd5c1f20 Introduce ocf_parallelize utility
Introduce a utility that allows parallelizing a management operation
across all available io queues.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
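A sketch of the work-splitting arithmetic such a utility needs (illustrative; the actual ocf_parallelize internals may differ): divide N items across Q queues as evenly as possible.

    #include <stdint.h>

    /* Half-open range [begin, end) of items assigned to shard i out of
     * `shards`; the first (count % shards) shards get one extra item. */
    static void shard_bounds(uint64_t count, unsigned shards, unsigned i,
            uint64_t *begin, uint64_t *end)
    {
        uint64_t base = count / shards;
        uint64_t extra = count % shards;

        *begin = (uint64_t)i * base + (i < extra ? i : extra);
        *end = *begin + base + (i < extra ? 1 : 0);
    }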
Robert Baldyga
a947127f55 Introduce ocf_lru_add_free() function
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Robert Baldyga
b82d30a0ef Add missing hb lock functions implementation
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Robert Baldyga
25e2551964 Check core status during recovery based on core metadata
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Robert Baldyga
568c565497 Init properties before loading superblock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-28 19:29:21 +01:00
Piotr Debski
9b980d3f22 fix for issue #1023
Better error for core size mismatch during activation/load.

Adds a pyocf test for the new error code.

Signed-off-by: Piotr Debski <piotr.debski@intel.com>
2022-01-25 05:18:16 +01:00
Robert Baldyga
f4daf05237
Merge pull request #639 from arutk/eha
Fix error handling in cache attach
2022-01-19 15:26:34 +01:00
Robert Baldyga
fb8bea67b6 Set core_seq_no only in atomic mode
This prevents using up the pool of seq numbers in normal mode and blocking
the addition of any new cores.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-19 11:38:12 +01:00
Adam Rutkowski
a32a787e3d Fix error handling in cache attach
Only close cores in error handling if the attach parameter "open_cores"
is set to true.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-13 17:26:47 +01:00
Michal Mielewczyk
5d74aec921 Add missing return in raw_ram_zero() in error path
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2022-01-12 07:46:49 +01:00
Adam Rutkowski
294e02bc1b Fail cache recovery in case of erroneous mapping
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-10 11:10:02 +01:00
Piotr Debski
609a22cfda added ERROR code for superblock mismatch
Signed-off-by: Piotr Debski <piotr.debski@intel.com>
2022-01-08 23:06:10 +01:00
Adam Rutkowski
9693b82cf9 Only flush superblock at the end of cache attach
The purpose of this change is not to write superblock to the cache
drive untill all other sections are initilized on disk in attach()
path. Combined with superblock clearing at the erarlier stage of
attach(), this assures there are no residual mappings in the collision
section in case of power failure during attach with pre-existing
metadata.

This is implemented by removing ocf_metadata_flush_all_set_status() step
at the beginning of ocf_metadata_flush_all().
ocf_metadata_flush_all() is called, except for the attach() case described
above, in two cases:
1. at the end of cache load - potentially after cache recovery
2. during detaching cache drive in cache stop.

To make sure there are no regressions in the first case, an explicit
_ocf_mngt_attach_shutdown_status() is added to load pipeline before
ocf_metadata_flush_all(). The second case is always ran after cache
drive is attached, so dirty status bit must have already be written to
the disk.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-05 13:06:59 +01:00
Adam Rutkowski
196437f9bc Zero superblock before writing metadata
This is the first step towards atomic initialization of metadata
on cache disk.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-05 13:06:59 +01:00
Robert Baldyga
c6644116ae
Merge pull request #614 from robertbaldyga/redesign-standby
Redesign failover standby API
2022-01-04 14:07:05 +01:00
Robert Baldyga
4aa3d8f9df Remove "unsafe" path from standby load
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-03 20:10:40 +01:00
Jan Musial
ae18ce274e Fix cache size requirements and some logging
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-01-03 14:30:07 +01:00
Robert Baldyga
b40fa0c2bf Fix closing volume on standby stop
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:54:45 +01:00
Robert Baldyga
86a2896bcf Rename "bind" to "standby"
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:32:03 +01:00
Robert Baldyga
b25cd91b86 Remove unused ocf_metadata_load_unsafe()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:31:43 +01:00
Robert Baldyga
716b5751d6 Redesign failover standby API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:31:40 +01:00
Robert Baldyga
4cabc60d40 Avoid loading runtime metadata sections during recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 14:04:19 +01:00
Robert Baldyga
4625763df5 Return error on CRC mismatch during recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 14:04:19 +01:00
Robert Baldyga
e73cbad2c7
Merge pull request #631 from mmichal10/dont-stop-cleaner
Don't stop cleaner in activate rollback
2021-12-27 16:51:32 +01:00
Robert Baldyga
0ac66ce4aa Fix cache stop after standby detach
Don't attempt to close cache volume if cache is in standby detached state.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-23 22:39:37 +01:00
Michal Mielewczyk
a8bdba0cb2 Don't stop cleaner in activate rollback
Activate is not responsible for starting the cleaner, so rollback
shouldn't stop it either.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-23 14:46:28 +01:00
Bob Chen
b6de614ada fix volume_close completion order
Signed-off-by: Bob Chen <beef9999@qq.com>
2021-12-22 15:18:34 +08:00
Robert Baldyga
a2916313ee
Revert "fix volume_close completion order" 2021-12-21 20:33:34 +01:00
chenbo
aa6e674034 fix volume_close completion order 2021-12-20 20:10:07 +08:00
Robert Baldyga
0751b2c0c0 Fix metadata flapping
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-15 22:21:57 +01:00
Robert Baldyga
cac5869406
Merge pull request #603 from robertbaldyga/metadata-flapping
Introduce flapping of metadata config sections
2021-12-15 17:11:15 +01:00
Robert Baldyga
df9a9f2722 Read superblock sections from cache volume during activate
Because of metadata flapping it is much more complicated to capture those
sections in flight in standby mode, so we read them directly from the
cache volume during activate.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-15 15:30:34 +01:00
Robert Baldyga
99c8c05f3f Introduce flapping of metadata config sections
This feature provides double buffering of config sections to prevent a
situation when a power failure during metadata flush leads to partially
updated metadata. The flapping mechanism makes it always possible to
perform a graceful rollback to the previous config metadata content in
such a situation.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-15 15:30:34 +01:00
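A sketch of the double-buffering idea (structures are illustrative; the on-disk layout is OCF's own): keep two copies of each config section and, on load, pick the newest copy with a valid checksum, falling back to the older one after a torn write.

    #include <stdbool.h>
    #include <stdint.h>

    struct section_copy {
        uint64_t seq;   /* bumped on each successful flush */
        bool crc_ok;    /* checksum verified on load */
    };

    /* Returns 0 or 1 for the copy to use, -1 if both are invalid. */
    static int pick_copy(const struct section_copy *a,
            const struct section_copy *b)
    {
        if (a->crc_ok && b->crc_ok)
            return a->seq >= b->seq ? 0 : 1;
        if (a->crc_ok)
            return 0;
        if (b->crc_ok)
            return 1;
        return -1;
    }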
Neil Sun
7f82ef3048 Fix incorrect page count calculation with large PAGE_SIZE
e.g., PAGE_SIZE 65536, cache line 8k.

fix https://github.com/Open-CAS/open-cas-linux/issues/1015

Signed-off-by: Sun Feng <loyou85@gmail.com>
2021-12-14 20:07:59 +08:00
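A worked sketch of the corrected arithmetic, assuming the usual round-up division: with PAGE_SIZE 65536 and an 8k cache line, one page holds 8 cache lines, so any per-line `line_size / PAGE_SIZE` style computation truncates to 0 and undercounts.

    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    /* Pages needed for `line_count` cache lines of `line_size` bytes. */
    static uint64_t pages_needed(uint64_t line_count, uint64_t line_size,
            uint64_t page_size)
    {
        return DIV_ROUND_UP(line_count * line_size, page_size);
    }
    /* e.g. pages_needed(3, 8192, 65536) == 1, not 0 */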
Robert Baldyga
60218759d2
Merge pull request #597 from rafalste/fix_core_zero_size_error
Fix core-zero-size error
2021-12-08 22:04:27 +01:00
Robert Baldyga
21c4673251
Merge pull request #600 from mmichal10/cleaning-cmpl
Call completion if failed to perform cleaning
2021-12-08 22:00:58 +01:00
Michal Mielewczyk
d6f2998890 Call completion if failed to perform cleaning
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-08 14:50:50 +01:00
Michal Mielewczyk
911a5cddf0 Deinit all registered volume types
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-08 14:16:49 +01:00
Michal Mielewczyk
655f732748 Don't access freed memory
Instead of accessing the memory of a freed IO, redo the size calculations

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-08 14:16:49 +01:00
Michal Mielewczyk
244712b020 Prevent race condition in fast path
A request submitted in the fast path may be freed before the sequential
cutoff stats are updated. Increment the request reference counter to
prevent this.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-08 09:00:04 +01:00
Rafal Stefanowski
b57bad4652 Fix core-zero-size error
Move the error print to where it belongs, preventing this message from
popping up when the same error code is reported elsewhere for another
reason.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-12-06 12:30:29 +01:00
Adam Rutkowski
b1494f4642 Remove option to failover without detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-11-30 15:18:08 +01:00
Adam Rutkowski
b455a393dd extra assertion in metadata passive update
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-11-30 12:04:57 +01:00
Adam Rutkowski
d0b00817f3 fix cacheline reset in passive metadata update
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-11-30 19:34:52 +01:00
Krzysztof Majzerowicz-Jaszcz
133ea307c8 Fix for issues #988 and #997
This patch fixes issues #988 and #997, which caused a kernel stack
overflow.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2021-11-24 08:15:07 +01:00
Michal Mielewczyk
4ab22ee2dc Maintain runtime struct during failover standby
To allow the fastest switching from passive-standby to active mode, the
runtime metadata must be kept 100% synced with the metadata on the drive
and in RAM, thus recovery is required after each collision section update.

To avoid long-lasting recovery of all the cachelines each time the
collision section is updated, the passive update procedure recovers only
those which have their MD entries on the updated pages.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:58:09 +01:00
Michal Mielewczyk
a6989d1881 Pio concurrency 2021-11-19 11:58:09 +01:00
Michal Mielewczyk
52824adaaf Additional cleaning policy info outside of the SB
Starting cache in standby mode requires access to a valid cleaning policy
type. If the policy is stored only in the superblock, it may be overridden
by one of the metadata passive updates.

To prevent losing the information it should be stored in cache's runtime
metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
0e529479d6 Init cleaner during passive start
Initializing the cleaning policy is very time consuming. To reduce the
time required for activating a cache instance, the initialization should
be done during passive start.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
390e80794d Refactor cleaning policy initialization
Extract cleaning policy initialization to a separate function

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
6d4e6af5b6 Recovery on passive start
Adjust the recovery procedure to allow rebuilding metadata from partially
valid metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
11dacd6a84 Set dirty shutdown status on standby init
Since part of the recovery is done during `standby init`, the correct
shutdown status has to be set.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
8f58add152 Lru populate unsafe
The unsafe mode is useful if the metadata of added cores is incomplete.

Such a scenario is possible when starting the cache in standby mode from
partially valid metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
4deaa1e133 Reset all the status bits during recovery
Make sure all the invalid cachelines have reset status bits. This allows
invalid cachelines to be recognized easily during populate.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
fc7c901c8b Skip collision init on cache start passive
Recovery during passive start is based on the assumption that the metadata
collision section stored on disk might be partially valid. Resetting this
data would make rebuilding metadata impossible.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
048bbedd71 Fix metadata_clear_valid_if_clean()
The function should return the cacheline's valid status.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
bb0ff67fe9 Metadata clear_dirty_if_invalid() utility
Fix the cacheline's metadata if it is dirty and invalid.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
800190153b Extend lru list API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
5ca1404f06 Fix spelling in the error message
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
ccd0abfea5 Add cache line recovery utils to OCF internal API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
a7bdaa751d Add error messages on superblock mismatch
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00