Commit Graph

328 Commits

Author SHA1 Message Date
Adam Rutkowski
9693b82cf9 Only flush superblock at the end of cache attach
The purpose of this change is to avoid writing the superblock to the cache
drive until all other sections are initialized on disk in the attach()
path. Combined with superblock clearing at the earlier stage of
attach(), this ensures there are no residual mappings in the collision
section in case of a power failure during attach with pre-existing
metadata.

This is implemented by removing the ocf_metadata_flush_all_set_status() step
at the beginning of ocf_metadata_flush_all().
Apart from the attach() case described above, ocf_metadata_flush_all() is
called in two cases:
1. at the end of cache load - potentially after cache recovery
2. during detaching cache drive in cache stop.

To make sure there are no regressions in the first case, an explicit
_ocf_mngt_attach_shutdown_status() is added to the load pipeline before
ocf_metadata_flush_all(). The second case is always run after the cache
drive is attached, so the dirty status bit must have already been written to
the disk.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-05 13:06:59 +01:00
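A minimal sketch of the ordering this change enforces during attach (step names are hypothetical, not the actual OCF pipeline API): every other metadata section is written first, and the superblock goes to the cache drive only as the final step.

```c
/* Illustrative only: hypothetical step names, not the real OCF attach pipeline. */
#include <stdio.h>

typedef int (*attach_step_t)(void);

static int init_collision_section(void) { puts("write collision section"); return 0; }
static int init_lru_section(void)       { puts("write LRU lists");         return 0; }
static int init_cleaning_section(void)  { puts("write cleaning metadata"); return 0; }
/* Superblock is flushed last, so a power failure mid-attach leaves no
 * valid superblock pointing at half-initialized sections. */
static int flush_superblock(void)       { puts("flush superblock");        return 0; }

int main(void)
{
	attach_step_t attach_pipeline[] = {
		init_collision_section,
		init_lru_section,
		init_cleaning_section,
		flush_superblock, /* must stay last */
	};

	for (size_t i = 0; i < sizeof(attach_pipeline) / sizeof(*attach_pipeline); i++) {
		if (attach_pipeline[i]())
			return 1;
	}
	return 0;
}
```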
Adam Rutkowski
196437f9bc Zero superblock before writing metadata
This is the first step towards atomic initialization of metadata
on cache disk.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2022-01-05 13:06:59 +01:00
Robert Baldyga
c6644116ae
Merge pull request #614 from robertbaldyga/redesign-standby
Redesign failover standby API
2022-01-04 14:07:05 +01:00
Robert Baldyga
4aa3d8f9df Remove "unsafe" path from standby load
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2022-01-03 20:10:40 +01:00
Jan Musial
ae18ce274e Fix cache size requirements and some logging
Signed-off-by: Jan Musial <jan.musial@intel.com>
2022-01-03 14:30:07 +01:00
Robert Baldyga
b40fa0c2bf Fix closing volume on standby stop
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:54:45 +01:00
Robert Baldyga
86a2896bcf Rename "bind" to "standby"
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:32:03 +01:00
Robert Baldyga
716b5751d6 Redesign failover standby API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 20:31:40 +01:00
Robert Baldyga
4cabc60d40 Avoid loading runtime metadata sections during recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-29 14:04:19 +01:00
Robert Baldyga
e73cbad2c7
Merge pull request #631 from mmichal10/dont-stop-cleaner
Don't stop cleaner in activate rollback
2021-12-27 16:51:32 +01:00
Robert Baldyga
0ac66ce4aa Fix cache stop after standby detach
Don't attempt to close cache volume if cache is in standby detached state.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-23 22:39:37 +01:00
Michal Mielewczyk
a8bdba0cb2 Don't stop cleaner in activate rollback
Activate is not responsible for starting the cleaner, so rollback shouldn't stop
it either.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-12-23 14:46:28 +01:00
Robert Baldyga
df9a9f2722 Read superblock sections from cache volume during activate
Because of metadata flapping it is much more complicated to capture those
sections in flight in standby mode, so we read them directly from the cache
volume during activate.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-12-15 15:30:34 +01:00
Rafal Stefanowski
b57bad4652 Fix core-zero-size error
Move the error print to where it belongs, preventing this message from
popping up when the same error code is reported elsewhere for another reason.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-12-06 12:30:29 +01:00
Adam Rutkowski
b1494f4642 Remove option to failover without detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-11-30 15:18:08 +01:00
Michal Mielewczyk
4ab22ee2dc Maintain runtime struct during failover standby
To allow the fastest possible switch from the passive-standby to active mode, the
runtime metadata in RAM must be kept 100% in sync with the metadata on the drive,
so recovery is required after each collision section update.

To avoid a long-lasting recovery of all the cachelines each time the collision
section is updated, the passive update procedure recovers only those cachelines
whose MD entries are on the updated pages.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:58:09 +01:00
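A rough sketch of that idea (helper names and page geometry are made up for illustration, not OCF internals): when a collision-section update touches a page range, only the cachelines whose metadata entries live on those pages get re-recovered.

```c
/* Illustrative sketch; entries-per-page and the recovery call are
 * assumptions, not the actual OCF implementation. */
#include <stdint.h>

#define ENTRIES_PER_PAGE 128 /* assumed collision entries per metadata page */

/* hypothetical: rebuild runtime state for one cacheline from its on-disk entry */
void recover_cacheline(uint32_t cache_line);

void on_collision_pages_updated(uint32_t first_page, uint32_t last_page)
{
	/* Recover only the cachelines whose entries sit on the updated pages,
	 * instead of re-running full recovery on every collision update. */
	uint32_t first_line = first_page * ENTRIES_PER_PAGE;
	uint32_t last_line = (last_page + 1) * ENTRIES_PER_PAGE - 1;

	for (uint32_t line = first_line; line <= last_line; line++)
		recover_cacheline(line);
}
```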
Michal Mielewczyk
52824adaaf Additional cleaning policy info outside of the SB
Starting a cache in standby mode requires access to a valid cleaning policy
type. If the policy is stored only in the superblock, it may be overwritten by
one of the passive metadata updates.

To prevent losing this information, it should also be stored in the cache's
runtime metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
0e529479d6 Init cleaner during passive start
Initializing the cleaning policy is very time consuming. To reduce the time
required to activate a cache instance, the initialization should be done during
passive start.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
390e80794d Refactor cleaning policy initialization
Extract cleaning policy initialization to a separate function

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
6d4e6af5b6 Recovery on passive start
Adjust the recovery procedure to allow rebuilding metadata from partially valid
metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
11dacd6a84 Set dirty shutdown status on standby init
Since part of the recovery is done during `standby init`, the correct shutdown
status has to be set.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
8f58add152 Lru populate unsafe
The unsafe mode is useful when the metadata of added cores is incomplete.

Such a scenario is possible when starting a cache in standby mode from partially
valid metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
4deaa1e133 Reset all the status bits during recovery
Make sure all the invalid cachelines have their status bits reset. This makes it
easy to recognize invalid cachelines during populate.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
fc7c901c8b Skip collision init on cache start passive
Recovery during passive start is based on the assumption that the metadata
collision section stored on disk might be partially valid. Resetting this data
would make rebuilding the metadata impossible.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
5ca1404f06 Fix spelling in the error message
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
ccd0abfea5 Add cache line recovery utils to OCF internal API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Michal Mielewczyk
a7bdaa751d Add error messages on superblock mismatch
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-11-19 11:53:48 +01:00
Rafal Stefanowski
0b39711b8b Add promote-on-threshold sequential cutoff switch
Decide whether to promote a sequential cutoff stream
to global structures when the threshold is reached.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-11-09 12:54:15 +01:00
Robert Baldyga
55eda53cde
Merge pull request #582 from jfckm/fix-detach-volume
Deinit cache volume on detach
2021-11-09 13:16:08 +01:00
Jan Musial
55eae1447d Deinit cache volume on detach
Signed-off-by: Jan Musial <jan.musial@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-11-08 18:25:30 +01:00
Adam Rutkowski
8f080295bb
Merge pull request #585 from rafalste/license_change
Applying BSD 3-Clause license to OCF source
2021-10-29 13:16:00 +02:00
Rafal Stefanowski
f22da1cde7 Fix license
Change license to BSD-3-Clause

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-28 13:08:50 +02:00
Rafal Stefanowski
3cc0d07197 License change to be approved by contributors
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-10-27 12:48:20 +02:00
Krzysztof Majzerowicz-Jaszcz
71262d5097 Cache standby mode API changes
Added an error for an invalid cache operation while in passive mode.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Error name correction

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

API changes for passive cache mode

Moved the passive cache error return source to the API for flush and
set_param

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Further API changes for passive cache mode

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Passive API - review changes

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2021-10-22 15:10:53 +02:00
Adam Rutkowski
e2c6a25ee9 [REVERTME] Disable option to perform activate without detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-10-08 14:52:32 +02:00
Adam Rutkowski
5ad4d937f6 Failover detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-10-08 14:52:24 +02:00
Jan Musial
010f30eeaf Validate activate parameters
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-09-14 08:56:41 +02:00
Jan Musial
b9c84e331c Fix attach with no cache_line_size specified
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-09-13 12:52:33 +02:00
Robert Baldyga
1892f58aba Optimize out looping over cache line sectors in recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 22:54:10 +02:00
Robert Baldyga
12a82d7fb1 Get rid of struct ocf_cache_line_settings
Remove struct that contains redundant data.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 14:53:46 +02:00
Robert Baldyga
7b38ad205c Add cache activation from passive state
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 20:44:40 +02:00
Robert Baldyga
cc22c57cb7 Set proper cache pointer in front volumes
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
1fd9a448d4 Introduce passive cache state
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
ee42d9aaaf Duplicate cache name in struct ocf_cache
Cache name is needed for logging in passive mode, when config metadata
is still not accessible.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
e31e7283d9 Rework volume type management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
a00ec916e2 Make post metadata load init a separate step in pipeline
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
c6c6618ad8 Move recovery code from metadata to cache mngt
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
a2db4d14e8 Move core initialization code from metadata to mngt
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
24728330fc Make _ocf_mngt_load_add_cores a separate step in pipeline
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
20228561c9 Move metadata deinit to separate pipeline step
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
ab034ab53d Fix uuid comparison in core pool
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Michal Mielewczyk
b83da68f85 Cleaner context ref counter
To prevent deinitializing the cleaner context (e.g. while switching the policy)
while requests are being processed, access to the cleaner should be protected
with a reference counter.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:33 +02:00
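A simplified sketch of the reference-counting scheme (a generic pthread-based counter, not the OCF refcnt utility): request processing takes a reference on the cleaner context, and deinit blocks new users and waits for the count to drop to zero before tearing the context down.

```c
/* Generic illustration of the ref-counting idea; not the OCF refcnt API. */
#include <pthread.h>
#include <stdbool.h>

struct cleaner_ctx {
	int refcnt;             /* number of in-flight users */
	bool frozen;            /* deinit in progress, no new references allowed */
	pthread_mutex_t lock;
	pthread_cond_t zero;
	/* ... policy-specific state ... */
};

/* Called from request processing before touching the cleaner. */
bool cleaner_ctx_get(struct cleaner_ctx *ctx)
{
	bool ok;

	pthread_mutex_lock(&ctx->lock);
	ok = !ctx->frozen;
	if (ok)
		ctx->refcnt++;
	pthread_mutex_unlock(&ctx->lock);
	return ok;
}

void cleaner_ctx_put(struct cleaner_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	if (--ctx->refcnt == 0)
		pthread_cond_signal(&ctx->zero);
	pthread_mutex_unlock(&ctx->lock);
}

/* Called when switching the cleaning policy: block new users, wait for old ones. */
void cleaner_ctx_deinit(struct cleaner_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	ctx->frozen = true;
	while (ctx->refcnt > 0)
		pthread_cond_wait(&ctx->zero, &ctx->lock);
	pthread_mutex_unlock(&ctx->lock);
	/* now safe to free policy-specific state */
}
```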
Michal Mielewczyk
f33a6e5ce0 Make switching cleaning policy asynchronous
Making the operation asynchronous will allow using the refcnt utility as a
synchronization mechanism between processing cachelines and deinitializing the
cleaning policy.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
26194fc536 Use cleaning ops wrapper functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
00aa56dd28 Wrapper functions for cleaning ops
The change should unify access to cleaning policy resources and facilitate
synchronization when switching cleaning policies.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
c6fe2fc3f9 Deinit sequential cutoff on core removal
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-26 16:20:17 +02:00
Adam Rutkowski
36107fd528 Initialize partitions during cache start
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
07cbba32f6 remove stale references to eviction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
33e2beac24 Rename "evp_lru*" functions to "ocf_lru*"
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
b1143374a8 Move eviction files to new locations
src/eviction/lru.c -> src/ocf_lru.c
src/eviction/lru.h -> src/ocf_lru.h
src/eviction/lru_structs.h -> src/ocf_lru_structs.h
src/eviction/eviction.c -> src/ocf_space.c
src/eviction/eviction.h -> src/ocf_space.h

.. as well as corresponding UT files.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>

... in UT as well

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
88e04a4204 Remove eviction policy abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
4f217b91a5 Remove partition list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
87f834c793 Move common user and freelist partition data to a new struct
A new structure, ocf_part, is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains the ocf_part structure as well as a pointer to
cleaning partition runtime metadata (moved out from part_runtime) and the
user partition config (no change here).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:07:10 +02:00
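A structural sketch matching that description (field names and types are simplified, not copied from the OCF sources):

```c
/* Approximate layout based on the commit description above. */
#include <stdint.h>

typedef uint16_t ocf_part_id_t;

struct ocf_part_runtime;          /* per-partition runtime metadata */
struct ocf_user_part_config;      /* user partition configuration */
struct cleaning_policy_runtime;   /* cleaning runtime, moved out of part_runtime */

/* Data common to user partitions and the freelist partition. */
struct ocf_part {
	ocf_part_id_t id;
	struct ocf_part_runtime *runtime;
};

/* User partition: the common part plus cleaning runtime and config. */
struct ocf_user_part {
	struct ocf_part part;
	struct cleaning_policy_runtime *clean_pol;
	struct ocf_user_part_config *config;
};
```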
Robert Baldyga
73c3e97f43
Merge pull request #509 from Open-CAS/rm-metadata-updater
Remove metadata updater
2021-06-17 09:34:18 +02:00
Adam Rutkowski
f589341c9a remove metadata updater
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
953e0f25d7 replace metadata updater with metadata I/O concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Rafal Stefanowski
5486e159f4 Fix seq-cutoff promotion count message typo
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-06-11 06:02:01 +02:00
Kozlowski Mateusz
4aff637e57 Add priv field initialization on cache start
This allows access to it in ctx_metadata_updater_init, which is
done in the same call stack during initialization.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-05-25 15:51:00 +02:00
Michał Mielewczyk
5b3a9606d3
Merge pull request #490 from mmichal10/check-core-uuid
Prevent adding core with the same UUID twice
2021-04-14 20:05:22 +02:00
Michal Mielewczyk
19276570b8 Prevent adding core with the same UUID twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-14 16:56:09 +02:00
Jan Musial
51455330ad Fix removing cores from cleaning policy
After detaching a core, if the user wanted to remove inactive cores, the
cleaning policy data would not be initialized and would bug out on the next
core add.

This check was incorrect, as the lifetime of cleaning policy core metadata is
not bound to whether the core volume is open or not.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-14 14:31:51 +02:00
Adam Rutkowski
ff4842482e Fix setting cache dirty flag during stop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Michal Mielewczyk
92a5ddd524 ut framework: don't mock env functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:08:27 +01:00
Michal Mielewczyk
0d3f3cde14 Return error when modifying default ioclass rule
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:06:23 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock (everywhere
except for the new eviction implementation), we are certain that
no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
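A pseudocode-level sketch of that locking order (all function names below are hypothetical stand-ins, not OCF's primitives): the eviction-list lock serializes concurrent evictors, the cacheline lock is taken first, and only then is the hash resolved and its bucket write-locked for the remap.

```c
/* Illustrative only; every name here is a made-up stand-in for the real
 * OCF locking primitives. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t cline_t;

/* hypothetical primitives */
void lru_list_lock(void);
void lru_list_unlock(void);
cline_t lru_peek_tail(void);
bool cacheline_trylock_wr(cline_t line);
void cacheline_unlock_wr(cline_t line);
uint32_t resolve_hash(cline_t line);
void hash_bucket_lock_wr(uint32_t hash);
void hash_bucket_unlock_wr(uint32_t hash);
void remap_cacheline(cline_t line);

bool evict_one(void)
{
	/* The eviction-list lock keeps concurrent evictors off the same LRU entry. */
	lru_list_lock();
	cline_t victim = lru_peek_tail();

	/* Cacheline lock first - nobody can grab it behind our back, since
	 * everywhere else it is acquired under the hash bucket lock. */
	if (!cacheline_trylock_wr(victim)) {
		lru_list_unlock();
		return false;
	}
	lru_list_unlock();

	/* Now the hash can be resolved safely and the bucket write-locked,
	 * without holding the global metadata write lock. */
	uint32_t hash = resolve_hash(victim);
	hash_bucket_lock_wr(hash);
	remap_cacheline(victim);
	hash_bucket_unlock_wr(hash);
	cacheline_unlock_wr(victim);
	return true;
}
```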
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
95d756de91 Remove ioclass min_size from public API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
f61472c3f4 Validate seq cutoff threshold value
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now has a pointer
to the cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
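An illustrative before/after of this interface change (signatures simplified and assumed, not copied from the OCF headers): the cacheline concurrency context becomes an explicit parameter instead of being dug out of the cache object.

```c
/* Simplified, assumed signatures - only to illustrate the interface change. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t ocf_cache_line_t;
struct ocf_cache;
struct ocf_cache_line_concurrency;

/* Before: the function reached into cache->device->concurrency.cache_line. */
bool ocf_cache_line_try_lock_rd_old(struct ocf_cache *cache,
		ocf_cache_line_t line);

/* After: the private concurrency context is passed explicitly, which lets
 * unit tests construct it without a full cache object. */
bool ocf_cache_line_try_lock_rd(struct ocf_cache_line_concurrency *c,
		ocf_cache_line_t line);
```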
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in the multiple-read-locks scenario.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
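A minimal sketch of the lock-splitting idea using pthread rwlocks (instance selection and the writer-takes-all scheme are assumptions, not the actual OCF implementation): readers spread across four lock instances, so concurrent read-lockers mostly contend on different locks.

```c
/* Illustration only; selection scheme and writer behaviour are assumed. */
#include <pthread.h>

#define GLOBAL_LOCK_INSTANCES 4

static pthread_rwlock_t global_md_lock[GLOBAL_LOCK_INSTANCES] = {
	PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
	PTHREAD_RWLOCK_INITIALIZER, PTHREAD_RWLOCK_INITIALIZER,
};

/* A reader picks one instance (e.g. by thread id), reducing contention. */
void global_md_lock_rd(unsigned instance)
{
	pthread_rwlock_rdlock(&global_md_lock[instance % GLOBAL_LOCK_INSTANCES]);
}

void global_md_unlock_rd(unsigned instance)
{
	pthread_rwlock_unlock(&global_md_lock[instance % GLOBAL_LOCK_INSTANCES]);
}

/* A writer must own every instance to exclude all readers. */
void global_md_lock_wr(void)
{
	for (int i = 0; i < GLOBAL_LOCK_INSTANCES; i++)
		pthread_rwlock_wrlock(&global_md_lock[i]);
}

void global_md_unlock_wr(void)
{
	for (int i = GLOBAL_LOCK_INSTANCES - 1; i >= 0; i--)
		pthread_rwlock_unlock(&global_md_lock[i]);
}
```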
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
   hold the metadata shared global lock ("naked") vs the ones
   which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), a core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
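A few declarations spelled out from that naming scheme (the parameter lists are illustrative guesses, not the real prototypes):

```c
/* Names follow ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr);
 * parameter lists below are assumed for illustration only. */
#include <stdbool.h>
#include <stdint.h>

struct ocf_metadata_lock;
struct ocf_request;

/* lock one hash bucket by id; caller already holds the shared global lock */
void ocf_hb_id_naked_lock_wr(struct ocf_metadata_lock *ml, uint32_t hash_id);

/* lock the bucket for a given core line and lba; takes the global lock itself */
void ocf_hb_cline_prot_lock_rd(struct ocf_metadata_lock *ml,
		uint64_t core_line, uint64_t lba);

/* try to lock every hash bucket needed by a request, protected variant */
bool ocf_hb_req_prot_trylock_wr(struct ocf_request *req);
```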
Michal Mielewczyk
93eccc862a Reset per-partition counters when adding core
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and the recovery procedure following a dirty
shutdown assigns cache lines from the removed core to the new one, effectively
leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
f206c64ff6 Fine granularity lock in cache_mngt_core_deinit_attached_meta
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Robert Baldyga
dd508c595f
Merge pull request #430 from rafalste/fix_attach_load_paths
Create separate pipelines and paths for cache attach/load scenarios
2020-12-23 16:51:37 +01:00
Rafal Stefanowski
57d4aaf7c9 Return error status from ocf_freelist_init
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:43:46 +01:00
Rafal Stefanowski
d3b61e474c Remove init_mode and use metadata.is_volatile instead
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
88b97df16d Fix pipeline attach/load paths
Create separate pipelines for cache attach and load scenarios.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:49 +01:00
Robert Baldyga
6270d917f8 Initialize sequential cutoff for detached cores
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-23 14:00:54 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches after
reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
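A small sketch of the recalculation this enables (helper and field names are hypothetical): with min/max stored as percentages, the cacheline limits can be derived from whatever cache size is currently attached.

```c
/* Hypothetical helpers illustrating why percentages survive a re-attach
 * while absolute cacheline counts do not. */
#include <stdint.h>

struct ioclass_limits {
	uint8_t min_pct; /* stored in metadata as a percentage */
	uint8_t max_pct;
};

/* Recomputed against the currently attached cache, so attaching a cache
 * device of a different size cannot leave stale absolute limits behind. */
static inline uint64_t ioclass_max_cachelines(const struct ioclass_limits *l,
		uint64_t cache_cachelines)
{
	return cache_cachelines * l->max_pct / 100;
}

static inline uint64_t ioclass_min_cachelines(const struct ioclass_limits *l,
		uint64_t cache_cachelines)
{
	return cache_cachelines * l->min_pct / 100;
}
```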
Michal Mielewczyk
e9d7290078 Extend ioclass management logging
When setting an ioclass, print info about its max size.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Mielewczyk
c643a41977 Prevent adding ioclass with the same id twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00