Commit Graph

998 Commits

Author SHA1 Message Date
Adam Rutkowski
12c8b4e333
Merge pull request #574 from Open-CAS/passive_api
Disable selected management operations in failover standby mode
2021-10-25 10:11:26 +02:00
Krzysztof Majzerowicz-Jaszcz
71262d5097 Cache standby mode API changes
Added an error for invalid cache operations while in passive mode

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Error name correction

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

API changes for passive cache mode

Moved the passive cache error return source to the API for flush and
set_param

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Further API changes for passive cache mode

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>

Passive API - review changes

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
2021-10-22 15:10:53 +02:00
Adam Rutkowski
f9fb80b887 Fix conditional valid bit reset
Status bits outside provided mask shall be unchanged.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-10-12 22:56:45 +02:00
Adam Rutkowski
e2c6a25ee9 [REVERTME] Disable option to perform activate without detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-10-08 14:52:32 +02:00
Adam Rutkowski
5ad4d937f6 Failover detach
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-10-08 14:52:24 +02:00
Adam Rutkowski
48bc6c2f6b Always use async queue kick in management pipeline
Management pipelines tend to consist of multiple asynchronous steps.
Allowing synchronous queue kick results in massive call stacks (e.g.
almost 500 functions deep in case of cache stop). Since async kick
is required anyway, it seems reasonable to switch to async kick
in pipeline implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-09-17 12:39:09 +02:00
Jan Musial
010f30eeaf Validate activate parameters
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-09-14 08:56:41 +02:00
Jan Musial
b9c84e331c Fix attach with no cache_line_size specified
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-09-13 12:52:33 +02:00
Robert Baldyga
076b5995ed Fix metadata_clear_valid_if_clean()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-10 08:12:11 +02:00
Robert Baldyga
7587bec07c
Merge pull request #567 from robertbaldyga/optimize-out-recovery-sector-loop
Optimize out looping over cache line sectors in recovery
2021-09-08 13:57:32 +02:00
Michal Mielewczyk
612f68b3c1 Fix metadata io detection in passive mode
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-09-08 13:33:04 +02:00
Robert Baldyga
1a3843ba12 Little coding style fix
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 22:54:10 +02:00
Robert Baldyga
1892f58aba Optimize out looping over cache line sectors in recovery
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 22:54:10 +02:00
Robert Baldyga
65d3e7a41a Introduce ocf_metadata_clear_valid_if_clean()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 22:54:10 +02:00
Robert Baldyga
d7c1404f82 Simplify metadata bit function declarations
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 22:54:10 +02:00
Robert Baldyga
12a82d7fb1 Get rid of struct ocf_cache_line_settings
Remove struct that contains redundant data.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-07 14:53:46 +02:00
Robert Baldyga
7b38ad205c Add cache activation from passive state
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 20:44:40 +02:00
Robert Baldyga
cc22c57cb7 Set proper cache pointer in front volumes
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
f96451a698 Introduce ocf_cache_is_passive()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
c662649f31 Increment metadata refcount on cache front volume io
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
d5bd3fbd78 Free zeroed metadata pages on update in raw_dynamic
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
e23342cb0e Update metadata in passive mode
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
9b3a0c968e Introduce ocf_metadata_passive_update()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
1fd9a448d4 Introduce passive cache state
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-06 13:49:21 +02:00
Robert Baldyga
ee42d9aaaf Duplicate cache name in struct ocf_cache
Cache name is needed for logging in passive mode, when config metadata
is still not accessible.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
85e8b414c4 Add ocf_metadata_load_unsafe()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
ad52a7e2e1 Introduce cache front volume
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
fa8e7564f0 Move ocf_io_get_internal() to private header
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
e31e7283d9 Rework volume type management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
a2bef43975 Add missing lock in ocf_ctx_get_volume_type_id()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
a00ec916e2 Make post metadata load init a separate step in pipeline
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
c6c6618ad8 Move recovery code from metadata to cache mngt
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
a2db4d14e8 Move core initialization code from metadata to mngt
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
24728330fc Make _ocf_mngt_load_add_cores a separate step in pipeline
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
20228561c9 Move metadata deinit to separate pipeline step
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
1f6b83f87d Decouple cache attached state from metadata refcount state
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
9ab4c51dfa Expose superblock operations as part of internal metadata API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
ab034ab53d Fix uuid comparison in core pool
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
82abcd11e7 Fix documentation of ocf_metadata_raw_rd_access()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-02 22:38:37 +02:00
Robert Baldyga
387cf1b9a5 Fix debug tracing in metadata
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-02 22:38:37 +02:00
Robert Baldyga
c1e9c1fa96 Remove trace API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-08-09 10:00:01 +02:00
Michal Mielewczyk
b83da68f85 Cleaner context ref counter
To prevent deinitializing the cleaner context (e.g. when switching policy) while
requests are being processed, access to the cleaner should be protected with a
reference counter

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:33 +02:00
Michal Mielewczyk
f33a6e5ce0 Make switching cleaning policy asynchronous
Making the operation asynchronous allows using the refcnt utility as a
synchronization mechanism between processing cachelines and deinitializing the
cleaning policy.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
26194fc536 Use cleaning ops wrapper functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
00aa56dd28 Wrapper functions for cleaning ops
The change should unify access to cleaning policy resources and facilitate
synchronization when switching cleaning policies

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
6dc29ee85e Refactor includes
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
c6fe2fc3f9 Deinit sequential cutoff on core removal
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-26 16:20:17 +02:00
Kozlowski Mateusz
bd7a89c819 Single map alloc location
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-21 08:27:27 +02:00
Kozlowski Mateusz
af1f3d73c2 Add back fastpath
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-21 08:27:25 +02:00
Kozlowski Mateusz
ec4bea4fc0 Add missing ocf_io_put to error path in ocf_core_volume_submit_io
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-21 08:26:31 +02:00
Robert Baldyga
a2b300d465 Avoid stack overflow when pending read misses list is blocked
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-14 13:04:03 +02:00
Robert Baldyga
740bf06c4b
Merge pull request #538 from robertbaldyga/fix-next-inval-getter
Fix helper function getting next invalid cache line
2021-07-14 10:17:39 +02:00
Robert Baldyga
be1ac09c17 Fix helper function getting next invalid cache line
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-13 18:56:52 +02:00
Kozlowski Mateusz
8c3ed42fa2 Fix remap line count for user partitions
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-12 16:49:14 +02:00
Robert Baldyga
f538bbd3ae Fix argument order in ocf_metadata_set_partition_id() call
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-09 21:31:06 +02:00
Robert Baldyga
d79c4b7dc9
Merge pull request #535 from mmkayPL/struct-alignment
Align structures to cacheline
2021-07-08 13:26:57 +02:00
Kozlowski Mateusz
f494448f97 Align structures to cacheline
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-07-08 12:46:39 +02:00
Michal Mielewczyk
a394dd06a8 Unlock cachelines after failed remap
All remapped cachelines are write-locked. If the operation fails, the cachelines
have to be unlocked during rollback

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-06 15:09:50 +02:00
Adam Rutkowski
96dfd87572 restore conditional reschedule during freelist population
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-07-02 11:37:49 +02:00
Robert Baldyga
aa3677da10
Merge pull request #530 from arutk/remove_eviction
Remove remaining stale references to "eviction" and "evp"
2021-06-30 09:47:35 +02:00
Robert Baldyga
43a142ccdd
Merge pull request #531 from arutk/fix_remove_dirty
fix removing dirty core
2021-06-29 09:36:45 +02:00
Adam Rutkowski
a1ec40ce10 Fix ocf_lru_repart for freelist partition
ocf_lru_get_list() now returns the clean list for the freelist partition to
provide a common interface regardless of partition type.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-28 17:39:31 +02:00
Adam Rutkowski
a7581b892c Rename evp_iter to lru_iter in concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:10 +02:00
Adam Rutkowski
d029b2a2be Remove unused pending_eviction_clines counter
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:10 +02:00
Adam Rutkowski
a9ab5fbafd Fix comments in ocf_engine_common.h
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:09 +02:00
Adam Rutkowski
1a5d20156e Rename eviction_idx to lru_idx
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:08 +02:00
Adam Rutkowski
fc06ef92a0 Remove obsolete wrapper for lru_rm_cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-26 19:09:04 +02:00
Robert Baldyga
059b845df8 Unlock request after invalidating cache lines
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-06-25 17:50:38 +02:00
Michal Mielewczyk
f0564dcf75 Avoid unnecessary metadata flushes in WT
Flushing metadata in WT is required only if at least one of the request's
cachelines changed its state to clean.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-23 09:51:16 +02:00
Michal Mielewczyk
c9294d1f06 Reorder metadata updating pattern in WT mode
There's a possibility that a WT write is performed to a dirty cache line (i.e. after
switching WB->WT without flush) and the status bits change from dirty to clean. If
a power failure occurs, recovery might ignore the recent data in cache and assume
the data is clean while the backend storage data is out of date.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-22 15:35:50 +02:00
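
A minimal sketch of the ordering constraint described in the commit above, using hypothetical structure and helper names (not the actual OCF engine code):

    /* Hypothetical sketch: in WT, mark cachelines clean and flush metadata
     * only after both the cache and the backend (core) writes have completed,
     * so a power failure cannot leave metadata claiming the backend copy is
     * up to date while it is not. */
    static void wt_io_completed(struct wt_request *req)
    {
            if (!req->core_write_done || !req->cache_write_done)
                    return; /* wait until both copies are persistent */

            mark_cachelines_clean(req);   /* update status bits */
            flush_metadata(req);          /* persist metadata last */
    }
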
Michal Mielewczyk
0192ce23dd Reorder metadata updating pattern in WB mode
In WB mode metadata should be updated only after the actual data has been saved
on disk. Otherwise metadata might be flushed too early and, consequently, data
corruption might occur.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-06-22 09:04:56 +02:00
Robert Baldyga
847f5f1174
Merge pull request #520 from arutk/lru_refactor
LRU refactoring
2021-06-21 22:49:08 +02:00
Adam Rutkowski
1e1955b833 lru refactor
Rearranging the LRU implementation for easier journaling

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 22:32:57 +02:00
Adam Rutkowski
edf20c133e Move metadata I/O lock to IO queue context
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 21:39:35 +02:00
Adam Rutkowski
a70608476d fastpath for metadata update
Removing an extra request cycle through the IO queue in case of a successful
metadata I/O lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 20:02:56 +02:00
Kozlowski Mateusz
50ec65fcfd Fix metadata_io_page_lock_acquired typo
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-21 19:46:35 +02:00
Kozlowski Mateusz
1031139446 OCF: Fix error path for metadata updater
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-21 19:46:34 +02:00
Adam Rutkowski
bae59e0620 Fix include paths in ocf_lru.c and ocf_space.c
This fixes compilation with CAS Linux

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 17:12:10 +02:00
Adam Rutkowski
36107fd528 Initialize partitions during cache start
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
dca93964e3 remove stale declaration of space_management_free()
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
07cbba32f6 remove stale references to eviction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
33e2beac24 Rename "evp_lru*" functions to "ocf_lru*"
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
b1143374a8 Move eviction files to new locations
src/eviction/lru.c -> src/ocf_lru.c
src/eviction/lru.h -> src/ocf_lru.h
src/eviction/lru_structs.h -> src/ocf_lru_structs.h
src/eviction/eviction.c -> src/ocf_space.c
src/eviction/eviction.h -> src/ocf_space.h

.. as well as corresponding UT files.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>

... in UT as well

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
7c0f940876 Replace eviction with lru in metadata structs
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
88e04a4204 Remove eviction policy abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
31737ee0e7 Move all eviction locks to lru.c
This is preparation for removal of the evp abstraction

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
77bccba036 do not track hotness on free lru list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
fac35d34a2 Rename "evict" to "remap" across the entire repo
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
4f217b91a5 Remove partition list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
87f834c793 Move common user and freelist partition data to a new struct
A new structure, ocf_part, is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains the ocf_part structure as well as a pointer to the
cleaning partition runtime metadata (moved out from part_runtime) and the
user partition config (no change here).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:07:10 +02:00
Robert Baldyga
c0b76f9e01
Merge pull request #517 from arutk/hit_shortcut
Check for hit after upgrading hash bucket lock
2021-06-17 12:16:18 +02:00
Robert Baldyga
73c3e97f43
Merge pull request #509 from Open-CAS/rm-metadata-updater
Remove metadata updater
2021-06-17 09:34:18 +02:00
Kozlowski Mateusz
367fcbfe4e Update debug prints and methods
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:37 +02:00
Kozlowski Mateusz
c17b587444 Update cache line concurrency unit tests
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:37 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Adam Rutkowski
d5b16c273e Check for hit after upgrading hash bucket lock
Lookup is repeated after the request is identified as a miss and the hash bucket
lock is upgraded (in order to map missing cachelines). At this point
cacheline status might change and the request might turn out to be
a hit after all. Adding a check for this condition removes unnecessary
calls to the remap logic.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 23:11:02 +02:00
Kozlowski Mateusz
f49e9d2d6a Save alock callbacks during initialization
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
f589341c9a remove metadata updater
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
953e0f25d7 replace metadata updater with metadata I/O concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
06f3c937c3 mio concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
69c3c6761b Add alock ptr to callbacks params
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
fae620a070 Add entry abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
9746df0b1a rename line to entry
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:04 +02:00
Robert Baldyga
6c617fa688
Merge pull request #514 from jfckm/minor-perf
Small performance improvements
2021-06-14 13:32:27 +02:00
Adam Rutkowski
9a1646c8a1 Move alock implementation to separate file
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 13:22:29 +02:00
Adam Rutkowski
2cb7270f63 Finish separating cacheline conc and alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:17:06 +02:00
Adam Rutkowski
d22a3ad0e0 Rename cacheline concurrency struct to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:15:43 +02:00
Adam Rutkowski
927bc805fe Rename generic cacheline concurrency to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:14:40 +02:00
Adam Rutkowski
3845da1de8 Rename entry_idx to idx
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
9d94c0b213 Make cacheline concurrency lock implementation more generic
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
fdbdcbb4a6 Rename cb to cmpl in cacheline concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Adam Rutkowski
4634885111 Use request instead of opaque ctx in cacheline concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:12:42 +02:00
Rafal Stefanowski
5486e159f4 Fix seq-cutoff promotion count message typo
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-06-11 06:02:01 +02:00
Jan Musial
f25d9a8e40 Use new non-zeroing allocator APIs
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:38:44 +02:00
Jan Musial
a52a3b75e5 Mark unlikely conditionals in hot code paths in metadata_raw
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:20:33 +02:00
Jan Musial
4031b4b2ae Delete metadata self-test
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:20:25 +02:00
Kozlowski Mateusz
4aff637e57 Add priv field initialization on cache start
This allows access to it in ctx_metadata_updater_init, which is
done in the same call stack during initialization.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-05-25 15:51:00 +02:00
Michal Mielewczyk
1d9776481c Shorten allocator name
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-27 08:40:51 +02:00
Michał Mielewczyk
5b3a9606d3
Merge pull request #490 from mmichal10/check-core-uuid
Prevent adding core with the same UUID twice
2021-04-14 20:05:22 +02:00
Michal Mielewczyk
19276570b8 Prevent adding core with the same UUID twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-14 16:56:09 +02:00
Jan Musial
6ced60471d Additional safeguard in acp_remove_core
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-14 14:57:08 +02:00
Jan Musial
51455330ad Fix removing cores from cleaning policy
After detaching a core, if the user wanted to remove inactive cores, the
cleaning policy data would not be initialized and would bug out on the next
core add.

This check was incorrect, as the cleaning policy core metadata lifetime is
not bound to whether the core volume is open.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-14 14:31:51 +02:00
Robert Baldyga
7dcf90ef6a
Merge pull request #487 from Open-CAS/fix-io-put
Avoid nullptr dereference in ocf_io_put
2021-04-06 14:09:09 +02:00
Adam Rutkowski
0476511c00 probe: return dirty and shutdown status despite metadata mismatch
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Adam Rutkowski
ff4842482e Fix setting cache dirty flag during stop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Jan Musial
67f80d813c Avoid nullptr dereference in ocf_io_put
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-06 13:38:34 +02:00
Robert Baldyga
9a3f64df28 seq_cutoff: Ignore invalid streams
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 18:46:28 +02:00
Adam Rutkowski
2fadd5a22a Fix eviction occupancy stats decrement
Eviction should decrement occupancy statistics for the
core from which a cacheline is being evicted rather than
from the I/O target core.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-01 18:01:28 -05:00
Robert Baldyga
49b9b36d13 cleaner: Don't check for valid if cache line is not dirty
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 13:28:19 +02:00
Michał Mielewczyk
642794dcd7
Merge pull request #483 from arutk/repart_fix
Fix repartitioning in request refresh path
2021-03-31 11:32:28 +02:00
Adam Rutkowski
719676c444 Fix repartitioning in request refresh path
update_req_info() should include REMAPPED cachelines
in the repart stats (the number of cachelines within the request
belonging to a partition other than the target one).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-31 12:13:48 -05:00
Adam Rutkowski
521258bcc8 Remove dirty check from LRU cleaner getter callback
This check is incorrect as cacheline status may change
from dirty to clean at any point during cleaning, except for
when the hash bucket is locked.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-30 13:10:28 -05:00
Michał Mielewczyk
c2e588be9d
Merge pull request #476 from mmkayPL/cacheline-alignment
Cacheline alignment
2021-03-26 12:01:55 +01:00
Michał Mielewczyk
2aa8922fea
Merge pull request #478 from Open-CAS/fix-freeing-discard-reqs
Fix freeing oversized discard requests
2021-03-26 11:33:40 +01:00
Michał Mielewczyk
a6c8cbb1ac
Merge pull request #479 from arutk/lru_fix3
Always call LRU_set_hot() under hash bucket lock
2021-03-26 11:04:59 +01:00
Michał Mielewczyk
78d7e5294f
Merge pull request #480 from arutk/lru_fix4
Clear hot flag when removing node from LRU list
2021-03-26 11:04:47 +01:00
Adam Rutkowski
b87008dc67 Clear hot flag when removing node from LRU list
This isn't strictly required in the current implementation, as
nodes are always re-initialized before being inserted into the LRU list.
However, it seems to make sense to zero the flag anyway to
make the code easier to reason about.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 10:25:03 -05:00
Adam Rutkowski
9486b7796f Remove early return from engine_map()
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:

1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common
rather than being a corner case dependent on racy timings between
threads

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 07:40:24 -05:00
Kozlowski Mateusz
e054949cbb Metadata updater mutex alignment
Avoids thrashing of (mostly) static and often-used entries

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e391fc2c13 Queue alignment
Metadata reshuffling

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
642527d72a ref count alignment
Move ref counts to their own cacheline - otherwise they pollute and cause
false sharing with nearby fields and cause a lot of cacheline bouncing between
physical CPUs.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fd2fd335a0 ocf_cache alignment
Group static fields together, while frequently changing ones get their own
cacheline or share one with rarely used/less important fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
33f29e43bc Aligned ocf_volume
Force cacheline alignment to avoid cacheline thrashing on static fields

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Adam Rutkowski
a3f2a214b6 Always call LRU_set_hot() under hash bucket lock
set_hot() depends on cacheline metadata status to determine
on which list the element is located (dirty vs clean list).
Thus at least the hash bucket lock is required when calling
set_hot().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-25 18:50:13 -05:00
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512KiB, OCF would trim the request and
overwrite req->core_line_count, which would then cause the request to be
freed from the wrong mpool.

This is now fixed by saving the core_line_count set at allocation in a field
that is never overwritten. This alloc_core_line_count is then used to free the
request from the correct mpool.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
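
A minimal sketch of the fix described above; the structure and the mpool helpers are simplified and illustrative (only alloc_core_line_count comes from the commit message):

    struct example_req {
            uint32_t core_line_count;        /* may be overwritten when a large
                                                discard request is trimmed */
            uint32_t alloc_core_line_count;  /* recorded at allocation time,
                                                never overwritten */
    };

    struct example_req *req_alloc(uint32_t core_line_count)
    {
            /* hypothetical: pick the mpool based on the requested size */
            struct example_req *req = mpool_alloc(core_line_count);

            req->core_line_count = core_line_count;
            req->alloc_core_line_count = core_line_count;
            return req;
    }

    void req_free(struct example_req *req)
    {
            /* always free into the pool the request was allocated from,
             * even if core_line_count was later overwritten */
            mpool_free(req, req->alloc_core_line_count);
    }
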
Kozlowski Mateusz
365f4f0d19 Use read/write locks in queue sequential cutoffs
If a user thread is preempted during a tree/list update and another IO
is issued on the same CPU, the structure will be in an undefined state.
This may result in hung tasks if the tree stops being a tree and a loop exists
(tree search functions won't be able to terminate), or in panics if a NULL value
suddenly appears in the preempted thread after a null-check was already done.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-23 09:54:56 +01:00
Michal Mielewczyk
92a5ddd524 ut framework: don't mock env functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:08:27 +01:00
Michal Mielewczyk
0d3f3cde14 Return error when modifying default ioclass rule
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:06:23 +01:00
Robert Baldyga
5f77db5c85
Merge pull request #473 from robertbaldyga/deallocate-request-properly
ocf_request: Deallocate request with separately allocated map properly
2021-03-19 10:31:07 +01:00
Robert Baldyga
87244c04d7
Merge pull request #472 from mmichal10/lock-on-setting-hot
Update cleaning lru under metadata lock
2021-03-19 09:54:32 +01:00
Robert Baldyga
74d61785e9 ocf_request: Deallocate request with separately allocated map properly
When allocation of a request with a map fails, we fall back to allocating
a request with no map, and then allocate the map separately. During request
put we need to distinguish between these two cases in order to deallocate
the request properly.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 09:49:29 +01:00
Robert Baldyga
296e98e39c
Merge pull request #471 from arutk/lru_fix_2
Prevent remapping cachelines within single request
2021-03-18 20:33:11 +01:00
Adam Rutkowski
a232488c7a Prevent remapping cachelines within single request
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-18 17:11:45 -05:00
Robert Baldyga
8020e7fd67
Merge pull request #457 from Ostrokrzew/false_stats
Fix broken 'dirty_for' stats
2021-03-18 10:24:02 +01:00
Michal Mielewczyk
841f8122d7 Update cleaning lru under metadata lock
This prevents deinitializing cleaning policy structures during IO.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-18 09:55:21 +01:00
Michał Mielewczyk
df969cde16
Merge pull request #470 from arutk/lru_fix
Parallel eviction fixes
2021-03-17 11:41:07 +01:00
Adam Rutkowski
c565c5c3f5 Add comments warning about stale request map info
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:28:02 -05:00
Adam Rutkowski
98124aa13d Add missing lookup in engine_map()
Early return from engine_map() in case of insufficient free
cachelines on the freelist is opportunistic, as neither the request
map info nor the freelist count is accurate. Map info is stale,
as it is refreshed in engine_map() only after the hash bucket
lock has been upgraded. The freelist count, on the other hand, is subject
to change asynchronously.

The implementation assumption however is that after engine_map()
the request is fully traversed (engine_map() is equivalent to
engine_lookup() followed by an attempt to map missing cachelines).
So in case of early return we must take care to repeat the
lookup.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:23:24 -05:00
Adam Rutkowski
e5fa15bdb2 Remove early return from engine_map() in case of hit
At this point the cacheline status in the request map is stale,
as the lookup was performed before upgrading the hash bucket lock.
If indeed all cachelines are mapped, this will be determined
in the main loop of engine_map().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:21:03 -05:00
Michał Mielewczyk
e3d5439d9f
Merge pull request #469 from mmichal10/fix-unmapped
Fix `ocf_engine_unmapped_count()`
2021-03-17 10:30:23 +01:00
Adam Rutkowski
736fb2efc0 Call LRU set_hot() immediately after cache insert
This assures that a cacheline with LOOKUP_INSERTED status
is always present on the LRU list.

This fixes an ENV_BUG() caused by an attempt to remove
a cacheline from the LRU list which was not there. This
happened when a cacheline was mapped from the freelist
(LOOKUP_INSERTED) but the entire request mapping failed
and generic cleanup routines attempted to invalidate the cacheline,
including removing it from the LRU list. As engine_set_hot()
is called after successful mapping, the inserted cacheline was
not yet present on the LRU list.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-16 19:59:09 -05:00
Michal Mielewczyk
71ec08c158 Assert number of cachelines to evict
The number of cachelines to evict can't be greater than the number of unmapped
entries in the request.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 16:29:05 +01:00
Michal Mielewczyk
4e8c037d7b Fix ocf_engine_unmapped_count()
Inserted entries should be considered mapped.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 15:36:47 +01:00
Robert Baldyga
415a778c03 ocf_request: Fix use after free bug
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-15 19:41:14 +01:00
Robert Baldyga
b25ea7c8ec seq_cutoff: Fix stream promotion fastpath
Now req_count starts from 1.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-11 14:41:13 +01:00
Jan Musial
2dc36657bf Use mpool to allocate metadata_io requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
8e21aa6441 Remove not needed req allocator size table
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
b47ef2c386 Change vmalloc in metadata asynch io to kmalloc
Vmalloc is very slow in comparison to kmalloc

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
9f8802e833 Decrease memory requirements for metadata io
The magic child metadata request count (33) was determined
experimentally.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split traversal into two distinct phases: lookup()
and LRU set_hot(). prepare_cachelines() now only calls
set_hot() once after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines(), at the very beginning of the
procedure. If the request is a miss, then map()
performs operations equivalent to lookup() supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1c6168d82b Do not unmap inserted cachelines before eviction
Unmapping cachelines previously mapped from the freelist before
eviction is a waste of resources. Also, if map does not exit
early upon the first mapping error, we can have the request fully
traversed (and partially mapped) after mapping and thus
skip the lookup in eviction.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing to evict (remap) cachelines while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are certain that
no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
0e699fc982 Refactor ocf_engine_remap
.. so that the main part, responsible strictly for mapping
given LBA to given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential once the needed cachelines are evicted
and mapped to the request in sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already being held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
c40e36456b Add missing hash bucket lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
69c0f20b6e Remove global metadata lock from cleaner metadata update step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a80eea454f Add function to determine hash collisions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allowing the request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where a request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or mapped-from-freelist
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
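
For orientation, the lookup statuses referred to in this commit could be sketched as follows (illustrative only; the actual OCF definitions may differ):

    /* Illustrative lookup statuses (names taken from the commit messages) */
    enum lookup_status {
            LOOKUP_MISS = 0,   /* cacheline not present in cache */
            LOOKUP_HIT,        /* found already mapped during lookup */
            LOOKUP_INSERTED,   /* mapped from the freelist by this request
                                  (previously called LOOKUP_MAPPED) */
            LOOKUP_REMAPPED,   /* evicted from another mapping and assigned
                                  to this request in a locked state */
    };
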
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
The function returns true if the cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds the
hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively (instead of locking
and checking for an exclusive lock) is not possible.

More specifically, this is the flow for synchronizing a
cacheline remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
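
A minimal sketch of the four-step flow listed above; all helper names and signatures other than ocf_cache_line_is_locked_exclusively are illustrative assumptions:

    /* Sketch: safely remap a cacheline picked from the LRU list */
    static bool try_remap(struct cache *cache, unsigned line)
    {
            unsigned core_id;
            uint64_t core_line;

            if (!cline_trylock_rd(cache, line))               /* 1. cacheline lock */
                    return false;

            resolve_hash(cache, line, &core_id, &core_line);  /* 2. resolve hash   */
            hash_bucket_lock_wr(cache, core_id, core_line);   /* 3. hash bucket    */

            /* 4. verify no other holder or waiter on this cacheline */
            if (!ocf_cache_line_is_locked_exclusively(cache, line)) {
                    hash_bucket_unlock_wr(cache, core_id, core_line);
                    cline_unlock_rd(cache, line);
                    return false;
            }

            remap(cache, line, core_id, core_line);
            hash_bucket_unlock_wr(cache, core_id, core_line);
            cline_unlock_rd(cache, line);
            return true;
    }
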
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
95d756de91 Remove ioclass min_size from public API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
f61472c3f4 Validate seq cutoff threshold value
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Slawomir Jankowski
eeda1f3f0f Unify type of dirty_for in info structs
Reformat function that calculates how long cache/core is dirty
Update `dirty_for` types in functional tests

Values stored in info struct fields (both in cache and core structs)
are unsigned 64-bit ints but `dirty_for`s were unsigned 32-bit ints.

Use existing function to transform returned value to seconds.
Replace seconds stored in metadata with seconds.
Replacement was done if old value of replaced field was equal to zero.
Acquiring a monotonic high-precision timestamp is potentially
slow, and it makes sense to compare the field's value
to zero before calling the atomic function.

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-25 14:51:53 +01:00
Adam Rutkowski
c7fc4fff39 Change cacheline concurrency constructor params
Provide the number of cachelines as the cacheline concurrency
constructor param instead of reading it from the cache.

The purpose of this change is to improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:45 -06:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

Cache pointer is no longer provided as a parameter to these
functions. Cacheline concurrency context now has a pointer
to cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
Adam Rutkowski
0f34e46375 Fix error handling in cacheline concurrency init
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:36 -06:00
Adam Rutkowski
d8f25d2742 Skip cacheline concurrency global lock in fast path
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.

The cacheline lock fast path does not need to acquire
this lock, as it is only opportunistically attempting
to lock all clines without waiting. There is no risk
of deadlock, as:
 * a concurrent fast path will also only try_lock
   cachelines, releasing all acquired locks if it failed
   to immediately acquire the lock for any cacheline
 * a concurrent slow path is guaranteed to have
   precedence in lock acquisition when conditions
   for deadlock occur (both slowpath and fastpath
   have acquired some locks required by the other
   thread). This is because the fastpath thread will
   back off (release acquired locks) if any one of the
   cacheline locks is not acquired.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:28 -06:00
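
The deadlock-freedom argument above follows the generic try-lock-all-or-back-off pattern, sketched here with plain pthreads (not OCF code):

    #include <pthread.h>
    #include <stdbool.h>

    /* Fast path: opportunistically try to lock every entry; on the first
     * failure release everything acquired so far and report failure so the
     * caller can fall back to the slow path. Because this path never blocks
     * while holding locks, it cannot close a deadlock cycle. */
    static bool try_lock_all(pthread_mutex_t *locks, int count)
    {
            int i;

            for (i = 0; i < count; i++) {
                    if (pthread_mutex_trylock(&locks[i]) != 0) {
                            while (--i >= 0)
                                    pthread_mutex_unlock(&locks[i]);
                            return false;
                    }
            }
            return true;
    }
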
Adam Rutkowski
c95f6358ab Get rid of status bits lock
All the status bits operations are now protected by
hash bucket locks

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
Adam Rutkowski
cd9e42f987 Properly lock hash bucket for status bits operations
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:02:50 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Michal Mielewczyk
7f3f2ad115 Evict from overflown pinned ioclass
If an ioclass is pinned but it has exceeded its occupancy limit, its cachelines
should be evicted anyway.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 04:06:07 -05:00
Adam Rutkowski
0748f33a9d Align each global metadata lock to 64B
.. in order to place primitives intended to be accessed
concurrently in separate CPU cache lines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in multiple read-lock scenarios.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
   hold the metadata shared global lock ("naked") vs the ones
   which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), a core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
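
Some example names produced by this scheme (illustrative expansions of the pattern, not an exhaustive or exact list of the real functions):

    ocf_hb_req_prot_lock_wr()       - write-lock all hash buckets covered by a
                                      request, taking the shared global metadata
                                      lock internally ("prot")
    ocf_hb_cline_naked_trylock_rd() - try to read-lock the bucket for a given
                                      core line/LBA; the caller already holds
                                      the shared global metadata lock ("naked")
    ocf_hb_id_naked_unlock_wr()     - write-unlock the bucket with a given id
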
Adam Rutkowski
c822c953ed Fix return status from hash bucket trylock wr
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-11 15:02:06 -06:00
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update cacheline's timestamp each time it's being written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
9e98eec361 Only acquire read lock to verify lru elem hotness
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
c04bfa3962 Add macros to read lock eviction list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
b4daac11c2 Track hot items on LRU list
Cachelines in the top 50% of the LRU list are not promoted
to the list head upon access. Only after a cacheline drops to
the bottom 50% is it considered a candidate for promotion
to the list head.

The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:22:55 -06:00
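
A minimal sketch of the promotion policy described above, with an illustrative data layout (not the actual OCF structures):

    #include <stdbool.h>
    #include <stdint.h>

    struct lru_elem {
            bool hot;          /* set while the element sits in the top 50%
                                  of the list */
    };

    struct lru_list {
            uint32_t num_elems;
            uint32_t num_hot;  /* kept at roughly num_elems / 2 */
    };

    /* Promote on access only if the element has already fallen out of the
     * hot half; hot elements are left in place to avoid churning the list
     * (and taking its lock) on every hit. */
    static bool should_promote(const struct lru_elem *elem)
    {
            return !elem->hot;
    }
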
Adam Rutkowski
746b32c47d Evict from overflown partitions first
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Removing the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Michal Mielewczyk
93eccc862a Reset per-partition counters when adding core
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00
Michal Mielewczyk
3a7b55c4c2 Don't evict on hit
If the request is a hit, simply try to acquire the cachelines instead of checking
whether the target partition's size is exceeded.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-29 17:15:32 -05:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Adam Rutkowski
012438c279 Add missing collision page lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the metadata
persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and then the recovery procedure following a dirty
shutdown assigns cache lines from the removed core to the new one, effectively
leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
f206c64ff6 Fine granularity lock in cache_mngt_core_deinit_attached_meta
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7 Add functions to lock specific hash bucket
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00
Robert Baldyga
fd88c2c3a4
Merge pull request #436 from mmichal10/metadata-assert
Metadata assert
2021-01-08 10:15:08 +01:00
Michal Mielewczyk
fcef130919 Bug on metadata access error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 18:10:44 -05:00
Michal Mielewczyk
d0225ef1cb Prevent uint32_t overflow
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 02:45:05 -05:00
Robert Baldyga
ea1fc7a6d4 seq-cutoff: Don't modify node list under read lock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-05 19:46:37 +01:00
Robert Baldyga
dd508c595f
Merge pull request #430 from rafalste/fix_attach_load_paths
Create separate pipelines and paths for cache attach/load scenarios
2020-12-23 16:51:37 +01:00
Rafal Stefanowski
57d4aaf7c9 Return error status from ocf_freelist_init
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:43:46 +01:00
Rafal Stefanowski
d3b61e474c Remove init_mode and use metadata.is_volatile instead
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
88b97df16d Fix pipeline attach/load paths
Create separate pipelines for cache attach and load scenarios.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:49 +01:00
Robert Baldyga
6270d917f8 Initialize sequential cutoff for detached cores
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-23 14:00:54 +01:00
Rafal Stefanowski
4c42d62f97 Add a newline escape in 'invalid checksum' messages
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-22 16:34:44 +01:00
Adam Rutkowski
1b8bfb36f5 Add missing part->id initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-22 11:52:20 +01:00
Robert Baldyga
7f60d73511
Merge pull request #413 from mmichal10/occ-per-ioclass
Occupancy per ioclass
2020-12-21 23:43:54 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches after
reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
bcfc821068 Don't calc free cachelines in per-ioclass stats
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
60680b15b2 Accessors for req->info.mapping_error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
9e11a88f2e Occupancy per ioclass
Respect the occupancy limit set for a single ioclass

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
05f3c22dad Occupancy per ioclass utilities
Functions to check space availability and to manage cacheline reservations

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:18:47 -05:00
Michal Mielewczyk
600bd1d859 Access partition's metadata counters via functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
9d80882b00 Remove re_part field from struct ocf_req_info
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep the boolean information
about whether some of the request's cachelines are assigned to a wrong partition

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e26ca30399 Track explicit number of cachelines to be reparted
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
4f228317a1 Update docs for space_managment_evict_do()
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
21e98a6dbc Evict request's target partition in regular order
Instead of evicting the target partition last, respect eviction
priorities

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e999de7232 Don't roundup when evicting single part
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
718dc743c8 Enable particular ioclass eviction
If a partition's occupancy limit is reached, cachelines should be evicted from
the request's target partition.

Information on whether eviction from a particular partition should be triggered
is carried as a flag by the request which triggered the eviction.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:23 -05:00
Michal Mielewczyk
e9d7290078 Extend ioclass management logging
When setting an ioclass, print info about its max size

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Mielewczyk
c643a41977 Prevent adding ioclass with the same id twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Rakowski
ac2effb83d Fix whitespaces
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Adam Rutkowski
822cd7c45a Introduce metadata superblock & segment structures
Refactoring metadata superblock and segment ops code
to make it less tightly coupled.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
3eb5568608 rename segment->segment_id and segment_ops->segment
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
b074d77797 Splitting metadata implementation to match header files
Moving the metadata implementation out of the obsolete metadata_hash.c
to .c files corresponding to the function declaration header files.
This requires adding a shared header for the metadata implementation,
metadata_internal.h. Some metadata header files did not have
a corresponding .c file - in such cases it is added in this
commit.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:49 +01:00
Adam Rutkowski
02405e989d Removing 'hash' word from misspelled metadata functions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
18e35c390b Remove last references to "hash" metadata implementation
Hashed metadata is now the default and only implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
5fb4d68c7f Remove get and set from metadata raw ifc
The memcpy-based metadata interface is unnecessary
overhead and is being removed.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
6dfdd6940b Remove metadata ifc structure
At this point it is not used

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
4d97b1611f Move metadata layout field outside metadata ifc
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
b5d6cdb398 Rename metadata iface_priv to priv
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
05c0826c0f Remove metadata bits manipulation abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
98cba1603f Replace metadata ifc wrappers with direct calls to hash ifc
Metadata wrapper functions (calling iface->func) in header
files are changed to be declarations only. Hash interface
implementation functions in metadata_hash.c are given
external linkage and are renamed to drop the "hash" prefix.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d796e1f400 Remove metadata layout abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d2af0bafda Remove redundant locks from metadata flush/load all
Locks acquired in ocf_metadata_flush(/load)_all are
held only for the duration of queueing the async
service for flush/load; no actual metadata accesses
are performed there.

Also, flush/load all are always performed with metadata
marked as deinitialized (metadata reference counter frozen),
so no I/O is reading or writing the metadata. The only source
of potential concurrent metadata accesses is other management
operations, which should be synchronized using the management lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
ac83c4ecd6 seq_cutoff: Allocate seq cutoff structures dynamically per core
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
0fd095046c
Merge pull request #426 from arutk/meta_no_memcpy
Remove memcpy from collision/eviction policy metadata api
2020-12-09 13:02:49 +01:00
Adam Rutkowski
fec61528e6 Remove memcpy from collision/eviction policy metadata api
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-07 17:58:44 +01:00
Robert Baldyga
7af386681d
Merge pull request #418 from robertbaldyga/inc-dep-env-headers
Remove dependency to full ocf_env.h from inc/ headers
2020-11-30 17:16:32 +01:00
Robert Baldyga
9bcafb5bfb seq_cutoff: Initialize each stream with different LBA
Initializing each stream with a unique LBA ensures there are no initial
rbtree collisions, and thus helps to avoid clustering all the streams
into one big linked list instead of forming a performance-friendly proper
tree structure.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
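A minimal sketch of the initialization idea above: give every stream a distinct starting key so the rbtree does not begin as one long collision chain. The stream count and field names are illustrative, not OCF's actual seq cutoff structures.

```c
#include <stdint.h>

#define STREAM_COUNT 128	/* illustrative pool size, not OCF's value */

struct seq_stream {
	uint64_t last_lba;	/* used as the rbtree key */
};

static void seq_streams_init(struct seq_stream streams[STREAM_COUNT])
{
	/* A unique initial LBA per stream avoids initial rbtree collisions. */
	for (unsigned int i = 0; i < STREAM_COUNT; i++)
		streams[i].last_lba = i;
}
```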
Robert Baldyga
b8735f6517 rbtree: Fix swapping out-of-tree node with root
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
Robert Baldyga
c8e7e0053c Remove dependency to full ocf_env.h from inc/ headers
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-19 13:07:16 +01:00
Robert Baldyga
a54d4461f0 seq_cutoff: Always continue the biggest stream
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
8b03271626 rbtree: Introduce list find callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
0ae4f4b5b2 rbtree: Add equal nodes to linked list
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
50c4de0495 rbtree: Make swap resistant to nodes outside the tree
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:20:45 +01:00
Robert Baldyga
694224971c rbtree: Replace spaces with tabs
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-09 17:32:03 +01:00
Robert Baldyga
0e3c9e740e
Merge pull request #396 from arutk/lru_refactor
Simplify and modularize LRU list code
2020-11-05 15:35:33 +01:00
root
ef08141252 Use -1 for LRU list terminator instead of collision_table_entries
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:43:41 -06:00
Adam Rutkowski
58f8a2218a Simplify and modularize LRU list code
Refactoring LRU list code to reduce code duplication and
improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:42:53 -06:00
Robert Baldyga
9a23787c6b
Merge pull request #406 from arutk/flush2
Propagate I/O flags (e.g. FUA) to metadata flush I/O
2020-10-06 12:49:22 +02:00
Adam Rutkowski
716edcc637 Flush cache volume after writing config metadata segments
After writing the metadata configuration to disk we must
send a flush request to make sure the configuration sections
are committed to non-volatile storage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-30 10:40:03 +02:00
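The ordering constraint above can be summarized as "write, then flush, then report success". The sketch below uses hypothetical submit_write()/submit_flush() helpers in place of the real cache volume I/O API.

```c
/* Hypothetical volume helpers standing in for the real async I/O API. */
static int submit_write(const void *buf, unsigned long bytes)
{
	(void)buf; (void)bytes;
	return 0;	/* pretend the write completed successfully */
}

static int submit_flush(void)
{
	return 0;	/* pretend the flush completed successfully */
}

/* Config sections are durable only after the flush is acknowledged,
 * so the flush must follow the write and its status must be checked. */
static int write_config_sections(const void *buf, unsigned long bytes)
{
	int ret = submit_write(buf, bytes);

	if (ret)
		return ret;

	return submit_flush();
}
```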
Adam Rutkowski
c945db356c Propagate I/O flags (e.g. FUA) to metadata flush I/O
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-29 14:46:27 +02:00
Robert Baldyga
7c29110e47
Merge pull request #398 from Open-CAS/proper-core-status
Fix logging core state on cache load
2020-09-04 19:56:16 +02:00
Robert Baldyga
990f5160eb Cleanup request map entries in error handling path
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-09-02 14:30:28 +02:00
Robert Baldyga
0dfdcb05e9 Fix core volume lifecycle management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-08-21 16:40:41 +02:00
Rafal Stefanowski
6542c2fa94 Fix memory requirement when loading cache
Load properties before checking memory needs and obtain cache line size
from context rather than from cache state.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:15:18 +02:00
Rafal Stefanowski
072c9c1902 Pass only needed values to _ocf_mngt_calculate_ram_needed() function
Rather than passing whole structs, supply
_ocf_mngt_calculate_ram_needed() with just the values it actually uses.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:09:05 +02:00
Jan Musial
2ee1e4c8dd Fix logging core state on cache load
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-07-28 14:52:15 +02:00
Robert Baldyga
d5ecdc16dd Make CRC mismatch on recovery a warning instead of error
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:49:29 +02:00
Robert Baldyga
d946124a01 Calculate CRC for runtime metadata sections only on clean load
During the recovery procedure there is no guarantee that checksums
of runtime sections were flushed correctly before a dirty shutdown.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:45:53 +02:00
Robert Baldyga
7d889fa1fc
Merge pull request #385 from arutk/pt_write_double_inv
Two pass write invalidate
2020-07-28 07:42:44 +02:00
Adam Rutkowski
b232f2b633 Service WA write misses in WI engine
WA write must follow follow the same two-pass pattern
as WI does. This change modifies WA engine to default to
WI in case of any miss (either partial or full), not only
partial miss.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:36 +02:00
Adam Rutkowski
91b6098fda Two pass write invalidate
Add a second pass of write invalidate. It is necessary only
if concurrent I/O inserted the target LBAs into the cache after
the WI request performed its traversal. These LBAs might have been
written by the WI request behind the concurrent I/O's back,
making these sectors effectively invalid.
In this case we must update these sectors' metadata to
reflect this. However we won't know about this until we
traverse the request again - hence calling ocf_write_wi
again with req->wi_second_pass set to indicate that this
is the second pass (the core write should be skipped).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:35 +02:00
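A simplified, self-contained sketch of the two-pass pattern described above. The request struct and helpers here (wi_request, traverse_and_invalidate, core_write) are illustrative stand-ins; only the wi_second_pass flag name follows the commit message.

```c
#include <stdbool.h>
#include <stdio.h>

struct wi_request {
	bool wi_second_pass;	/* true: skip the core write, only invalidate */
};

static void traverse_and_invalidate(struct wi_request *req)
{
	printf("invalidate mapped lines (second pass: %d)\n",
			req->wi_second_pass);
}

static void core_write(struct wi_request *req)
{
	(void)req;
	printf("write data to core device\n");
}

static void write_invalidate(struct wi_request *req)
{
	traverse_and_invalidate(req);		/* invalidation pass */

	if (!req->wi_second_pass) {
		core_write(req);
		/* Concurrent I/O may have re-inserted the target LBAs in the
		 * meantime, so run the handler once more without the core
		 * write to invalidate them as well. */
		req->wi_second_pass = true;
		write_invalidate(req);
	}
}

int main(void)
{
	struct wi_request req = { .wi_second_pass = false };

	write_invalidate(&req);
	return 0;
}
```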
Robert Baldyga
ec6eae6a5f
Merge pull request #377 from arutk/fix_map
Set entry->core_id in ocf_engine_lookup_map_entry
2020-07-10 21:32:09 +02:00
Adam Rutkowski
b14312dcef Set entry->core_id in ocf_engine_lookup_map_entry
core_id should be set in this function. The fact that
it is missing might lead to incorrect behaviour e.g. in
case of promotion policy.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-06-09 13:15:50 +02:00
Adam Rutkowski
7776bd6485 WO: read clean sectors from cache
In case of a partial hit the WO engine first reads data for the entire
request address range from the core device. Then it patches it up by fetching
dirty sectors from the cache device.

For an unidentified reason this leads to data corruption in YCSB
workload A. After flushing dirty data and re-loading the cache the
data is correct.

This change modifies the WO read handler to also read clean data from the
cache. This is not optimal, as the clean sectors are now read twice
in case of a partial hit. For now it seems to be a good enough work-around
for the data corruption problem.

The symptoms, combined with the fact that this change seems to make
the problem go away, indicate that at some point the WB write handler
(and/or special I/O request handlers like discard) puts CAS in a
state where in-memory metadata wrongly indicates that a sector is
clean while in fact it is dirty, as marked in the on-disk metadata.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-05-27 12:31:53 +02:00
Robert Baldyga
1428376554
Merge pull request #371 from Ostrokrzew/load
Disable loading cache with 'force' flag
2020-05-22 13:52:16 +02:00
Slawomir Jankowski
248018b341 Change return code to valid OCF code
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
Slawomir Jankowski
544e4086ca Disable load operation with 'force' flag
Fail the `ocf_mngt_cache_load` function with the `OCF_ERR_INVAL`
error code when the force flag is in use.
Log an error message.

Closes #361

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
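A hedged sketch of the guard described above: reject the load when the force flag is set, log, and return an invalid-argument error. The ERR_INVAL constant and the logging call are stand-ins for OCF_ERR_INVAL and the OCF logger, and the negative-return convention is assumed here purely for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define ERR_INVAL 1	/* stand-in for OCF_ERR_INVAL */

static int cache_load_check_force(bool force)
{
	if (force) {
		/* stand-in for the OCF error log call */
		fprintf(stderr,
			"Using the 'force' flag is not allowed for cache load\n");
		return -ERR_INVAL;
	}

	return 0;
}
```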
Slawomir Jankowski
455d554dc1 Reject zero-sized discard IOs to core
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
da34d5047b Typo fix
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
f516ed62e3 Remove unused parameter
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:32 +02:00
Robert Baldyga
1c9312842a
Merge pull request #369 from rafalste/copyright_update
Update copyright statements
2020-05-06 12:42:10 +02:00
Michal Rakowski
e7a2f333ae Take into account bytes from incoming req for 'full' seq cutoff policy
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-05-06 11:07:26 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
67577fc1ef Force pass-through for requests bigger than cache
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-04-24 15:34:27 +02:00
Robert Baldyga
15fd53cbb0 Initialize sequential cutoff in try-add / load paths
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-23 00:41:53 +02:00
Robert Baldyga
188559416c
Merge pull request #354 from robertbaldyga/multistream-seq-cutoff
Introduce multi-stream sequential cutoff
2020-04-22 15:35:42 +02:00
Robert Baldyga
e9afb40860 Add sequential cutoff debug interface
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
93cd0615d3 Introduce multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
a9c36477d2 Fix deadlock on concurrent flush at the same cache
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-03 18:09:35 +02:00
Robert Baldyga
53dc4020e3
Merge pull request #358 from arutk/req_queue_fix
Do not reference req after adding to queue list
2020-03-27 15:04:51 +01:00
Robert Baldyga
80b410dc2e
Merge pull request #355 from arutk/flush_fixes
Fix stalls and warnings during flush
2020-03-27 14:11:34 +01:00
Adam Rutkowski
e39a76aa5e Do not reference req after adding to queue list
ocf_engine_push_req_(front|back) must not dereference the req
pointer after putting the request on the queue list and unlocking
the queue. At this point the handler interface may asynchronously
pick up the request, handle it and deallocate it.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-26 01:29:02 +01:00
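The rule above boils down to: copy anything you still need from the request before pushing it, because once the queue lock is dropped another thread may already have consumed and freed it. The queue and request types below are illustrative, not the OCF engine structures.

```c
#include <pthread.h>

struct request {
	struct request *next;
	int id;
};

struct queue {
	pthread_mutex_t lock;
	struct request *head;
};

static void queue_push(struct queue *q, struct request *req)
{
	int id = req->id;	/* copy whatever may be needed after the push */

	pthread_mutex_lock(&q->lock);
	req->next = q->head;
	q->head = req;
	pthread_mutex_unlock(&q->lock);

	/* From here on only the copy may be used: the consumer thread may
	 * have already picked up, handled and freed the request. */
	(void)id;
}
```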
Adam Rutkowski
b267d5d77d Reduce flush relaxation period by 1 order of magnitude
The loop now relaxes every 2^17 (131K) cycles instead of every 1M.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:37:49 +01:00
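A small sketch of the relaxation check mentioned above: with a 2^17 - 1 mask the loop yields every 131072 iterations rather than every 1M. relax() is a placeholder for whatever yield/cond_resched-style call the environment layer provides.

```c
#include <stdint.h>

#define FLUSH_RELAX_MASK ((1U << 17) - 1)	/* relax every 131072 steps */

static void relax(void)
{
	/* placeholder: yield the CPU in the real environment layer */
}

static void flush_loop(uint64_t n_lines)
{
	for (uint64_t i = 0; i < n_lines; i++) {
		/* ... submit cleaning for one dirty cache line ... */

		if (!(i & FLUSH_RELAX_MASK))
			relax();
	}
}
```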
Adam Rutkowski
fd328bd0a1 Check relaxation condition in each step of flush loop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:36:43 +01:00
Adam Rutkowski
4d61d56249 Rename flushing functions' local variables for readability
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:29:16 +01:00
Robert Baldyga
cf5e13c4aa
Merge pull request #357 from arutk/parallel_flush_Fix
Queue flush portion requests to the back of IO queue
2020-03-24 23:15:11 +01:00
Robert Baldyga
332ad1dfbc Make seq cutoff policy and threshold atomic variables
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Robert Baldyga
935df23c74 Introduce red-black trees utility
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Adam Rutkowski
64dcae1490 Split global metadata lock critical path
There is no need to constantly hold the global metadata lock
while collecting cachelines to clean. Since we are past
freezing the dirty request counter, we know for sure that the
number of dirty cache lines will not increase. So the worst
case is that when the loop relaxes and releases the lock,
a concurrent IO to CAS is serviced in WT mode, possibly
inserting and/or evicting cachelines. This does not interfere
with scanning for dirty cachelines. And the lower layer will
handle synchronization with concurrent I/O by acquiring an
asynchronous read lock on each cleaned cacheline.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:32:15 -04:00
Adam Rutkowski
3b3a49e8ea Queue flush portion requests to the back of IO queue
In the current implementation, in case of fast media the flushing
container may starve all concurrently flushing containers
due to continuous rescheduling of the offender's requests to the
front of the I/O queue. Pushing requests to the back of the IO
queue ensures FIFO handling and removes the possibility of
starvation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:06:14 -04:00
Adam Rutkowski
c17beec7d4 Do not exclude used cachelines from flushing
The lower layer is prepared to handle used cachelines by
acquiring an asynchronous read lock. It is very likely that
by the time the cacheline is actually cleaned its lock
state has changed. So checking the lock at the moment of
constructing the dirty cachelines list makes little sense.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Adam Rutkowski
61983c946c Move flush containers sort & submit outside metadata lock
Moving _ocf_mngt_flush_containers outside the global metadata
critical section. All this function does is sort core lines
and add a queue request.

This fixes stalls reported by the Linux scheduler due to
IO threads waiting on the global metadata RW semaphore for
several minutes.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Michal Rakowski
6f4d02f251 Fix seq_cutoff respecting in pt read
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
2edd05c812 Change get_effective_cache_mode to operate on req instead of io
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
d84942daa3 Typo fixes
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-17 16:36:40 +01:00
Robert Baldyga
22bdb8b004
Merge pull request #352 from robertbaldyga/update-memory-requirement-check
Update memory requirement check
2020-03-17 15:28:56 +01:00
Robert Baldyga
94b4bee6de Update memory requirement check
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-17 14:42:01 +01:00
Jan Musial
d2fe82dc85 Add memory check before engaging promotion policy
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-16 09:09:42 +01:00
Jan Musial
4eb5612832 Reorder fields in nhit_hash map to improve memory efficiency
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-06 12:36:46 +01:00
Robert Baldyga
108fe28ad4 Introduce core priv
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-03 15:37:12 +01:00
Robert Baldyga
ac7b5aba6b metadata: Allocate memory with ENV_MEM_NOIO flag
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 12:03:21 +01:00
Robert Baldyga
b7e59ee04a metadata: Use proper function for freeing memory
a_req is allocated using env_vmalloc() so we need to free it
using env_vfree(), not env_free().

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 10:29:15 +01:00
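A sketch of the allocator pairing rule stated above; env_vmalloc()/env_vfree() are defined here as local stand-ins mapped to malloc/free so the example is self-contained, and a_req follows the commit message only as a name.

```c
#include <stdlib.h>

/* Local stand-ins so the sketch compiles outside the OCF env layer. */
static void *env_vmalloc(size_t size) { return malloc(size); }
static void env_vfree(void *ptr) { free(ptr); }

struct metadata_req { int placeholder; };

static void example(void)
{
	struct metadata_req *a_req = env_vmalloc(16 * sizeof(*a_req));

	if (!a_req)
		return;

	/* ... use a_req ... */

	env_vfree(a_req);	/* must match env_vmalloc(), not env_free() */
}
```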
Adam Rutkowski
ee37391e97 Fix discard request map allocation
Discard handling splits a large request into several steps.
However the actual size of the request map for discard was
determined based on the original request size, not the step request
size, resulting in wasted memory and allocations > 4K.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 17:47:11 -05:00
Adam Rutkowski
26fd938ccf Reduce max trim request size to 512K
512K is the maximum request size for which the request map
fits into one page (4K) regardless of cacheline size.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 15:57:34 -05:00
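A back-of-the-envelope check of the 512K figure above, assuming the smallest supported cache line size is 4K; the 32-byte map entry size is an illustrative assumption, not the actual size of OCF's map entry.

```c
#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 4096;
	const unsigned int min_cache_line = 4096;	/* smallest line size */
	const unsigned int max_req = 512 * 1024;	/* proposed trim limit */
	const unsigned int entry_size = 32;		/* assumed map entry size */

	unsigned int entries = max_req / min_cache_line;	/* 128 */
	unsigned int map_bytes = entries * entry_size;		/* 4096 */

	printf("entries=%u, map=%u bytes, fits in one page: %s\n",
			entries, map_bytes,
			map_bytes <= page_size ? "yes" : "no");
	return 0;
}
```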
Michał Wysoczański
fabd41250b
Merge pull request #342 from mmichal10/fix-metadata-flush
Fix metadata flush
2020-01-24 17:59:58 +01:00
Michal Mielewczyk
d9c987e068 Flush metadata after changing status of each sector
In case of cleaning, metadata used to be flushed only when the status of the whole
cache line changed to clean.

This patch ensures that a metadata flush is triggered after changing the status of
each single sector in the cache line.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Michal Mielewczyk
2f10365086 Flush metadata after setting dirty status of each sector
After a second dirty write to a cache line which was already dirty, a metadata flush
was not triggered. In case of a dirty shutdown, this led to data corruption.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Robert Baldyga
7d82f20614 Remove unused include
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Robert Baldyga
4d25bbe4b3 metadata: Relax memory allocation requirements
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Jan Musial
adc52ba71e Detect cache devices that would overflow ocf_cacheline_t
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-01-21 15:29:24 +01:00
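The check described above amounts to: reject a cache device whose cache line count does not fit in the cache line index type. In the sketch below a 32-bit index is assumed purely for illustration, as a stand-in for ocf_cacheline_t.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t cacheline_idx_t;	/* assumed stand-in for ocf_cacheline_t */

static bool device_fits_cacheline_type(uint64_t device_bytes,
		uint64_t cache_line_bytes)
{
	uint64_t lines = device_bytes / cache_line_bytes;

	/* A device with more lines than the index type can address
	 * would silently overflow the cache line numbering. */
	return lines <= (uint64_t)UINT32_MAX;
}
```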
Robert Baldyga
d1c2fc0c67 discard: Make max_length aligned to sector size
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-21 12:44:04 +01:00
Michal Rakowski
65756a8160 Moved setting ctx for temporary cache object before metadata init
This way debug prints during the metadata init phase won't cause a crash
(the temporary cache object does not have a proper ctx set and hence has no
logger object).

Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-01-16 21:53:40 +01:00
Robert Baldyga
ce28c71475
Merge pull request #326 from Ostrokrzew/upstream
Change error code
2020-01-10 13:38:18 +01:00
Ostrokrzew
3fca309e51 Change error code and add new
Change 'OCF_ERR_START_CACHE_FAIL' to 'OCF_ERR_NO_MEM' where CAS fails due to a lack of memory on the device.
Add a new error code for the case when the device doesn't satisfy CAS requirements - 'OCF_ERR_INVAL_CACHE_DEV'.
Use 'OCF_ERR_INVAL_CACHE_DEV' in the code.
Update the error code match in the test.
Closes #317

Signed-off-by: Ostrokrzew <slawomir.jankowski@intel.com>
2020-01-02 09:34:24 +01:00
Jan Musial
5eca548e22 Make sure NHIT won't attempt to take the same semaphore twice
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Jan Musial
4536a51f59 Fix init of nhit + code styling
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Michal Mielewczyk
6ac3195823 Keep stop pipeline in struct cache
To eliminate the possibility of an allocation error in cache stop, the pipeline is
allocated on attach.

Due to this change, the only possible non-zero status of ocf_mngt_cache_stop() is
just a warning, and the cache is always stopped after executing it.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-12-27 18:54:15 -05:00