Commit Graph

593 Commits

Author SHA1 Message Date
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update cacheline's timestamp each time it's being written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
9e98eec361 Only acquire read lock to verify lru elem hotness
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
c04bfa3962 Add macros to read lock eviction list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
b4daac11c2 Track hot items on LRU list
The top 50% least recently used cachelines are not promoted
to the list head upon access. Only after a cacheline drops to
the bottom 50% is it considered a candidate for promotion
to the list head.

The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:22:55 -06:00
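
A minimal sketch of the idea behind this change, with invented types and helpers (this is not the OCF implementation): hot elements are left in place on access, and only an element that has fallen into the cold half is promoted back to the list head.

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustration only - these types and helpers are invented, not OCF code. */
    struct lru_elem {
            struct lru_elem *prev, *next;
            bool hot;                       /* element currently sits in the hotter ~50% */
    };

    struct lru_list {
            struct lru_elem *head;          /* MRU end */
            struct lru_elem *tail;          /* LRU end */
            size_t num_hot;                 /* kept at roughly half of the list */
    };

    static void lru_unlink(struct lru_list *l, struct lru_elem *e)
    {
            if (e->prev) e->prev->next = e->next; else l->head = e->next;
            if (e->next) e->next->prev = e->prev; else l->tail = e->prev;
            e->prev = e->next = NULL;
    }

    static void lru_push_head(struct lru_list *l, struct lru_elem *e)
    {
            e->prev = NULL;
            e->next = l->head;
            if (l->head) l->head->prev = e; else l->tail = e;
            l->head = e;
    }

    /* On access: hot elements are not touched, so the common case does no list
     * manipulation at all. Only an element that has dropped into the cold half
     * is moved back to the head and re-marked as hot. */
    static void lru_touch(struct lru_list *l, struct lru_elem *e)
    {
            if (e->hot)
                    return;                 /* top ~50%: skip the promotion entirely */

            lru_unlink(l, e);
            lru_push_head(l, e);
            e->hot = true;
            l->num_hot++;
            /* (a real implementation would also demote the coldest "hot" element
             *  here to keep the hot part at ~50% - omitted for brevity) */
    }
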
Adam Rutkowski
746b32c47d Evict from overflown partitions first
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
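
A minimal sketch of the precedence rule described above, with invented partition bookkeeping (not the OCF data model; it only assumes that a lower priority value means a more important IO class):

    #include <stdbool.h>
    #include <stddef.h>

    struct partition {
            unsigned id;
            unsigned priority;      /* assumption: lower value = more important IO class */
            size_t curr_size;       /* cachelines currently occupied */
            size_t max_size;        /* occupancy limit */
    };

    static bool part_is_overflown(const struct partition *p)
    {
            return p->curr_size > p->max_size;
    }

    /* Pick the partition to evict from: any overflown partition wins immediately,
     * regardless of IO class priority; otherwise fall back to the least important
     * (highest priority value) partition. */
    static const struct partition *
    pick_eviction_victim(const struct partition *parts, size_t count)
    {
            const struct partition *fallback = NULL;
            size_t i;

            for (i = 0; i < count; i++) {
                    if (part_is_overflown(&parts[i]))
                            return &parts[i];
                    if (!fallback || parts[i].priority > fallback->priority)
                            fallback = &parts[i];
            }
            return fallback;
    }
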
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Remove the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Michal Mielewczyk
93eccc862a Reset per-partition counters when adding core
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00
Michal Mielewczyk
3a7b55c4c2 Don't evict on hit
If a request is a hit, simply try to acquire the cachelines instead of
verifying whether the target partition's size is exceeded.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-29 17:15:32 -05:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Adam Rutkowski
012438c279 Add missing collision page lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and the recovery procedure following a dirty
shutdown assigns cache lines from the removed core to the new one, effectively
leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
f206c64ff6 Fine granularity lock in cache_mngt_core_deinit_attached_meta
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7 Add functions to lock specific hash bucket
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00
Robert Baldyga
fd88c2c3a4
Merge pull request #436 from mmichal10/metadata-assert
Metadata assert
2021-01-08 10:15:08 +01:00
Michal Mielewczyk
fcef130919 Bug on metadata access error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 18:10:44 -05:00
Michal Mielewczyk
d0225ef1cb Prevent uint32_t overflow
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 02:45:05 -05:00
Robert Baldyga
ea1fc7a6d4 seq-cutoff: Don't modify node list under read lock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-05 19:46:37 +01:00
Robert Baldyga
dd508c595f
Merge pull request #430 from rafalste/fix_attach_load_paths
Create separate pipelines and paths for cache attach/load scenarios
2020-12-23 16:51:37 +01:00
Rafal Stefanowski
57d4aaf7c9 Return error status from ocf_freelist_init
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:43:46 +01:00
Rafal Stefanowski
d3b61e474c Remove init_mode and use metadata.is_volatile instead
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
88b97df16d Fix pipeline attach/load paths
Create separate pipelines for cache attach and load scenarios.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:49 +01:00
Robert Baldyga
6270d917f8 Initialize sequential cutoff for detached cores
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-23 14:00:54 +01:00
Rafal Stefanowski
4c42d62f97 Add a newline escape in 'invalid checksum' messages
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-22 16:34:44 +01:00
Adam Rutkowski
1b8bfb36f5 Add missing part->id initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-22 11:52:20 +01:00
Robert Baldyga
7f60d73511
Merge pull request #413 from mmichal10/occ-per-ioclass
Occupancy per ioclass
2020-12-21 23:43:54 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches after
reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
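
A sketch of the dynamic calculation this commit argues for, using invented names: a stored percentage survives re-attaching a cache of a different size, while a fixed cacheline count does not.

    #include <stdint.h>

    /* Recompute the ioclass occupancy bound from the stored percentage whenever
     * it is needed, instead of persisting an absolute cacheline count. */
    static uint64_t ioclass_max_cachelines(uint64_t cache_line_count,
                    uint32_t max_size_pct)
    {
            /* e.g. a 20% limit is 200,000 lines on a 1,000,000-line cache and
             * automatically becomes 100,000 lines after re-attaching a cache
             * half that size */
            return cache_line_count * max_size_pct / 100;
    }
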
Michal Mielewczyk
bcfc821068 Don't calc free cachelines in per-ioclass stats
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
60680b15b2 Accessors for req->info.mapping_error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
9e11a88f2e Occupancy per ioclass
Respect the occupancy limit set for a single ioclass

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
05f3c22dad Occupancy per ioclass utilities
Functions to check space availability and to manage cacheline reservations

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:18:47 -05:00
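
A simplified sketch of the kind of utilities described above (invented names; a real implementation would need atomic counters or locking around the reservation):

    #include <stdbool.h>
    #include <stdint.h>

    struct part_occupancy {
            uint64_t curr;          /* cachelines currently occupied by the ioclass */
            uint64_t reserved;      /* cachelines reserved by in-flight requests */
            uint64_t max;           /* occupancy limit for the ioclass */
    };

    /* Is there room to map 'needed' more cachelines without exceeding the limit? */
    static bool part_has_space(const struct part_occupancy *occ, uint64_t needed)
    {
            return occ->curr + occ->reserved + needed <= occ->max;
    }

    /* Reserve space before mapping; the reservation is released once the lines
     * are actually mapped or the request fails. */
    static bool part_try_reserve(struct part_occupancy *occ, uint64_t needed)
    {
            if (!part_has_space(occ, needed))
                    return false;
            occ->reserved += needed;
            return true;
    }
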
Michal Mielewczyk
600bd1d859 Access partition's metadata counters via functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
9d80882b00 Remove re_part field from struct ocf_req_info
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep a boolean flag indicating
whether some of the request's cachelines are assigned to a wrong partition.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e26ca30399 Track explicit number of cachelines to be reparted
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
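
A minimal sketch of the bookkeeping described in this commit and the one above it (names invented): a per-request counter filled in during lookup replaces the old boolean flag.

    #include <stdint.h>

    struct req_info {
            uint32_t re_part_no;    /* number of cachelines that must change partition */
    };

    /* Called once per cacheline while resolving the request's mapping. */
    static void account_repart(struct req_info *info, uint32_t line_part_id,
                    uint32_t target_part_id)
    {
            if (line_part_id != target_part_id)
                    info->re_part_no++;     /* non-zero count replaces the old boolean */
    }
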
Michal Mielewczyk
4f228317a1 Update docs for space_managment_evict_do()
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
21e98a6dbc Evict request's target partition in regular order
Instead of evicting the target partition as the last one, respect eviction
priorities

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e999de7232 Don't roundup when evicting single part
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
718dc743c8 Enable particular ioclass eviction
If a partition's occupancy limit is reached, cachelines should be evicted from
the request's target partition.

Information on whether eviction from a particular partition should be
triggered is carried as a flag by the request which triggered the eviction.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:23 -05:00
Michal Mielewczyk
e9d7290078 Extend ioclass management logging
When setting an ioclass, print info about its max size

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Mielewczyk
c643a41977 Prevent adding ioclass with the same id twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Rakowski
ac2effb83d Fix whitespaces
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Adam Rutkowski
822cd7c45a Introduce metadata superblock & segment structures
Refactoring metadata superblock and segment ops code
to make it less tightly coupled.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
3eb5568608 rename segment->segment_id and segemnt_ops->segment
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
b074d77797 Splitting metadata implementation to match header files
Moving the metadata implementation out of the obsolete metadata_hash.c
to .c files corresponding to the function declaration header files.
This requires adding a shared header for the metadata implementation,
metadata_internal.h. Some metadata header files did not have
a corresponding .c file - in those cases it is added in this
commit.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:49 +01:00
Adam Rutkowski
02405e989d Removing 'hash' word from misspelled metadata functions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
18e35c390b Remove last references to "hash" metadata implementation
Hashed metadata is now the default and only implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
5fb4d68c7f Remove get and set from metadata raw ifc
The memcpy-based metadata interface is an unnecessary
overhead and is being removed.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
6dfdd6940b Remove metadata ifc structure
At this point it is not used

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
4d97b1611f Move metadata layout field outside metadata ifc
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
b5d6cdb398 Rename metadata iface_priv to priv
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
05c0826c0f Remove metadata bits manipulation abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
98cba1603f Replace metadata ifc wrappers with direct calls to hash ifc
Metadata wrapper functions (calling iface->func) in header
files are changed to be declarations only. Hash interface
implementation functions in metadata_hash.c are given an
external linkage and are renamed to drop "hash" prefix.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d796e1f400 Remove metadata layout abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d2af0bafda Remove redundant locks from metadata flush/load all
Locks acquired in ocf_metadata_flush(/load)_all are
acquired only for the duration of queueing the asynchronous
service for flush/load; no actual metadata accesses
are performed there.

Also, flush/load all are always performed with the metadata
marked as deinitialized (metadata reference counter frozen),
so no I/O is reading or writing the metadata. The only source
of potential concurrent metadata accesses are other management
operations, which should be synchronized using the management lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
ac83c4ecd6 seq_cutoff: Allocate seq cutoff structures dynamically per core
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
0fd095046c
Merge pull request #426 from arutk/meta_no_memcpy
Remove memcpy from collision/eviction policy metadata api
2020-12-09 13:02:49 +01:00
Adam Rutkowski
fec61528e6 Remove memcpy from collision/eviction policy metadata api
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-07 17:58:44 +01:00
Robert Baldyga
7af386681d
Merge pull request #418 from robertbaldyga/inc-dep-env-headers
Remove dependency to full ocf_env.h from inc/ headers
2020-11-30 17:16:32 +01:00
Robert Baldyga
9bcafb5bfb seq_cutoff: Initialize each stream with different LBA
Initializing each stream with a unique LBA ensures there are no initial
rbtree collisions, and thus helps to avoid clustering of all the streams
into one big linked list instead of forming a performance-friendly proper
tree structure.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
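
A sketch of the initialization idea (the stream count and the initial values are illustrative, not the OCF constants): giving every stream a distinct key prevents all fresh streams from comparing as equal rbtree nodes chained into one list.

    #include <stdint.h>

    #define STREAMS_PER_CORE 256            /* illustrative value */

    struct seq_stream {
            uint64_t last_lba;              /* key in the per-core rbtree of streams */
    };

    static void seq_streams_init(struct seq_stream *streams)
    {
            unsigned i;

            /* unique, practically unreachable LBAs so no two fresh streams
             * compare as equal in the tree */
            for (i = 0; i < STREAMS_PER_CORE; i++)
                    streams[i].last_lba = UINT64_MAX - i;
    }
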
Robert Baldyga
b8735f6517 rbtree: Fix swapping out-of-tree node with root
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
Robert Baldyga
c8e7e0053c Remove dependency to full ocf_env.h from inc/ headers
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-19 13:07:16 +01:00
Robert Baldyga
a54d4461f0 seq_cutoff: Always continue the biggest stream
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
8b03271626 rbtree: Introduce list find callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
0ae4f4b5b2 rbtree: Add equal nodes to linked list
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
50c4de0495 rbtree: Make swap resistant to nodes outside the tree
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:20:45 +01:00
Robert Baldyga
694224971c rbtree: Replace spaces with tabs
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-09 17:32:03 +01:00
Robert Baldyga
0e3c9e740e
Merge pull request #396 from arutk/lru_refactor
Simplify and modularize LRU list code
2020-11-05 15:35:33 +01:00
root
ef08141252 Use -1 for LRU list terminator instead of collision_table_entries
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:43:41 -06:00
Adam Rutkowski
58f8a2218a Simplify and modularize LRU list code
Refactoring LRU list code to reduce code duplication and
improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:42:53 -06:00
Robert Baldyga
9a23787c6b
Merge pull request #406 from arutk/flush2
Propagate I/O flags (e.g. FUA) to metadata flush I/O
2020-10-06 12:49:22 +02:00
Adam Rutkowski
716edcc637 Flush cache volume after writing config metadata segments
After writing the metadata configuration to disk we must
send a flush request to make sure the configuration sections
are committed to non-volatile storage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-30 10:40:03 +02:00
Adam Rutkowski
c945db356c Propagate I/O flags (e.g. FUA) to metadata flush I/O
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-29 14:46:27 +02:00
Robert Baldyga
7c29110e47
Merge pull request #398 from Open-CAS/proper-core-status
Fix logging core state on cache load
2020-09-04 19:56:16 +02:00
Robert Baldyga
990f5160eb Cleanup request map entries in error handling path
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-09-02 14:30:28 +02:00
Robert Baldyga
0dfdcb05e9 Fix core volume lifecycle management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-08-21 16:40:41 +02:00
Rafal Stefanowski
6542c2fa94 Fix memory requirement when loading cache
Load properties before checking memory needs and obtain cache line size
from context rather than from cache state.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:15:18 +02:00
Rafal Stefanowski
072c9c1902 Pass only needed values to _ocf_mngt_calculate_ram_needed() function
Rather than passing whole structs, supply
_ocf_mngt_calculate_ram_needed() with just the values it actually uses.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:09:05 +02:00
Jan Musial
2ee1e4c8dd Fix logging core state on cache load
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-07-28 14:52:15 +02:00
Robert Baldyga
d5ecdc16dd Make CRC mismatch on recovery a warning instead of error
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:49:29 +02:00
Robert Baldyga
d946124a01 Calculate CRC for runtime metadata sections only on clean load
During recovery procedure there is no guarantee that checksums
of runtime sections were flushed correctly before dirty shutdown.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:45:53 +02:00
Robert Baldyga
7d889fa1fc
Merge pull request #385 from arutk/pt_write_double_inv
Two pass write invalidate
2020-07-28 07:42:44 +02:00
Adam Rutkowski
b232f2b633 Service WA write misses in WI engine
A WA write must follow the same two-pass pattern
as WI does. This change modifies the WA engine to default to
WI in case of any miss (either partial or full), not only a
partial miss.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:36 +02:00
Adam Rutkowski
91b6098fda Two pass write invalidate
Add a second pass of write invalidate. It is necessary only
if concurrent I/O has inserted target LBAs into the cache after
the WI request performed its traversal. These LBAs might have been
written by the WI request behind the concurrent I/O's back,
effectively making these sectors invalid.
In this case we must update these sectors' metadata to
reflect this. However, we won't know about this until we
traverse the request again - hence calling ocf_write_wi
again with req->wi_second_pass set to indicate that this
is the second pass (the core write should be skipped).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:35 +02:00
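
A much simplified sketch of the two-pass flow described above; the real path is asynchronous and lives in the WI engine, and only req->wi_second_pass is taken from the commit message - everything else below is invented.

    #include <stdbool.h>

    struct wi_request {
            bool wi_second_pass;
    };

    static void wi_invalidate_mapped_lines(struct wi_request *req) { (void)req; /* ... */ }
    static void wi_write_to_core(struct wi_request *req)           { (void)req; /* ... */ }

    static void write_invalidate(struct wi_request *req)
    {
            /* traverse the request and invalidate whatever is currently mapped */
            wi_invalidate_mapped_lines(req);

            if (req->wi_second_pass)
                    return;                 /* pass 2: metadata fix-up only */

            wi_write_to_core(req);

            /* Concurrent I/O may have inserted the same LBAs after the first
             * traversal, and the core write above has just made those lines
             * stale - traverse once more to invalidate them, skipping the
             * core write this time. */
            req->wi_second_pass = true;
            write_invalidate(req);
    }
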
Robert Baldyga
ec6eae6a5f
Merge pull request #377 from arutk/fix_map
Set entry->core_id in ocf_engine_lookup_map_entry
2020-07-10 21:32:09 +02:00
Adam Rutkowski
b14312dcef Set entry->core_id in ocf_engine_lookup_map_entry
core_id should be set in this function. The fact that
it is missing might lead to incorrect behaviour e.g. in
case of promotion policy.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-06-09 13:15:50 +02:00
Adam Rutkowski
7776bd6485 WO: read clean sectors from cache
In case of a partial hit the WO engine first reads data for the entire
request address range from the core device. Then it patches it up by fetching
the dirty sectors from the cache device.

For an unidentified reason this leads to data corruption in YCSB
workload A. After flushing the dirty data and re-loading the cache the
data is correct.

This change modifies the WO read handler to read clean data from the
cache. This is not optimal, as the clean sectors are now read twice
in case of a partial hit. For now it seems to be a good enough work-around
for the data corruption problem.

The symptoms, combined with the fact that this change seems to make
the problem go away, indicate that at some point the WB write handler
(and/or special I/O request handlers like discard) puts CAS in a
state where the in-memory metadata wrongly indicates that a sector is
clean while in fact it is dirty, as marked in the on-disk metadata.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-05-27 12:31:53 +02:00
Robert Baldyga
1428376554
Merge pull request #371 from Ostrokrzew/load
Disable loading cache with 'force' flag
2020-05-22 13:52:16 +02:00
Slawomir Jankowski
248018b341 Change return code to valid OCF code
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
Slawomir Jankowski
544e4086ca Disable load operation with 'force' flag
Fail the `ocf_mngt_cache_load` function with the `OCF_ERR_INVAL`
error code when the force flag is in use.
Log an error message.

Closes #361

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
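
A sketch of the described check. The real function is ocf_mngt_cache_load and the real error code is OCF_ERR_INVAL; the config struct and the -EINVAL stand-in below are simplifications.

    #include <errno.h>

    struct cache_load_cfg {
            int force;
    };

    static int cache_load_validate(const struct cache_load_cfg *cfg)
    {
            if (cfg->force) {
                    /* log an error message, then refuse to load with 'force' */
                    return -EINVAL;         /* stands in for OCF_ERR_INVAL */
            }
            return 0;
    }
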
Slawomir Jankowski
455d554dc1 Reject zero-sized discard IOs to core
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
da34d5047b Typo fix
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
f516ed62e3 Remove unused parameter
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:32 +02:00
Robert Baldyga
1c9312842a
Merge pull request #369 from rafalste/copyright_update
Update copyright statements
2020-05-06 12:42:10 +02:00
Michal Rakowski
e7a2f333ae Take into account bytes from incoming req for 'full' seq cutoff policy
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-05-06 11:07:26 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
67577fc1ef Force pass-through for requests bigger than cache
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-04-24 15:34:27 +02:00
Robert Baldyga
15fd53cbb0 Initialize sequential cutoff in try-add / load paths
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-23 00:41:53 +02:00
Robert Baldyga
188559416c
Merge pull request #354 from robertbaldyga/multistream-seq-cutoff
Introduce multi-stream sequential cutoff
2020-04-22 15:35:42 +02:00
Robert Baldyga
e9afb40860 Add sequential cutoff debug interface
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
93cd0615d3 Introduce multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
a9c36477d2 Fix deadlock on concurrent flush at the same cache
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-03 18:09:35 +02:00
Robert Baldyga
53dc4020e3
Merge pull request #358 from arutk/req_queue_fix
Do not reference req after adding to queue list
2020-03-27 15:04:51 +01:00
Robert Baldyga
80b410dc2e
Merge pull request #355 from arutk/flush_fixes
Fix stalls and warnings during flush
2020-03-27 14:11:34 +01:00
Adam Rutkowski
e39a76aa5e Do not reference req after adding to queue list
ocf_engine_push_req_(front|back) must not dereference the req
pointer after putting the request on the queue list and unlocking
the queue. At this point the handler interface may asynchronously
pick up the request, handle it and deallocate it.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-26 01:29:02 +01:00
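
A sketch of the rule stated above, with stand-in types (not the OCF queue code): once the request is on the list and the lock is dropped, the handler may pick it up, complete it and free it, so only the queue pointer may be used afterwards.

    #include <stddef.h>

    struct request { struct request *next; };
    struct queue   { struct request *head; };

    static void queue_lock(struct queue *q)   { (void)q; /* take the queue lock */ }
    static void queue_unlock(struct queue *q) { (void)q; /* drop the queue lock */ }
    static void queue_kick(struct queue *q)   { (void)q; /* wake the queue handler */ }

    static void queue_list_add_tail(struct queue *q, struct request *req)
    {
            struct request **p = &q->head;

            while (*p)
                    p = &(*p)->next;
            req->next = NULL;
            *p = req;
    }

    static void push_req_back(struct queue *q, struct request *req)
    {
            queue_lock(q);
            queue_list_add_tail(q, req);
            queue_unlock(q);

            /* From here on 'req' must not be dereferenced - it may already
             * have been handled and freed by another thread. */
            queue_kick(q);              /* correct: uses only the queue */
    }
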
Adam Rutkowski
b267d5d77d Reduce flush relaxation period by 1 order of magnitude
The loop now relaxes every 2^17 (131K) cycles instead of every 1M.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:37:49 +01:00
Adam Rutkowski
fd328bd0a1 Check relaxation condition in each step of flush loop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:36:43 +01:00
Adam Rutkowski
4d61d56249 Rename flushing functions' local variables for readability
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:29:16 +01:00
Robert Baldyga
cf5e13c4aa
Merge pull request #357 from arutk/parallel_flush_Fix
Queue flush portion requests to the back of IO queue
2020-03-24 23:15:11 +01:00
Robert Baldyga
332ad1dfbc Make seq cutoff policy and threshold atomic variables
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Robert Baldyga
935df23c74 Introduce red-black trees utility
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Adam Rutkowski
64dcae1490 Split global metadata lock critical path
There is no need to constantly hold the global metadata lock
while collecting cachelines to clean. Since we are past
freezing the dirty request counter, we know for sure that the
number of dirty cache lines will not increase. So the worst
case is that while the loop relaxes and releases the lock,
a concurrent IO to CAS is serviced in WT mode, possibly
inserting and/or evicting cachelines. This does not interfere
with scanning for dirty cachelines. And the lower layer will
handle synchronization with concurrent I/O by acquiring an
asynchronous read lock on each cleaned cacheline.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:32:15 -04:00
Adam Rutkowski
3b3a49e8ea Queue flush portion requests to the back of IO queue
In the current implementation, in case of fast media, the flushing
container may starve all concurrently flushing containers
due to continuous rescheduling of the offender's requests to the
front of the I/O queue. Pushing requests to the back of the IO
queue ensures FIFO handling and removes the possibility of
starvation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:06:14 -04:00
Adam Rutkowski
c17beec7d4 Do not exclude used cachelines from flushing
The lower layer is prepared to handle used cachelines by
acquiring an asynchronous read lock. It is very likely that
by the time the cacheline is actually cleaned its lock
state has changed. So checking the lock at the moment of
constructing the dirty cachelines list makes little sense.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Adam Rutkowski
61983c946c Move flush containers sort & submit outside metadata lock
Moving _ocf_mngt_flush_containers outside the global metadata
critical section. All this function does is sort core lines
and add a queue request.

This fixes stalls reported by the Linux scheduler due to
IO threads waiting on the global metadata RW semaphore for
several minutes.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Michal Rakowski
6f4d02f251 Fix seq_cutoff respecting in pt read
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
2edd05c812 Change get_effective_cache_mode to operate on req instead of io
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
d84942daa3 Typo fixes
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-17 16:36:40 +01:00
Robert Baldyga
22bdb8b004
Merge pull request #352 from robertbaldyga/update-memory-requirement-check
Update memory requirement check
2020-03-17 15:28:56 +01:00
Robert Baldyga
94b4bee6de Update memory requirement check
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-17 14:42:01 +01:00
Jan Musial
d2fe82dc85 Add memory check before engaging promotion policy
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-16 09:09:42 +01:00
Jan Musial
4eb5612832 Reorder fields in nhit_hash map to improve memory efficiency
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-06 12:36:46 +01:00
Robert Baldyga
108fe28ad4 Introduce core priv
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-03 15:37:12 +01:00
Robert Baldyga
ac7b5aba6b metadata: Allocate memory with ENV_MEM_NOIO flag
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 12:03:21 +01:00
Robert Baldyga
b7e59ee04a metadata: Use proper function for freeing memory
a_req is allocated using env_vmalloc() so we need to free it
using env_vfree(), not env_free().

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 10:29:15 +01:00
Adam Rutkowski
ee37391e97 Fix discard request map allocation
Discard handling splits a large request into several steps.
However, the actual size of the request map for discard was
determined based on the original request size, not the step request
size, resulting in wasted memory and allocations > 4K.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 17:47:11 -05:00
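
A sketch of the sizing rule behind this fix (invented helper; the step and cacheline sizes are parameters): the map is sized per step, not for the whole discard.

    #include <stdint.h>

    /* Number of request map entries needed for one discard step. */
    static uint64_t discard_step_map_entries(uint64_t step_bytes,
                    uint64_t cacheline_bytes)
    {
            /* e.g. a 32 MiB discard handled in 512 KiB steps with 4 KiB
             * cachelines needs 128 entries per step, not 8192 entries for
             * the original I/O */
            return (step_bytes + cacheline_bytes - 1) / cacheline_bytes;
    }
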
Adam Rutkowski
26fd938ccf Reduce max trim request size to 512K
512K is the maximum request size for which the request map
fits into one page (4K) regardless of the cacheline size.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 15:57:34 -05:00
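
The arithmetic behind the 512K bound, assuming the smallest supported cacheline size is 4 KiB: 512 KiB / 4 KiB = 128 map entries per request, so the map fits in a 4 KiB page as long as one entry takes at most 4096 B / 128 = 32 B. Larger cacheline sizes only lower the entry count, so 512 KiB is the worst case.
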
Michał Wysoczański
fabd41250b
Merge pull request #342 from mmichal10/fix-metadata-flush
Fix metadata flush
2020-01-24 17:59:58 +01:00
Michal Mielewczyk
d9c987e068 Flush metadata after changing status of each sector
In case of cleaning, metadata used to be flushed only when the status of the
whole cache line changed to clean.

This patch ensures that a metadata flush is triggered after changing the
status of each single sector in the cache line.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Michal Mielewczyk
2f10365086 Flush metadata after setting dirty status of each sector.
After a second dirty write to a cache line which was already dirty, a metadata
flush was not triggered. In case of a dirty shutdown, this led to data corruption.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Robert Baldyga
7d82f20614 Remove unused include
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Robert Baldyga
4d25bbe4b3 metadata: Relax memory allocation requirements
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Jan Musial
adc52ba71e Detect cache devices that would overflow ocf_cacheline_t
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-01-21 15:29:24 +01:00
Robert Baldyga
d1c2fc0c67 discard: Make max_length aligned to sector size
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-21 12:44:04 +01:00
Michal Rakowski
65756a8160 Moved setting ctx for temporary cache object before metadata init
This way debug prints during the metadata init phase won't cause a crash
(because the temporary cache object does not have a proper ctx set and
hence does not have a logger object).

Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-01-16 21:53:40 +01:00
Robert Baldyga
ce28c71475
Merge pull request #326 from Ostrokrzew/upstream
Change error code
2020-01-10 13:38:18 +01:00
Ostrokrzew
3fca309e51 Change error code and add new
Change 'OCF_ERR_START_CACHE_FAIL' to 'OCF_ERR_NO_MEM' when CAS fails due to lack of memory on the device.
Add a new error code for the case when a device doesn't satisfy CAS requirements - 'OCF_ERR_INVAL_CACHE_DEV'.
Use 'OCF_ERR_INVAL_CACHE_DEV' in the code.
Update the error code match in the test.
Closes #317

Signed-off-by: Ostrokrzew <slawomir.jankowski@intel.com>
2020-01-02 09:34:24 +01:00
Jan Musial
5eca548e22 Make sure NHIT won't attempt to take the same semaphore twice
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Jan Musial
4536a51f59 Fix init of nhit + code styling
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Michal Mielewczyk
6ac3195823 Keep stop pipeline in struct cache.
To eliminate the possibility of an allocation error in cache stop, the
pipeline is allocated on attach.

Due to this change, the only possible non-zero status of ocf_mngt_cache_stop()
is just a warning, and the cache is always stopped after executing it.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-12-27 18:54:15 -05:00
Adam Rutkowski
92b36c3484 Change DIV_ROUND_UP to OCF_DIV_ROUND_UP
This fixes compilation in SPDK env

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-12-28 18:24:12 -05:00
Robert Baldyga
d1249e5238 Limit number of concurrent io submitted by metadata_io_i_asynch()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 16:50:41 +01:00
Robert Baldyga
e06832426d cleaner: Retrieve core object properly
The cleaner doesn't set the core object in the req, as it works in the domain
of cache lines, which may belong to various cores. In this case it should
retrieve the core object not from the req, but from the map instead.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 14:44:04 +01:00
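
A sketch of the distinction this fix relies on (stand-in structs, not the OCF request layout): a cleaner request spans cachelines from many cores, so the core has to be taken from the per-cacheline map entry rather than from the request.

    struct core;

    struct map_entry {
            struct core *core;      /* core that owns this particular cache line */
    };

    struct request {
            struct core *core;      /* set for regular I/O, not set by the cleaner */
            struct map_entry *map;  /* one entry per cache line in the request */
    };

    static struct core *entry_core(const struct request *req, unsigned i)
    {
            /* the map entry is the reliable source for cleaner requests */
            return req->map[i].core;
    }
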
Michal Rakowski
a074026773
Merge pull request #329 from robertbaldyga/fix-cleaner-queue-change-before-put
Put a queue before calling cleaner completion callback
2019-12-19 11:33:27 +01:00
Robert Baldyga
32fd371583 Put a queue before calling cleaner completion callback
This ensures that the cleaner queue will not be changed
by starting another cleaning iteration before we put it.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-18 20:49:56 +01:00