Robert Baldyga
9dba49192f
Merge pull request #462 from arutk/no_bits_lock
...
Get rid of status bits lock
2021-02-19 10:12:46 +01:00
Adam Rutkowski
c95f6358ab
Get rid of status bits lock
...
All status bits operations are now protected by
hash bucket locks.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
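A minimal sketch of the pattern described above, with pthread locks standing in for OCF's env primitives; all names and structures here are illustrative assumptions, not OCF's actual internals:

    #include <pthread.h>
    #include <stdint.h>

    #define HB_COUNT 1024

    /* One lock per hash bucket; initialized with pthread_rwlock_init()
     * at cache start (omitted). */
    static pthread_rwlock_t hb_lock[HB_COUNT];

    struct cline_meta {
        uint8_t status; /* e.g. valid/dirty status bits */
    };

    /* Update status bits under the hash bucket lock that already covers
     * the cacheline, so no dedicated status bits lock is needed. */
    static void cline_set_status_bit(struct cline_meta *meta,
            unsigned hash, uint8_t bit)
    {
        pthread_rwlock_wrlock(&hb_lock[hash % HB_COUNT]);
        meta->status |= bit;
        pthread_rwlock_unlock(&hb_lock[hash % HB_COUNT]);
    }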
Adam Rutkowski
cd9e42f987
Properly lock hash bucket for status bits operations
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:02:50 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
...
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Robert Baldyga
91cbeed611
Merge pull request #461 from mmichal10/evict-pinned
...
Evict pinned overflown ioclass
2021-02-16 13:55:52 +01:00
Michal Mielewczyk
83f142c987
Functional test for overflown pinned ioclass
...
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 07:07:01 -05:00
Slawomir Jankowski
2741acc069
Create templates for issues
...
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-16 11:36:53 +01:00
Michal Mielewczyk
7f3f2ad115
Evict from overflown pinned ioclass
...
If an ioclass is pinned but has exceeded its occupancy limit, it should
be evicted anyway.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 04:06:07 -05:00
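A minimal sketch of the eviction rule above; the struct layout and field names are assumptions for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    struct ioclass {
        bool pinned;
        uint64_t cur_size; /* occupied cachelines */
        uint64_t max_size; /* configured occupancy limit */
    };

    /* A pinned ioclass is normally exempt from eviction, but once it
     * overflows its occupancy limit it becomes a candidate anyway. */
    static bool ioclass_may_evict(const struct ioclass *part)
    {
        if (part->cur_size > part->max_size)
            return true; /* overflown: evict even if pinned */

        return !part->pinned;
    }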
Robert Baldyga
fe206a86ec
Merge pull request #452 from arutk/split_gml_master
...
Split global metadata lock
2021-02-15 18:10:36 +01:00
Adam Rutkowski
0748f33a9d
Align each global metadata lock to 64B
...
... in order to place primitives intended to be accessed
concurrently on separate CPU cache lines.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
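A sketch of the alignment trick, assuming GCC/Clang attributes and a pthread lock in place of OCF's env primitives:

    #include <pthread.h>

    /* Aligning each lock to 64B also pads its size to a 64B multiple,
     * so every instance occupies its own CPU cache line and cores
     * taking different locks do not false-share a line. */
    struct aligned_md_lock {
        pthread_rwlock_t lock;
    } __attribute__((aligned(64)));

    static struct aligned_md_lock global_md_lock[4];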
Adam Rutkowski
05780c98ed
Split global metadata lock
...
Divide the single global lock instance into 4 to reduce contention
in scenarios with multiple concurrent read locks.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
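One common way such a 4-way split works, sketched with pthreads; keying the stripe by CPU id is an assumption, not necessarily how OCF distributes its readers:

    #include <pthread.h>

    #define GML_NUM_LOCKS 4

    static pthread_rwlock_t gml[GML_NUM_LOCKS]; /* init omitted */

    /* Readers spread across the instances, so concurrent read-lockers
     * rarely contend on the same lock. */
    static void gml_read_lock(unsigned cpu)
    {
        pthread_rwlock_rdlock(&gml[cpu % GML_NUM_LOCKS]);
    }

    static void gml_read_unlock(unsigned cpu)
    {
        pthread_rwlock_unlock(&gml[cpu % GML_NUM_LOCKS]);
    }

    /* A writer must take all instances to exclude every reader. */
    static void gml_write_lock(void)
    {
        for (int i = 0; i < GML_NUM_LOCKS; i++)
            pthread_rwlock_wrlock(&gml[i]);
    }

    static void gml_write_unlock(void)
    {
        for (int i = GML_NUM_LOCKS - 1; i >= 0; i--)
            pthread_rwlock_unlock(&gml[i]);
    }

The write path becomes more expensive, which is the intended trade-off when read locks dominate.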
Adam Rutkowski
10c3c3de36
Renaming hash bucket locking functions
...
1. New abbreviated prefix: ocf_hb (HB stands for hash bucket).
2. Clear distinction between functions requiring the caller to
hold the shared global metadata lock ("naked") vs the ones
which acquire the global lock on their own ("prot" for protected).
3. Clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and lba ("cline"),
or an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
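A few names the scheme above produces, shown as illustrative prototypes; the parameter lists are assumptions, not copied from OCF headers:

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_metadata_lock;
    struct ocf_request;

    /* bucket addressed by hash bucket id; caller already holds the
     * shared global metadata lock ("naked"): */
    void ocf_hb_id_naked_lock_rd(struct ocf_metadata_lock *ml,
            unsigned hb_id);

    /* bucket derived from core line and lba; acquires the global
     * lock on its own ("prot"): */
    void ocf_hb_cline_prot_lock_wr(struct ocf_metadata_lock *ml,
            unsigned core_id, uint64_t lba);

    /* locks all hash buckets covered by a request: */
    bool ocf_hb_req_prot_trylock_rd(struct ocf_request *req);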
Adam Rutkowski
c822c953ed
Fix return status from hash bucket trylock wr
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-11 15:02:06 -06:00
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
...
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
...
Only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88
Fix updating hot cachelines cleaning list
...
Update the cacheline's timestamp each time it is written.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
9e98eec361
Only acquire read lock to verify lru elem hotness
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
c04bfa3962
Add macros to read lock eviction list
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef
Change eviction spin lock to RW lock
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
1908707a3d
LRU list unit tests
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
b4daac11c2
Track hot items on LRU list
...
The top 50% most recently used cachelines are not promoted
to the list head upon access. Only after a cacheline drops into
the least recently used 50% is it considered a candidate for
promotion to the list head.
The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:22:55 -06:00
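A minimal sketch of the hot-tracking idea; the list layout and names are illustrative, not OCF's actual implementation:

    #include <stdbool.h>
    #include <stddef.h>

    struct lru_elem {
        struct lru_elem *prev, *next;
        bool hot; /* set while the element is in the MRU half */
    };

    struct lru_list {
        struct lru_elem *head;
    };

    static void lru_unlink(struct lru_list *l, struct lru_elem *e)
    {
        if (e->prev)
            e->prev->next = e->next;
        else
            l->head = e->next;
        if (e->next)
            e->next->prev = e->prev;
    }

    /* On access, elements still in the hot half skip the list update
     * entirely; that skipped update is the maintenance cost this
     * change avoids. */
    static void lru_touch(struct lru_list *l, struct lru_elem *e)
    {
        if (e->hot)
            return;

        lru_unlink(l, e);
        e->prev = NULL;
        e->next = l->head;
        if (l->head)
            l->head->prev = e;
        l->head = e;
        e->hot = true; /* rebalancing of the 50% boundary omitted */
    }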
Adam Rutkowski
4276d65e5a
Unit tests for new eviction order
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
Adam Rutkowski
746b32c47d
Evict from overflown partitions first
...
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
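A sketch of the resulting victim selection order; the partition layout and names are assumptions:

    #include <stdint.h>
    #include <stddef.h>

    struct part {
        int priority; /* IO class priority (lower value = higher) */
        uint64_t cur_size; /* occupied cachelines */
        uint64_t max_size; /* occupancy limit */
    };

    /* Overflown partitions take precedence over priority order. */
    static struct part *pick_victim_part(struct part *parts, int n)
    {
        struct part *best = NULL;

        /* First pass: any partition above its occupancy limit. */
        for (int i = 0; i < n; i++) {
            if (parts[i].cur_size > parts[i].max_size)
                return &parts[i];
        }

        /* Otherwise fall back to priority order. */
        for (int i = 0; i < n; i++) {
            if (!best || parts[i].priority > best->priority)
                best = &parts[i];
        }

        return best;
    }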
Adam Rutkowski
5538a5a95d
Only request evict size equal to request unmapped count
...
Remove the logic for opportunistic partition overflow
reduction, which evicted more cachelines than actually
required by the request being serviced.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Robert Baldyga
fd26735157
Merge pull request #450 from mmichal10/ioclass-stats-fix
...
Reset per-partition counters when adding core
2021-02-03 13:13:30 +01:00
Michal Mielewczyk
93eccc862a
Reset per-partition counters when adding core
...
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
...
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00
Robert Baldyga
3ddefc9b59
Merge pull request #446 from mmichal10/lock-on-hit
...
Don't evict on hit
2021-02-02 15:18:22 +01:00
Michal Mielewczyk
3a7b55c4c2
Don't evict on hit
...
If a request is a hit, simply try to acquire the cachelines instead of
verifying whether the target partition's size is exceeded.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-29 17:15:32 -05:00
Rafal Stefanowski
6ed4cf8a24
Update copyright statements (2021)
...
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Robert Baldyga
7f72a9431c
Merge pull request #442 from arutk/cleaner_page_lock
...
Add missing collision page lock in cleaner
2021-01-21 09:25:21 +01:00
Adam Rutkowski
012438c279
Add missing collision page lock in cleaner
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Robert Baldyga
fe56d719d7
Merge pull request #440 from robertbaldyga/flush-collision-on-core-remove
...
Flush metadata collision segment on core remove
2021-01-19 15:11:11 +01:00
Robert Baldyga
5a88ab2d61
Flush metadata collision segment on core remove
...
If there is any dirty data on the cache associated with the removed
core, we must flush the collision metadata after removing the core to
make the metadata persistent in case of a dirty shutdown.
This fixes the problem where the recovery procedure erroneously
interprets cache lines that belonged to the removed core as valid.
It also fixes the problem where, after removing a core containing
dirty data, another core is added, and the recovery procedure
following a dirty shutdown assigns cache lines from the removed core
to the new one, effectively leading to data corruption.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Robert Baldyga
3f7139a814
Merge pull request #439 from mmichal10/remove_core_fine_lock
...
Remove core fine lock
2021-01-15 21:10:14 +01:00
Adam Rutkowski
f206c64ff6
Fine granularity lock in cache_mngt_core_deinit_attached_meta
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Michal Mielewczyk
6d962b38e9
API for cacheline write trylock
...
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
bd20d6119b
External linkage for function to sparse single cline
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7
Add functions to lock specific hash bucket
...
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00
Robert Baldyga
fd88c2c3a4
Merge pull request #436 from mmichal10/metadata-assert
...
Metadata assert
2021-01-08 10:15:08 +01:00
Robert Baldyga
eff0047d6f
Merge pull request #434 from robertbaldyga/dont-modify-list-read-lock
...
seq-cutoff: Don't modify node list under read lock
2021-01-08 10:02:26 +01:00
Michal Mielewczyk
fcef130919
Bug on metadata access error
...
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 18:10:44 -05:00
Michal Mielewczyk
d0225ef1cb
Prevent uint32_t overflow
...
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 02:45:05 -05:00
Robert Baldyga
ea1fc7a6d4
seq-cutoff: Don't modify node list under read lock
...
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-05 19:46:37 +01:00
Robert Baldyga
dd508c595f
Merge pull request #430 from rafalste/fix_attach_load_paths
...
Create separate pipelines and paths for cache attach/load scenarios
2020-12-23 16:51:37 +01:00
Robert Baldyga
69e388a10f
Merge pull request #372 from arutk/wo_test_enhancements
...
Extend WO engine functional tests
2020-12-23 16:51:16 +01:00
Rafal Stefanowski
57d4aaf7c9
Return error status from ocf_freelist_init
...
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:43:46 +01:00
Rafal Stefanowski
d3b61e474c
Remove init_mode and use metadata.is_volatile instead
...
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
88b97df16d
Fix pipeline attach/load paths
...
Create separate pipelines for cache attach and load scenarios.
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:49 +01:00
Robert Baldyga
efc8786ed3
Merge pull request #432 from robertbaldyga/seq-cutoff-detached-core
...
Initialize sequential cutoff for detached cores
2020-12-23 14:24:24 +01:00