Commit Graph

711 Commits

Adam Rutkowski
9486b7796f Remove early return from engine_map()
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:

1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common
rather than being a corner case dependent on racy timings between
threads

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 07:40:24 -05:00
Kozlowski Mateusz
e054949cbb Metadata updater mutex alignment
Avoids thrashing of (mostly) static and often-used entries

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e391fc2c13 Queue alignment
Metadata reshuffling

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
642527d72a ref count alignment
Move ref counts to their own cacheline - otherwise they cause
false sharing with nearby fields and a lot of cacheline bouncing
between physical CPUs.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
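
A minimal sketch of the alignment pattern described in the commit above (struct and field names are illustrative, not the actual OCF layout): the frequently written reference counter is placed on its own 64-byte cacheline, so updates to it do not invalidate the line holding the read-mostly fields.

    #include <stdatomic.h>

    #define CACHELINE 64

    /* Illustrative entry: read-mostly fields share one cacheline,
     * the hot ref count gets a dedicated, aligned cacheline. */
    struct example_entry {
        /* read-mostly / static fields */
        unsigned long id;
        void *owner;

        /* frequently written field isolated on its own cacheline */
        atomic_int refcnt __attribute__((aligned(CACHELINE)));
    } __attribute__((aligned(CACHELINE)));
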
Kozlowski Mateusz
fd2fd335a0 ocf_cache alignment
Group static fields together, while frequently changed ones get their own
cacheline or share one with rarely used / less important fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
33f29e43bc Aligned ocf_volume
Force cacheline alignment to avoid cacheline thrashing on static fields

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Adam Rutkowski
a3f2a214b6 Always call LRU_set_hot() under hash bucket lock
set_hot() depends on the cacheline metadata status to determine
on which list the element is located (dirty vs clean list).
Thus at least the hash bucket lock is required when calling
set_hot().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-25 18:50:13 -05:00
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512 KiB, OCF would trim the request and
overwrite req->core_line_count, which would then cause the request to be
freed from the wrong mpool.

This is now fixed by saving the core_line_count set when allocating
the request in a field that is never overwritten. This alloc_core_line_count
is then used to free the request from the correct mpool.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
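
A sketch of the allocate/free pattern this fix describes (the mpool helpers and types below are hypothetical stand-ins, not the OCF API): the line count that selects the allocation pool is saved in a field that later trimming never touches, and that saved value is used again when returning the request to the pool.

    #include <stdlib.h>

    /* Hypothetical request: a working line count that trimming may
     * overwrite, plus a count fixed at allocation time. */
    struct example_req {
        unsigned core_line_count;       /* may be overwritten when the request is trimmed */
        unsigned alloc_core_line_count; /* set once at allocation, selects the mpool */
    };

    /* Stand-ins for a size-class based mpool allocator (illustrative only). */
    static struct example_req *mpool_alloc_for(unsigned core_lines)
    {
        return malloc(sizeof(struct example_req));
    }

    static void mpool_free_for(struct example_req *req, unsigned core_lines)
    {
        free(req);
    }

    static struct example_req *req_alloc(unsigned core_lines)
    {
        struct example_req *req = mpool_alloc_for(core_lines);

        if (req) {
            req->core_line_count = core_lines;
            req->alloc_core_line_count = core_lines;
        }
        return req;
    }

    static void req_free(struct example_req *req)
    {
        /* Free using the allocation-time count, not the trimmed one. */
        mpool_free_for(req, req->alloc_core_line_count);
    }
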
Kozlowski Mateusz
365f4f0d19 Use read/write locks in queue sequential cutoffs
If a user thread is preempted during a tree/list update and another IO
is issued on the same CPU, the structure will be in an undefined state.
This may result in hung tasks, if the tree stops being a tree and a loop exists
(tree search functions won't be able to terminate), or in panics if a NULL value
suddenly appears in the preempted thread after a null check has already passed.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-23 09:54:56 +01:00
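
A minimal sketch of the locking rule described above, using POSIX rwlocks (OCF uses its env abstraction; all names below are illustrative): lookups may run concurrently under the read lock, while any tree/list update takes the write lock so a preempted updater can never leave a half-modified structure visible to another IO on the same CPU.

    #include <pthread.h>

    /* Illustrative per-queue sequential cutoff state; the lock is assumed
     * to be initialized with pthread_rwlock_init() during queue setup. */
    struct seq_cutoff {
        pthread_rwlock_t lock;
        /* rbtree + LRU list of streams would live here */
    };

    static int stream_lookup_locked(struct seq_cutoff *sc) { return 0; } /* stub */
    static void stream_update_locked(struct seq_cutoff *sc) { }          /* stub */

    static int seq_cutoff_check(struct seq_cutoff *sc)
    {
        int hit;

        pthread_rwlock_rdlock(&sc->lock);   /* many readers allowed */
        hit = stream_lookup_locked(sc);
        pthread_rwlock_unlock(&sc->lock);
        return hit;
    }

    static void seq_cutoff_update(struct seq_cutoff *sc)
    {
        pthread_rwlock_wrlock(&sc->lock);   /* exclusive: the tree/list is modified */
        stream_update_locked(sc);
        pthread_rwlock_unlock(&sc->lock);
    }
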
Michal Mielewczyk
92a5ddd524 ut framework: don't mock env functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:08:27 +01:00
Michal Mielewczyk
0d3f3cde14 Return error when modifying default ioclass rule
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:06:23 +01:00
Robert Baldyga
5f77db5c85
Merge pull request #473 from robertbaldyga/deallocate-request-properly
ocf_request: Deallocate request with separately allocated map properly
2021-03-19 10:31:07 +01:00
Robert Baldyga
87244c04d7
Merge pull request #472 from mmichal10/lock-on-setting-hot
Update cleaning lru under metadata lock
2021-03-19 09:54:32 +01:00
Robert Baldyga
74d61785e9 ocf_request: Deallocate request with separately allocated map properly
When allocation of a request with a map fails, we fall back to allocating
a request with no map, and then allocate the map separately. During request
put we need to distinguish between those two cases in order to deallocate
the request properly.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 09:49:29 +01:00
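
A sketch of the two allocation paths and the flag that distinguishes them on put (struct layout and names are hypothetical, not the actual ocf_request):

    #include <stdbool.h>
    #include <stdlib.h>

    struct example_map_entry { unsigned long core_line; };

    struct example_req {
        bool map_allocated_separately;         /* set when the fallback path was taken */
        struct example_map_entry *map;
        struct example_map_entry inline_map[]; /* map embedded in the request allocation */
    };

    static struct example_req *req_new(unsigned map_entries)
    {
        struct example_req *req;

        /* Fast path: request and map in a single allocation. */
        req = malloc(sizeof(*req) + map_entries * sizeof(req->inline_map[0]));
        if (req) {
            req->map_allocated_separately = false;
            req->map = req->inline_map;
            return req;
        }

        /* Fallback: small request object plus a separately allocated map. */
        req = malloc(sizeof(*req));
        if (!req)
            return NULL;
        req->map = malloc(map_entries * sizeof(*req->map));
        if (!req->map) {
            free(req);
            return NULL;
        }
        req->map_allocated_separately = true;
        return req;
    }

    static void req_put(struct example_req *req)
    {
        /* Deallocate according to how the map was obtained. */
        if (req->map_allocated_separately)
            free(req->map);
        free(req);
    }
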
Robert Baldyga
296e98e39c
Merge pull request #471 from arutk/lru_fix_2
Prevent remapping cachelines within single request
2021-03-18 20:33:11 +01:00
Adam Rutkowski
a232488c7a Prevent remapping cachelines within single request
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-18 17:11:45 -05:00
Robert Baldyga
8020e7fd67
Merge pull request #457 from Ostrokrzew/false_stats
Fix broken 'dirty_for' stats
2021-03-18 10:24:02 +01:00
Michal Mielewczyk
841f8122d7 Update cleaning lru under metadata lock
This prevents deinitializing cleaning policy structures during IO.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-18 09:55:21 +01:00
Michał Mielewczyk
df969cde16
Merge pull request #470 from arutk/lru_fix
Parallel eviction fixes
2021-03-17 11:41:07 +01:00
Adam Rutkowski
c565c5c3f5 Add comments warning about stale request map info
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:28:02 -05:00
Adam Rutkowski
98124aa13d Add missing lookup in engine_map()
Early return from engine_map() in case of insufficient free
cachelines on the freelist is opportunistic, as both the request
map info and the freelist count are not accurate. Map info is stale,
as it is to be refreshed in engine_map() after the hash bucket
lock has been upgraded. The freelist count, on the other hand, is
subject to change asynchronously.

The implementation assumption however is that after engine_map()
the request is fully traversed (engine_map() is equivalent to
engine_lookup() followed by an attempt to map missing cachelines).
So in case of early return we must take care of repeating the
lookup.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:23:24 -05:00
Adam Rutkowski
e5fa15bdb2 Remove early return from engine_map() in case of hit
At this point the cacheline status in the request map is stale,
as the lookup was performed before upgrading the hash bucket lock.
If indeed all cachelines are mapped, this will be determined
in the main loop of engine_map().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:21:03 -05:00
Michał Mielewczyk
e3d5439d9f
Merge pull request #469 from mmichal10/fix-unmapped
Fix `ocf_engine_unmapped_count()`
2021-03-17 10:30:23 +01:00
Adam Rutkowski
736fb2efc0 Call LRU set_hot() immediately after cache insert
This ensures that a cacheline with LOOKUP_INSERTED status
is always present on the LRU list.

This fixes an ENV_BUG() caused by an attempt to remove
a cacheline from the LRU list which was not there. This
happened when a cacheline was mapped from the freelist
(LOOKUP_INSERTED) but the entire request mapping failed
and generic cleanup routines attempted to invalidate the cacheline,
including removing it from the LRU list. As engine_set_hot()
is called after successful mapping, the inserted cacheline was
not yet present on the LRU list.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-16 19:59:09 -05:00
Michal Mielewczyk
71ec08c158 Assert number of cachelines to evict
The number of cachelines to evict can't be greater than the number of unmapped
entries in the request.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 16:29:05 +01:00
Michal Mielewczyk
4e8c037d7b Fix ocf_engine_unmapped_count()
Inserted entries should be considered mapped.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 15:36:47 +01:00
Robert Baldyga
415a778c03 ocf_request: Fix use after free bug
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-15 19:41:14 +01:00
Robert Baldyga
b25ea7c8ec seq_cutoff: Fix stream promotion fastpath
Now req_count starts from 1.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-11 14:41:13 +01:00
Jan Musial
2dc36657bf Use mpool to allocate metadata_io requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
8e21aa6441 Remove not needed req allocator size table
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
b47ef2c386 Change vmalloc in metadata asynch io to kmalloc
Vmalloc is very slow in comparison to kmalloc

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
9f8802e833 Decrease memory requirements for metadata io
The magic child metadata request count (33) was determined
experimentally.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split traversal into two distinct phases: lookup()
and lru set_hot(). prepare_cachelines() now only calls
set_hot() once after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines() at the very beginning of the
procedure. If the request is a miss then map()
performs operations equivalent to lookup() supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1c6168d82b Do not unmap inserted cachelines before eviction
Unmapping cachelines previously mapped from the freelist before
eviction is a waste of resources. Also, if map does not early
exit upon the first mapping error, we can have the request fully
traversed (and partially mapped) after mapping and thus
skip the lookup in eviction.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Concurrent eviction threads are eliminated by holding the eviction
list lock for the duration of critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
0e699fc982 Refactor ocf_engine_remap
.. so that the main part, responsible strictly for mapping
given LBA to given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential after needed cachelines are evicted
and mapped to the request in a sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines rather than the lock already being held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
c40e36456b Add missing hash bucket lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
69c0f20b6e Remove global metadata lock from cleaner metadata update step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a80eea454f Add function to determine hash collisions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allow the request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where a request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or mapped-from-freelist
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
The function returns true if the cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds the
hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively (instead of locking
and checking for an exclusive lock) is not possible.

More specifically this is the flow for synchronizing
cacheline remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
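
A sketch in C of the four-step remap synchronization listed above (the lock helpers are illustrative stand-ins, not the OCF primitives):

    #include <stdbool.h>

    /* Illustrative stand-ins for the real locking primitives. */
    static bool cacheline_trylock_rd(unsigned cline) { return true; }
    static void cacheline_unlock_rd(unsigned cline) { }
    static unsigned resolve_hash(unsigned cline) { return cline % 1024; }
    static void hash_bucket_lock_wr(unsigned hash) { }
    static void hash_bucket_unlock_wr(unsigned hash) { }
    static bool cacheline_is_locked_exclusively(unsigned cline) { return true; }

    static bool try_remap_cacheline(unsigned cline)
    {
        unsigned hash;

        /* 1. acquire a cacheline (read or write) lock */
        if (!cacheline_trylock_rd(cline))
            return false;

        /* 2. resolve the hash bucket (valid only once the cacheline is locked) */
        hash = resolve_hash(cline);

        /* 3. write-lock the hash bucket */
        hash_bucket_lock_wr(hash);

        /* 4. verify we are the only holder, with no waiters */
        if (!cacheline_is_locked_exclusively(cline)) {
            hash_bucket_unlock_wr(hash);
            cacheline_unlock_rd(cline);
            return false;
        }

        /* ... safe to remap the cacheline here ... */

        hash_bucket_unlock_wr(hash);
        cacheline_unlock_rd(cline);
        return true;
    }
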
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
95d756de91 Remove ioclass min_size from public API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
f61472c3f4 Validate seq cutoff threshold value
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Slawomir Jankowski
eeda1f3f0f Unify type of dirty_for in info structs
Reformat function that calculates how long cache/core is dirty
Update `dirty_for` types in functional tests

Values stored in info struct fields (both in cache and core structs)
are unsigned 64-bit ints, but the `dirty_for` fields were unsigned 32-bit ints.

Use the existing function to transform the returned value to seconds.
Replace seconds stored in metadata with seconds.
The replacement was done if the old value of the replaced field was equal to zero.
Acquiring a monotonic high precision timestamp is potentially
slow and it makes sense to compare the field's value
to zero before calling the atomic function.

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-25 14:51:53 +01:00
Adam Rutkowski
c7fc4fff39 Change cacheline concurrency constructor params
Provide the number of cachelines as the cacheline concurrency
constructor param instead of reading it from the cache.

The purpose of this change is to improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:45 -06:00
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
Cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

Cache pointer is no longer provided as a parameter to these
functions. Cacheline concurrency context now has a pointer
to cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
Adam Rutkowski
0f34e46375 Fix error handling in cacheline concurrency init
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:36 -06:00
Adam Rutkowski
d8f25d2742 Skip cacheline concurrency global lock in fast path
The main purpose of the cacheline concurrency global lock
is to eliminate the possibility of deadlocks when
locking multiple cachelines.

The cacheline lock fast path does not need to acquire
this lock, as it only opportunistically attempts
to lock all clines without waiting. There is no risk
of deadlock, as:
 * a concurrent fast path will also only try_lock
   cachelines, releasing all acquired locks if it fails
   to immediately acquire the lock for any cacheline
 * a concurrent slow path is guaranteed to have
   precedence in lock acquisition when conditions
   for deadlock occur (both slowpath and fastpath
   have acquired some locks required by the other
   thread). This is because the fastpath thread will
   back off (release acquired locks) if any one of the
   cacheline locks is not acquired.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:28 -06:00
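
A sketch of the deadlock-free fast path described above (all-or-nothing try-lock with back-off; the helpers are illustrative):

    #include <stdbool.h>

    static bool cacheline_trylock(unsigned cline) { return true; } /* stub */
    static void cacheline_unlock(unsigned cline) { }               /* stub */

    /* Try to lock every cacheline of a request without waiting. On any
     * failure release everything acquired so far and report false, so the
     * caller can fall back to the slow path (which takes the global
     * concurrency lock and may wait). */
    static bool req_trylock_fastpath(const unsigned *clines, unsigned count)
    {
        unsigned i;

        for (i = 0; i < count; i++) {
            if (!cacheline_trylock(clines[i])) {
                /* back off: undo the locks taken so far */
                while (i--)
                    cacheline_unlock(clines[i]);
                return false;
            }
        }
        return true;
    }
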
Adam Rutkowski
c95f6358ab Get rid of status bits lock
All the status bits operations are now protected by
hash bucket locks.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
Adam Rutkowski
cd9e42f987 Properly lock hash bucket for status bits operations
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:02:50 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Michal Mielewczyk
7f3f2ad115 Evict from overflown pinned ioclass
If an ioclass is pinned but it exceeded its occupancy limit, it should be
evicted anyway.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-16 04:06:07 -05:00
Adam Rutkowski
0748f33a9d Align each global metadata lock to 64B
.. in order to move primitives intended to be accessed
concurrently into separate CPU cache lines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in the multiple read-locks scenario.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
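
A sketch of the lock-striping idea described above (instance selection by CPU id and the writer-takes-all rule are assumptions for illustration, not necessarily the OCF implementation); each instance is padded to its own cacheline, as in the 64B-alignment commit listed above:

    #include <pthread.h>

    #define GLOBAL_LOCK_INSTANCES 4
    #define CACHELINE 64

    struct padded_rwlock {
        pthread_rwlock_t lock;
    } __attribute__((aligned(CACHELINE)));

    static struct padded_rwlock global_md_lock[GLOBAL_LOCK_INSTANCES];

    static void global_md_lock_init(void)
    {
        for (int i = 0; i < GLOBAL_LOCK_INSTANCES; i++)
            pthread_rwlock_init(&global_md_lock[i].lock, NULL);
    }

    /* Readers spread across the instances (here by CPU id) and take one. */
    static void global_md_read_lock(unsigned cpu)
    {
        pthread_rwlock_rdlock(&global_md_lock[cpu % GLOBAL_LOCK_INSTANCES].lock);
    }

    static void global_md_read_unlock(unsigned cpu)
    {
        pthread_rwlock_unlock(&global_md_lock[cpu % GLOBAL_LOCK_INSTANCES].lock);
    }

    /* A writer takes all instances to exclude every reader. */
    static void global_md_write_lock(void)
    {
        for (int i = 0; i < GLOBAL_LOCK_INSTANCES; i++)
            pthread_rwlock_wrlock(&global_md_lock[i].lock);
    }

    static void global_md_write_unlock(void)
    {
        for (int i = GLOBAL_LOCK_INSTANCES - 1; i >= 0; i--)
            pthread_rwlock_unlock(&global_md_lock[i].lock);
    }
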
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
   hold the metadata shared global lock ("naked") vs the ones
   which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), a core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
Adam Rutkowski
c822c953ed Fix return status from hash bucket trylock wr
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-11 15:02:06 -06:00
Robert Baldyga
af8177d2ba
Merge pull request #458 from mmichal10/fix-cleaning
Fix updating hot cachelines cleaning list
2021-02-11 11:30:07 +01:00
Robert Baldyga
d03ea719cd
Merge pull request #451 from arutk/exact_evict_count
only request evict size equal to request unmapped count
2021-02-11 10:47:12 +01:00
Michal Mielewczyk
fa41d4fc88 Fix updating hot cachelines cleaning list
Update the cacheline's timestamp each time it is written.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-10 10:02:57 -05:00
Adam Rutkowski
9e98eec361 Only acquire read lock to verify lru elem hotness
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
c04bfa3962 Add macros to read lock eviction list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Adam Rutkowski
b4daac11c2 Track hot items on LRU list
Cachelines in the most recently used 50% of the list are not
promoted to the list head upon access. Only after a cacheline
drops into the bottom 50% is it considered a candidate for
promotion to the list head.

The purpose of this change is to reduce the overhead of
LRU list maintenance for hot cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:22:55 -06:00
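
A sketch of the hot-tracking rule described above (the bookkeeping and names are illustrative): an accessed element is moved to the list head only if it currently sits in the colder half of the list, so accesses to already-hot elements do not touch the list at all.

    #include <stdbool.h>

    struct lru_elem {
        struct lru_elem *prev, *next;
        bool hot;               /* true while the element is in the hotter half */
    };

    struct lru_list {
        struct lru_elem *head, *tail;
        unsigned num_hot;       /* kept at roughly 50% of num_total */
        unsigned num_total;
    };

    /* stubs: unlink/relink at head, and demote the coldest "hot" element
     * whenever num_hot exceeds num_total / 2 */
    static void lru_move_to_head(struct lru_list *list, struct lru_elem *e) { }
    static void lru_rebalance_hot(struct lru_list *list) { }

    static void lru_set_hot(struct lru_list *list, struct lru_elem *e)
    {
        if (e->hot) {
            /* Already in the hot half - skip the list update entirely,
             * which is the whole point of the optimization. */
            return;
        }

        lru_move_to_head(list, e);
        e->hot = true;
        list->num_hot++;
        lru_rebalance_hot(list);
    }
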
Adam Rutkowski
746b32c47d Evict from overflown partitions first
Overflown partitions now have precedence over others during
eviction, regardless of IO class priorities.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 12:51:39 -06:00
Adam Rutkowski
5538a5a95d Only request evict size equal to request unmapped count
Remove the logic for opportunistic partition overflow
reduction by evicting more cachelines than actually
required by the request being serviced.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 11:11:15 -06:00
Michal Mielewczyk
93eccc862a Reset per-partition counters when adding core
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-03 06:18:44 -05:00
Robert Baldyga
3543f5c5cc
Merge pull request #443 from rafalste/update_copyright
Update copyright statements (2021)
2021-02-03 11:59:39 +01:00
Michal Mielewczyk
3a7b55c4c2 Don't evict on hit
If a request is a hit, simply try to acquire cachelines instead of verifying
whether the target partition's size is not exceeded.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-29 17:15:32 -05:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Adam Rutkowski
012438c279 Add missing collision page lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-20 19:28:41 -06:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and the recovery procedure following a dirty
shutdown then assigns cache lines from the removed core to the new one,
effectively leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
f206c64ff6 Fine granularity lock in cache_mngt_core_deinit_attached_meta
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 03:11:46 -05:00
Michal Mielewczyk
6d962b38e9 API for cacheline write trylock
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Adam Rutkowski
93bda499c7 Add functions to lock specific hash bucket
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-12 15:42:21 -05:00
Robert Baldyga
fd88c2c3a4
Merge pull request #436 from mmichal10/metadata-assert
Metadata assert
2021-01-08 10:15:08 +01:00
Michal Mielewczyk
fcef130919 Bug on metadata access error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 18:10:44 -05:00
Michal Mielewczyk
d0225ef1cb Prevent uint32_t overflow
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 02:45:05 -05:00
Robert Baldyga
ea1fc7a6d4 seq-cutoff: Don't modify node list under read lock
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-05 19:46:37 +01:00
Robert Baldyga
dd508c595f
Merge pull request #430 from rafalste/fix_attach_load_paths
Create separate pipelines and paths for cache attach/load scenarios
2020-12-23 16:51:37 +01:00
Rafal Stefanowski
57d4aaf7c9 Return error status from ocf_freelist_init
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:43:46 +01:00
Rafal Stefanowski
d3b61e474c Remove init_mode and use metadata.is_volatile instead
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
88b97df16d Fix pipeline attach/load paths
Create separate pipelines for cache attach and load scenarios.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:49 +01:00
Robert Baldyga
6270d917f8 Initialize sequential cutoff for detached cores
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-23 14:00:54 +01:00
Rafal Stefanowski
4c42d62f97 Add a newline escape in 'invalid checksum' messages
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-22 16:34:44 +01:00
Adam Rutkowski
1b8bfb36f5 Add missing part->id initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-22 11:52:20 +01:00
Robert Baldyga
7f60d73511
Merge pull request #413 from mmichal10/occ-per-ioclass
Occupancy per ioclass
2020-12-21 23:43:54 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches after
reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
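
A sketch of the dynamic calculation described above (hypothetical config layout and helper): limits are stored as a percentage of the cache and converted to a cacheline count against the currently attached cache whenever they are needed, so they remain valid after reattaching a cache of a different size.

    #include <stdint.h>

    /* Illustrative ioclass config: limits kept as percentages, not cachelines. */
    struct ioclass_cfg {
        uint8_t min_size_pct;  /* 0..100 */
        uint8_t max_size_pct;  /* 0..100 */
    };

    /* Convert a percentage limit to a cacheline count for the cache that is
     * currently attached. */
    static uint64_t ioclass_limit_clines(const struct ioclass_cfg *cfg,
            uint64_t cache_clines, int want_max)
    {
        uint8_t pct = want_max ? cfg->max_size_pct : cfg->min_size_pct;

        return cache_clines * pct / 100;
    }
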
Michal Mielewczyk
bcfc821068 Don't calc free cachelines in per-ioclass stats
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
60680b15b2 Accessors for req->info.mapping_error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
9e11a88f2e Occupancy per ioclass
Respect the occupancy limit set for a single ioclass

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
Michal Mielewczyk
05f3c22dad Occupancy per ioclass utilities
Functions to check space availability and to manage cacheline reservations

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:18:47 -05:00
Michal Mielewczyk
600bd1d859 Access partition's metadata counters via functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
9d80882b00 Remove re_part field from struct ocf_req_info
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep the boolean information
about whether some of the request's cachelines are assigned to a wrong
partition.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e26ca30399 Track explicit number of cachelines to be reparted
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
4f228317a1 Update docs for space_managment_evict_do()
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
21e98a6dbc Evict request's target partition in regular order
Instead of evicting the target partition as the last one, respect eviction
priorities.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e999de7232 Don't roundup when evicting single part
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
718dc743c8 Enable particular ioclass eviction
If a partition's occupancy limit is reached, cachelines should be evicted from
the request's target partition.

The information whether eviction from a particular partition should be triggered
is carried as a flag by the request which triggered the eviction.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:23 -05:00
Michal Mielewczyk
e9d7290078 Extend ioclass management logging
When setting an ioclass, print info about its max size

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Mielewczyk
c643a41977 Prevent adding ioclass with the same id twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Michal Rakowski
ac2effb83d Fix whitespaces
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Adam Rutkowski
822cd7c45a Introduce metadata superblock & segment structures
Refactoring metadata superblock and segment ops code
to make it less tightly coupled.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
3eb5568608 rename segment->segment_id and segment_ops->segment
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
b074d77797 Spliting metadata implementation to match header files
Moving the metadata implementation out of the obsolete metadata_hash.c
to .c files corresponding to the function declaration header files.
This requires adding a shared header for the metadata implementation,
metadata_internal.h. Some metadata header files did not have
a corresponding .c file - in those cases it is added in this
commit.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:49 +01:00
Adam Rutkowski
02405e989d Removing 'hash' word from misspelled metadata functions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
18e35c390b Remove last references to "hash" metadata implementation
Hashed metadata is now the default and only implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
5fb4d68c7f Remove get and set from metadata raw ifc
Memcopy based metadata interface is an unnecessary
overhead and is being removed.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
6dfdd6940b Remove metadata ifc structure
At this point it is not used

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
4d97b1611f Move metadata layout field outside metadata ifc
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
b5d6cdb398 Rename metadata iface_priv to priv
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
05c0826c0f Remove metadata bits manipulation abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
98cba1603f Replace metadata ifc wrappers with direct calls to hash ifc
Metadata wrapper functions (calling iface->func) in header
files are changed to be declarations only. Hash interface
implementation functions in metadata_hash.c are given an
external linkage and are renamed to drop "hash" prefix.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d796e1f400 Remove metadata layout abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d2af0bafda Remove redundant locks from metadata flush/load all
Locks acquired in ocf_metadata_flush(/load)_all are
acquired only for the duration of queueing the async
service for flush/load; no actual metadata accesses
are performed there.

Also, flush/load all are always performed with the metadata
marked as deinitialized (metadata reference counter frozen),
so no I/O is reading or writing the metadata. The only source
of potential concurrent metadata accesses is other management
operations, which should be synchronized using the management lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
ac83c4ecd6 seq_cutoff: Allocate seq cutoff structures dynamically per core
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
0fd095046c
Merge pull request #426 from arutk/meta_no_memcpy
Remove memcpy from collision/eviction policy metadata api
2020-12-09 13:02:49 +01:00
Adam Rutkowski
fec61528e6 Remove memcpy from collision/eviction policy metadata api
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-07 17:58:44 +01:00
Robert Baldyga
7af386681d
Merge pull request #418 from robertbaldyga/inc-dep-env-headers
Remove dependency to full ocf_env.h from inc/ headers
2020-11-30 17:16:32 +01:00
Robert Baldyga
9bcafb5bfb seq_cutoff: Initialize each stream with different LBA
Initializing each stream with a unique LBA ensures there are no initial
rbtree collisions, and thus helps to avoid clustering of all the streams
into one big linked list instead of forming a performance-friendly proper
tree structure.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
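
A sketch of the initialization described above (the stream struct, count and insert helper are illustrative): giving each stream a distinct starting LBA means no two initial keys are equal, so the rbtree is not forced to chain all streams into a single equal-key list.

    #include <stdint.h>

    #define STREAMS_PER_QUEUE 256 /* illustrative count */

    struct seq_stream {
        uint64_t last_lba;   /* rbtree key */
        uint64_t bytes;
        /* rbtree node, LRU list node, ... */
    };

    static void stream_tree_insert(struct seq_stream *s) { } /* stub */

    static void seq_cutoff_init_streams(struct seq_stream *streams)
    {
        unsigned i;

        for (i = 0; i < STREAMS_PER_QUEUE; i++) {
            /* unique key per stream => no initial rbtree collisions */
            streams[i].last_lba = i;
            streams[i].bytes = 0;
            stream_tree_insert(&streams[i]);
        }
    }
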
Robert Baldyga
b8735f6517 rbtree: Fix swapping out-of-tree node with root
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-30 15:58:18 +01:00
Robert Baldyga
c8e7e0053c Remove dependency to full ocf_env.h from inc/ headers
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-19 13:07:16 +01:00
Robert Baldyga
a54d4461f0 seq_cutoff: Always continue the biggest stream
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
8b03271626 rbtree: Introduce list find callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
0ae4f4b5b2 rbtree: Add equal nodes to linked list
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:21:14 +01:00
Robert Baldyga
50c4de0495 rbtree: Make swap resistant to nodes outside the tree
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-10 13:20:45 +01:00
Robert Baldyga
694224971c rbtree: Replace spaces with tabs
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-11-09 17:32:03 +01:00
Robert Baldyga
0e3c9e740e
Merge pull request #396 from arutk/lru_refactor
Simplify and modularize LRU list code
2020-11-05 15:35:33 +01:00
root
ef08141252 Use -1 for LRU list terminator instead of collision_table_entries
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:43:41 -06:00
Adam Rutkowski
58f8a2218a Simplify and modularize LRU list code
Refactoring LRU list code to reduce code duplication and
improve testability.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-11-04 17:42:53 -06:00
Robert Baldyga
9a23787c6b
Merge pull request #406 from arutk/flush2
Propagate I/O flags (e.g. FUA) to metadata flush I/O
2020-10-06 12:49:22 +02:00
Adam Rutkowski
716edcc637 Flush cache volume after writing config metadata segments
After writing metadata configuration to disk we must
send a flush request to make sure the configuration sections
are committed to non-volatile storage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-30 10:40:03 +02:00
Adam Rutkowski
c945db356c Propagate I/O flags (e.g. FUA) to metadata flush I/O
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-29 14:46:27 +02:00
Robert Baldyga
7c29110e47
Merge pull request #398 from Open-CAS/proper-core-status
Fix logging core state on cache load
2020-09-04 19:56:16 +02:00
Robert Baldyga
990f5160eb Cleanup request map entries in error handling path
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-09-02 14:30:28 +02:00
Robert Baldyga
0dfdcb05e9 Fix core volume lifecycle management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-08-21 16:40:41 +02:00
Rafal Stefanowski
6542c2fa94 Fix memory requirement when loading cache
Load properties before checking memory needs and obtain cache line size
from context rather than from cache state.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:15:18 +02:00
Rafal Stefanowski
072c9c1902 Pass only needed values to _ocf_mngt_calculate_ram_needed() function
Rather than passing whole structs, supply
_ocf_mngt_calculate_ram_needed() with just the values it actually uses.

Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-08-19 03:09:05 +02:00
Jan Musial
2ee1e4c8dd Fix logging core state on cache load
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-07-28 14:52:15 +02:00
Robert Baldyga
d5ecdc16dd Make CRC mismatch on recovery a warning instead of error
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:49:29 +02:00
Robert Baldyga
d946124a01 Calculate CRC for runtime metadata sections only on clean load
During recovery procedure there is no guarantee that checksums
of runtime sections were flushed correctly before dirty shutdown.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:45:53 +02:00
Robert Baldyga
7d889fa1fc
Merge pull request #385 from arutk/pt_write_double_inv
Two pass write invalidate
2020-07-28 07:42:44 +02:00
Adam Rutkowski
b232f2b633 Service WA write misses in WI engine
WA writes must follow the same two-pass pattern
as WI does. This change modifies the WA engine to default to
WI in case of any miss (either partial or full), not only a
partial miss.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:36 +02:00
Adam Rutkowski
91b6098fda Two pass write invalidate
Add a second pass of write invalidate. It is necessary only
if concurrent I/O inserted the target LBAs into the cache after
the WI request performed its traversal. These LBAs might have been
written by the WI request behind the concurrent I/O's back,
resulting in making these sectors effectively invalid.
In this case we must update these sectors' metadata to
reflect this. However we won't know about this until we
traverse the request again - hence ocf_write_wi is called
again with req->wi_second_pass set to indicate that this
is the second pass (the core write should be skipped).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-07-20 17:26:35 +02:00
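
A high-level sketch of the two-pass flow described above (only the wi_second_pass flag and the ocf_write_wi name come from the commit text; the remaining structure and helpers are illustrative):

    #include <stdbool.h>

    struct wi_req {
        bool wi_second_pass;
        /* core/cache IO state ... */
    };

    static void wi_invalidate_mapped_sectors(struct wi_req *req) { } /* stub: update metadata */
    static void wi_submit_core_write(struct wi_req *req) { }         /* stub: forward write to core */
    static void ocf_write_wi(struct wi_req *req);

    static void wi_core_write_complete(struct wi_req *req)
    {
        /* Concurrent I/O may have inserted the target LBAs into the cache
         * while the core write was in flight. Run the handler once more,
         * with the flag set, to invalidate anything that appeared. */
        req->wi_second_pass = true;
        ocf_write_wi(req);
    }

    static void ocf_write_wi(struct wi_req *req)
    {
        wi_invalidate_mapped_sectors(req);

        if (req->wi_second_pass) {
            /* second pass: metadata only, the core write is skipped */
            return;
        }

        wi_submit_core_write(req);
        /* completion callback: wi_core_write_complete() */
    }
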
Robert Baldyga
ec6eae6a5f
Merge pull request #377 from arutk/fix_map
Set entry->core_id in ocf_engine_lookup_map_entry
2020-07-10 21:32:09 +02:00
Adam Rutkowski
b14312dcef Set entry->core_id in ocf_engine_lookup_map_entry
core_id should be set in this function. The fact that
it is missing might lead to incorrect behaviour e.g. in
case of promotion policy.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-06-09 13:15:50 +02:00
Adam Rutkowski
7776bd6485 WO: read clean sectors from cache
In case of a partial hit the WO engine first reads data for the entire
request address range from the core device. Then it plumbs it by fetching
dirty sectors from the cache device.

For an unidentified reason this leads to data corruption in YCSB
workload A. After flushing dirty data and re-loading the cache the
data is correct.

This change modifies the WO read handler to read clean data from the
cache. This is not optimal, as the clean sectors are now read twice
in case of a partial hit. For now it seems to be a good enough work-around
for the data corruption problem.

The symptoms, combined with the fact that this change seems to make
the problem go away, indicate that at some point the WB write handler
(and/or special I/O request handlers like discard) puts CAS in a
state where in-memory metadata wrongly indicates that a sector is
clean while in fact it is dirty, as marked in the on-disk metadata.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-05-27 12:31:53 +02:00
Robert Baldyga
1428376554
Merge pull request #371 from Ostrokrzew/load
Disable loading cache with 'force' flag
2020-05-22 13:52:16 +02:00
Slawomir Jankowski
248018b341 Change return code to valid OCF code
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
Slawomir Jankowski
544e4086ca Disable load operation with 'force' flag
Fail `ocf_mngt_cache_load` function with `OCF_ERR_INVAL`
error code when force flag is in use.
Log error message.

Closes #361

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-21 11:11:52 +02:00
Slawomir Jankowski
455d554dc1 Reject zero-sized discard IOs to core
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
da34d5047b Typo fix
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:41 +02:00
Slawomir Jankowski
f516ed62e3 Remove unused parameter
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2020-05-19 16:23:32 +02:00
Robert Baldyga
1c9312842a
Merge pull request #369 from rafalste/copyright_update
Update copyright statements
2020-05-06 12:42:10 +02:00
Michal Rakowski
e7a2f333ae Take into account bytes from incoming req for 'full' seq cutoff policy
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-05-06 11:07:26 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Michal Rakowski
67577fc1ef Force pass-through for requests bigger than cache
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-04-24 15:34:27 +02:00
Robert Baldyga
15fd53cbb0 Initialize sequential cutoff in try-add / load paths
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-23 00:41:53 +02:00
Robert Baldyga
188559416c
Merge pull request #354 from robertbaldyga/multistream-seq-cutoff
Introduce multi-stream sequential cutoff
2020-04-22 15:35:42 +02:00
Robert Baldyga
e9afb40860 Add sequential cutoff debug interface
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
93cd0615d3 Introduce multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-22 13:30:42 +02:00
Robert Baldyga
a9c36477d2 Fix deadlock on concurrent flush at the same cache
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-04-03 18:09:35 +02:00
Robert Baldyga
53dc4020e3
Merge pull request #358 from arutk/req_queue_fix
Do not reference req after adding to queue list
2020-03-27 15:04:51 +01:00
Robert Baldyga
80b410dc2e
Merge pull request #355 from arutk/flush_fixes
Fix stalls and warnings during flush
2020-03-27 14:11:34 +01:00
Adam Rutkowski
e39a76aa5e Do not reference req after adding to queue list
ocf_engine_push_req_(front|back) must not dereference the req
pointer after putting the request on the queue list and unlocking
the queue. At this point the handler interface may asynchronously
pick up the request, handle it and deallocate it.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-26 01:29:02 +01:00
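
A sketch of the hazard and the rule described above (queue and request types are illustrative): everything needed after the enqueue is read into locals before the request is published, because the handler may pick it up, handle it and free it at any moment afterwards.

    #include <pthread.h>

    struct example_req;

    struct example_queue {
        pthread_mutex_t lock;
        struct example_req *head;  /* simplified single-slot "list" */
    };

    struct example_req {
        struct example_queue *queue;
        /* ... */
    };

    static void queue_kick(struct example_queue *q) { } /* stub: wake the handler thread */

    static void push_req_back(struct example_req *req)
    {
        /* Copy out anything still needed BEFORE publishing the request. */
        struct example_queue *q = req->queue;

        pthread_mutex_lock(&q->lock);
        q->head = req;
        pthread_mutex_unlock(&q->lock);

        /* From this point the handler may already have picked up, handled
         * and freed req - only the local copy 'q' may be used. */
        queue_kick(q);
    }
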
Adam Rutkowski
b267d5d77d Reduce flush relaxation period by 1 order of magnitude
Loop now relaxes every 2^17 (131K) cycles instead of every 1M.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:37:49 +01:00
Adam Rutkowski
fd328bd0a1 Check relaxation condition in each step of flush loop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:36:43 +01:00
Adam Rutkowski
4d61d56249 Rename flushing functions' local variables for readability
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-25 23:29:16 +01:00
Robert Baldyga
cf5e13c4aa
Merge pull request #357 from arutk/parallel_flush_Fix
Queue flush portion requests to the back of IO queue
2020-03-24 23:15:11 +01:00
Robert Baldyga
332ad1dfbc Make seq cutoff policy and threshold atomic variables
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Robert Baldyga
935df23c74 Introduce red-black trees utility
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-24 18:17:15 +01:00
Adam Rutkowski
64dcae1490 Split global metadata lock critical path
There is no need to constantly hold the metadata global lock
while collecting cachelines to clean. Since we are past
freezing the dirty request counter, we know for sure that the
number of dirty cache lines will not increase. So the worst
case is when the loop relaxes and releases the lock,
a concurrent IO to CAS is serviced in WT mode, possibly
inserting and/or evicting cachelines. This does not interfere
with scanning for dirty cachelines. And the lower layer will
handle synchronization with concurrent I/O by acquiring an
asynchronous read lock on each cleaned cacheline.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:32:15 -04:00
Adam Rutkowski
3b3a49e8ea Queue flush portion requests to the back of IO queue
In the current implementation, in case of fast media, the flushing
container may starve all concurrently flushing containers
due to continuous rescheduling of the offender's requests to the
front of the I/O queue. Pushing requests to the back of the IO
queue ensures FIFO handling and removes the possibility of
starvation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 19:06:14 -04:00
Adam Rutkowski
c17beec7d4 Do not exclude used cachelines from flushing
The lower layer is prepared to handle used cachelines by
acquiring an asynchronous read lock. It is very likely that
by the time the cacheline is actually cleaned its lock
state will have changed. So checking the lock at the moment of
constructing the dirty cachelines list makes little sense.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Adam Rutkowski
61983c946c Move flush containers sort & submit outside metadata lock
Moving _ocf_mngt_flush_containers outside the global metadata
critical section. All this function does is sort core lines
and add a queue request.

This fixes stalls reported by the Linux scheduler due to
IO threads waiting on the global metadata RW semaphore for
several minutes.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-23 12:59:30 -04:00
Michal Rakowski
6f4d02f251 Fix seq_cutoff respecting in pt read
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
2edd05c812 Change get_effective_cache_mode to operate on req instead of io
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-20 18:58:10 +01:00
Michal Rakowski
d84942daa3 Typo fixes
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-17 16:36:40 +01:00
Robert Baldyga
22bdb8b004
Merge pull request #352 from robertbaldyga/update-memory-requirement-check
Update memory requirement check
2020-03-17 15:28:56 +01:00
Robert Baldyga
94b4bee6de Update memory requirement check
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-17 14:42:01 +01:00
Jan Musial
d2fe82dc85 Add memory check before engaging promotion policy
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-16 09:09:42 +01:00
Jan Musial
4eb5612832 Reorder fields in nhit_hash map to improve memory efficiency
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-03-06 12:36:46 +01:00
Robert Baldyga
108fe28ad4 Introduce core priv
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-03 15:37:12 +01:00
Robert Baldyga
ac7b5aba6b metadata: Allocate memory with ENV_MEM_NOIO flag
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 12:03:21 +01:00
Robert Baldyga
b7e59ee04a metadata: Use proper function for freeing memory
a_req is allocated using env_vmalloc() so we need to free it
using env_vfree(), not env_free().

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 10:29:15 +01:00
Adam Rutkowski
ee37391e97 Fix discard request map allocation
Discard handling splits a large request into several steps.
However the actual size of the request map for discard was
determined based on the original request size, not the step request
size, resulting in wasted memory and allocations > 4K.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 17:47:11 -05:00
Adam Rutkowski
26fd938ccf Reduce max trim request size to 512K
512K is the maximum request size for which the request map
fits into one page (4K) regardless of cacheline size.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 15:57:34 -05:00
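
A worked check of the sizing constraint above, under the assumptions that the smallest supported cacheline is 4 KiB and that one request map entry takes at most 32 bytes (both figures are assumptions for illustration): 512 KiB / 4 KiB = 128 core lines, and 128 * 32 B = 4096 B, i.e. exactly one 4 KiB page; larger cachelines only reduce the entry count.

    #include <assert.h>

    #define PAGE_SIZE       4096u
    #define MIN_CLINE_SIZE  (4u * 1024u)    /* assumed smallest cacheline: 4 KiB */
    #define MAP_ENTRY_SIZE  32u             /* assumed per-line map entry size */
    #define MAX_TRIM_SIZE   (512u * 1024u)  /* 512K, as in the commit above */

    int main(void)
    {
        unsigned entries = MAX_TRIM_SIZE / MIN_CLINE_SIZE;  /* 128 */

        /* With the assumed entry size, the map for a 512K request fits in
         * exactly one 4 KiB page; larger cachelines only shrink it. */
        assert(entries * MAP_ENTRY_SIZE <= PAGE_SIZE);
        return 0;
    }
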