Commit Graph

1156 Commits

Author SHA1 Message Date
Adam Rutkowski
e5fa15bdb2 Remove early return from engine_map() in case of hit
At this point the cacheline status in the request map is stale,
as the lookup was performed before upgrading the hash bucket lock.
If all cachelines are indeed mapped, this will be determined
in the main loop of engine_map().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-17 11:21:03 -05:00
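
A minimal sketch of the idea behind this change, with all types and
names illustrative rather than taken from the OCF sources: instead of
returning early on a possibly stale hit, the main loop re-checks each
entry under the freshly acquired lock.

    /* Illustrative sketch only; real OCF types and names differ. */
    enum lookup_status { LOOKUP_MISS, LOOKUP_HIT };

    struct map_entry { enum lookup_status status; };

    struct request {
            unsigned count;
            struct map_entry map[16];
    };

    static void map_cache_line(struct request *req, unsigned i)
    {
            req->map[i].status = LOOKUP_HIT; /* stand-in for real mapping */
    }

    static void engine_map(struct request *req)
    {
            /* Removed: an early "if (hit) return;" based on a lookup done
             * before the hash bucket lock was upgraded - that status can
             * be stale here. The loop below re-derives it per entry. */
            for (unsigned i = 0; i < req->count; i++) {
                    if (req->map[i].status == LOOKUP_HIT)
                            continue;
                    map_cache_line(req, i);
            }
    }
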
Michał Mielewczyk
e3d5439d9f
Merge pull request #469 from mmichal10/fix-unmapped
Fix `ocf_engine_unmapped_count()`
2021-03-17 10:30:23 +01:00
Adam Rutkowski
736fb2efc0 Call LRU set_hot() immediately after cache insert
This ensures that a cacheline with LOOKUP_INSERTED status
is always present on the LRU list.

This fixes an ENV_BUG() caused by an attempt to remove
a cacheline from LRU list which was not there. This
happened when cacheline was mapped from freelist
(LOOKUP_INSERTED) but the entire request mapping failed
and generic cleanup routines attempted to invalidate cacheline,
including removing it from the LRU list. As engine_set_hot()
is called after successful mapping, the inserted cacheline was
not yet present on the LRU list.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-16 19:59:09 -05:00
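
A hedged sketch of the invariant this commit restores (names assumed,
not the OCF implementation): marking a line LOOKUP_INSERTED and
putting it on the LRU list in one step means any later cleanup path
may remove it unconditionally.

    /* Illustrative only; not the actual OCF implementation. */
    struct cacheline { int status; int on_lru; };

    enum { LOOKUP_INSERTED = 2 };

    static void lru_set_hot(struct cacheline *cl) { cl->on_lru = 1; }
    static void lru_remove(struct cacheline *cl)  { cl->on_lru = 0; }

    static void insert_from_freelist(struct cacheline *cl)
    {
            cl->status = LOOKUP_INSERTED;
            lru_set_hot(cl); /* immediately: INSERTED implies on-LRU */
    }

    static void cleanup_failed_mapping(struct cacheline *cl)
    {
            /* Safe now: every LOOKUP_INSERTED line is on the LRU list,
             * so removal cannot trip a "not on list" bug check. */
            if (cl->status == LOOKUP_INSERTED)
                    lru_remove(cl);
    }
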
Michal Mielewczyk
71ec08c158 Assert number of cachelines to evict
The number of cachelines to evict can't be greater than the number of
unmapped entries in the request.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 16:29:05 +01:00
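
The check itself is a one-liner; a sketch using a plain assert() in
place of OCF's ENV_BUG_ON(), with illustrative names:

    #include <assert.h>

    /* Sketch: eviction may never be asked for more lines than the
     * request still has unmapped. */
    static void assert_eviction_count(unsigned to_evict, unsigned unmapped)
    {
            assert(to_evict <= unmapped);
    }
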
Michal Mielewczyk
4e8c037d7b Fix ocf_engine_unmapped_count()
Inserted entries should be considered mapped.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-16 15:36:47 +01:00
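
A hypothetical illustration of the fix: LOOKUP_INSERTED lines already
occupy a cacheline, so they must count as mapped; counting only hits
as mapped overstates the number of lines still needed.

    /* Illustrative only; real OCF status values and counting differ. */
    enum lookup_status { LOOKUP_MISS, LOOKUP_HIT, LOOKUP_INSERTED };

    static unsigned unmapped_count(const enum lookup_status *map, unsigned n)
    {
            unsigned cnt = 0;

            for (unsigned i = 0; i < n; i++) {
                    /* HIT and INSERTED both occupy a cacheline, so only
                     * true misses count as unmapped. */
                    if (map[i] == LOOKUP_MISS)
                            cnt++;
            }
            return cnt;
    }
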
Robert Baldyga
c36eefbaf0
Merge pull request #468 from robertbaldyga/fix-use-after-free
ocf_request: Fix use after free bug
2021-03-16 13:46:54 +01:00
Robert Baldyga
415a778c03 ocf_request: Fix use after free bug
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-15 19:41:14 +01:00
Robert Baldyga
c6bf894f46
Merge pull request #467 from robertbaldyga/seq-cutoff-fix-promotion-fastpath
seq_cutoff: Fix stream promotion fastpath
2021-03-12 10:08:31 +01:00
Robert Baldyga
b25ea7c8ec seq_cutoff: Fix stream promotion fastpath
Now req_count starts from 1.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-11 14:41:13 +01:00
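
The message is terse, so the following is only a guess at the shape of
the fix: a stream's req_count is initialized to 1 (the request that
opened the stream counts), so the fastpath promotion threshold fires
after the intended number of requests. The threshold name and value
are assumptions.

    #include <stdbool.h>

    #define PROMOTION_COUNT 8 /* assumed value, for illustration */

    struct stream { unsigned long req_count; };

    static void stream_open(struct stream *s)
    {
            s->req_count = 1; /* the opening request itself is counted */
    }

    static bool stream_hit_promotes(struct stream *s)
    {
            s->req_count++;
            return s->req_count >= PROMOTION_COUNT;
    }
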
Michał Mielewczyk
d28451ab4a
Merge pull request #465 from Open-CAS/mio_mpool
Use mpool allocators for requests and metadata_io
2021-03-10 16:48:02 +01:00
Jan Musial
8756fe121f Add mpools to POSIX env
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
2dc36657bf Use mpool to allocate metadata_io requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
c243ad3df0 Use mpool to allocate ocf_requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
8e21aa6441 Remove not needed req allocator size table
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
b47ef2c386 Change vmalloc in metadata async io to kmalloc
vmalloc is very slow in comparison to kmalloc.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
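
The general pattern behind the change, in Linux kernel style (this is
not the actual OCF env wrapper): prefer kmalloc(), which is fast but
can fail for large or fragmented allocations, and keep vmalloc() only
as a fallback.

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Sketch of the pattern, not the OCF env code. */
    static void *metadata_buf_alloc(size_t size)
    {
            void *buf = kmalloc(size, GFP_KERNEL);

            if (!buf)
                    buf = vmalloc(size); /* slow path for large buffers */
            return buf;
    }
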
Jan Musial
9f8802e833 Decrease memory requirements for metadata io
The magic child metadata request count (33) was determined
experimentally.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Michał Mielewczyk
d2b5de7970
Merge pull request #448 from robertbaldyga/perqueue-seq-cutoff
Per-queue multi-stream sequential cutoff
2021-03-05 14:38:21 +01:00
Michał Mielewczyk
6a93303d26
Merge pull request #433 from arutk/fast_evict_master
Parallel eviction
2021-03-05 13:47:51 +01:00
Adam Rutkowski
7927b0b74f Optimize set_hot calls in eviction path
Split traversal into two distinct phases: lookup()
and lru set_hot(). prepare_cachelines() now only calls
set_hot() once after lookup and insert are finished.
lookup() is called explicitly only once in
prepare_cachelines(), at the very beginning of the
procedure. If the request is a miss, then map()
performs operations equivalent to lookup() supplemented
by an attempt to map cachelines. Both lookup() and
set_hot() are called via traverse() from the engines
which do not attempt mapping, and thus do not call
prepare_clines().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
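
A schematic rendering of the two-phase flow described above; function
names follow the message, while the request type and all bodies are
stubs.

    #include <stdbool.h>

    struct request; /* opaque here; real fields omitted */

    /* Stubs standing in for the real OCF routines. */
    static void lookup(struct request *req) { (void)req; }
    static bool req_is_hit(struct request *req) { (void)req; return false; }
    static void map(struct request *req) { (void)req; }
    static void set_hot(struct request *req) { (void)req; }

    static void prepare_cachelines(struct request *req)
    {
            lookup(req);          /* phase 1: resolve hit/miss status */

            if (!req_is_hit(req))
                    map(req);     /* miss: lookup-equivalent + mapping */

            set_hot(req);         /* phase 2: single LRU touch at the end */
    }
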
Adam Rutkowski
1c6168d82b Do not unmap inserted cachelines before eviction
Unmapping cachelines previously mapped from the freelist before
eviction is a waste of resources. Also, if map does not exit
early upon the first mapping error, the request ends up fully
traversed (and partially mapped) after mapping, which lets
eviction skip the lookup.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing to evict (remap) cachelines while
holding hash bucket write lock instead of global metadata
write lock.

As eviction (replacement) is now tightly coupled with request,
each request uses eviction size equal to number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve its hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Races between concurrent eviction threads are eliminated by
holding the eviction list lock for the duration of critical
locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
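
A sketch of the lock ordering the message describes, with every name
assumed: the eviction list lock serializes evictors, the victim's
cacheline lock is taken first, and only then is the hash resolved and
the hash bucket write-locked, so no global metadata write lock is
needed.

    #include <stdbool.h>

    /* All types and functions below are illustrative stand-ins. */
    struct cline;
    struct cache;

    extern void evict_list_lock(struct cache *c);
    extern void evict_list_unlock(struct cache *c);
    extern struct cline *lru_tail(struct cache *c);
    extern bool trylock_cacheline(struct cline *cl);
    extern void unlock_cacheline(struct cline *cl);
    extern unsigned resolve_hash(struct cline *cl);
    extern void hash_bucket_write_lock(struct cache *c, unsigned hash);
    extern void hash_bucket_write_unlock(struct cache *c, unsigned hash);
    extern void remap(struct cline *cl);

    static bool evict_one(struct cache *cache)
    {
            struct cline *victim;
            unsigned hash;

            evict_list_lock(cache);           /* serialize evictors */
            victim = lru_tail(cache);
            if (!victim || !trylock_cacheline(victim)) {
                    evict_list_unlock(cache);
                    return false;
            }
            evict_list_unlock(cache);

            hash = resolve_hash(victim);      /* safe under cacheline lock */
            hash_bucket_write_lock(cache, hash);
            remap(victim);                    /* no global write lock held */
            hash_bucket_write_unlock(cache, hash);
            unlock_cacheline(victim);
            return true;
    }
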
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
ce2ff14150 Move request engine callbacks to req structure
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Adam Rutkowski
0e699fc982 Refactor ocf_engine_remap
.. so that the main part, responsible strictly for mapping
given LBA to given collision index, is encapsulated in
a function ocf_map_cache_line with external linkage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
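
Schematically, the refactor pulls the mapping core out so it can be
called on its own; only the name ocf_map_cache_line comes from the
message, the signature and wrapper below are assumptions.

    struct request; /* opaque; real OCF types differ */

    /* Extracted core with external linkage: map a given LBA (core line)
     * to a given collision index. Body elided in this sketch. */
    void ocf_map_cache_line(struct request *req, unsigned core_line,
                            unsigned coll_idx)
    {
            (void)req; (void)core_line; (void)coll_idx;
    }

    /* The remap wrapper keeps only the surrounding bookkeeping. */
    static void engine_remap(struct request *req, unsigned core_line,
                             unsigned coll_idx)
    {
            /* ... pick victim, update request status ... */
            ocf_map_cache_line(req, core_line, coll_idx);
    }
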
Adam Rutkowski
3bd0f6b6c4 Change sequential request detection logic
Changing sequential request detection so that a miss request is
recognized as sequential once the needed cachelines have been evicted
and mapped to the request in sequential order.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
056217d103 Rename cleaner attribute cache_line_lock to lock_cacheline
.. to make it clear that true means the cleaner must lock
cachelines, rather than that the lock is already being held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
c40e36456b Add missing hash bucket lock in cleaner
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
69c0f20b6e Remove global metadata lock from cleaner metadata update step
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a80eea454f Add function to determine hash collisions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
07d1079baa Add LOOKUP_REMAPPED status to allow iterative cacheline lock
Allowing request cacheline lock to be called on a partially
locked request. This is going to be useful for upcoming
eviction improvements, where a request will first have evicted
(LOOKUP_REMAPPED) cachelines assigned to it in a locked state,
followed by a standard request cacheline lock call in order to
lock previously inserted (LOOKUP_HIT) or mapped-from-freelist
(LOOKUP_INSERTED) cachelines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
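
An illustrative loop (names assumed) showing what the iterative lock
amounts to: the request lock routine can now run over a partially
locked request, skipping lines already locked as LOOKUP_REMAPPED
during eviction.

    /* Illustrative only; not the OCF lock implementation. */
    enum lookup_status { LOOKUP_MISS, LOOKUP_HIT, LOOKUP_INSERTED,
                         LOOKUP_REMAPPED };

    struct map_entry { enum lookup_status status; };
    struct request { unsigned count; struct map_entry map[16]; };

    extern void lock_cacheline(struct map_entry *e);

    static void lock_request_cachelines(struct request *req)
    {
            for (unsigned i = 0; i < req->count; i++) {
                    if (req->map[i].status == LOOKUP_REMAPPED)
                            continue; /* locked during eviction already */
                    lock_cacheline(&req->map[i]); /* HIT / INSERTED lines */
            }
    }
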
Adam Rutkowski
b34f5fd721 Rename LOOKUP_MAPPED to LOOKUP_INSERTED
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
Adam Rutkowski
a09587f521 Introduce ocf_cache_line_is_locked_exclusively
Function returns true if cacheline is locked (read
or write) by exactly one entity with no waiters.

This is useful for eviction. Assuming the caller holds the
hash bucket write lock, having an exclusive cacheline
lock (either read or write) allows the holder to remap the
cacheline safely. Typically during eviction the hash
bucket is unknown until resolved under the cacheline lock,
so locking the cacheline exclusively up front (instead of
locking and then checking for an exclusive lock) is not possible.

More specifically this is the flow for synchronizing
cacheline remap using ocf_cache_line_is_locked_exclusively:
1. acquire a cacheline (read or write) lock
2. resolve hash bucket
3. write-lock hash bucket
4. verify cacheline lock is exclusive

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:46 +01:00
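
A sketch of the predicate, under the assumption that the lock tracks
reader, writer, and waiter counts; the actual OCF cacheline lock is
structured differently.

    #include <stdbool.h>

    /* Assumed lock bookkeeping, for illustration only. */
    struct cl_lock {
            int readers;
            int writers;
            int waiters;
    };

    static bool cache_line_is_locked_exclusively(const struct cl_lock *l)
    {
            if (l->waiters)
                    return false;
            /* exactly one holder, read or write, and nobody waiting */
            return (l->readers + l->writers) == 1;
    }
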
Robert Baldyga
9352c881ab tests: Update sequential cutoff tests
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
3ee253cc4e Per-queue multi-stream sequential cutoff
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-04 16:38:31 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Robert Baldyga
f37d7b6a45
Merge pull request #466 from mmichal10/security_tests
Secure OCF
2021-03-04 09:19:21 +01:00
Michal Mielewczyk
3a26bc56cd pyocf: improve test logging
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 18:37:58 +01:00
Michal Mielewczyk
7f862c3080 pyocf: improve random string generator
The set of random characters may be extended with a custom list.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 18:37:58 +01:00
Michal Mielewczyk
a81be31dd4 pyocf: default range for int16
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 18:37:58 +01:00
Michal Mielewczyk
c4a2dc4cad pyocf: security tests for ioclass api
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 18:37:58 +01:00
Michal Mielewczyk
fa556247d7 pyocf: change encoding of ioclass name
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
080e13a071 pyocf: valid ranges for ioclass config values
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
06edc48717 pyocf: remove min_size from ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
95d756de91 Remove ioclass min_size from public API
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Michal Mielewczyk
5c053ad964 pyocf: security test for seq cutoff threshold
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Michal Mielewczyk
73d6fb33de pyocf: api for setting core seq cutoff threshold
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Michal Mielewczyk
9aebf57efa pyocf: valid ranges for seq cutoff threshold
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Michal Mielewczyk
f61472c3f4 Validate seq cutoff threshold value
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00
Michal Mielewczyk
d909698790 pyocf: fix acp security test
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-02-26 08:24:41 -05:00