Commit Graph

1362 Commits

Author SHA1 Message Date
Michał Mielewczyk
5b3a9606d3
Merge pull request #490 from mmichal10/check-core-uuid
Prevent adding core with the same UUID twice
2021-04-14 20:05:22 +02:00
Michał Mielewczyk
467dfed51e
Merge pull request #489 from Open-CAS/fix-removing-acp-cores
Fix removing cores from cleaning policy
2021-04-14 16:58:46 +02:00
Michal Mielewczyk
19276570b8 Prevent adding core with the same UUID twice
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-14 16:56:09 +02:00
Michał Mielewczyk
aa5de7342c
Merge pull request #488 from mmichal10/seq-cutoff-security-tests
Seq-cutoff promotion count security tests
2021-04-14 16:02:22 +02:00
Michal Mielewczyk
0bfa9ed870 pyocf: Seq-cutoff promotion count security tests
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-04-14 15:40:29 +02:00
Jan Musial
6ced60471d Additional safeguard in acp_remove_core
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-14 14:57:08 +02:00
Jan Musial
51455330ad Fix removing cores from cleaning policy
After detaching a core, if the user wanted to remove inactive cores, the
cleaning policy data would not be initialized and would bug out on the
next core add.

This check was incorrect, as the cleaning policy core metadata lifetime
is not bound to whether the core volume is open or not.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-14 14:31:51 +02:00
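A minimal standalone sketch of the idea behind this fix (struct core, cleaning_meta_deinit() and the fields are illustrative stand-ins, not the actual OCF symbols): cleaning-policy metadata must be torn down for every removed core, whether or not its volume is open.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for the real OCF structures. */
struct core {
    bool volume_open;       /* whether the backing volume is attached */
    void *cleaning_meta;    /* per-core cleaning policy metadata */
};

void cleaning_meta_deinit(struct core *core)
{
    core->cleaning_meta = NULL;     /* release policy bookkeeping */
}

/* Before: metadata was torn down only when the volume was open, so an
 * inactive (detached) core left stale cleaning state behind, which
 * broke the next core add. */
void remove_core_buggy(struct core *core)
{
    if (core->volume_open)
        cleaning_meta_deinit(core);
}

/* After: cleaning metadata lifetime is not bound to the volume being
 * open, so it is always cleaned up. */
void remove_core_fixed(struct core *core)
{
    cleaning_meta_deinit(core);
}
```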
Robert Baldyga
7dcf90ef6a
Merge pull request #487 from Open-CAS/fix-io-put
Avoid nullptr dereference in ocf_io_put
2021-04-06 14:09:09 +02:00
Robert Baldyga
73415c6349
Merge pull request #445 from arutk/probe
probe: return dirty and shutdown status despite metadata mismatch
2021-04-06 13:52:19 +02:00
Adam Rutkowski
0476511c00 probe: return dirty and shutdown status despite metadata mismatch
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Adam Rutkowski
ff4842482e Fix setting cache dirty flag during stop
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Jan Musial
67f80d813c Avoid nullptr dereference in ocf_io_put
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-04-06 13:38:34 +02:00
Robert Baldyga
215f1d925a
Merge pull request #486 from robertbaldyga/seq-cutoff-ignore-invalid
seq_cutoff: Ignore invalid streams
2021-04-02 10:15:58 +02:00
Robert Baldyga
9a3f64df28 seq_cutoff: Ignore invalid streams
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 18:46:28 +02:00
Robert Baldyga
6603e958bf
Merge pull request #485 from arutk/core_stats_fix
Fix eviction occupancy stats decrement
2021-04-01 17:19:11 +02:00
Adam Rutkowski
2fadd5a22a Fix eviction occupancy stats decrement
Eviction should decrement occupancy statistics for the
core from which a cacheline is being evicted, rather than
for the I/O target core.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-01 18:01:28 -05:00
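As a hedged illustration of the fix (the occupancy array and struct cacheline_meta below are hypothetical, not OCF's actual statistics code): the counter to decrement is selected by the owner recorded in the evicted cacheline's metadata, not by the core issuing the I/O.

```c
#include <stdint.h>

#define MAX_CORES 64

/* Hypothetical per-core occupancy counters. */
static uint64_t occupancy[MAX_CORES];

/* Each cacheline's metadata remembers which core owns its data. */
struct cacheline_meta {
    uint16_t core_id;
};

/* Buggy variant: charges the core that issued the I/O. */
void evict_decrement_buggy(struct cacheline_meta *cl, uint16_t io_core_id)
{
    (void)cl;
    occupancy[io_core_id]--;        /* wrong core's statistics */
}

/* Fixed variant: charges the core that actually owned the line. */
void evict_decrement_fixed(struct cacheline_meta *cl)
{
    occupancy[cl->core_id]--;       /* owner of the evicted line */
}
```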
Michal Mielewczyk
8e0bb49493 functional test for eviction between cores
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-01 18:01:28 -05:00
Michał Mielewczyk
fee4382553
Merge pull request #484 from robertbaldyga/cleaner-dont-check-for-valid-on-skip
cleaner: Don't check for valid if cache line is not dirty
2021-04-01 14:21:39 +02:00
Robert Baldyga
49b9b36d13 cleaner: Don't check for valid if cache line is not dirty
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-04-01 13:28:19 +02:00
Michał Mielewczyk
642794dcd7
Merge pull request #483 from arutk/repart_fix
Fix repartitioning in request refresh path
2021-03-31 11:32:28 +02:00
Adam Rutkowski
719676c444 Fix repartitioning in request refresh path
update_req_info() should include REMAPPED cachelines
in the repart stats (the number of cachelines within the
request belonging to a partition other than the target one).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-31 12:13:48 -05:00
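A sketch of the counting rule the message describes, under assumed names (enum lookup_status, struct req_line and count_repart() are illustrative, not the real update_req_info() internals):

```c
#include <stdint.h>

/* Hypothetical per-cacheline lookup states within a request. */
enum lookup_status { LOOKUP_MISS, LOOKUP_HIT, LOOKUP_REMAPPED };

struct req_line {
    enum lookup_status status;
    uint16_t part_id;       /* partition the line currently belongs to */
};

/* Count the lines that need repartitioning: mapped lines (hits as well
 * as the REMAPPED ones the fix adds) that are assigned to a partition
 * other than the request's target partition. */
uint32_t count_repart(const struct req_line *lines, uint32_t n,
                      uint16_t target_part)
{
    uint32_t re_part = 0;

    for (uint32_t i = 0; i < n; i++) {
        if (lines[i].status == LOOKUP_MISS)
            continue;
        if (lines[i].part_id != target_part)
            re_part++;
    }

    return re_part;
}
```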
Robert Baldyga
0bdc8208e9
Merge pull request #482 from arutk/cleaner
Remove dirty check from LRU cleaner getter callback
2021-03-30 21:06:42 +02:00
Adam Rutkowski
521258bcc8 Remove dirty check from LRU cleaner getter callback
This check is incorrect, as cacheline status may change
from dirty to clean at any point during cleaning, except
when the hash bucket is locked.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-30 13:10:28 -05:00
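To illustrate why an unlocked dirty check proves nothing here, a standalone sketch (a pthread mutex stands in for OCF's hash bucket lock; all names are assumed):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical cacheline guarded by its hash bucket lock. */
struct cacheline {
    pthread_mutex_t bucket_lock;
    bool dirty;
};

/* Racy: the line may go clean between this unlocked check and the
 * moment the cleaner processes it, so the answer is already stale. */
bool getter_check_racy(struct cacheline *cl)
{
    return cl->dirty;
}

/* Safe: sample and act on the status only under the bucket lock. */
bool clean_if_dirty(struct cacheline *cl)
{
    bool was_dirty;

    pthread_mutex_lock(&cl->bucket_lock);
    was_dirty = cl->dirty;
    if (was_dirty)
        cl->dirty = false;      /* stand-in for the actual flush */
    pthread_mutex_unlock(&cl->bucket_lock);

    return was_dirty;
}
```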
Michał Mielewczyk
c2e588be9d
Merge pull request #476 from mmkayPL/cacheline-alignment
Cacheline alignment
2021-03-26 12:01:55 +01:00
Michał Mielewczyk
2aa8922fea
Merge pull request #478 from Open-CAS/fix-freeing-discard-reqs
Fix freeing oversized discard requests
2021-03-26 11:33:40 +01:00
Michał Mielewczyk
a6c8cbb1ac
Merge pull request #479 from arutk/lru_fix3
Always call LRU_set_hot() under hash bucket lock
2021-03-26 11:04:59 +01:00
Michał Mielewczyk
78d7e5294f
Merge pull request #480 from arutk/lru_fix4
Clear hot flag when removing node from LRU list
2021-03-26 11:04:47 +01:00
Adam Rutkowski
b87008dc67 Clear hot flag when removing node from LRU list
This isn't strictly required in the current implementation, as
nodes are always re-initialized before being inserted into the LRU
list. However, it seems sensible to zero the flag anyway to make
the code easier to reason about.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 10:25:03 -05:00
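The change amounts to zeroing one flag on removal; a minimal sketch with an assumed node layout (struct lru_node and lru_remove() are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical doubly-linked LRU node. */
struct lru_node {
    struct lru_node *prev, *next;
    bool hot;                   /* set when the entry is promoted */
};

void lru_remove(struct lru_node *node)
{
    if (node->prev)
        node->prev->next = node->next;
    if (node->next)
        node->next->prev = node->prev;
    node->prev = node->next = NULL;

    /* Not strictly required while nodes are re-initialized on insert,
     * but zeroing the flag keeps a detached node easy to reason about. */
    node->hot = false;
}
```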
Michał Mielewczyk
5eab515499
Merge pull request #481 from arutk/map_no_early_exit
Remove early return from engine_map()
2021-03-26 11:04:10 +01:00
Adam Rutkowski
9486b7796f Remove early return from engine_map()
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:

1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common
rather than being a corner case dependent on racy timings between
threads.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-26 07:40:24 -05:00
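A control-flow sketch of the change (free_cachelines_estimate() and map_from_freelist() are invented stand-ins for the real engine internals):

```c
/* Hypothetical request-mapping outline. */
struct request { int unmapped; };

static int free_cachelines_estimate(void) { return 0; }     /* stub */
static void map_from_freelist(struct request *req) { (void)req; }

/* Before: bailed out early on a shortage estimate that was based on
 * stale request info and therefore inaccurate. */
void engine_map_early_exit(struct request *req)
{
    if (free_cachelines_estimate() < req->unmapped)
        return;
    map_from_freelist(req);
}

/* After: always attempt the mapping; partially mapping the request
 * from the freelist is a normal outcome handled later, not an error. */
void engine_map(struct request *req)
{
    map_from_freelist(req);
}
```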
Kozlowski Mateusz
3f9af8bd82 Update pyocf types to new field order
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e054949cbb Metadata updater mutex alignment
Avoids thrashing of (mostly) static and often-used entries

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e391fc2c13 Queue alignment
Metadata reshuffling

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
642527d72a ref count alignment
Move ref counts to their own cacheline - otherwise they pollute nearby
fields, causing false sharing and a lot of cacheline bouncing between
physical CPUs.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
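A minimal sketch of the false-sharing problem and the layout fix, using C11 alignas on an invented struct (the real OCF structures differ):

```c
#include <stdalign.h>
#include <stdatomic.h>

/* Packed layout: the constantly written refcount shares a cacheline
 * with read-mostly fields, so every increment invalidates that line
 * in the other CPUs' caches (false sharing). */
struct cache_state_packed {
    void *config;               /* read-mostly */
    atomic_int refcount;        /* written on every I/O */
    void *topology;             /* read-mostly, yet keeps bouncing */
};

/* Aligned layout: the refcount gets its own 64-byte cacheline, so
 * writes to it no longer evict the neighbouring read-mostly data. */
struct cache_state_aligned {
    void *config;
    void *topology;

    alignas(64) atomic_int refcount;
};
```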
Kozlowski Mateusz
fd2fd335a0 ocf_cache alignment
Group static fields together, while frequently changing ones get their
own cacheline, as do some rarely used or less important fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
33f29e43bc Aligned ocf_volume
Force cacheline alignment to avoid cacheline thrashing on static fields.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Adam Rutkowski
a3f2a214b6 Always call LRU_set_hot() under hash bucket lock
set_hot() depends on the cacheline metadata status to determine
on which list the element is located (dirty vs clean list).
Thus at least the hash bucket lock is required when calling
set_hot().

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-25 18:50:13 -05:00
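A sketch of the required calling convention (a pthread mutex stands in for the hash bucket lock; all names are assumed):

```c
#include <pthread.h>
#include <stdbool.h>

struct lru_elem {
    bool dirty;                     /* decides which list it lives on */
    pthread_mutex_t *bucket_lock;   /* hash bucket lock for this line */
};

static void move_to_head(struct lru_elem *elem, bool dirty_list)
{
    (void)elem; (void)dirty_list;   /* stand-in for the list splice */
}

/* set_hot() picks the dirty or clean list from the cacheline status,
 * so the status must not change mid-call: hold the bucket lock. */
void set_hot_locked(struct lru_elem *elem)
{
    pthread_mutex_lock(elem->bucket_lock);
    move_to_head(elem, elem->dirty);
    pthread_mutex_unlock(elem->bucket_lock);
}
```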
Jan Musial
9c070c1d25 Fix freeing oversized discard requests
When issuing a discard request over 512 KiB, OCF would trim the request
and overwrite req->core_line_count, which would then cause the request
to be freed from the wrong mpool.

This is now fixed by saving the core_line_count that was set when the
request was allocated in a field that is never overwritten. This
alloc_core_line_count is then used to free the request from the correct
mpool.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-25 15:16:57 +01:00
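A hedged sketch of the mechanism (mpool_alloc_for()/mpool_free_for() and the field names are hypothetical, not OCF's actual mpool API): the size class used to allocate the request is recorded once and consulted again at free time.

```c
#include <stdint.h>

struct request;

/* Hypothetical size-class pool ("mpool") API. */
struct request *mpool_alloc_for(uint32_t core_lines);
void mpool_free_for(uint32_t core_lines, struct request *req);

struct request {
    uint32_t core_line_count;       /* may shrink when an oversized
                                       discard request is trimmed */
    uint32_t alloc_core_line_count; /* frozen at allocation time */
};

struct request *req_alloc(uint32_t core_lines)
{
    struct request *req = mpool_alloc_for(core_lines);

    if (!req)
        return NULL;
    req->core_line_count = core_lines;
    req->alloc_core_line_count = core_lines;    /* never overwritten */
    return req;
}

void req_free(struct request *req)
{
    /* Free into the pool the request actually came from, even if
     * core_line_count changed while the discard was being trimmed. */
    mpool_free_for(req->alloc_core_line_count, req);
}
```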
Robert Baldyga
b12e124954
Merge pull request #477 from mmkayPL/per-queue-seq-cutoff-locks
Use read/write locks in queue sequential cutoffs
2021-03-23 13:54:27 +01:00
Kozlowski Mateusz
365f4f0d19 Use read/write locks in queue sequential cutoffs
If a user thread is preempted during a tree/list update and another IO
is issued on the same CPU, the structure will be in an undefined state.
This may result in hung tasks, if the tree stops being a tree and a
loop exists - tree search functions won't be able to terminate; or in
panics, if a NULL value suddenly appears in the preempted thread after
a null-check has already been done.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-23 09:54:56 +01:00
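A standalone sketch of the locking scheme with POSIX read/write locks (tree_search()/tree_update() and the struct are invented; OCF uses its own env abstraction):

```c
#include <pthread.h>

/* Hypothetical stream-tracking tree operations. */
void *tree_search(void *tree, unsigned long lba);
void tree_update(void *tree, void *stream);

/* Hypothetical per-queue sequential cutoff state. */
struct seq_cutoff {
    pthread_rwlock_t lock;
    void *stream_tree;          /* tree of tracked streams */
};

/* Lookups may run concurrently under the read lock. */
void *stream_lookup(struct seq_cutoff *sc, unsigned long lba)
{
    void *stream;

    pthread_rwlock_rdlock(&sc->lock);
    stream = tree_search(sc->stream_tree, lba);
    pthread_rwlock_unlock(&sc->lock);

    return stream;
}

/* Updates take the write lock, so a preempted updater can no longer
 * expose a half-rewired tree to an I/O issued on the same CPU. */
void stream_update(struct seq_cutoff *sc, void *stream)
{
    pthread_rwlock_wrlock(&sc->lock);
    tree_update(sc->stream_tree, stream);
    pthread_rwlock_unlock(&sc->lock);
}
```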
Robert Baldyga
e321497ecc
Merge pull request #475 from mmichal10/fix-ioclass-config
Return error when modifying default ioclass rule
2021-03-19 18:15:21 +01:00
Michal Mielewczyk
92a5ddd524 ut framework: don't mock env functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:08:27 +01:00
Michal Mielewczyk
0d3f3cde14 Return error when modifying default ioclass rule
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-19 16:06:23 +01:00
Robert Baldyga
ca4c2238b4
Merge pull request #474 from robertbaldyga/seq-cutoff-max-threshold-fix
seq_cutoff: Fix max threshold value
2021-03-19 11:58:40 +01:00
Robert Baldyga
5a765c6127 seq_cutoff: Fix max threshold value
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 11:40:22 +01:00
Robert Baldyga
5f77db5c85
Merge pull request #473 from robertbaldyga/deallocate-request-properly
ocf_request: Deallocate request with separately allocated map properly
2021-03-19 10:31:07 +01:00
Robert Baldyga
87244c04d7
Merge pull request #472 from mmichal10/lock-on-setting-hot
Update cleaning lru under metadata lock
2021-03-19 09:54:32 +01:00
Robert Baldyga
74d61785e9 ocf_request: Deallocate request with separately allocated map properly
When allocation of a request with a map fails, we fall back to
allocating a request with no map, and then allocate the map separately.
During request put we need to distinguish between those two cases in
order to deallocate the request properly.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-03-19 09:49:29 +01:00
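A minimal sketch of the two allocation shapes and the matching put path (field and function names are assumed, not OCF's actual ocf_request code):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical request header; the map normally trails the header in
 * a single allocation, but may get its own allocation on fallback. */
struct request {
    void *map;
    bool map_allocated_separately;
};

struct request *req_alloc(size_t map_size)
{
    struct request *req = malloc(sizeof(*req) + map_size);

    if (req) {
        req->map = req + 1;     /* map embedded in the same block */
        req->map_allocated_separately = false;
        return req;
    }

    /* Fallback: allocate a bare header, then the map separately. */
    req = malloc(sizeof(*req));
    if (!req)
        return NULL;
    req->map = malloc(map_size);
    if (!req->map) {
        free(req);
        return NULL;
    }
    req->map_allocated_separately = true;
    return req;
}

void req_put(struct request *req)
{
    /* The fix: remember which case this request is, and free both
     * allocations in the fallback case. */
    if (req->map_allocated_separately)
        free(req->map);
    free(req);
}
```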
Robert Baldyga
296e98e39c
Merge pull request #471 from arutk/lru_fix_2
Prevent remapping cachelines within single request
2021-03-18 20:33:11 +01:00