Commit Graph

90 Commits

Author SHA1 Message Date
Michal Mielewczyk
9e11a88f2e Occupancy per ioclass
Respect occupancy limit set for a single ioclass

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
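
A minimal sketch of a per-ioclass occupancy check of this kind; the names (part_occupancy, occupied_clines, max_clines, part_has_space) are illustrative, not the actual OCF symbols:

    #include <stdbool.h>
    #include <stdint.h>

    struct part_occupancy {
        uint32_t occupied_clines; /* cachelines currently owned by the ioclass */
        uint32_t max_clines;      /* configured occupancy limit */
    };

    /* True when mapping `needed` more cachelines stays within the limit. */
    static bool part_has_space(const struct part_occupancy *part, uint32_t needed)
    {
        return part->occupied_clines + needed <= part->max_clines;
    }
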
Michal Mielewczyk
9d80882b00 Remove re_part field from struct ocf_req_info
Since the request carries explicit information about the number of
cachelines to be reparted, there is no need to keep a boolean flag indicating
whether some of the request's cachelines are assigned to a wrong partition

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
Michal Mielewczyk
e26ca30399 Track explicit number of cachelines to be reparted
Instead of redundantly calculating the number of cachelines to be reparted,
keep this information in the request's info

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:00:25 -05:00
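
Taken together, the two commits above replace a boolean flag with an explicit counter. A minimal sketch of the net effect; the field names are assumptions, not the exact OCF definitions:

    #include <stdint.h>

    struct ocf_req_info {
        /* bool re_part; -- removed: redundant, since re_part_no > 0
         * already says some cachelines sit in a wrong partition */
        uint32_t re_part_no; /* explicit number of cachelines to repart */
    };

    /* The boolean question is now answered from the counter. */
    static inline int req_needs_repart(const struct ocf_req_info *info)
    {
        return info->re_part_no > 0;
    }
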
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Robert Baldyga
990f5160eb Cleanup request map entries in error handling path
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-09-02 14:30:28 +02:00
Robert Baldyga
ec6eae6a5f
Merge pull request #377 from arutk/fix_map
Set entry->core_id in ocf_engine_lookup_map_entry
2020-07-10 21:32:09 +02:00
Adam Rutkowski
b14312dcef Set entry->core_id in ocf_engine_lookup_map_entry
core_id should be set in this function. The fact that
it is missing might lead to incorrect behaviour, e.g. in
the case of promotion policy.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-06-09 13:15:50 +02:00
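
A minimal sketch of the fix, with assumed types (the real ocf_engine_lookup_map_entry operates on OCF's internal structures):

    /* The lookup helper now fills in core_id alongside the collision
     * index, so consumers such as the promotion policy see a fully
     * initialized entry. */
    struct map_entry {
        unsigned int coll_idx;
        unsigned int core_id;
    };

    static void lookup_map_entry(struct map_entry *entry,
            unsigned int core_id, unsigned int coll_idx)
    {
        entry->coll_idx = coll_idx;
        entry->core_id = core_id; /* previously left unset */
    }
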
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Adam Rutkowski
e39a76aa5e Do not reference req after adding to queue list
ocf_engine_push_req_(front|back) must not dereference the req
pointer after putting the request on the queue list and unlocking
the queue. At that point the handler interface may asynchronously
pick up the request, handle it and deallocate it.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-03-26 01:29:02 +01:00
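
A sketch of the hazard, using a generic list and a pthread lock instead of the real OCF types; everything needed after the push has to be copied out of the request first (queue_kick() is a hypothetical consumer-side wakeup):

    #include <pthread.h>

    struct list_head { struct list_head *next, *prev; };

    struct queue {
        pthread_mutex_t lock;
        struct list_head io_list;
    };

    struct request {
        struct list_head list;
        struct queue *q;
    };

    static void list_add_tail(struct list_head *item, struct list_head *head)
    {
        item->prev = head->prev;
        item->next = head;
        head->prev->next = item;
        head->prev = item;
    }

    static void push_req_back(struct request *req)
    {
        struct queue *q = req->q; /* copy before the push */

        pthread_mutex_lock(&q->lock);
        list_add_tail(&req->list, &q->io_list);
        pthread_mutex_unlock(&q->lock);

        /* From here on a consumer thread may already have picked up,
         * handled and deallocated the request, so req must not be
         * dereferenced; only the local copy `q` may be used, e.g.
         * queue_kick(q); */
    }
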
Michal Rakowski
d84942daa3 Typo fixes
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-03-17 16:36:40 +01:00
Adam Rutkowski
cf5a92b527 Lock cachelines under hash bucket locks
... or when holding the exclusive global metadata lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-01 17:09:27 -04:00
Adam Rutkowski
09b68297b2 Revert "Optimize cacheline locking in ocf_engine_prepare_clines"
This change introduced a race condition. In some code paths, after a
cacheline trylock failed, the hash bucket lock needed to be upgraded
in order to obtain an asynchronous lock. During the hash bucket lock
upgrade, hash read locks were released and then hash write locks
were obtained. After the read locks were released, a concurrent
thread could obtain the hash bucket locks and modify cacheline
state. The thread upgrading the hash bucket lock would need to
repeat the traversal in order to safely continue.

This reverts commit 30f22d4f47.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-30 23:53:10 -04:00
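
A sketch of the upgrade window using pthread rwlocks (the helper functions are stubs, not OCF code): between dropping the read lock and acquiring the write lock, another thread may change cacheline state, so the traversal has to be repeated.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_rwlock_t hash_lock = PTHREAD_RWLOCK_INITIALIZER;

    static void traverse(void) { /* stub: (re)inspect the mapping */ }
    static bool trylock_cachelines(void) { return false; } /* stub: failure path */
    static void lock_cachelines_async(void) { /* stub: needs write lock */ }

    static void prepare_clines(void)
    {
        pthread_rwlock_rdlock(&hash_lock);
        traverse();
        if (trylock_cachelines()) {
            pthread_rwlock_unlock(&hash_lock);
            return;
        }

        /* "Upgrade": the read lock must be dropped first... */
        pthread_rwlock_unlock(&hash_lock);
        pthread_rwlock_wrlock(&hash_lock);

        /* ...and in that window a concurrent thread may have taken the
         * bucket lock and modified cacheline state, so the earlier
         * traversal result is stale and must be repeated here. */
        traverse();
        lock_cachelines_async();
        pthread_rwlock_unlock(&hash_lock);
    }
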
Adam Rutkowski
be3b402162 Synchronization of collision table
Adding synchronization around metadata collision segment pages.
This part of metadata is modified when a cacheline is mapped/unmapped
and when its dirty status changes.

Synchronization at the page level is required on top of cacheline
and hash bucket locks to ensure that a metadata flush always reads
a consistent state when copying an entire collision table memory
page.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 00:26:29 -04:00
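
A sketch of the scheme with invented names: writers hold a per-page lock while touching any collision entry on that page, and flush takes the same lock while copying the page, so it never observes a torn update.

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MD_PAGE_SIZE 4096

    struct md_page {
        pthread_mutex_t lock; /* initialization elided */
        uint8_t data[MD_PAGE_SIZE];
    };

    static void collision_update(struct md_page *page, size_t off, uint8_t val)
    {
        pthread_mutex_lock(&page->lock);
        page->data[off] = val; /* e.g. map/unmap or dirty bit change */
        pthread_mutex_unlock(&page->lock);
    }

    static void flush_copy_page(struct md_page *page, uint8_t *out)
    {
        pthread_mutex_lock(&page->lock);
        memcpy(out, page->data, MD_PAGE_SIZE); /* consistent snapshot */
        pthread_mutex_unlock(&page->lock);
    }
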
Adam Rutkowski
30f22d4f47 Optimize cacheline locking in ocf_engine_prepare_clines
Hash bucket read/write lock is sufficient to safely attempt
cacheline trylock/lock. This change removes the global
cacheline-lock RW semaphore and moves cacheline trylock/lock under
the hash bucket read/write lock respectively.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
5248093e1f Move common mapping and locking logic to dedicated function
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
3a70d68d38 Switch from global metadata locks to hash-bucket locks in engines
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
b39bcf86d4 Separate engine map/evict (refactoring)
This temporarily increases the amount of boilerplate code, but
it is going to be mitigated in the following commits.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Michal Rakowski
29c1c7f9e8
Merge pull request #253 from mmichal10/stats-refactor
Stats builder for ioclasses
2019-09-10 14:56:26 +02:00
Michal Mielewczyk
01ce586e6a Use API instead of raw variables to update block stats.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-10 08:01:09 -04:00
Michal Mielewczyk
51c9c516a4 Use API instead of raw variables to update req stats.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-10 08:01:09 -04:00
Adam Rutkowski
13cf871a13 Per-execution-context freelists
The global free cacheline list is divided into a set of freelists, one
per execution context. When attempting to map an address to the cache,
the freelist for the current execution context is considered first (fast
path). If the current execution context's freelist is empty (fast path
failure), the mapping function attempts to get a free cacheline from
another execution context's list (slow path).

The purpose of this change is to improve concurrency in freelist access.
It is part of the fine-granularity metadata lock implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-09 16:19:52 -04:00
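
A sketch of the fast/slow path with generic types (not the OCF freelist structures); lock and list initialization is elided:

    #include <pthread.h>

    #define NR_CTX 4
    #define INVALID_CLINE (-1)

    struct freelist {
        pthread_mutex_t lock;
        int head;  /* first free cacheline, or INVALID_CLINE */
        int *next; /* next[i] links the free cachelines */
    };

    static struct freelist freelists[NR_CTX]; /* initialization elided */

    static int freelist_pop(struct freelist *fl)
    {
        int cline;

        pthread_mutex_lock(&fl->lock);
        cline = fl->head;
        if (cline != INVALID_CLINE)
            fl->head = fl->next[cline];
        pthread_mutex_unlock(&fl->lock);

        return cline;
    }

    static int get_free_cacheline(unsigned int cur_ctx)
    {
        int cline = freelist_pop(&freelists[cur_ctx]); /* fast path */
        unsigned int i;

        /* Fast path failure: fall back to other contexts' lists. */
        for (i = 1; cline == INVALID_CLINE && i < NR_CTX; i++)
            cline = freelist_pop(&freelists[(cur_ctx + i) % NR_CTX]);

        return cline;
    }
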
Kamil Łepek
05be67a72b
Merge pull request #220 from arutk/metadata_offset_hash
New hash function formula
2019-07-29 13:03:04 +02:00
Adam Rutkowski
7184f7787c Cleanup map_info struct
1. Rename hash_key -> hash
2. Better comments for hash and coll_idx

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-07-24 13:12:05 -04:00
Jan Musial
917cbd859a Add promotion policy API and use it in I/O path
Promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-07-19 13:52:00 +02:00
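
A sketch of the underlying idea, illustrative only (not OCF's actual promotion policy implementation): admit an address to the cache only after it has been seen more than a threshold number of times.

    #include <stdbool.h>
    #include <stdint.h>

    #define TABLE_SIZE 1024

    struct promotion_filter {
        uint32_t threshold;
        uint8_t hits[TABLE_SIZE]; /* tiny hash-indexed hit counters */
    };

    static bool should_promote(struct promotion_filter *pf, uint64_t addr)
    {
        uint32_t slot = (uint32_t)(addr * 2654435761u) % TABLE_SIZE;

        if (pf->hits[slot] < UINT8_MAX)
            pf->hits[slot]++;

        /* One-hit wonders never pass the threshold, so they do not
         * pollute the cache. */
        return pf->hits[slot] >= pf->threshold;
    }
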
Adam Rutkowski
ae6164a49c Helper functions to get request start/end sector in cacheline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-06-12 12:07:02 -04:00
Michal Rakowski
d714f6235b Use rate limited logging in case of engine error
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-06 11:27:25 +02:00
Michal Rakowski
9f4536c6e3 Error codes in IO path changed to OCF-specific
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-06-05 09:10:54 +02:00
Robert Baldyga
f240f81641 Make request structure more compressed
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-06-03 19:19:29 +02:00
Adam Rutkowski
d7b3a187e4 Fix condition for setting req->info.dirty_all
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-05-31 19:10:11 -04:00
Robert Baldyga
7de56940a4 Move ocf_request from utils
ocf_request has always been a first-class citizen in OCF,
so let's place it along with other essential objects.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-27 15:51:27 +02:00
Robert Baldyga
0490dd8bd4 ocf_request: Store core handle instead of core_id
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-05-21 12:30:29 +02:00
Robert Baldyga
91e0345b78 Implement asynchronous attach, load, detach and stop
NOTE: This is still not truly asynchronous. Metadata interfaces
are still not fully asynchronous.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-03-18 15:06:57 +01:00
Michal Mielewczyk
e53944d472 Dynamic I/O queue management
- Queue allocation is now separated from starting cache.
- Queue can be created and destroyed in runtime.
- All queue ops accept queue handle instead of queue id.
- Cache stores queues as list instead of array.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-02-26 17:36:19 +01:00
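
A sketch of the resulting queue lifecycle; the names follow OCF's public queue API of that era (ocf_queue_create, ocf_queue_put, struct ocf_queue_ops), though exact signatures may differ between versions:

    #include "ocf/ocf.h"

    static void my_kick(ocf_queue_t q) { (void)q; /* schedule queue processing */ }
    static void my_stop(ocf_queue_t q) { (void)q; /* tear down scheduling */ }

    static const struct ocf_queue_ops my_queue_ops = {
        .kick = my_kick,
        .stop = my_stop,
    };

    static int start_io_queue(ocf_cache_t cache, ocf_queue_t *queue)
    {
        /* Queues are now created independently of cache start... */
        return ocf_queue_create(cache, queue, &my_queue_ops);
    }

    static void stop_io_queue(ocf_queue_t queue)
    {
        /* ...and can be destroyed at runtime via the queue handle. */
        ocf_queue_put(queue);
    }
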
xqzhou
bbb0e1bbb0 Improvement of metadata flush setting
Signed-off-by: xqzhou <xue.qiang.zhou@intel.com>
2019-02-19 05:14:06 -07:00
Michal Mielewczyk
916f46c893 Code refactoring
Removed a few dead assignments and added variable initialization.
2019-01-16 06:46:00 -05:00
Robert Baldyga
8d127e6351 Move eviction stuff to eviction/ directory
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-12-31 14:51:51 +01:00
Robert Baldyga
69947bb44b Unify core naming convention (core_obj -> core)
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-12-12 14:49:17 +01:00
Robert Baldyga
db92083432 Unify req naming convention (rq -> req)
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-12-12 13:36:34 +01:00
wolszews
0cb11b0d3c Remove OCF prefix
2018-12-06 16:04:30 +01:00
Robert Baldyga
a8e1ce8cc5 Initial commit
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2018-11-29 15:14:21 +01:00