Commit Graph

416 Commits

Robert Baldyga
108fe28ad4 Introduce core priv
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-03-03 15:37:12 +01:00
Robert Baldyga
ac7b5aba6b metadata: Allocate memory with ENV_MEM_NOIO flag
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 12:03:21 +01:00
Robert Baldyga
b7e59ee04a metadata: Use proper function for freeing memory
a_req is allocated using env_vmalloc() so we need to free it
using env_vfree(), not env_free().

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 10:29:15 +01:00
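A minimal sketch of the pairing rule this fix restores; everything except env_vmalloc()/env_vfree()/env_free() is illustrative:

    /* Memory obtained from env_vmalloc() must be released with env_vfree(). */
    void *a_req = env_vmalloc(size);
    if (!a_req)
            return -OCF_ERR_NO_MEM;

    /* ... use the buffer ... */

    env_vfree(a_req);      /* correct: matches env_vmalloc() */
    /* env_free(a_req) would be wrong - it pairs with env_malloc() */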
Adam Rutkowski
ee37391e97 Fix discard request map allocation
Discard handling splits a large request into several steps.
However, the actual size of the request map for discard was
determined based on the original request size, not the step request
size, resulting in wasted memory and allocations > 4K.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 17:47:11 -05:00
Adam Rutkowski
26fd938ccf Reduce max trim request size to 512K
512K is the maximum request size for which request map
fits into one page (4K) regardless of cacheline size.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-02-10 15:57:34 -05:00
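The arithmetic behind the limit, under assumed figures (the smallest supported cache line of 4 KiB and a per-cacheline map entry of roughly 32 bytes - both are assumptions for illustration): a 512 KiB request covers at most 512 KiB / 4 KiB = 128 cache lines, and 128 entries * 32 B = 4096 B, i.e. exactly one 4 KiB page. A larger request could push the map beyond a single page for the smallest cache line size.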
Michał Wysoczański
fabd41250b
Merge pull request #342 from mmichal10/fix-metadata-flush
Fix metadata flush
2020-01-24 17:59:58 +01:00
Michal Mielewczyk
d9c987e068 Flush metadata after changing status of each sector
In case of cleaning, metadata used to be flushed only when the status of the whole
cache line changed to clean.

This patch ensures that a metadata flush is triggered after changing the status of
each single sector in a cache line.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Michal Mielewczyk
2f10365086 Flush metadata after setting dirty status of each sector.
After a second dirty write to a cache line that was already dirty, the metadata flush
was not triggered. In case of a dirty shutdown, this led to data corruption.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Robert Baldyga
7d82f20614 Remove unused include
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Robert Baldyga
4d25bbe4b3 metadata: Relax memory allocation requirements
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Jan Musial
adc52ba71e Detect cache devices that would overflow ocf_cacheline_t
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-01-21 15:29:24 +01:00
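A sketch of the kind of guard this implies, assuming ocf_cacheline_t is a 32-bit type (the helper name and exact check are illustrative):

    #include <stdint.h>

    /* Reject a cache device whose cacheline count would overflow ocf_cacheline_t. */
    static int cacheline_count_fits(uint64_t device_bytes, uint64_t line_bytes)
    {
            uint64_t lines = device_bytes / line_bytes;

            return lines <= UINT32_MAX;  /* if not, fail e.g. with -OCF_ERR_INVAL_CACHE_DEV */
    }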
Robert Baldyga
d1c2fc0c67 discard: Make max_length aligned to sector size
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-21 12:44:04 +01:00
Michal Rakowski
65756a8160 Moved setting ctx for temporary cache object before metadata init
This way debug prints during the metadata init phase won't cause a crash
(the temporary cache object does not have a proper ctx set, and hence
has no logger object).

Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2020-01-16 21:53:40 +01:00
Robert Baldyga
ce28c71475
Merge pull request #326 from Ostrokrzew/upstream
Change error code
2020-01-10 13:38:18 +01:00
Ostrokrzew
3fca309e51 Change error code and add new
Change 'OCF_ERR_START_CACHE_FAIL' to 'OCF_ERR_NO_MEM' when CAS fails due to lack of memory on the device.
Add a new error code, 'OCF_ERR_INVAL_CACHE_DEV', for the case when a device doesn't satisfy CAS requirements.
Use 'OCF_ERR_INVAL_CACHE_DEV' in the code.
Update the error code match in the test.
Closes issue #317

Signed-off-by: Ostrokrzew <slawomir.jankowski@intel.com>
2020-01-02 09:34:24 +01:00
Jan Musial
5eca548e22 Make sure NHIT won't attempt to take the same semaphore twice
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Jan Musial
4536a51f59 Fix init of nhit + code styling
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-31 14:16:18 +01:00
Michal Mielewczyk
6ac3195823 Keep stop pipeline in struct cache.
To eliminate the possibility of an allocation error in cache stop, the pipeline is
allocated on attach.

Due to this change, the only possible non-zero status of ocf_mngt_cache_stop() is
just a warning and the cache is always stopped after executing it.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-12-27 18:54:15 -05:00
Adam Rutkowski
92b36c3484 Change DIV_ROUND_UP to OCF_DIV_ROUND_UP
This fixes compilation in SPDK env

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-12-28 18:24:12 -05:00
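The usual shape of such a macro; OCF's own definition may differ in detail, this is the conventional kernel-style form:

    /* Rounding-up integer division, namespaced so it does not depend on the
     * environment (e.g. SPDK) providing DIV_ROUND_UP. */
    #define OCF_DIV_ROUND_UP(x, y)  (((x) + (y) - 1) / (y))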
Robert Baldyga
d1249e5238 Limit number of concurrent io submitted by metadata_io_i_asynch()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 16:50:41 +01:00
Robert Baldyga
e06832426d cleaner: Retrieve core object properly
The cleaner doesn't set the core object in the req, as it works in the domain of cache
lines, which may belong to various cores. In this case it should retrieve the
core object not from the req, but from the map instead.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 14:44:04 +01:00
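A hedged sketch of the idea (field and helper names below are assumptions, not the exact OCF ones):

    /* In the cleaner, req->core is not set, because one request may touch
     * cache lines owned by different cores. Take the core from the per-line
     * mapping entry instead. */
    struct ocf_map_info *entry = &req->map[line_idx];
    ocf_core_t core = core_from_id(cache, entry->core_id);   /* not req->core */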
Michal Rakowski
a074026773
Merge pull request #329 from robertbaldyga/fix-cleaner-queue-change-before-put
Put a queue before calling cleaner completion callback
2019-12-19 11:33:27 +01:00
Robert Baldyga
32fd371583 Put a queue before calling cleaner completion callback
This ensures that the cleaner queue will not be changed
by starting another cleaning iteration before we put it.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-18 20:49:56 +01:00
Michal Mielewczyk
fb95f048fd Revert "Limit number of concurrent io submitted by metadata_io_i_asynch()"
Starting big caches hangs.

This reverts commit c2c9307b9b.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-12-17 10:29:44 -05:00
Ostrokrzew
fc1847cf55 Add reschedule to metadata hash init
Signed-off-by: Ostrokrzew <slawomir.jankowski@intel.com>
2019-12-17 09:56:45 +01:00
Adam Rutkowski
57e6b96791
Merge pull request #323 from arutk/remove_fallthrough
Remove switch/case fallthrough
2019-12-12 11:49:08 +01:00
Adam Rutkowski
867e06ebf1 Remove switch/case fallthrough
This construction breaks compilation on the latest kernels.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-12-12 13:48:47 -05:00
Robert Baldyga
c2c9307b9b Limit number of concurrent io submitted by metadata_io_i_asynch()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-11 14:57:20 +01:00
Jan Musial
94d27c0f43 Fix counting occupancy on WB write insert error
During error handling in WB write insert we didn't invalidate the affected
cache lines. Because of that the cache stopped properly (as it's
supposed to), but cache lines were marked as inserted, which caused
occupancy stats to increase even though nothing was successfully
inserted.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-12-09 11:01:28 +01:00
Michal Mielewczyk
b61843d7df Reset initial ioclass stats value when retrieving.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-10-29 09:53:36 -04:00
Adam Rutkowski
6423c48dfe cacheline concurrency: move allocation outside critical section
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:41:01 -04:00
Adam Rutkowski
07b1f0c064 Replace global concurrency rw spinlock with rw semaphore
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-18 18:19:05 -04:00
Michal Mielewczyk
db06783d56 Fix cache stats updating.
When a single request was issued to the cache, the stats updating function was called
with 0 bytes as the value to update. When many requests were issued to the cache,
stats were updated only in case of error.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-10-08 08:49:24 -04:00
Robert Baldyga
f51f7f7e1e Update stats before calling completion callback
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-10-02 18:49:59 +02:00
Adam Rutkowski
94a0b5392b Fix hash bucket iterator
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-02 19:03:52 -04:00
Adam Rutkowski
cf5a92b527 Lock cachelines under hash bucket locks
.. or when holding exclusive global metadata lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-10-01 17:09:27 -04:00
Adam Rutkowski
5113542c7f
Merge pull request #297 from mmichal10/pp-params-in-sb
Store PP config params in cache superblock.
2019-10-01 12:32:15 +02:00
Michal Mielewczyk
e16d4e6dda Initialize promotion policy on cache attach.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-10-01 06:11:53 -04:00
Michał Mielewczyk
ee3f2205fd
Merge pull request #300 from arutk/revert_cl_lock_opt
Revert "Optimize cacheline locking in ocf_engine_prepare_clines"
2019-10-01 11:50:20 +02:00
Michal Rakowski
fc971b9961 Add missing env wrapper
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-10-01 09:05:59 +02:00
Adam Rutkowski
09b68297b2 Revert "Optimize cacheline locking in ocf_engine_prepare_clines"
This change introduced a race condition. In some code paths, after a
cacheline trylock failed, the hash bucket lock needed to be upgraded
in order to obtain an asynchronous lock. During the hash bucket lock
upgrade, hash read locks were released, followed by obtaining
hash write locks. After the read locks were released, a concurrent
thread could obtain the hash bucket locks and modify the cacheline
state. The thread upgrading the hash bucket lock would need to
repeat the traversal in order to safely continue.

This reverts commit 30f22d4f47.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-30 23:53:10 -04:00
Adam Rutkowski
944d70288e
Merge pull request #296 from micrakow/sec_rev_fixes
Env fixes & more
2019-09-30 17:42:40 +02:00
Michal Rakowski
325994074e env: change env_strncmp to take 4 args
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:26:47 +02:00
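A sketch of the new wrapper shape (parameter names assumed); taking both lengths lets the env layer bound the comparison by the shorter buffer:

    #include <stddef.h>
    #include <string.h>

    static inline int env_strncmp(const char *s1, size_t len1,
            const char *s2, size_t len2)
    {
            return strncmp(s1, s2, len1 < len2 ? len1 : len2);
    }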
Michal Rakowski
2575be83fa Error handling for env_rwsem_init added
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:16:37 +02:00
Michal Rakowski
b78557a2cc Change env_spinlock_init to non-void function
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
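The point of the change, sketched with a hypothetical body: the wrapper now reports initialization failure so callers can propagate it instead of assuming success:

    /* Before: void env_spinlock_init(env_spinlock *l);
     * After:  int return value, so init failures are no longer silently ignored. */
    static inline int env_spinlock_init(env_spinlock *l)
    {
            spin_lock_init(&l->lock);
            return 0;   /* non-zero in environments where spinlock init can fail */
    }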
Michal Rakowski
8426d662cb Changed err handling to BUG_ON in case of refcnt_int fail during cache init.
Since we do not expect that incrementing the cache's reference counter
during cache init will fail under any condition, it can be changed
to an assert instead of error handling.

Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Michal Rakowski
9504cb044d discard: Added missing io_put in case of error
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Michal Rakowski
f1cfc800e2 Add check for part_id in ocf_stats_collect_part_*
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Michal Rakowski
888ac74e32 Removed redundant include
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-30 17:15:27 +02:00
Slawomir_Jankowski
cdf0caf704 **ocf_mngt.h**: In core name, change
pointer type to an array which is 32 characters long;
**core.py**: Add missing import and modify class' field type
to keep consistency;
**ocf_mngt_core**: Remove local variable 'name';
remove env_vmalloc for 'name' - it is no longer needed;
remove initialization of 'name' - as above;
remove env_vfree for context->cfg.name - the variable is no longer allocated
in memory;
check if cfg->name exists;
change label in goto from the deleted err_name to the closest err_pipeline.

Signed-off-by: Slawomir_Jankowski <slawomir.jankowski@intel.com>
2019-09-30 15:55:33 +02:00
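The gist of the ocf_mngt.h part, sketched (the exact struct layout and constant name are assumptions):

    #define OCF_CORE_NAME_SIZE 32

    struct ocf_mngt_core_config {
            /* was: const char *name - now an embedded, fixed-size buffer,
             * so the separate env_vmalloc()/env_vfree() of the name goes away */
            char name[OCF_CORE_NAME_SIZE];
            /* ... remaining config fields ... */
    };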
Michal Mielewczyk
dfc55538ce Store PP config params in cache superblock.
It allows modifying and retrieving particular PP params even if the PP isn't active,
and storing values between cache stop and load.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-27 10:55:58 -04:00
Robert Baldyga
96a1fdb17e Deinitialize locks on cache stop instead of put
Cache lock waiters hold a cache refcount. Because of that,
if there were some waiters, deinitialization of the cache
lock on the last put never happened and putting the
cache was effectively impossible.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-09-27 12:02:04 +02:00
Robert Bałdyga
75569ecaba
Merge pull request #284 from mmichal10/prevent-cache-name-duplicate
Prevent cache name duplicate
2019-09-25 15:36:08 +02:00
Michal Mielewczyk
6c076a7c07 Remove set_cache_name() from public API.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-25 09:21:42 -04:00
Michał Mielewczyk
9e707d81b5
Merge pull request #287 from micrakow/nhit_fixe
Nhit fixes
2019-09-25 15:11:47 +02:00
Michal Rakowski
5efa5ac414 nhit PP: Prevent setting nhit policy again if it was already set
2019-09-25 14:59:23 +02:00
Michal Rakowski
547306efea nhit PP: change trigger_threshold to percent value
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-25 14:59:23 +02:00
Michał Mielewczyk
9613c325fc
Merge pull request #285 from arutk/fix_io_class_configure
Fix error handling in IO ocf_mngt_cache_io_classes_configure
2019-09-25 14:42:06 +02:00
Michał Wysoczański
aafe870e44
Merge pull request #280 from arutk/metadata_sync_2
Additional metadata synchronization
2019-09-25 14:11:59 +02:00
Adam Rutkowski
a934b43aec Add missing error handling in hash bucket locks initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
6de280283a Fix hash_table_entries param type in ocf_metadata_concurrency_attached_init
The number of hash buckets is a 32-bit integer.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
937b010ef6 Synchronize access to cleaner shared structures
Cleaning policy callbacks are typically called with hash buckets or
cache lines locked. However, cleaning policies maintain structures
which are shared across multiple cache lines (e.g. the ALRU list).
Additional synchronization is required for these structures to
maintain integrity.

ACP already implements hash bucket locks. This patch is adding
ALRU list mutex.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
5e28474322 Adding partition locks
Adding locks to assure partition list consistency. The partition
list is typically modified under a cacheline or hash bucket lock.
This is not sufficient synchronization, as adding/removing a cacheline
from the partition list affects the state of neighbouring cachelines as well.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:17 -04:00
Adam Rutkowski
41d3542952 Lock collision page in metadata flush
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 14:26:15 -04:00
Michal Mielewczyk
f461f3c62e Extend probe information with cache name.
Since ocf requires loading a cache with the same name as it was stopped with, it should
also allow reading the name from metadata.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-25 07:59:50 -04:00
Michal Mielewczyk
39c5819a51 Set cache name before adding it to context list.
This change allows checking whether the specified cache name is unique. To prevent
adding a cache instance with the same name, the context lock is held until the name
is set.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-25 07:59:50 -04:00
Michal Mielewczyk
c04ea4898f Check if loaded cache name is valid.
When loading a cache, its name should be the same as the one it was stopped with.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-25 05:06:37 -04:00
Michal Rakowski
23aba6a9f3 nhit PP: Added info about setting nhit params
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-25 11:04:51 +02:00
Adam Rutkowski
be3b402162 Synchronization of collision table
Adding synchronization around metadata collision segment pages.
This part of metadata is modified when cacheline is mapped/unmapped
and when dirty status changes.

Synchronization on page level is required on top of cacheline
and hash bucket locks to assure metadata flush always reads
consistent state when copying entire collision table memory
page.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-25 00:26:29 -04:00
Adam Rutkowski
5684b53d9b Adding collision table locks
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-24 22:59:35 -04:00
Adam Rutkowski
727a6d2e4b Fix error handling in IO ocf_mngt_cache_io_classes_configure
Modifying _ocf_mngt_io_class_configure and _ocf_mngt_io_class_remove
to never return the -OCF_ERR_IO_CLASS_NOT_EXIST error code. This
return code was ignored by the caller anyway. In _ocf_mngt_io_class_remove,
-OCF_ERR_IO_CLASS_NOT_EXIST indicated the IO class is already
removed, which is not an error. In _ocf_mngt_io_class_configure,
-OCF_ERR_IO_CLASS_NOT_EXIST indicated an empty IO class name, which
is actually invalid input. This change made it possible to remove
the erroneous error handling for the -OCF_ERR_IO_CLASS_NOT_EXIST case in
ocf_mngt_cache_io_classes_configure.

This change fixes IO class configuration.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-24 21:51:43 -04:00
Michał Wysoczański
07aa29fc56
Merge pull request #283 from rafalste/fix_nhit_param_value
Accept max values of nhit PP as valid.
2019-09-24 10:46:38 +02:00
Rafal Stefanowski
9cb5c60c80 Accept max values of nhit PP as valid.
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2019-09-23 15:58:57 +02:00
Robert Baldyga
b8f5f135fe ocf_async_lock: Replace mutex with spinlocks
The ocf_async_lock may be used in atomic context, thus we need
to replace synchronization primitives with non-sleeping variants.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-09-23 15:54:25 +02:00
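Why the swap matters, sketched with illustrative names: a mutex may sleep, which is not allowed in atomic context (e.g. while holding another spinlock or in interrupt context), so the internal lock must be a non-sleeping primitive:

    struct ocf_async_lock {
            env_spinlock lock;           /* was an env_mutex; spinlocks never sleep */
            struct list_head waiters;    /* waiters queued for asynchronous wakeup */
            /* ... */
    };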
Robert Baldyga
dd0a39eea7 Create new volume instead of using non-allocated one
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-09-23 14:01:17 +02:00
Adam Rutkowski
30f22d4f47 Optimize cacheline locking in ocf_engine_prepare_clines
Hash bucket read/write lock is sufficient to safely attempt
cacheline trylock/lock. This change removes the cacheline lock
global RW semaphore and moves cacheline trylock/lock under
the hash bucket read/write lock respectively.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
5248093e1f Move common mapping and locking logic to dedicated function
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
d2bd807e49 Remove calls to OCF_METADATA_(UN)LOCK_WR(RD)
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
2333d837fb Add single hash bucket lock interface
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
3a70d68d38 Switch from global metadata locks to hash-bucket locks in engines
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
b39bcf86d4 Separate engine map/evict (refactoring)
This temporarily increases the amount of boilerplate code, but
this is going to be mitigated in the following commits.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
d91012f4b4 Introduce hash bucket locks
There is one RW lock per hash bucket. A write lock is required
to map a cacheline; a read lock is sufficient for traversing.
Hash bucket locks are always acquired under the global metadata
read lock. This assures mutual exclusion with the eviction and
management paths, where the global metadata write lock is held.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
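A sketch of the locking order described above; all helper names are illustrative:

    /* Hash bucket rwlocks are taken only under the global metadata read lock,
     * so eviction/management paths (which take the global write lock) are
     * mutually excluded. Write-lock the bucket to map a cacheline, read-lock
     * it to traverse. */
    global_metadata_read_lock(cache);
    hash_bucket_write_lock(cache, hash);
    /* ... insert/remove a cacheline in this hash bucket ... */
    hash_bucket_write_unlock(cache, hash);
    global_metadata_read_unlock(cache);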
Adam Rutkowski
42f65c3fbb Change ocf_metadata_(un)lock -> OCF_METADATA_(UN)LOCK
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Adam Rutkowski
f34cacf150 Move resume callback to async lock function params (refactoring)
This is a step towards common async lock interface in OCF.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-09-20 17:21:00 -04:00
Jan Musial
827273645c Use better function for calculating occupancy
2019-09-20 14:56:17 +02:00
Kamil Łepek
7131178e71
Merge pull request #272 from imjfckm/fix-pp-validation
Add PP type validation
2019-09-18 17:30:23 +02:00
Jan Musial
0c1ccddf8a Add PP type validation
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-18 15:12:23 +02:00
Jan Musial
0e85ebe4a3 Get PP params in line with rest of OCF
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-18 11:15:41 +02:00
Jan Musial
e9bd139349 Add validation of PP for cache start config
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-18 09:53:13 +02:00
Jan Musial
e8fc2c24f1 Add missing stuff from get_param in PP
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-17 15:05:29 +02:00
Michał Mielewczyk
f86287ef06
Merge pull request #261 from micrakow/coverity_19_9
Fixed some bugs found by the coverity tool
2019-09-17 09:25:18 +02:00
Michal Rakowski
83e23c5593 Fixed some bugs found by the coverity tool
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
2019-09-16 15:49:37 +02:00
Michal Mielewczyk
c5edc60345 Fix stats update in cleaner.
The core is not assigned to the request in the cleaner, so to increase its stats it has to
be retrieved from the mapping.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-16 05:59:28 -04:00
Michał Mielewczyk
0391fc17b7
Merge pull request #255 from imjfckm/pp-tests
Add promotion policies functional tests
2019-09-16 09:34:16 +02:00
Jan Musiał
58012cd14b
Merge pull request #260 from mmichal10/unify-inactive-cores-stats
Unify inactive cores stats
2019-09-16 09:03:08 +02:00
Michal Mielewczyk
f226f978f0 Unify inactive cores stats.
Inactive core stats should be calculated and returned to the adapter in a unified
form, just like all stats are.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-16 02:49:42 -04:00
Michal Mielewczyk
494a1ccc79 Extract stats builder utils to separate file.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-09-12 05:21:17 -04:00
Jan Musial
703a757db1 Fix minor bugs in promotion policy
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-11 10:31:14 +02:00
Jan Musial
633f31716e Make NHIT API naming convention similar to cleaning
Signed-off-by: Jan Musial <jan.musial@intel.com>
2019-09-11 07:31:50 +02:00
Michal Rakowski
29c1c7f9e8
Merge pull request #253 from mmichal10/stats-refactor
Stats builder for ioclasses
2019-09-10 14:56:26 +02:00