There is an issue when someone calls parallelize/pipeline with a struct
that is aligned (say, to 64B): these APIs add their own data right before
the user's private data, so the user's data is no longer aligned, which
might cause a segfault in some cases.
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
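A minimal sketch of the padding idea in C, with hypothetical names
(internal_hdr and alloc_with_priv are illustrations, not the actual OCF
helpers): the allocation is padded so the user's private data starts on its
required alignment boundary even though internal data precedes it.

    #define _POSIX_C_SOURCE 200112L

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for the bookkeeping data that parallelize/
     * pipeline prepends to the user's private data. */
    struct internal_hdr {
        void *ctx;
        uint32_t shard_count;
    };

    /* Round the internal header size up to the user's alignment so the
     * private area that follows starts on an aligned boundary. */
    static size_t priv_offset(size_t align)
    {
        return (sizeof(struct internal_hdr) + align - 1) & ~(align - 1);
    }

    /* Allocate header + padding + user private data in one aligned buffer. */
    static struct internal_hdr *alloc_with_priv(size_t priv_size, size_t align,
            void **priv)
    {
        size_t off = priv_offset(align);
        struct internal_hdr *hdr;

        if (posix_memalign((void **)&hdr, align, off + priv_size))
            return NULL;

        memset(hdr, 0, off + priv_size);
        *priv = (char *)hdr + off; /* stays 64B-aligned when align == 64 */

        return hdr;
    }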
PyOCF needs to control random seed, to allow running tests with
pytest-xdist. Use local random object initialized with seed
from the config.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Because the context has one field that is aligned to 64B
(struct ocf_volume cache_volume), the compiler uses vmovdqa (aligned)
instead of vmovdqu (unaligned). In reality the address is not 64B-aligned
(it ends with 0x8), so we get this segfault.
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
With a high dirty ratio and occupancy, OCF might be unable to map cache
lines for new requests and thus passes the I/O through to the core devices,
after which IOPS drops. We need a way to control the dirty ratio.
The existing `alru' policy lets the user control the stale buffer time,
activity threshold, etc. These parameters affect the dirty ratio of the
cache device, but only in a more or less empirical manner. Introducing
`max_dirty_ratio' makes that control explicit.
At first glance, it might be better to implement a dedicated cleaning policy
that targets the dirty ratio goal directly, so that the `alru' parameters
remain orthogonal. On the other hand, we still need to flush dirty cache
lines periodically rather than merely keep the dirty ratio below a
watermark, which means the existing `alru' parameters would still be
required by such a new policy. It therefore seems reasonable to add this as
an `alru' parameter.
To sum up, this patch does the following:
- adds a `max_dirty_ratio' parameter with default value 100;
- with the default value of 100, the `alru' cleaner behaves exactly as it
  did before;
- with a value N less than 100, the cleaner (when woken up) actively brings
  the dirty ratio down to N, regardless of staleness time.
Signed-off-by: David Lee <live4thee@gmail.com>
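A sketch of how the new parameter can factor into the cleaner's decision;
the names below (cache_stats, should_keep_cleaning, staleness_expired) are
illustrative, not the actual alru implementation.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical view of the occupancy counters used below. */
    struct cache_stats {
        uint64_t dirty_lines;
        uint64_t total_lines;
    };

    /* Return true when the cleaner (once woken up) should keep flushing:
     * either a buffer is stale long enough (the pre-existing behavior), or
     * the dirty ratio still exceeds the max_dirty_ratio threshold. */
    static bool should_keep_cleaning(const struct cache_stats *stats,
            uint32_t max_dirty_ratio, bool staleness_expired)
    {
        uint64_t dirty_pct = stats->total_lines ?
                stats->dirty_lines * 100 / stats->total_lines : 0;

        if (max_dirty_ratio < 100 && dirty_pct > max_dirty_ratio)
            return true; /* clean regardless of staleness time */

        return staleness_expired; /* default 100 keeps alru unchanged */
    }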
Don't populate cleaning policies during the initialization procedure;
instead, require the user to call populate explicitly.
Until now cleaning policies could be populated in two ways:
- implicitly during cleaning policy initialization,
- explicitly by calling populate.
The difference was that the former was single-threaded.
This patch removes the functionally redundant and less efficient code.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
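A sketch of the resulting usage pattern under hypothetical names
(cleaning_policy_init and cleaning_policy_populate stand in for the real
OCF entry points): initialization no longer populates the policy, so the
caller issues populate explicitly and is notified via a completion callback.

    struct cache; /* opaque handle in this sketch */

    /* Stand-ins for the real entry points, stubbed to keep this
     * self-contained. */
    static void cleaning_policy_init(struct cache *cache)
    {
        (void)cache; /* sets up policy structures only; no implicit populate */
    }

    static void cleaning_policy_populate(struct cache *cache,
            void (*cmpl)(void *priv, int error), void *priv)
    {
        (void)cache;
        cmpl(priv, 0); /* report completion (synchronously in this stub) */
    }

    static void populate_cmpl(void *priv, int error)
    {
        *(int *)priv = error;
    }

    /* New flow: init no longer populates, so the caller does it explicitly. */
    static int start_cleaning(struct cache *cache)
    {
        int result = -1;

        cleaning_policy_init(cache);
        cleaning_policy_populate(cache, populate_cmpl, &result);

        return result;
    }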
The function not only recovers cleaning policy metadata but is also used to
initialize data structures, so a more generic name is actually more accurate.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Initializing metadata in an asynchronous manner will allow the use of
parallelization utilities in future commits.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
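A sketch of the signature change this implies, with assumed names: a
synchronous, int-returning init becomes an asynchronous call that reports
through a completion callback, which is the shape the parallelization
utilities can drive later.

    struct cache; /* opaque handle in this sketch */

    /* Completion type used by the asynchronous variant. */
    typedef void (*init_end_t)(void *priv, int error);

    /* Old shape (sketch): int cleaning_populate_init(struct cache *cache);
     * New shape (sketch): the result is delivered via a callback once all
     * of the (potentially parallelized) metadata work has finished. */
    static void cleaning_populate_init_async(struct cache *cache,
            init_end_t cmpl, void *priv)
    {
        (void)cache;

        /* ... initialize per-cacheline metadata, possibly split across
         * several workers sharing a completion counter ... */

        cmpl(priv, 0); /* invoked exactly once, after all work is done */
    }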
Normally the cleaning policy would be deinitialized while stopping the
cache, which is one of the error handling steps, e.g. in case of failed
cache activation. But since `cache_stop()` may be called only for an
attached cache instance, the cleaning policy needs to be deinitialized
explicitly.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
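An error-handling sketch under assumed names (cleaning_policy_init,
cleaning_policy_deinit and activate_cache are placeholders): because
`cache_stop()` is only valid for an attached cache, the failure path has to
undo the cleaning policy initialization itself.

    struct cache; /* opaque handle in this sketch */

    static int cleaning_policy_init(struct cache *cache)
    {
        (void)cache;
        return 0;
    }

    static void cleaning_policy_deinit(struct cache *cache)
    {
        (void)cache;
    }

    static int activate_cache(struct cache *cache)
    {
        (void)cache;
        return -1; /* pretend activation failed */
    }

    static int cache_activate(struct cache *cache)
    {
        int result = cleaning_policy_init(cache);

        if (result)
            return result;

        result = activate_cache(cache);
        if (result)
            cleaning_policy_deinit(cache); /* cache_stop() not allowed here */

        return result;
    }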
Instead of ignoring the `pthread_rwlock_unlock()` return value, assert that
it must succeed.
The function returns an error e.g. when there is an attempt to unlock the
resource from a different thread than the one that locked it, which is
illegal in userspace.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
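A self-contained sketch of the idea using plain assert(); in OCF the check
would go through the env layer's BUG_ON-style assertion rather than libc
assert.

    #include <assert.h>
    #include <pthread.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Instead of silently dropping the return value, treat a failed unlock
     * as a programming error. */
    static void read_unlock(void)
    {
        int result = pthread_rwlock_unlock(&lock);

        assert(result == 0); /* e.g. EPERM: unlocked from the wrong thread */
        (void)result;        /* keep NDEBUG builds warning-free */
    }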
- tighten the copyright regex to include all necessary info
- add checking for proper license identifier
- produce output only in case of error, and write it to stderr
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
Besides looking for files with the proper extensions, also check the listed
files without an extension, which should contain this header as well.
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
Remove one callback indirection level. I/O never changes its direction,
so there is no point in storing both read and write callbacks for each
request.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
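A structural sketch of the simplification (type and field names are
illustrative): since a request's direction is fixed once it is issued, a
single completion callback replaces the read/write pair.

    struct ocf_request; /* opaque in this sketch */

    typedef void (*req_end_t)(struct ocf_request *req, int error);

    /* Before (sketch): one more indirection level than needed, because the
     * right callback had to be picked by direction on every completion. */
    struct req_io_if_old {
        req_end_t read_end;
        req_end_t write_end;
    };

    /* After (sketch): a single completion callback chosen when the request
     * is created, so the direction-based lookup disappears. */
    struct req_io_if_new {
        req_end_t end;
    };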
In most (6/9) instances across the engines, ocf_core_stats_cache_error_update
is called upon each cache volume I/O error, possibly multiple times per user
request in the case of multi-cacheline requests. The backfill, fast and read
engines are exceptions, incrementing the error stats only once per user
request.
This commit unifies ocf_core_stats_cache_error_update usage so that in all
the engines the error statistic is incremented once for every error.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
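A sketch of the unified accounting (names are illustrative): the counter is
bumped in the per-I/O completion path, once for every failed cache volume
I/O, rather than being latched once per user request.

    #include <stdint.h>

    struct core_stats {
        uint64_t cache_errors;
    };

    /* Per-I/O completion path: count each failed cache volume I/O. */
    static void cache_io_end(struct core_stats *stats, int error)
    {
        if (error)
            stats->cache_errors++; /* once per failed cache volume I/O */
    }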
It is wasteful to allocate a full byte to store 1 bit of
alock status per cacheline. A fixed allocation of 128 bits
seems more reasonable.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
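A sketch of the packed representation (the struct and helpers below are
illustrative, not the actual OCF code): one status bit per cacheline in a
fixed 128-bit field instead of one byte per cacheline.

    #include <stdbool.h>
    #include <stdint.h>

    #define ALOCK_STATUS_BITS 128

    /* Two 64-bit words hold the fixed 128-bit status bitmap. */
    struct alock_status {
        uint64_t bits[ALOCK_STATUS_BITS / 64];
    };

    static void alock_status_set(struct alock_status *s, unsigned idx, bool val)
    {
        if (val)
            s->bits[idx / 64] |= 1ULL << (idx % 64);
        else
            s->bits[idx / 64] &= ~(1ULL << (idx % 64));
    }

    static bool alock_status_test(const struct alock_status *s, unsigned idx)
    {
        return s->bits[idx / 64] & (1ULL << (idx % 64));
    }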