Moved the cas_blk_get_part_count() function to the configure section after
the disk's partition table was changed to an xarray.
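A minimal sketch of the xarray-based lookup this refers to; it is not the
exact Open CAS implementation, but xa_for_each() and disk->part_tbl are the
kernel interfaces involved, with index 0 holding the whole-disk block_device:

#include <linux/blkdev.h>
#include <linux/xarray.h>

/* Hedged sketch: count a disk's partitions by walking the xarray-based
 * partition table. Index 0 is the whole-disk block_device, not a partition. */
static int cas_blk_get_part_count(struct gendisk *disk)
{
	struct block_device *part;
	unsigned long idx;
	int count = 0;

	xa_for_each(&disk->part_tbl, idx, part) {
		if (idx == 0)
			continue;
		count++;
	}

	return count;
}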
Signed-off-by: Gal Hammer <gal.hammer@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
The module_mutex has been internal to the module loader since kernel
commit 922f2a7c.
Signed-off-by: Gal Hammer <gal.hammer@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
It is legal to call KCAS_IOCTL_INSERT_CORE against a non-existing cache
(in try_add mode), but in that case core_id has to be provided explicitly.
Return an error code when the given cache id does not exist and core_id
is set to OCF_CORE_MAX.
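A hedged sketch of the check described above; the helper name, error codes
and the OCF_CORE_MAX placeholder value are illustrative, not the actual
Open CAS ioctl handler:

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define OCF_CORE_MAX 4096 /* placeholder; the real value comes from OCF headers */

/* Inserting a core into a non-existing cache is legal only in try_add mode,
 * and then only with an explicitly provided core_id. */
static int validate_insert_core(bool cache_exists, bool try_add,
		uint16_t core_id)
{
	if (cache_exists)
		return 0;

	if (!try_add)
		return -ENODEV;	/* assumed error code */

	if (core_id == OCF_CORE_MAX)
		return -EINVAL;	/* core_id was not provided */

	return 0;
}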
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Due to the nature of Linux thread scheduling, we prefer to promote streams
as early as we reasonably can. One way to achieve that is to set the
promotion count really low, which unfortunately significantly increases the
number of accesses to shared structures. The other way is to promote
streams which reach the cutoff threshold, as we can reasonably assume that
they are likely to be continued after the thread is rescheduled to another CPU.
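A hedged sketch of the promotion rule; the structure and field names are
illustrative rather than the actual OCF sequential cutoff code:

#include <stdbool.h>
#include <stdint.h>

/* A stream is promoted to the shared structures either when it has seen
 * enough requests (the old rule) or as soon as it reaches the cutoff
 * threshold (the new, earlier trigger). */
struct seq_stream {
	uint64_t bytes;		/* contiguous bytes observed so far */
	uint32_t req_count;	/* requests observed so far */
};

static bool should_promote(const struct seq_stream *s,
		uint64_t cutoff_threshold, uint32_t promotion_count)
{
	return s->bytes >= cutoff_threshold ||
		s->req_count >= promotion_count;
}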
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
Don't print statistics for a cache in passive state
- Passive cache: casadm set/get cache param disabled in passive state
- Obsolete "cache_get_param" function removed
- Error in layer_cache_management.c fixed
- Flushing cache/core disabled with an error in passive mode (common check sketched below)
- Core addition disabled in passive mode
- IO class setting disabled in passive mode
- Counters reset disabled in passive mode
- Ioctl handling updated to reflect OCF API changes
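A hedged sketch of the common pattern behind the disabled operations; the
helper name and error code are illustrative, not the actual OCF/Open CAS
identifiers:

#include <errno.h>
#include <stdbool.h>

/* Management operations that touch runtime state (flush, core addition,
 * IO class setting, counter reset, cache-param set/get) are rejected while
 * the cache is in passive state. */
static int block_if_passive(bool cache_is_passive)
{
	return cache_is_passive ? -EPERM : 0;
}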
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
This is a suboptimal solution for CAS on top of an MD RAID1 device. When
using only the submit_bio API, RAID1 would process all IOs in a single
thread. Plugging bypasses this thread and processes IOs in the
blk_finish_plug caller context, improving performance drastically.
Testing showed no negative impact on other use cases, and it's a thing
that Linux does in AIO, so it's vetted and proven to work.
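A minimal sketch of the plugging pattern, assuming a single-bio submission
path and a recent kernel; the wrapper name is illustrative, while
blk_start_plug()/blk_finish_plug() are the stock block layer interfaces:

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Wrapping submission in a plug lets MD RAID1 process the queued IOs in the
 * blk_finish_plug() caller context instead of funnelling everything through
 * its single raid1d thread. */
static void cas_submit_bio_plugged(struct bio *bio)
{
	struct blk_plug plug;

	blk_start_plug(&plug);
	submit_bio(bio);
	blk_finish_plug(&plug);
}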
Signed-off-by: Jan Musial <jan.musial@intel.com>
All the operations on `count` are performed under the lock, thus it doesn't
need to be atomic.
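A hedged before/after sketch of the change; the structure and field names
are illustrative:

#include <linux/spinlock.h>
#include <linux/types.h>

/* Since every update of count happens with the lock held, a plain integer
 * is sufficient and the atomic_t can be dropped. The lock is assumed to be
 * initialized elsewhere with spin_lock_init(). */
struct counted_list {
	spinlock_t lock;
	u32 count;	/* was atomic_t; now protected by lock */
};

static void counted_list_inc(struct counted_list *cl)
{
	spin_lock(&cl->lock);
	cl->count++;	/* plain increment is safe under the lock */
	spin_unlock(&cl->lock);
}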
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Some helper threads are created at the very beginning of cache start/stop
operations, but they are used only after OCF start/stop finishes, which
may take a significant amount of time. By default the kernel creates threads
that wait for the first wake up in an uninterruptible state, which may trigger
a hung task warning if the first wake up comes more than 120 seconds
after thread creation. To mitigate this problem we create a lazy thread
abstraction that waits for a wake up in an interruptible state.
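A hedged sketch of the lazy thread idea; the structure and function names
are illustrative, while the kthread and waitqueue calls are the stock kernel
APIs. The thread parks in an interruptible wait right after creation, so the
hung task detector, which only watches uninterruptible sleeps, ignores it
until the first wake up:

#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/err.h>

struct lazy_thread {
	struct task_struct *task;
	wait_queue_head_t wq;
	bool wake;
	int (*fn)(void *data);
	void *data;
};

static int lazy_thread_fn(void *arg)
{
	struct lazy_thread *lt = arg;

	/* Interruptible sleep: a first wake up arriving later than the
	 * 120 second hung task timeout no longer triggers the warning. */
	wait_event_interruptible(lt->wq, lt->wake || kthread_should_stop());

	if (kthread_should_stop())
		return 0;

	return lt->fn(lt->data);
}

static int lazy_thread_create(struct lazy_thread *lt, int (*fn)(void *),
		void *data, const char *name)
{
	lt->fn = fn;
	lt->data = data;
	lt->wake = false;
	init_waitqueue_head(&lt->wq);
	lt->task = kthread_run(lazy_thread_fn, lt, "%s", name);
	return IS_ERR(lt->task) ? PTR_ERR(lt->task) : 0;
}

static void lazy_thread_wake(struct lazy_thread *lt)
{
	lt->wake = true;
	wake_up(&lt->wq);
}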
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
OCF cannot allocate a request map bigger than 4MiB (due to kmalloc
limitations), thus we need to split bigger IOs into a series of smaller
ones to reduce the request map size.
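A hedged sketch of the splitting loop; the chunk limit and the submit
callback are placeholders rather than the actual Open CAS values:

#include <stdint.h>

/* MAX_SUB_IO_SIZE is an assumed chunk size chosen so that the per-request
 * map allocated by OCF stays below the 4MiB kmalloc limit. */
#define MAX_SUB_IO_SIZE		(32ULL * 1024 * 1024)

typedef void (*submit_fn_t)(uint64_t addr, uint64_t bytes);

/* Split one large IO into a series of smaller sub-IOs. */
static void submit_in_chunks(uint64_t addr, uint64_t bytes, submit_fn_t submit)
{
	while (bytes) {
		uint64_t chunk = bytes < MAX_SUB_IO_SIZE ?
				bytes : MAX_SUB_IO_SIZE;

		submit(addr, chunk);
		addr += chunk;
		bytes -= chunk;
	}
}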
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
It's defined on every single supported kernel, so there is actually no need
for this define at all.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>