After attaching a new cache device, handle all I/Os in Pass-Through mode
until all the d2c requests are completed.
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
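A minimal sketch of the intended flow, with illustrative names (cache,
d2c_count, must_use_pass_through) rather than the actual OCF symbols:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct cache {
        atomic_long d2c_count;  /* d2c requests still in flight */
    };

    /* Bump/drop the counter around each d2c request's lifetime. */
    static void d2c_get(struct cache *c) { atomic_fetch_add(&c->d2c_count, 1); }
    static void d2c_put(struct cache *c) { atomic_fetch_sub(&c->d2c_count, 1); }

    /* After attach, new I/Os take the Pass-Through path as long as any
     * d2c request submitted before the attach is still outstanding. */
    static bool must_use_pass_through(struct cache *c)
    {
        return atomic_load(&c->d2c_count) > 0;
    }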
This avoids unnecessary map allocation and initialization of unused fields of
the request structure. It also allows tracking their number separately from
the regular requests.
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
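A sketch of the idea on a simplified request structure (the field and
counter names here are made up for illustration):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct req_map_entry { unsigned long core_line; unsigned int status; };

    struct request {
        unsigned long addr, bytes;
        struct req_map_entry *map;  /* only cache-engine requests need this */
    };

    static atomic_long pt_req_count;    /* pass-through requests, counted apart */

    static struct request *alloc_pt_request(unsigned long addr, unsigned long bytes)
    {
        struct request *req = calloc(1, sizeof(*req));

        if (!req)
            return NULL;
        req->addr = addr;
        req->bytes = bytes;
        /* No map allocation and no init of cache-engine-only fields. */
        atomic_fetch_add(&pt_req_count, 1);
        return req;
    }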
Queues can be created and destroyed dynamically at any point in
the cache lifetime, and this can happen from different execution contexts,
so the queue_list needs to be protected with a lock.
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
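OCF does its locking through its env abstraction; the sketch below uses a
plain pthread mutex to show the same pattern, with illustrative names:

    #include <pthread.h>

    struct queue {
        struct queue *next;
    };

    struct cache {
        struct queue *queue_list;   /* head of the queue list */
        pthread_mutex_t list_lock;  /* protects queue_list; set up with
                                     * pthread_mutex_init() at cache start */
    };

    static void cache_add_queue(struct cache *c, struct queue *q)
    {
        pthread_mutex_lock(&c->list_lock);
        q->next = c->queue_list;
        c->queue_list = q;
        pthread_mutex_unlock(&c->list_lock);
    }

    static void cache_del_queue(struct cache *c, struct queue *q)
    {
        struct queue **pp;

        pthread_mutex_lock(&c->list_lock);
        for (pp = &c->queue_list; *pp; pp = &(*pp)->next) {
            if (*pp == q) {
                *pp = q->next;
                break;
            }
        }
        pthread_mutex_unlock(&c->list_lock);
    }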
To allow the fastest possible switch from passive-standby to active mode, the
runtime metadata in RAM must be kept fully in sync with the metadata on the
drive, so recovery is required after each collision section update.
To avoid a long-lasting recovery of all the cachelines every time the
collision section is updated, the passive update procedure recovers only
those cachelines whose metadata (MD) entries reside on the updated pages.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
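A sketch of the narrowed recovery, assuming a hypothetical packing of
ENTRIES_PER_PAGE collision entries per metadata page:

    #define ENTRIES_PER_PAGE  128

    typedef unsigned long ocf_cache_line_t;

    static void recover_cacheline(ocf_cache_line_t line)
    {
        /* ... rebuild runtime metadata (hash table, lists) for one line ... */
    }

    /* Only the cachelines whose entries live on pages [first, last]
     * are recovered, instead of walking the whole collision section. */
    static void passive_update_collision(unsigned long first_page,
                                         unsigned long last_page,
                                         ocf_cache_line_t line_count)
    {
        ocf_cache_line_t line = first_page * ENTRIES_PER_PAGE;
        ocf_cache_line_t end  = (last_page + 1) * ENTRIES_PER_PAGE;

        for (; line < end && line < line_count; line++)
            recover_cacheline(line);
    }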
The cache name is needed for logging in passive mode, when the config
metadata is not yet accessible.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
A new structure, ocf_part, is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains the ocf_part structure as well as a pointer to
the cleaning partition runtime metadata (moved out of part_runtime) and
the user partition config (no change here).
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
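The resulting layout, sketched; only the members named above come from the
change itself, the pointer types are illustrative:

    typedef unsigned short ocf_part_id_t;

    struct ocf_part_runtime;        /* occupancy, eviction lists, ... */
    struct ocf_part_cleaning;       /* cleaning policy runtime metadata */
    struct ocf_user_part_config;    /* user partition config */

    /* Data common to user partitions and the freelist partition. */
    struct ocf_part {
        struct ocf_part_runtime *part_runtime;
        ocf_part_id_t part_id;
    };

    struct ocf_user_part {
        struct ocf_part part;
        struct ocf_part_cleaning *cleaning;   /* moved out of part_runtime */
        struct ocf_user_part_config *config;  /* unchanged */
    };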
Move ref counts to their own cacheline - otherwise they cause false sharing
with nearby fields and a lot of cacheline bouncing between physical CPUs.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
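A sketch of the layout trick, assuming a 64-byte cacheline and illustrative
field names:

    #include <stdatomic.h>

    #define CACHELINE_SIZE 64

    struct cache {
        /* read-mostly fields stay together ... */
        const char *name;
        unsigned long size;

        /* ... while the hot ref counts get a cacheline of their own, so
         * updates to them no longer invalidate the neighbouring fields. */
        struct {
            atomic_long cache_refcnt;
            atomic_long dirty_refcnt;
        } refs __attribute__((aligned(CACHELINE_SIZE)));
    };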
Group static fields together, while frequently changing ones get their own
cacheline, shared only with rarely used or less important fields.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
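Sketched on a toy struct; the 64-byte line size and the field names are
assumptions:

    struct cache_layout {
        /* Cacheline 0: static fields, written once at cache start. */
        void *owner;
        unsigned long size;
        unsigned int flags;

        /* Hot, frequently written field on its own cacheline; only
         * rarely used / unimportant fields are allowed to share it. */
        _Atomic unsigned long pending __attribute__((aligned(64)));
        unsigned int debug_flag;    /* rarely touched, may share the line */
    };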
To eliminate the possibility of an allocation error during cache stop, the
pipeline is allocated on attach.
Due to this change, the only possible non-zero status of ocf_mngt_cache_stop()
is a warning, and the cache is always stopped after executing it.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
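A sketch of the pattern with hypothetical names (cache_attach, cache_stop,
stop_pipeline): the allocation happens while failing is still acceptable,
so stop itself can no longer fail on it.

    #include <stdlib.h>

    struct pipeline { int step; /* ... stop sequence state ... */ };
    struct cache { struct pipeline *stop_pipeline; };

    static int cache_attach(struct cache *c)
    {
        /* Allocate the stop pipeline up front; attach may fail ... */
        c->stop_pipeline = calloc(1, sizeof(*c->stop_pipeline));
        if (!c->stop_pipeline)
            return -1;
        /* ... attach the cache device ... */
        return 0;
    }

    static int cache_stop(struct cache *c)
    {
        /* ... but stop cannot: no allocation here, so at worst it
         * returns a warning and the cache still ends up stopped. */
        /* run_pipeline(c->stop_pipeline); */
        free(c->stop_pipeline);
        c->stop_pipeline = NULL;
        return 0;
    }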
The global free cacheline list is divided into a set of freelists, one
per execution context. When attempting to map an address to the cache,
the freelist for the current execution context is considered first (fast
path). If the current execution context's freelist is empty (fast path
failure), the mapping function attempts to take a cacheline from another
execution context's freelist (slow path).
The purpose of this change is to improve concurrency of freelist access.
It is part of the fine-granularity metadata lock implementation.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
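A sketch of the fast/slow path, with an assumed context count and a
simplified stack-style freelist guarded by a pthread mutex:

    #include <pthread.h>
    #include <stdbool.h>

    #define NR_CTX  8   /* one freelist per execution context (assumed) */

    struct freelist {
        pthread_mutex_t lock;
        unsigned long *lines;   /* free cachelines, filled at attach */
        unsigned long count;
    };

    static struct freelist freelists[NR_CTX];

    static bool try_take(struct freelist *fl, unsigned long *line)
    {
        bool ok = false;

        pthread_mutex_lock(&fl->lock);
        if (fl->count) {
            *line = fl->lines[--fl->count];
            ok = true;
        }
        pthread_mutex_unlock(&fl->lock);
        return ok;
    }

    static bool get_free_cacheline(unsigned ctx, unsigned long *line)
    {
        unsigned i;

        if (try_take(&freelists[ctx], line))
            return true;        /* fast path: own freelist */

        for (i = 1; i < NR_CTX; i++)
            if (try_take(&freelists[(ctx + i) % NR_CTX], line))
                return true;    /* slow path: steal from another context */

        return false;           /* no free cachelines anywhere */
    }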
The promotion policy is supposed to perform ALRU noise filtering by
preventing one-hit wonders from being added to the cache and polluting it.
Signed-off-by: Jan Musial <jan.musial@intel.com>
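A sketch of such a filter - a hash-indexed hit counter; the table size and
threshold are arbitrary, and collisions are simply tolerated:

    #include <stdbool.h>

    #define TABLE_SIZE          4096
    #define PROMOTE_THRESHOLD   2

    static unsigned char hit_count[TABLE_SIZE];

    /* A core line is promoted to the cache only after it has been
     * requested PROMOTE_THRESHOLD times; one-hit wonders stay out. */
    static bool should_promote(unsigned long core_line)
    {
        unsigned idx = core_line % TABLE_SIZE;

        if (hit_count[idx] >= PROMOTE_THRESHOLD) {
            hit_count[idx] = 0;     /* reset after promotion */
            return true;
        }

        hit_count[idx]++;
        return false;               /* serve in pass-through for now */
    }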
NOTE: This is still not true asynchronism. The metadata interfaces
are still not fully asynchronous.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
- Queue allocation is now separated from starting the cache.
- Queues can be created and destroyed at runtime.
- All queue ops accept a queue handle instead of a queue id.
- The cache stores queues as a list instead of an array.
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
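A sketch of the resulting API shape; the names mirror OCF's ocf_queue_*
convention, but the exact signatures here are illustrative:

    typedef struct ocf_cache *ocf_cache_t;
    typedef struct ocf_queue *ocf_queue_t;

    struct ocf_queue_ops {
        void (*kick)(ocf_queue_t q);    /* request processing needed */
        void (*stop)(ocf_queue_t q);    /* queue is being stopped */
    };

    /* Create a queue at any point after cache start ... */
    int ocf_queue_create(ocf_cache_t cache, ocf_queue_t *queue,
                         const struct ocf_queue_ops *ops);

    /* ... and drop it whenever it is no longer needed; all other queue
     * ops take the ocf_queue_t handle, not an id into a fixed array. */
    void ocf_queue_put(ocf_queue_t queue);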