Lookup is repeated after the request is identified as a miss and the
hash bucket lock is upgraded (in order to map the missing cachelines).
At this point the cacheline status might have changed and the request
might turn out to be a hit after all. Adding a check for this condition
removes unnecessary calls to the remap logic.
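A minimal sketch of the pattern using a pthreads rwlock; the names
(bucket, lookup, remap_missing) are illustrative stand-ins, not the
actual OCF API:

    #include <pthread.h>
    #include <stdbool.h>

    struct bucket {
        pthread_rwlock_t lock;
        bool mapped;    /* stands in for per-cacheline mapping state */
    };

    /* Returns true if the request is already fully mapped (a hit). */
    static bool lookup(struct bucket *b)
    {
        return b->mapped;
    }

    static void remap_missing(struct bucket *b)
    {
        b->mapped = true;
    }

    static void map_request(struct bucket *b)
    {
        pthread_rwlock_rdlock(&b->lock);
        bool hit = lookup(b);
        pthread_rwlock_unlock(&b->lock);

        if (hit)
            return;

        /* The upgrade is unlock-then-relock, so another thread may have
         * mapped the missing cachelines in between; repeating the lookup
         * here avoids calling the remap logic for what is now a hit. */
        pthread_rwlock_wrlock(&b->lock);
        if (!lookup(b))
            remap_missing(b);
        pthread_rwlock_unlock(&b->lock);
    }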
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This allows access to it in ctx_metadata_updater_init, which is
done in the same call stack during initialization.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
After detaching a core, if the user wanted to remove inactive cores, the
cleaning policy data would not be initialized and would bug out on the
next core add.
This check was incorrect, as the cleaning policy core metadata lifetime
is not bound to whether the core volume is open or not.
Signed-off-by: Jan Musial <jan.musial@intel.com>
Eviction should decrement occupancy statistics for the
core from which a cacheline is being evicted rather than
for the I/O target core.
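A hedged sketch of the fix; the struct layout and names are
illustrative, not the actual OCF metadata:

    #include <stdint.h>

    #define MAX_CORES 64

    struct occupancy_stats {
        uint64_t cached[MAX_CORES];   /* cachelines held per core */
    };

    struct cacheline_meta {
        uint16_t core_id;             /* core whose data the line holds */
    };

    struct request {
        uint16_t core_id;             /* core the I/O is addressed to */
    };

    static void evict_decrement_occupancy(struct occupancy_stats *stats,
                                          const struct cacheline_meta *line,
                                          const struct request *req)
    {
        (void)req;    /* the buggy version used req->core_id here */

        /* Charge the owner of the evicted line; it may differ from the
         * I/O target core when the line is reused for another core. */
        stats->cached[line->core_id]--;
    }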
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
update_req_info() should include REMAPPED cachelines
in the repart stats (the number of cachelines within the request
belonging to a partition other than the target one).
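A minimal sketch of the counting logic, assuming a simplified
request/map layout (the LOOKUP_* names mirror the idea, not necessarily
the exact OCF definitions):

    #include <stdint.h>

    enum lookup_status { LOOKUP_MISS, LOOKUP_HIT, LOOKUP_REMAPPED };

    struct map_entry {
        enum lookup_status status;
        uint16_t part_id;     /* partition the cacheline belongs to */
    };

    struct request {
        uint16_t target_part_id;
        uint32_t core_line_count;
        struct map_entry *map;
    };

    static uint32_t count_repart(const struct request *req)
    {
        uint32_t repart = 0;

        for (uint32_t i = 0; i < req->core_line_count; i++) {
            const struct map_entry *e = &req->map[i];

            if (e->status == LOOKUP_MISS)
                continue;

            /* Count both HIT and REMAPPED lines sitting outside the
             * target partition; REMAPPED was previously skipped. */
            if (e->part_id != req->target_part_id)
                repart++;
        }

        return repart;
    }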
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This check is incorrect, as the cacheline status may change
from dirty to clean at any point during cleaning, except
when the hash bucket is locked.
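The invariant in one self-contained sketch (the bitmap and lock are
illustrative, not the actual OCF structures):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    static pthread_mutex_t hash_bucket_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t dirty_bitmap;   /* one bit per cacheline, guarded by the lock */

    /* The dirty status is only stable while the hash bucket is locked;
     * a value sampled outside the lock may already be stale. */
    static bool cacheline_is_dirty(unsigned line)
    {
        pthread_mutex_lock(&hash_bucket_lock);
        bool dirty = (dirty_bitmap >> line) & 1u;
        pthread_mutex_unlock(&hash_bucket_lock);
        return dirty;
    }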
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This isn't strictly required in the current implementation, as
nodes are always re-initialized before being inserted into the LRU
list. However, it makes sense to zero the flag anyway to make the
code easier to reason about.
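A sketch of the idea with a simplified LRU node (field names are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define END_MARKER UINT32_MAX

    struct lru_node {
        uint32_t next;
        uint32_t prev;
        bool hot;     /* flag consulted by the eviction policy */
    };

    /* Reset the whole node on removal. Zeroing the flag is redundant if
     * every insert re-initializes the node, but it leaves detached nodes
     * in a known state and makes the code easier to reason about. */
    static void lru_node_remove_fini(struct lru_node *node)
    {
        node->next = END_MARKER;
        node->prev = END_MARKER;
        node->hot = false;
    }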
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are (see the
sketch after this list):
1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common,
rather than being a corner case dependent on racy timings between
threads.
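A before/after sketch under these assumptions (the freelist and request
layout are simplified; the names are hypothetical):

    #include <stdint.h>

    struct request {
        uint32_t lines_needed;
        uint32_t lines_mapped;
    };

    static uint32_t freelist_count;   /* illustrative global freelist size */

    static void map_from_freelist(struct request *req)
    {
        /* Maps as many lines as the freelist allows; a partial mapping
         * is completed later by the eviction path, not treated as an
         * error. */
        while (req->lines_mapped < req->lines_needed && freelist_count > 0) {
            freelist_count--;
            req->lines_mapped++;
        }
    }

    static void engine_map(struct request *req)
    {
        /* Removed early return, based on stale request info:
         *
         *   if (freelist_count < req->lines_needed)
         *           return;
         */
        map_from_freelist(req);
    }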
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Move the ref counts to their own cacheline - otherwise they cause
false sharing with nearby fields and a lot of cacheline bouncing
between physical CPUs.
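A minimal before/after sketch in C11 (field names are illustrative):

    #include <stdatomic.h>

    #define CACHELINE_SIZE 64

    /* Before: the hot ref count shares a cacheline with read-mostly
     * fields, so every get/put invalidates them on other CPUs. */
    struct meta_before {
        unsigned long flags;
        void *data;
        atomic_int refcount;
    };

    /* After: the ref count is aligned to its own cacheline, so bouncing
     * it between physical CPUs no longer evicts the neighboring fields. */
    struct meta_after {
        unsigned long flags;
        void *data;
        _Alignas(CACHELINE_SIZE) atomic_int refcount;
    };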
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Group static fields together, while frequently changing ones get their
own cacheline or share one with rarely used, less important fields.
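A layout sketch under those rules (field names are illustrative, not
the actual struct):

    #define CACHELINE_SIZE 64

    struct cache_state {
        /* Static after initialization; read-mostly for all CPUs. */
        unsigned long line_size;
        unsigned long line_count;
        void *metadata;

        /* Frequently written; isolated on its own cacheline to avoid
         * false sharing with the read-mostly block above. */
        _Alignas(CACHELINE_SIZE) unsigned long dirty_count;

        /* Rarely used / less important fields share the next line. */
        _Alignas(CACHELINE_SIZE) unsigned long last_flush_timestamp;
        int debug_flags;
    };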
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>