Eviction should decrement occupancy statistics for the core from which
a cacheline is being evicted, rather than for the I/O target core.
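A minimal sketch of the intended accounting (all identifiers below are
illustrative, not taken from the actual code):

    #define MAX_CORES 64

    struct occupancy_stats {
            unsigned long per_core[MAX_CORES]; /* cachelines owned per core */
    };

    /* On eviction, decrement the counter of the core that owns the
     * evicted cacheline (as recorded in cacheline metadata), not of the
     * core targeted by the I/O that triggered the eviction. */
    static void account_eviction(struct occupancy_stats *stats,
                    unsigned evicted_line_core, unsigned io_target_core)
    {
            (void)io_target_core; /* intentionally unused here */
            stats->per_core[evicted_line_core]--;
    }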
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
update_req_info() should include REMAPPED cachelines in the repart
stats (the number of cachelines within the request that belong to a
partition other than the target one).
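A rough illustration of the intended counting; the status values and
fields below are defined locally for the example and are not the
actual metadata layout:

    enum lookup_status { LOOKUP_MISS, LOOKUP_HIT, LOOKUP_REMAPPED };

    struct map_entry {
            enum lookup_status status;
            int part_id; /* partition this cacheline belongs to */
    };

    /* re_part: cachelines within the request that belong to a partition
     * other than the target one; REMAPPED entries count as well. */
    static unsigned count_repart(const struct map_entry *map,
                    unsigned count, int target_part_id)
    {
            unsigned i, re_part = 0;

            for (i = 0; i < count; i++) {
                    if (map[i].status != LOOKUP_HIT &&
                                    map[i].status != LOOKUP_REMAPPED)
                            continue;
                    if (map[i].part_id != target_part_id)
                            re_part++;
            }
            return re_part;
    }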
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This check is incorrect, as the cacheline status may change from dirty
to clean at any point during cleaning, except when the hash bucket is
locked.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This isn't strictly required in the current implementation, as nodes
are always re-initialized before being inserted into the LRU list.
However, it seems to make sense to zero the flag anyway, to make the
code easier to reason about.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Removing the conditional early return from the engine_map() function
in case of insufficient free cachelines. The reasons are:
1. the current implementation does not treat the insufficient free
cachelines condition as an error,
2. the check is based on stale request info, so it is inaccurate,
3. it is easier to hit more paths with functional tests,
4. partially mapping a request from the freelist becomes more common,
rather than being a corner case dependent on racy timing between
threads.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Move ref counts to their own cacheline - otherwise they cause false
sharing with nearby fields and a lot of cacheline bouncing between
physical CPUs.
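A minimal sketch of the idea, assuming 64-byte cachelines and
GCC-style attributes (field names are illustrative):

    #define CACHELINE_SIZE 64

    struct example_metadata {
            /* Mostly read-only configuration fields */
            unsigned long capacity;
            unsigned long line_size;

            /* Frequently modified ref count pushed onto its own
             * cacheline, so updates do not invalidate the fields above
             * in the caches of other physical CPUs. */
            long refcount __attribute__((aligned(CACHELINE_SIZE)));
    } __attribute__((aligned(CACHELINE_SIZE)));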
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Group static, rarely changing fields together, while frequently
changing fields get their own cacheline or share one with rarely used
or less important fields.
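Sketch of the layout idea, with an illustrative structure and an
assumed 64-byte cacheline size:

    struct example_layout {
            /* Static, rarely modified fields grouped together */
            void *owner;
            unsigned long config_a;
            unsigned long config_b;

            /* Frequently modified counter isolated on its own cacheline
             * (64 = assumed cacheline size) */
            unsigned long hot_counter __attribute__((aligned(64)));

            /* Rarely used fields may share the hot counter's cacheline,
             * as they are not worth extra padding */
            unsigned long debug_stat;
    };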
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
set_hot() depends on the cacheline metadata status to determine on
which list the element is located (dirty vs clean list). Thus at least
the hash bucket lock is required when calling set_hot().
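The required ordering, sketched with hypothetical lock and LRU
helpers:

    struct cache;
    extern void hash_bucket_lock_read(struct cache *c, unsigned hash);
    extern void hash_bucket_unlock_read(struct cache *c, unsigned hash);
    extern int line_is_dirty(struct cache *c, unsigned line);
    extern void lru_set_hot_dirty(struct cache *c, unsigned line);
    extern void lru_set_hot_clean(struct cache *c, unsigned line);

    /* set_hot() picks the dirty or clean LRU list based on the
     * cacheline status, so the status must not change underneath it;
     * hence the hash bucket lock around the whole operation. */
    static void promote_cacheline(struct cache *c, unsigned line,
                    unsigned hash)
    {
            hash_bucket_lock_read(c, hash);
            if (line_is_dirty(c, line))
                    lru_set_hot_dirty(c, line);
            else
                    lru_set_hot_clean(c, line);
            hash_bucket_unlock_read(c, hash);
    }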
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
When issuing a discard request over 512KiB, OCF would trim the request
and overwrite req->core_line_count, which would then cause the request
to be freed from the wrong mpool.
This is now fixed by saving the core_line_count set when allocating
the request in a field that is never overwritten. This
alloc_core_line_count is then used to free the request from the
correct mpool.
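A simplified sketch of the fix; the structure layout and the mpool API
below are placeholders, only alloc_core_line_count comes from the
actual change:

    struct request {
            unsigned core_line_count;       /* may shrink when the
                                             * request gets trimmed */
            unsigned alloc_core_line_count; /* size the request was
                                             * allocated for; never
                                             * modified afterwards */
    };

    /* Placeholders for the size-bucketed memory pool API */
    extern struct request *mpool_alloc(unsigned core_line_count);
    extern void mpool_free(struct request *req, unsigned core_line_count);

    struct request *request_alloc(unsigned core_line_count)
    {
            struct request *req = mpool_alloc(core_line_count);

            if (req) {
                    req->core_line_count = core_line_count;
                    req->alloc_core_line_count = core_line_count;
            }
            return req;
    }

    /* Trimming a large discard may shrink req->core_line_count, so the
     * free path must use the saved allocation size to hit the same
     * mpool bucket the request came from. */
    void request_put(struct request *req)
    {
            mpool_free(req, req->alloc_core_line_count);
    }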
Signed-off-by: Jan Musial <jan.musial@intel.com>
If a user thread is preempted during a tree/list update and another IO
is issued on the same CPU, the structure will be in an undefined
state. This may result in hung tasks (if the tree stops being a tree
and a loop exists, tree search functions won't be able to terminate)
or in panics (if a NULL value suddenly appears in the preempted thread
after a NULL check has already been done).
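Illustrative pattern only; the exact protection mechanism is not part
of this description, but the update has to be made atomic with respect
to IO issued on the same CPU, e.g.:

    struct percpu_index;
    extern void index_lock(struct percpu_index *idx);
    extern void index_unlock(struct percpu_index *idx);
    extern void tree_insert(struct percpu_index *idx, unsigned long key);

    /* Guard the tree/list update so that an IO issued on the same CPU
     * after preemption cannot observe a half-updated structure. */
    static void safe_insert(struct percpu_index *idx, unsigned long key)
    {
            index_lock(idx); /* or disable preemption for the update */
            tree_insert(idx, key);
            index_unlock(idx);
    }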
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
When allocation of a request with a map fails, we fall back to
allocating a request with no map and then allocate the map separately.
During request put we need to distinguish between those two cases in
order to deallocate the request properly.
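A simplified, self-contained sketch of the two allocation paths and a
flag used to tell them apart at put time (names and the use of plain
malloc are illustrative):

    #include <stdlib.h>
    #include <stdbool.h>

    struct map_entry { int core_line; };

    struct request {
            struct map_entry *map;
            bool map_allocated_separately;
    };

    struct request *request_alloc(unsigned map_size)
    {
            /* Preferred path: request and map allocated as one object */
            struct request *req = malloc(sizeof(*req) +
                            map_size * sizeof(struct map_entry));
            if (req) {
                    req->map = (struct map_entry *)(req + 1);
                    req->map_allocated_separately = false;
                    return req;
            }

            /* Fallback: allocate the request without the map, then the
             * map separately */
            req = malloc(sizeof(*req));
            if (!req)
                    return NULL;
            req->map = malloc(map_size * sizeof(struct map_entry));
            if (!req->map) {
                    free(req);
                    return NULL;
            }
            req->map_allocated_separately = true;
            return req;
    }

    void request_put(struct request *req)
    {
            /* The flag decides whether the map needs a separate free */
            if (req->map_allocated_separately)
                    free(req->map);
            free(req);
    }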
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
The early return from engine_map() in case of insufficient free
cachelines on the freelist is opportunistic, as neither the request
map info nor the freelist count is accurate. The map info is stale,
since it is only refreshed in engine_map() after the hash bucket lock
has been upgraded. The freelist count, on the other hand, is subject
to asynchronous change.
The implementation assumption, however, is that after engine_map() the
request is fully traversed (engine_map() is equivalent to
engine_lookup() followed by an attempt to map the missing cachelines).
So in case of an early return we must take care to repeat the lookup.
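Sketch of the resulting control flow; the signatures below are
illustrative:

    struct request;
    extern int engine_map(struct request *req);     /* may return early */
    extern void engine_lookup(struct request *req); /* refreshes map info */

    /* Callers assume the request is fully traversed after engine_map(),
     * so an opportunistic early return must be followed by a fresh
     * lookup. */
    static void map_or_lookup(struct request *req)
    {
            if (engine_map(req) != 0)
                    engine_lookup(req);
    }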
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
At this point the cacheline status in the request map is stale, as the
lookup was performed before upgrading the hash bucket lock. If all
cachelines are indeed mapped, this will be determined in the main loop
of engine_map().
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
This assures that a cacheline with LOOKUP_INSERTED status is always
present on the LRU list.
This fixes an ENV_BUG() caused by an attempt to remove from the LRU
list a cacheline which was not there. This happened when a cacheline
was mapped from the freelist (LOOKUP_INSERTED) but the mapping of the
entire request failed, and the generic cleanup routines attempted to
invalidate the cacheline, including removing it from the LRU list. As
engine_set_hot() was called only after successful mapping, the
inserted cacheline was not yet present on the LRU list.
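A rough sketch of the invariant (helper names are made up): a line
taken from the freelist is added to the LRU list as part of mapping
it, so a later failure-path invalidation can always find it there.

    struct cache;
    struct request;
    extern unsigned freelist_pop(struct cache *c);
    extern void map_line(struct cache *c, struct request *req,
                    unsigned idx, unsigned line);
    extern void lru_set_hot(struct cache *c, unsigned line);

    /* Insert into the LRU list immediately, not after the whole request
     * has been mapped - otherwise cleanup of a partially mapped request
     * would try to remove a line that was never added, triggering
     * ENV_BUG(). */
    static void map_from_freelist(struct cache *c, struct request *req,
                    unsigned idx)
    {
            unsigned line = freelist_pop(c);

            map_line(c, req, idx, line);
            lru_set_hot(c, line);
    }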
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
The number of cachelines to evict can't be greater than the number of
unmapped entries in the request.
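The constraint as a one-line clamp (illustrative names):

    /* A request can free at most as many cachelines as it still has
     * unmapped. */
    static unsigned clamp_evict_count(unsigned to_evict, unsigned unmapped)
    {
            return to_evict < unmapped ? to_evict : unmapped;
    }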
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>