Queue settle is a mechanism to wait for all OCF processing
on a given queue to finish.
In some tests, simply waiting for I/O to finish is not enough. Most
notably, some statistics are potentially incremented after user-triggered
I/O has finished. This is due to the asynchronous nature of I/O operations
and the OCF approach to statistics updates, where only eventual consistency
is guaranteed and no explicit mechanism is available to query whether
the final state has been reached yet. However, it is fully within the
adapter's power to determine this, as OCF executes in the context of API
calls from the adapter (like I/O submission, ocf_queue_run,
ocf_cleaner_run, management operations) and I/O completion callbacks.
Queue settle is a mechanism to ensure that ocf_queue_run is not being
executed by the thread associated with a queue.
With the queue settle mechanism it is straightforward for the adapter to
wait for cache statistics to reach fixed values (see the sketch after
the list):
1. wait for all I/O submitted to OCF to finish
2. settle all I/O queues
3. make sure the background cleaner is not active
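A minimal adapter-side sketch of this sequence. The helper names
(adapter_wait_for_io, adapter_queue_settle, adapter_cleaner_is_active)
are hypothetical adapter primitives, not part of the OCF API:

    #include <sched.h>
    #include <stdbool.h>

    /* Adapter-provided primitives (assumed, not part of OCF): */
    void adapter_wait_for_io(void);       /* 1. block until all submitted I/O completed */
    void adapter_queue_settle(void);      /* 2. wait until no thread runs ocf_queue_run() */
    bool adapter_cleaner_is_active(void); /* 3. query background cleaner state */

    static void wait_for_stable_stats(void)
    {
            adapter_wait_for_io();
            adapter_queue_settle();
            while (adapter_cleaner_is_active())
                    sched_yield(); /* spin until the cleaner goes idle */
    }

After this sequence the statistics have reached their final values and
will not change until the adapter invokes OCF again.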
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
rio stands for Rigid IO tester and is a simple mechanism for testing
OCF cache I/O.
Signed-off-by: Jan Musial <jan.musial@intel.com>
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
With atomic superblock commit during cache attach, it is possible
that a power failure interrupts the attach operation at a point where
neither the new nor the old superblock is present - right after the
superblock is cleared.
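For illustration, a simplified sketch of the window (names are
illustrative only; the actual OCF metadata pipeline differs):

    #include <stdbool.h>

    static bool sb_old_present = true;
    static bool sb_new_present = false;

    /* Assumed commit ordering during attach: */
    static void attach_superblock_commit(void)
    {
            sb_old_present = false; /* superblock cleared */
            /* a power failure here leaves no valid superblock at all,
             * so recovery after restart must handle this state */
            sb_new_present = true;  /* new superblock written */
    }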
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
... this is useful to work around current pyocf limitations and
load a cache with manual core insertion
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Add options to:
1. inject errors based on I/O number
2. arm/disarm error injection for easier testing
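A minimal sketch of the two options; the structure and names are
hypothetical, not the actual pyocf ErrorDevice interface:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct error_inject {
            atomic_ulong io_seqno;     /* running I/O counter           */
            unsigned long error_seqno; /* fail the I/O with this number */
            atomic_bool armed;         /* arm/disarm switch             */
    };

    /* Called on each submitted I/O; returns true if it should fail. */
    static bool should_inject_error(struct error_inject *ei)
    {
            unsigned long seqno = atomic_fetch_add(&ei->io_seqno, 1);

            return atomic_load(&ei->armed) && seqno == ei->error_seqno;
    }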
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
As the amount of fixed-size metadata allocated by OCF grows, we need to
adjust the test so it does not try to start a cache on a device that is
too small.
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
Add an error for an invalid cache operation while in passive mode
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Error name correction
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
API changes for passive cache mode
Move the passive cache error return source to the API for flush and
set_param
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Further API changes for passive cache mode
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
Passive API - review changes
Signed-off-by: Krzysztof Majzerowicz-Jaszcz <krzysztof.majzerowicz-jaszcz@intel.com>
src/eviction/lru.c -> src/ocf_lru.c
src/eviction/lru.h -> src/ocf_lru.h
src/eviction/lru_structs.h -> src/ocf_lru_structs.h
src/eviction/eviction.c -> src/ocf_space.c
src/eviction/eviction.h -> src/ocf_space.h
... as well as the corresponding UT files.
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
... in UT as well
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
A new structure, ocf_part, is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains the ocf_part structure as well as a pointer to
the cleaning partition runtime metadata (moved out from part_runtime) and
the user partition config (no change here).
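A sketch of the resulting layout; field and type names are
approximations of the actual OCF definitions:

    #include <stdint.h>

    typedef uint16_t ocf_part_id_t; /* assumed width */

    struct ocf_part {
            struct ocf_part_runtime *runtime; /* common to user partitions
                                                 and the freelist partition */
            ocf_part_id_t id;
    };

    struct ocf_user_part {
            struct ocf_part part;                /* embedded common part      */
            struct cleaning_policy *clean_pol;   /* moved out of part_runtime */
            struct ocf_user_part_config *config; /* unchanged                 */
    };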
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.
As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.
Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve its hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except for the new eviction implementation), we are
certain that no one acquires the cacheline lock behind our back.
Concurrent eviction threads are eliminated by holding the eviction
list lock for the duration of critical locking operations.
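A sketch of this lock ordering; the types and helper names are assumed
stand-ins, not the actual OCF locking primitives:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ocf_cache *ocf_cache_t;
    typedef uint32_t ocf_cache_line_t;
    typedef uint16_t ocf_core_id_t;

    bool cacheline_trylock_wr(ocf_cache_t c, ocf_cache_line_t l);
    void cacheline_unlock_wr(ocf_cache_t c, ocf_cache_line_t l);
    void resolve_core_line(ocf_cache_t c, ocf_cache_line_t l,
                    ocf_core_id_t *core_id, uint64_t *core_line);
    void hash_bucket_lock_wr(ocf_cache_t c, ocf_core_id_t id, uint64_t cl);
    void hash_bucket_unlock_wr(ocf_cache_t c, ocf_core_id_t id, uint64_t cl);
    void remap_cacheline(ocf_cache_t c, ocf_cache_line_t l);

    /* Called with the eviction (LRU) list lock held, which rules out
     * concurrent eviction threads racing for the same cacheline. */
    static bool try_remap(ocf_cache_t cache, ocf_cache_line_t cline)
    {
            ocf_core_id_t core_id;
            uint64_t core_line;

            /* 1. take the cacheline lock first (trylock: anyone locking
             *    it the usual way already holds the hash bucket) */
            if (!cacheline_trylock_wr(cache, cline))
                    return false;

            /* 2. with the cacheline locked, its hash resolves safely */
            resolve_core_line(cache, cline, &core_id, &core_line);

            /* 3. write-lock the hash bucket and perform the remap */
            hash_bucket_lock_wr(cache, core_id, core_line);
            remap_cacheline(cache, cline);
            hash_bucket_unlock_wr(cache, core_id, core_line);

            cacheline_unlock_wr(cache, cline);
            return true;
    }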
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Reformat the function that calculates how long a cache/core has been dirty.
Update `dirty_for` types in functional tests.
Values stored in info struct fields (both in cache and core structs)
are unsigned 64-bit ints, but the `dirty_for`s were unsigned 32-bit ints.
Use the existing function to transform the returned value to seconds.
Replace the raw value stored in metadata with seconds.
The replacement is done only if the old value of the replaced field is
equal to zero. Acquiring a monotonic high-precision timestamp is
potentially slow, so it makes sense to compare the field's value
to zero before calling the atomic function.
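A sketch of this check-before-set pattern (names are illustrative):

    #include <stdatomic.h>
    #include <stdint.h>

    uint64_t monotonic_timestamp_sec(void); /* potentially slow clock read */

    static void mark_dirty_since(_Atomic uint64_t *dirty_since)
    {
            uint64_t expected = 0;

            /* cheap check first: most calls see a non-zero value
             * and skip the expensive clock read entirely */
            if (atomic_load(dirty_since) != 0)
                    return;

            /* only the first writer installs the timestamp */
            atomic_compare_exchange_strong(dirty_since, &expected,
                            monotonic_timestamp_sec());
    }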
Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
1. new abbreviated prefix: ocf_hb (HB stands for hash bucket)
2. clear distinction between functions requiring the caller to
hold the shared global metadata lock ("naked") vs the ones
which acquire the global lock on their own ("prot" for protected)
3. clear distinction between hash bucket locking functions
accepting a hash bucket id ("id"), a core line and LBA ("cline"),
and an entire request ("req").
Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)
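Example declarations instantiating the scheme (argument lists are
approximations, not the exact OCF prototypes):

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_metadata_lock;
    struct ocf_request;

    /* "naked": caller already holds the shared global metadata lock */
    void ocf_hb_id_naked_lock_wr(struct ocf_metadata_lock *lock,
                    uint32_t hash_id);

    /* "prot": acquires the global lock on its own */
    void ocf_hb_cline_prot_lock_rd(struct ocf_metadata_lock *lock,
                    uint32_t core_id, uint64_t core_line);

    /* "req": covers all hash buckets spanned by a request */
    bool ocf_hb_req_prot_trylock_wr(struct ocf_request *req);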
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>