Commit Graph

231 Commits

Author SHA1 Message Date
Robert Baldyga
a2db4d14e8 Move core initialization code from metadata to mngt
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
9ab4c51dfa Expose superblock operations as part of internal metadata API
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-03 17:22:22 +02:00
Robert Baldyga
82abcd11e7 Fix documentation of ocf_metadata_raw_rd_access()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-02 22:38:37 +02:00
Robert Baldyga
387cf1b9a5 Fix debug tracing in metadata
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-09-02 22:38:37 +02:00
Michal Mielewczyk
26194fc536 Use cleaning ops wrapper functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Michal Mielewczyk
6dc29ee85e Refactor includes
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-07-27 15:44:18 +02:00
Robert Baldyga
f538bbd3ae Fix argument order in ocf_metadata_set_partition_id() call
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-07-09 21:31:06 +02:00
Robert Baldyga
847f5f1174
Merge pull request #520 from arutk/lru_refactor
LRU refactoring
2021-06-21 22:49:08 +02:00
Adam Rutkowski
1e1955b833 lru refactor
Rearranging the LRU implementation for easier journaling.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 22:32:57 +02:00
Adam Rutkowski
edf20c133e Move metadata I/O lock to IO queue context
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 21:39:35 +02:00
Adam Rutkowski
a70608476d fastpath for metadata update
Removing the extra request cycle through the IO queue in the case
of a successful metadata I/O lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-21 20:02:56 +02:00
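
A minimal sketch of the fastpath idea from the commit above, assuming hypothetical names (submit_metadata_update, do_metadata_update, queue_push and md_page_lock are illustrative, not OCF API): when the metadata I/O lock is taken without waiting, the update is handled inline instead of taking another trip through the IO queue.

    #include <pthread.h>

    /* Illustrative request type and helpers - not the OCF structures. */
    struct md_request { int page; };

    static pthread_mutex_t md_page_lock = PTHREAD_MUTEX_INITIALIZER;

    static void do_metadata_update(struct md_request *req) { (void)req; /* ... */ }
    static void queue_push(struct md_request *req) { (void)req; /* resubmit via IO queue */ }

    /* Fastpath: if the metadata I/O lock is free, handle the update inline
     * and skip the extra cycle through the IO queue. */
    static void submit_metadata_update(struct md_request *req)
    {
        if (pthread_mutex_trylock(&md_page_lock) == 0) {
            do_metadata_update(req);      /* fast path: lock acquired immediately */
            pthread_mutex_unlock(&md_page_lock);
        } else {
            queue_push(req);              /* slow path: retry from the IO queue */
        }
    }
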
Kozlowski Mateusz
50ec65fcfd Fix metadata_io_page_lock_acquired typo
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-21 19:46:35 +02:00
Kozlowski Mateusz
1031139446 OCF: Fix error path for metadata updater
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-21 19:46:34 +02:00
Adam Rutkowski
36107fd528 Initialize partitions during cache start
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
07cbba32f6 remove stale references to eviction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
33e2beac24 Rename "evp_lru*" functions to "ocf_lru*"
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
b1143374a8 Move eviction files to new locations
src/eviction/lru.c -> src/ocf_lru.c
src/eviction/lru.h -> src/ocf_lru.h
src/eviction/lru_structs.h -> src/ocf_lru_structs.h
src/eviction/eviction.c -> src/ocf_space.c
src/eviction/eviction.h -> src/ocf_space.h

... as well as the corresponding UT files.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
7c0f940876 Replace eviction with lru in metadata structs
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
88e04a4204 Remove eviction policy abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
4f217b91a5 Remove partition list
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:19:08 +02:00
Adam Rutkowski
87f834c793 Move common user and freelist partition data to a new struct
The new structure ocf_part is added to contain all the data common to both
user partitions and the freelist partition: part_runtime and part_id.
ocf_user_part now contains an ocf_part structure as well as a pointer to
the cleaning partition runtime metadata (moved out of part_runtime) and
the user partition config (no change here).

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-18 12:07:10 +02:00
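
A rough sketch of the layout described in the commit above; the field and helper type names are simplified guesses based on the commit message, not the actual OCF definitions.

    struct ocf_part_runtime;            /* runtime state (opaque here) */
    struct cleaning_policy_runtime;     /* cleaning metadata (opaque here) */
    struct ocf_user_part_config;        /* user partition config (opaque here) */

    /* Data common to user partitions and the freelist partition. */
    struct ocf_part {
        struct ocf_part_runtime *runtime;   /* part_runtime */
        int id;                             /* part_id */
    };

    /* User partition: embeds the common part plus user-specific data. */
    struct ocf_user_part {
        struct ocf_part part;                       /* common data */
        struct cleaning_policy_runtime *clean_pol;  /* moved out of part_runtime */
        struct ocf_user_part_config *config;        /* unchanged */
    };
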
Robert Baldyga
73c3e97f43
Merge pull request #509 from Open-CAS/rm-metadata-updater
Remove metadata updater
2021-06-17 09:34:18 +02:00
Kozlowski Mateusz
ce316cc67c Change alock API to include slow/fast lock callbacks
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-06-16 13:48:35 +02:00
Adam Rutkowski
f589341c9a remove metadata updater
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
953e0f25d7 replace metadata updater with metadata I/O concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Adam Rutkowski
06f3c937c3 mio concurrency
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-15 10:25:05 +02:00
Robert Baldyga
6c617fa688
Merge pull request #514 from jfckm/minor-perf
Small performance improvements
2021-06-14 13:32:27 +02:00
Adam Rutkowski
d22a3ad0e0 Rename cacheline concurrency struct to alock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-06-14 12:15:43 +02:00
Jan Musial
f25d9a8e40 Use new non-zeroing allocator APIs
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:38:44 +02:00
Jan Musial
a52a3b75e5 Mark unlikely conditionals in hot code paths in metadata_raw
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:20:33 +02:00
Jan Musial
4031b4b2ae Delete metadata self-test
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-06-10 15:20:25 +02:00
Adam Rutkowski
0476511c00 probe: return dirty and shutdown status despite metadata mismatch
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-04-06 14:07:42 -05:00
Kozlowski Mateusz
e054949cbb Metadata updater mutex alignment
Avoids thrashing of (mostly) static and frequently used entries

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
e391fc2c13 Queue alignment
Metadata reshuffling

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
Kozlowski Mateusz
fdd6b88cc4 General packing of structs
Get back some memory/cachelines by packing any leftover static fields together.

Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
2021-03-26 08:57:04 +01:00
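
A generic illustration of the packing idea above (not an actual OCF struct): grouping the small fields together removes compiler padding and shrinks the struct.

    #include <stdint.h>

    struct before {          /* 24 bytes on a typical LP64 ABI */
        uint8_t  flags;      /* followed by 7 bytes of padding */
        uint64_t counter;
        uint8_t  state;      /* followed by 7 bytes of tail padding */
    };

    struct after {           /* 16 bytes */
        uint64_t counter;
        uint8_t  flags;
        uint8_t  state;      /* followed by 6 bytes of tail padding */
    };
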
Robert Baldyga
296e98e39c
Merge pull request #471 from arutk/lru_fix_2
Prevent remapping cachelines within single request
2021-03-18 20:33:11 +01:00
Adam Rutkowski
a232488c7a Prevent remapping cachelines within single request
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-18 17:11:45 -05:00
Robert Baldyga
8020e7fd67
Merge pull request #457 from Ostrokrzew/false_stats
Fix broken 'dirty_for' stats
2021-03-18 10:24:02 +01:00
Jan Musial
2dc36657bf Use mpool to allocate metadata_io requests
Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
b47ef2c386 Change vmalloc in metadata asynch io to kmalloc
Vmalloc is very slow in comparison to kmalloc

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Jan Musial
9f8802e833 Decrease memory requirements for metadata io
The magic child metadata request count (33) was determined
experimentally.

Signed-off-by: Jan Musial <jan.musial@intel.com>
2021-03-10 16:10:11 +01:00
Adam Rutkowski
81fc7ab5c5 Parallel eviction
Eviction changes allowing cachelines to be evicted (remapped) while
holding the hash bucket write lock instead of the global metadata
write lock.

As eviction (replacement) is now tightly coupled with the request,
each request uses an eviction size equal to the number of its
unmapped cachelines.

Evicting without the global metadata write lock is possible
thanks to the fact that remapping is always performed
while exclusively holding the cacheline (read or write) lock.
So for a cacheline on the LRU list we acquire the cacheline lock,
safely resolve the hash and consequently write-lock the hash bucket.
Since the cacheline lock is acquired under the hash bucket lock
(everywhere except in the new eviction implementation), we are certain
that no one acquires the cacheline lock behind our back. Concurrent
eviction threads are eliminated by holding the eviction list
lock for the duration of the critical locking operations.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
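
A minimal sketch of the locking order described in the commit above, using plain pthread primitives and illustrative names (remap_one, resolve_hash and the lock arrays are inventions for this sketch, not the OCF implementation): cacheline lock first, then hash resolution and hash bucket write lock, with the eviction list lock held only around the critical locking step.

    #include <pthread.h>

    #define NBUCKETS    1024
    #define NCACHELINES 4096

    /* Assume the rwlock arrays are initialized with pthread_rwlock_init()
     * at startup. */
    static pthread_mutex_t  lru_list_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_rwlock_t hash_bucket_lock[NBUCKETS];
    static pthread_rwlock_t cacheline_lock[NCACHELINES];

    static unsigned resolve_hash(unsigned cline)
    {
        return cline % NBUCKETS;   /* stand-in for the collision lookup */
    }

    /* Remap one cacheline picked from the LRU list without taking the
     * global metadata write lock. */
    static void remap_one(unsigned cline)
    {
        /* The eviction list lock eliminates concurrent eviction threads
         * for the duration of the critical locking operations. */
        pthread_mutex_lock(&lru_list_lock);
        pthread_rwlock_wrlock(&cacheline_lock[cline]);  /* exclusive cacheline lock */
        pthread_mutex_unlock(&lru_list_lock);

        /* Safe to resolve the hash now: nobody can relock this cacheline
         * behind our back while we hold its lock. */
        unsigned hash = resolve_hash(cline);
        pthread_rwlock_wrlock(&hash_bucket_lock[hash]);

        /* ... remap the cacheline to its new core line ... */

        pthread_rwlock_unlock(&hash_bucket_lock[hash]);
        pthread_rwlock_unlock(&cacheline_lock[cline]);
    }
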
Adam Rutkowski
1411314678 Add getter function for cache->device->concurrency.cache_line
The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-03-05 11:20:47 +01:00
Robert Baldyga
ac9bd5b094
Merge pull request #453 from arutk/no_cl_gl_lock
Skip cacheline concurrency global lock in fast path
2021-03-04 12:33:50 +01:00
Michal Mielewczyk
f1012b020b Validate ioclass config
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-03-02 14:46:57 +01:00
Slawomir Jankowski
eeda1f3f0f Unify type of dirty_for in info structs
Reformat function that calculates how long cache/core is dirty
Update `dirty_for` types in functional tests

Values stored in the info struct fields (in both the cache and core structs)
are unsigned 64-bit ints, but the `dirty_for` fields were unsigned 32-bit ints.

Use the existing function to transform the returned value to seconds.
Replace seconds stored in metadata with seconds.
The replacement is done only if the old value of the replaced field was zero.
Acquiring a monotonic high-precision timestamp is potentially
slow, so it makes sense to compare the field's value
to zero before calling the atomic function.

Signed-off-by: Slawomir Jankowski <slawomir.jankowski@intel.com>
2021-02-25 14:51:53 +01:00
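
A small sketch of the "check for zero before the atomic call" pattern mentioned above, using C11 atomics; the names (dirty_since, mark_dirty) are illustrative, not the OCF fields.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <time.h>

    /* Illustrative 64-bit "dirty since" timestamp, in seconds. */
    static _Atomic uint64_t dirty_since;

    static uint64_t monotonic_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);   /* potentially slow */
        return (uint64_t)ts.tv_sec;
    }

    /* Record only the first dirtying write. The plain check for zero
     * skips the timestamp call and the atomic RMW on the hot path. */
    static void mark_dirty(void)
    {
        if (atomic_load_explicit(&dirty_since, memory_order_relaxed) == 0) {
            uint64_t expected = 0;
            atomic_compare_exchange_strong(&dirty_since, &expected,
                                           monotonic_seconds());
        }
    }
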
Adam Rutkowski
cf5f82b253 Use cline concurrency ctx instead of cache
The cacheline concurrency functions have their interface changed
so that the cacheline concurrency private context is
explicitly on the parameter list, rather than being taken
from cache->device->concurrency.cache_line.

The cache pointer is no longer provided as a parameter to these
functions. The cacheline concurrency context now has a pointer
to the cache structure (for logging purposes only).

The purpose of this change is to facilitate unit testing.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-24 17:29:39 -06:00
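
A sketch of the signature change described above; the function name (cl_try_lock_rd) and the typedefs are simplified stand-ins, not the exact OCF declarations.

    #include <stdbool.h>

    struct ocf_cache;                       /* full cache object (opaque) */
    struct ocf_cache_line_concurrency;      /* private concurrency context */
    typedef unsigned ocf_cache_line_t;      /* illustrative typedef */

    /* Old style (illustrative): the context was fetched internally from
     * cache->device->concurrency.cache_line:
     *
     *     bool cl_try_lock_rd(struct ocf_cache *cache, ocf_cache_line_t line);
     *
     * New style (illustrative): the context is an explicit parameter, so a
     * unit test can construct a stand-alone context without a full cache. */
    bool cl_try_lock_rd(struct ocf_cache_line_concurrency *c,
            ocf_cache_line_t line);
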
Adam Rutkowski
c95f6358ab Get rid of status bits lock
All status bits operations are now protected by
hash bucket locks.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-18 15:05:53 -06:00
Robert Baldyga
75baec5aa5
Merge pull request #456 from arutk/aalru
Relax LRU list ordering to minimize list updates
2021-02-18 13:48:54 +01:00
Adam Rutkowski
0748f33a9d Align each global metadata lock to 64B
.. in order to place primitives intended to be accessed
concurrently on separate CPU cache lines.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
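
A minimal sketch of the 64-byte alignment idea, assuming a padded wrapper type (padded_rwlock is illustrative): each lock gets its own CPU cache line, so contended updates to one lock do not invalidate the line holding its neighbours (false sharing).

    #include <stdalign.h>
    #include <pthread.h>

    /* The 64-byte member alignment forces sizeof(struct padded_rwlock) to
     * round up to a full cache line, so adjacent locks never share one. */
    struct padded_rwlock {
        alignas(64) pthread_rwlock_t lock;
    };
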
Adam Rutkowski
05780c98ed Split global metadata lock
Divide the single global lock instance into 4 to reduce contention
in scenarios with multiple read locks.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-15 11:27:49 -06:00
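
One common way such a split works (a sketch under that assumption, not necessarily OCF's exact scheme): readers take only "their" shard while writers take all shards, so concurrent readers stop bouncing a single lock between CPUs.

    #include <pthread.h>

    #define GLOBAL_MD_LOCK_SHARDS 4

    /* Assume each element is cache-line aligned (see the previous sketch)
     * and initialized with pthread_rwlock_init() at startup. */
    static pthread_rwlock_t global_md_lock[GLOBAL_MD_LOCK_SHARDS];

    static void global_md_read_lock(unsigned shard)
    {
        pthread_rwlock_rdlock(&global_md_lock[shard % GLOBAL_MD_LOCK_SHARDS]);
    }

    static void global_md_write_lock(void)
    {
        /* Writers must hold every shard; unlock mirrors this in reverse. */
        for (int i = 0; i < GLOBAL_MD_LOCK_SHARDS; i++)
            pthread_rwlock_wrlock(&global_md_lock[i]);
    }
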
Adam Rutkowski
10c3c3de36 Renaming hash bucket locking functions
1. New abbreviated prefix: ocf_hb (HB stands for hash bucket).
2. Clear distinction between functions requiring the caller to
   hold the shared global metadata lock ("naked") vs. the ones
   which acquire the global lock on their own ("prot" for protected).
3. Clear distinction between hash bucket locking functions
   accepting a hash bucket id ("id"), a core line and lba ("cline"),
   and an entire request ("req").

Resulting naming scheme:
ocf_hb_(id/cline/req)_(prot/naked)_(lock/unlock/trylock)_(rd/wr)

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-12 18:08:15 -06:00
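
A few declarations spelling out that naming scheme; the parameter lists and typedefs below are guesses for illustration only.

    #include <stdbool.h>
    #include <stdint.h>

    struct ocf_metadata_lock;               /* opaque here */
    struct ocf_request;                     /* opaque here */
    typedef uint32_t ocf_cache_line_t;      /* illustrative typedefs */
    typedef uint32_t ocf_core_id_t;

    /* Hash bucket id variant, caller already holds the shared global
     * metadata lock ("naked"), write lock: */
    void ocf_hb_id_naked_lock_wr(struct ocf_metadata_lock *ml,
            ocf_cache_line_t hash);

    /* Core line + lba variant, acquires the global lock itself ("prot"),
     * read lock: */
    void ocf_hb_cline_prot_lock_rd(struct ocf_metadata_lock *ml,
            ocf_core_id_t core_id, uint64_t addr);

    /* Whole-request variant, protected trylock, write: */
    bool ocf_hb_req_prot_trylock_wr(struct ocf_request *req);
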
Adam Rutkowski
9690f13bef Change eviction spin lock to RW lock
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-02-09 16:23:01 -06:00
Rafal Stefanowski
6ed4cf8a24 Update copyright statements (2021)
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2021-01-21 13:17:34 +01:00
Robert Baldyga
5a88ab2d61 Flush metadata collision segment on core remove
If there is any dirty data on the cache associated with the removed core,
we must flush the collision metadata after removing the core to make the
metadata persistent in case of a dirty shutdown.

This fixes the problem where the recovery procedure erroneously interprets
cache lines that belonged to the removed core as valid ones.

This also fixes the problem where, after removing a core containing dirty
data, another core is added, and then the recovery procedure following a
dirty shutdown assigns cache lines from the removed core to the new one,
effectively leading to data corruption.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2021-01-19 13:34:28 +01:00
Adam Rutkowski
bd20d6119b External linkage for function to sparse single cline
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2021-01-15 02:10:42 -05:00
Michal Mielewczyk
fcef130919 Bug on metadata access error
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2021-01-07 18:10:44 -05:00
Rafal Stefanowski
d3b61e474c Remove init_mode and use metadata.is_volatile instead
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-23 16:31:55 +01:00
Rafal Stefanowski
4c42d62f97 Add a newline escape in 'invalid checksum' messages
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-12-22 16:34:44 +01:00
Adam Rutkowski
1b8bfb36f5 Add missing part->id initialization
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-22 11:52:20 +01:00
Robert Baldyga
7f60d73511
Merge pull request #413 from mmichal10/occ-per-ioclass
Occupancy per ioclass
2020-12-21 23:43:54 +01:00
Michal Mielewczyk
0dc8b5811c Store min and max ioclass size as percentage val
Min and max values, kept as an explicit number of cachelines, are tightly
coupled with a particular cache. This might lead to errors and mismatches
after reattaching a cache of a different size.

To prevent those errors, min and max should be calculated dynamically.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 08:23:34 -05:00
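
A small sketch of deriving the cacheline budget from a percentage at runtime (ioclass_max_cachelines is an illustrative helper, not OCF API): because the limit is stored as a percentage, it stays valid after reattaching a cache device of a different size.

    #include <stdint.h>

    static uint64_t ioclass_max_cachelines(uint32_t max_percent,
            uint64_t cache_cachelines)
    {
        /* The budget scales with the currently attached cache size. */
        return cache_cachelines * max_percent / 100;
    }
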
Michal Rakowski
ac2effb83d Fix whitespaces
Signed-off-by: Michal Rakowski <michal.rakowski@intel.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-12-21 06:04:16 -05:00
Adam Rutkowski
822cd7c45a Introduce metadata superblock & segment structures
Refactoring metadata superblock and segment ops code
to make it less tightly coupled.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
3eb5568608 Rename segment->segment_id and segment_ops->segment
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:53 +01:00
Adam Rutkowski
b074d77797 Splitting metadata implementation to match header files
Moving the metadata implementation out of the obsolete metadata_hash.c
to .c files corresponding to the function declaration header files.
This requires adding a shared header for the metadata implementation,
metadata_internal.h. Some metadata header files did not have
a corresponding .c file; in those cases one is added in this
commit.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 16:35:49 +01:00
Adam Rutkowski
02405e989d Removing 'hash' word from misspelled metadata functions
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
18e35c390b Remove last references to "hash" metadata implementation
Hashed metadata is now the default and only implementation.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
5fb4d68c7f Remove get and set from metadata raw ifc
The memcpy-based metadata interface is unnecessary
overhead and is being removed.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
6dfdd6940b Remove metadata ifc structure
At this point it is not used

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
4d97b1611f Move metadata layout field outside metadata ifc
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
b5d6cdb398 Rename metadata iface_priv to priv
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:50 +01:00
Adam Rutkowski
05c0826c0f Remove metadata bits manipulation abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
98cba1603f Replace metadata ifc wrappers with direct calls to hash ifc
Metadata wrapper functions (calling iface->func) in header
files are changed to be declarations only. Hash interface
implementation functions in metadata_hash.c are given
external linkage and are renamed to drop the "hash" prefix.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d796e1f400 Remove metadata layout abstraction
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
d2af0bafda Remove redundant locks from metadata flush/load all
Locks acquired in ocf_metadata_flush(/load)_all are
held only for the duration of queueing the asynchronous
service for flush/load; no actual metadata accesses
are performed there.

Also, flush/load all are always performed with the metadata
marked as deinitialized (metadata reference counter frozen),
so no I/O is reading or writing the metadata. The only source
of potential concurrent metadata accesses is other management
operations, which should be synchronized using the management lock.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-20 15:09:49 +01:00
Adam Rutkowski
44efe3e49e Refactor LRU code to use part rather than part_id
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
41a767de97 Multiple LRU lists
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-17 14:35:27 +01:00
Adam Rutkowski
fec61528e6 Remove memcpy from collision/eviction policy metadata api
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-12-07 17:58:44 +01:00
Robert Baldyga
9a23787c6b
Merge pull request #406 from arutk/flush2
Propagate I/O flags (e.g. FUA) to metadata flush I/O
2020-10-06 12:49:22 +02:00
Adam Rutkowski
716edcc637 Flush cache volume after writing config metadata segments
After writing the metadata configuration to disk we must
send a flush request to make sure the configuration sections
are committed to non-volatile storage.

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-30 10:40:03 +02:00
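
A hedged sketch of the completion chain implied above; the context type and helpers (md_context, submit_volume_flush, complete) are hypothetical, and only the ordering (write, then flush, then complete) comes from the commit message.

    /* Hypothetical types and helpers - for illustration only. */
    struct md_context { void *cache_volume; };

    static void complete(struct md_context *ctx, int error);
    static void submit_volume_flush(void *volume,
            void (*cb)(struct md_context *, int), struct md_context *ctx);

    static void flush_complete(struct md_context *ctx, int error)
    {
        complete(ctx, error);   /* config sections now on non-volatile media */
    }

    /* Called when the config segment write finishes: chain a volume flush
     * before reporting success. */
    static void config_write_complete(struct md_context *ctx, int error)
    {
        if (error) {
            complete(ctx, error);
            return;
        }
        submit_volume_flush(ctx->cache_volume, flush_complete, ctx);
    }
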
Adam Rutkowski
c945db356c Propagate I/O flags (e.g. FUA) to metadata flush I/O
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2020-09-29 14:46:27 +02:00
Robert Baldyga
0dfdcb05e9 Fix core volume lifecycle management
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-08-21 16:40:41 +02:00
Robert Baldyga
d5ecdc16dd Make CRC mismatch on recovery a warning instead of error
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:49:29 +02:00
Robert Baldyga
d946124a01 Calculate CRC for runtime metadata sections only on clean load
During the recovery procedure there is no guarantee that the checksums
of runtime sections were flushed correctly before the dirty shutdown.

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-07-28 07:45:53 +02:00
Rafal Stefanowski
38e7e19290 Update copyright statements
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@intel.com>
2020-04-28 13:37:54 +02:00
Robert Baldyga
ac7b5aba6b metadata: Allocate memory with ENV_MEM_NOIO flag
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 12:03:21 +01:00
Robert Baldyga
b7e59ee04a metadata: Use proper function for freeing memory
a_req is allocated using env_vmalloc() so we need to free it
using env_vfree(), not env_free().

Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-02-14 10:29:15 +01:00
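
A tiny sketch of the pairing rule, using the env_* names from the commit message; the surrounding function, the request type, and the env_* prototypes are assumptions for illustration.

    #include <errno.h>
    #include <stddef.h>

    struct md_io_request { int page; };     /* illustrative stand-in */

    void *env_vmalloc(size_t size);         /* assumed prototypes */
    void env_vfree(void *ptr);

    static int alloc_requests(size_t count)
    {
        struct md_io_request *a_req;

        a_req = env_vmalloc(count * sizeof(*a_req));   /* vmalloc-backed */
        if (!a_req)
            return -ENOMEM;

        /* ... use a_req ... */

        env_vfree(a_req);   /* matching free - not env_free() */
        return 0;
    }
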
Michał Wysoczański
fabd41250b
Merge pull request #342 from mmichal10/fix-metadata-flush
Fix metadata flush
2020-01-24 17:59:58 +01:00
Michal Mielewczyk
d9c987e068 Flush metadata after changing status of each sector
In the case of cleaning, metadata used to be flushed only when the status
of the whole cache line changed to clean.

This patch ensures that a metadata flush is triggered after changing the
status of each single sector in the cache line.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Michal Mielewczyk
2f10365086 Flush metadata after setting dirty status of each sector.
After a second dirty write to a cache line which was already dirty, the
metadata flush was not triggered. In case of a dirty shutdown, this led to
data corruption.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2020-01-24 11:27:56 -05:00
Robert Baldyga
7d82f20614 Remove unused include
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Robert Baldyga
4d25bbe4b3 metadata: Relax memory allocation requirements
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2020-01-24 11:21:04 +01:00
Jan Musial
adc52ba71e Detect cache devices that would overflow ocf_cacheline_t
Signed-off-by: Jan Musial <jan.musial@intel.com>
2020-01-21 15:29:24 +01:00
Adam Rutkowski
92b36c3484 Change DIV_ROUND_UP to OCF_DIV_ROUND_UP
This fixes compilation in SPDK env

Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
2019-12-28 18:24:12 -05:00
Robert Baldyga
d1249e5238 Limit number of concurrent io submitted by metadata_io_i_asynch()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-19 16:50:41 +01:00
Michal Mielewczyk
fb95f048fd Revert "Limit number of concurrent io submitted by metadata_io_i_asynch()"
Starting big caches hangs.

This reverts commit c2c9307b9b.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@intel.com>
2019-12-17 10:29:44 -05:00
Ostrokrzew
fc1847cf55 Add reschedule to metadata hash init
Signed-off-by: Ostrokrzew <slawomir.jankowski@intel.com>
2019-12-17 09:56:45 +01:00
Robert Baldyga
c2c9307b9b Limit number of concurrent io submitted by metadata_io_i_asynch()
Signed-off-by: Robert Baldyga <robert.baldyga@intel.com>
2019-12-11 14:57:20 +01:00
Adam Rutkowski
5113542c7f
Merge pull request #297 from mmichal10/pp-params-in-sb
Store PP config params in cache superblock.
2019-10-01 12:32:15 +02:00