Compare commits

201 Commits

Author SHA1 Message Date
Katarzyna Treder
f73a209371 Merge pull request #1644 from katlapinka/kasiat/fuzzy-start-device-fix
Make test_fuzzy_start_cache_device use only required disks
2025-04-14 08:12:07 +02:00
Katarzyna Treder
56ded4c7fd Make test_fuzzy_start_cache_device use only required disks
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-14 08:11:29 +02:00
Katarzyna Treder
3a5df70abe Merge pull request #1643 from katlapinka/kasiat/di-unplug-fix
Fix data integrity unplug test to work with fio newer than 3.30
2025-04-14 08:10:57 +02:00
Katarzyna Treder
289355b83a Fix data integrity unplug test to work with fio newer than 3.30
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-14 08:10:18 +02:00
Robert Baldyga
99af7ee9b5 Merge pull request #1642 from robertbaldyga/xfs-ioclass-fix
Fix io classification for XFS
2025-04-10 09:02:18 +02:00
Katarzyna Treder
b239bdb624 Merge pull request #1594 from katlapinka/kasiat/promotion-tests
Add tests for promotion policy
2025-04-09 13:12:01 +02:00
Katarzyna Treder
e189584557 Add tests for promotion policy
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-09 13:11:37 +02:00
Robert Baldyga
3c19caae1e Merge pull request #1646 from mmichal10/configure-preempt
configure: add preemption_model_*() functions
2025-04-09 11:20:05 +02:00
Michal Mielewczyk
f46de38db0 configure: add preemption_model_*() functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-04-09 10:49:31 +02:00
Robert Baldyga
73cd065bfb Merge pull request #1645 from jfckm/fix-linguist
fix: github-linguist still detects test directory
2025-04-08 13:59:45 +02:00
Jan Musial
46a486a442 fix: github-linguist still detects test directory
Signed-off-by: Jan Musial <jan.musial@huawei.com>
2025-04-08 13:14:36 +02:00
Katarzyna Treder
eee15d9ca4 Merge pull request #1613 from katlapinka/kasiat/test-data-path
Move tests data path to TF
2025-04-08 10:19:16 +02:00
Katarzyna Treder
b290fddceb Move tests data path to TF
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-08 09:38:05 +02:00
Katarzyna Treder
ede64a64f5 Merge pull request #1627 from Kamoppl/kamilg/update_api_march
test-api: api fixes
2025-04-07 15:10:07 +02:00
Kamil Gierszewski
d17157f9dd test-api: api fixes
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-04-07 15:01:35 +02:00
Katarzyna Treder
1e546d664c Merge pull request #1639 from robertbaldyga/fix-fault-injection-test
tests: Fix fault injection test
2025-04-07 14:27:59 +02:00
Robert Baldyga
779d9e96b4 tests: fault_injection: Fix block to request calculation
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-07 13:12:57 +02:00
Robert Baldyga
ceb208eb78 Fix io classification for XFS
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-04 19:46:12 +02:00
Robert Baldyga
0c6a3f699a Merge pull request #1641 from robertbaldyga/update-ocf-20250402
Update OCF submodule
2025-04-02 15:41:14 +02:00
Robert Baldyga
94677ad6bf Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 15:34:30 +02:00
Robert Baldyga
767eea8f1a Merge pull request #1640 from robertbaldyga/kernel-6.14-bdev-fix
Fix bdev handling on kernel v6.14
2025-04-02 14:03:53 +02:00
Robert Baldyga
72ae9b8161 Allocate bdev suitable for submit_bio()
Starting from kernel 6.14, submit_bio() is supported only for non-mq bdevs.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 12:38:27 +02:00
Robert Baldyga
c4a1923215 exp_obj: Add missing error handling
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 12:37:08 +02:00
Robert Baldyga
783e0229a5 tests: fault_injection: Disable udev, purge cache and reset stats
Improve accounting precision by eliminating noise.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-01 23:14:08 +02:00
Robert Baldyga
1f89ce7cfc Merge pull request #1636 from robertbaldyga/update-version-v25.3
Update version to v25.3
2025-03-28 08:50:45 +01:00
Robert Baldyga
7cc1091a6a Update version to v25.3
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-27 20:23:39 +01:00
Robert Baldyga
8c6bf2c117 Merge pull request #1635 from robertbaldyga/kernel-6.14
Support kernel 6.14
2025-03-27 20:15:37 +01:00
Robert Baldyga
6aac52ed22 Support kernel 6.14
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-27 19:57:10 +01:00
Robert Baldyga
dad1e5af16 Merge pull request #1634 from mmichal10/upcate-ocf
Update OCF
2025-03-27 12:30:08 +01:00
Michal Mielewczyk
786651dea8 Update OCF
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-27 10:41:18 +01:00
Robert Baldyga
45a5d8a700 Merge pull request #1633 from robertbaldyga/update-ocf-20250326
Update OCF submodule
2025-03-26 08:27:41 +01:00
Robert Baldyga
84235350a0 Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-26 08:23:07 +01:00
Robert Baldyga
21d017d60b Merge pull request #1632 from mmichal10/preemption
Disable preemption when accessing current cpu id
2025-03-26 08:19:35 +01:00
Michal Mielewczyk
b1f61580fc Disable preemption when accessing current cpu id
Currently Open CAS doesn't support kernels with involuntary preemption
anyways and once we add the support, we can get rid of this workaround

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-26 07:57:53 +01:00
Robert Baldyga
debbfcc0d1 Merge pull request #1631 from robertbaldyga/update-ocf-20250324
Update OCF submodule
2025-03-25 10:16:39 +01:00
Robert Baldyga
d4877904e4 Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-25 09:56:55 +01:00
Robert Baldyga
9ca6d79035 Merge pull request #1626 from mmichal10/duplicated_warning
Fix duplicated warning
2025-03-19 19:20:42 +01:00
Robert Baldyga
9d0a6762c0 Merge pull request #1623 from mmichal10/preemption
Involuntary preemption check
2025-03-19 12:49:17 +01:00
Michal Mielewczyk
0f23ae6950 Makefile: Error handling for failed modprobe
Print an additional error message and remove the installed kernel module

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-19 12:21:57 +01:00
Michal Mielewczyk
aa660ca0a5 Implement involuntary preemption check
Prevent loading the kernel module if the kernel can be involuntarily
preempted

CAS will work if the kernel has been compiled with either
CONFIG_PREEMPT_NONE, CONFIG_PREEMPT_VOLUNTARY, or CONFIG_PREEMPT_DYNAMIC.
If the dynamic configuration is enabled, the kernel must be booted with
preempt=none or preempt=voluntary.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-19 12:21:57 +01:00
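The preemption gate described in this commit can be modeled in a few lines. The sketch below is hypothetical (the real check lives in the Open CAS configure/module scripts, in C and shell); the function name and inputs are illustrative only, but the decision logic follows the commit message: allow CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY outright, and allow CONFIG_PREEMPT_DYNAMIC only when booted with preempt=none or preempt=voluntary.

```python
# Hypothetical sketch of the preemption-model gate described above; the
# actual implementation in the repo is not Python.

def kernel_allows_module_load(config_text: str, cmdline: str) -> bool:
    """Return True if the kernel preemption model permits loading the module."""
    # Collect all CONFIG_* options compiled in as =y.
    opts = {line.split("=")[0] for line in config_text.splitlines()
            if line.endswith("=y")}
    if "CONFIG_PREEMPT_NONE" in opts or "CONFIG_PREEMPT_VOLUNTARY" in opts:
        return True
    if "CONFIG_PREEMPT_DYNAMIC" in opts:
        # With the dynamic model, the boot parameter decides the effective mode.
        for token in cmdline.split():
            if token in ("preempt=none", "preempt=voluntary"):
                return True
    return False
```

A fully preemptible kernel (CONFIG_PREEMPT=y, or dynamic without the boot parameter) is rejected, matching the "prevent loading the kernel module" behavior above.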
Katarzyna Treder
a135a00627 Merge pull request #1602 from katlapinka/kasiat/test-identifier
Add unique test identifier to be able to manage logs
2025-03-19 11:27:20 +01:00
Katarzyna Treder
99b731d180 Add unique test identifier to be able to manage logs
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-19 10:12:08 +01:00
Michal Mielewczyk
c6f2371aea casadm: More specific warn for irresolvable cache
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-18 09:15:13 +01:00
Michal Mielewczyk
973023c459 casadm: Don't try to resolve detached cache path
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-18 09:13:25 +01:00
Robert Baldyga
2f827e2ff0 Merge pull request #1614 from Deixx/gitignore-update-gz
Update .gitignore after manpage installation fix
2025-03-11 11:08:08 +01:00
Katarzyna Treder
4d23c5f586 Merge pull request #1618 from katlapinka/kasiat/refactor-tests-description
Cleanup tests descriptions, prepare steps and values naming PART-1
2025-03-10 14:22:03 +01:00
Katarzyna Treder
476f62b2db Add separate steps for preparing devices, fix indent and move constants
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-10 14:21:15 +01:00
Katarzyna Treder
ba7d907775 Minor test description and names refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-10 14:21:15 +01:00
Robert Baldyga
d4de219fec Merge pull request #1619 from Deixx/io-direction-classifier
New IO class rule `io_direction`
2025-03-06 12:12:05 +01:00
Daniel Madej
4cc7a74534 Add io_direction to random params for IoClass
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
1445982b91 Add io_direction to fuzzy test
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
d3be9444e7 Add test for io_direction IO class rule
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
df813d9978 New IO class rule io_direction
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:19 +01:00
Katarzyna Treder
f37f5afd7b Merge pull request #1596 from Kamoppl/kamilg/update_tests_dec
Update cli help test and remove duplicated test
2025-03-05 12:14:49 +01:00
Kamil Gierszewski
7f2b8fb229 tests: refactor test_cli_help test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Kamil Gierszewski
4c78a9f067 test-api: fix cli msg
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Kamil Gierszewski
f6545f2b06 tests: remove duplicated test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Robert Baldyga
ed113fd6da Merge pull request #1612 from Open-CAS/jfckm-patch-1
chore(GH): Make GH ignore the test/ dir while detecting repo languages
2025-03-03 21:04:02 +01:00
Robert Baldyga
372a29d562 Merge pull request #1549 from robertbaldyga/kernel-6.11
Support kernel 6.13
2025-02-28 16:26:19 +01:00
Katarzyna Treder
69fd4a3872 Merge pull request #1617 from Deixx/rebuild-gz-fix
Add force to gzip commands
2025-02-28 12:39:19 +01:00
Daniel Madej
d562602556 Add force to gzip commands
Without force make shows errors when .gz
files already exist.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-28 12:25:09 +01:00
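The fix is the standard `gzip -f` idiom: without `-f`, gzip refuses to overwrite an existing `.gz`, so a repeated `make install` errors out. A minimal demonstration (file names here are illustrative, not the Makefile's actual targets):

```python
# Demonstrates why the Makefile needs `gzip -f`: with -f, re-compressing
# succeeds even when the .gz from a previous install is still present.
import os
import subprocess
import tempfile

def compress_force(path: str) -> int:
    """Compress `path` with gzip, overwriting an existing `path`.gz."""
    return subprocess.run(["gzip", "-f", path]).returncode

with tempfile.TemporaryDirectory() as d:
    page = os.path.join(d, "manpage.8")  # illustrative name
    # First install: compress the fresh file.
    open(page, "w").write("dummy manpage\n")
    rc1 = compress_force(page)
    # Re-install: manpage.8.gz already exists; -f makes this succeed anyway.
    open(page, "w").write("dummy manpage\n")
    rc2 = compress_force(page)
```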
Katarzyna Treder
2cc49a1cd0 Merge pull request #1615 from katlapinka/kasiat/attach-detach-tests
Introduce tests for cache attach/detach feature
2025-02-28 12:18:44 +01:00
Katarzyna Treder
d973b3850e Introduce tests for cache attach/detach feature
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-02-28 12:18:02 +01:00
Katarzyna Treder
3893fc2aa7 Merge pull request #1616 from Kamoppl/kamilg/update_checksec_path
Kamilg/update checksec path
2025-02-28 09:44:16 +01:00
Kamil Gierszewski
cef43f7778 tests: fix checksec test formating
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-28 02:27:55 +01:00
Kamil Gierszewski
8544e28788 tests: update test script path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-28 02:27:55 +01:00
Robert Baldyga
dd20fcbc8e Merge pull request #1590 from robertbaldyga/enable-attach-detach
Revert "Disable cache attach and detach"
2025-02-27 15:50:07 +01:00
Robert Baldyga
30d0cd0df0 Merge pull request #1565 from mmichal10/percpu-refcnt
Percpu refcnt
2025-02-27 15:14:22 +01:00
Daniel Madej
3e1dd26909 Update .gitignore after manpage installation fix
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-27 09:45:46 +01:00
Jan Musiał
78be601b1b chore(GH): Make GH ignore the test/ dir while detecting repo languages
Signed-off-by: Jan Musial <jfckm@pm.me>
2025-02-25 18:28:31 +01:00
Michal Mielewczyk
5acc1a3cf2 update ocf: refcnt
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:23:41 +01:00
Jan Musial
27eed48976 Per-cpu reference counters
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Jan Musial <jan.musial@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:21:02 +01:00
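The per-CPU counter idea behind this commit can be sketched conceptually (the real code is C inside the kernel module; this Python class only models the scheme): each CPU increments its own slot so get/put never contend across CPUs, and the aggregate is computed only on the rare slow path by summing all slots.

```python
# Conceptual model of a per-CPU reference counter; illustrative only,
# not the repo's actual (C) implementation.

class PerCpuRefcnt:
    def __init__(self, nr_cpus: int):
        self.slots = [0] * nr_cpus

    def get(self, cpu: int):
        # Fast path: touch only the local CPU's counter, no shared cacheline.
        self.slots[cpu] += 1

    def put(self, cpu: int):
        # A put may run on a different CPU than the matching get; individual
        # slots can go negative -- only the sum is meaningful.
        self.slots[cpu] -= 1

    def value(self) -> int:
        # Slow path (e.g. teardown): sum across all CPUs.
        return sum(self.slots)
```

The trade-off is the classic one: cheap, contention-free updates in exchange for an O(nr_cpus) read of the total.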
Jan Musial
4f43829e91 Implement env_atomic64_dec_return
Signed-off-by: Jan Musial <jan.musial@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:19:21 +01:00
Robert Baldyga
690cebae65 Merge pull request #1603 from Deixx/attach-error-msg
Fix error messages for metadata found during attach
2025-02-25 16:01:12 +01:00
Katarzyna Treder
d4f709ab9d Merge pull request #1611 from Kamoppl/kamilg/remove_memory_barrier
Kamilg/remove memory barrier check
2025-02-25 12:42:41 +01:00
Kamil Gierszewski
8c32742f8c github-actions: remove memory barrier warning
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-25 11:53:09 +01:00
Daniel Madej
37431273ea Add error message in test api
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-24 12:00:06 +01:00
Daniel Madej
69cdb458d2 Error msg for metadata found during attach
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-24 12:00:06 +01:00
Robert Baldyga
bafd1e79c4 Merge pull request #1608 from Deixx/gitignore-update
Added build/configuration output files to .gitignore
2025-02-21 11:00:26 +01:00
Robert Baldyga
c4b862a3e0 Merge pull request #1607 from robertbaldyga/fix-manpage
Fix manpage installation
2025-02-06 11:32:53 +01:00
Daniel Madej
4b411f837e Added build/configuration output files to .gitignore
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-06 10:40:33 +01:00
Katarzyna Treder
69a4da4b38 Merge pull request #1595 from Kamoppl/kamilg/update_api_dec
Few api fixes/improvements
2025-02-06 07:17:32 +01:00
Rafal Stefanowski
7ee78ac51e Kernel 6.13: Add setting queue limits of exported object
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
dbaeb21cb3 Kernel 6.13: Introduce cas_queue_limits_is_misaligned()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
6a275773ce Kernel 6.13: Introduce cas_queue_max_discard_sectors()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
e5607fe9dd Kernel 6.13: Introduce cas_queue_set_nonrot()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
3badd8b621 Kernel 6.13: Add another definition of cas_set_queue_flush_fua()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
efa2c0ad5e Kernel 6.13: Add another definition of cas_bd_get_next_part()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
eff9ad3c9d Kernel 6.13: Rearrange definitions of cas_copy_queue_limits()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
14f375f135 Kernel 6.13: Expand debug macros
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
7dcb9c92fe Fix checking for NULL instead of error pointer
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Robert Baldyga
52d0ff4c7b Merge pull request #1587 from Deixx/ioclass-0
Informative error for incorrect IO class 0 name
2025-02-04 16:46:40 +01:00
Robert Baldyga
0f6c122e17 Fix manpage installation
gzip manpage properly and update mandb after its installation.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-01-27 22:20:13 +01:00
Kamil Gierszewski
bf7c72ccba test-api: add a check for each stat parsing
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
7b52a2fc00 test-api: add attach cli msg
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
8ad0193a84 test-api: refactor cache imports
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
0ab7c2ca36 test-api: fix get status method
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
d3f4d80612 test-api: add attach/detach methods to Cache
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
c82d52bb47 test-api: add methods to statistics
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
537c9656b8 test-api: rename stat filter
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
1b52345732 test-api: fix core pool init
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
3606472e60 test-api: refactor to fix circular dependencies
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
0e24e52686 test-api: update parser
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
8cd3f4a631 test-api: add Byte unit
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
db3dc068f8 test-api: refactor casadm to use TestRun cache/core list
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
0f645ac10b test-api: Change Cache init to force use of the cache_id instead of cache_device
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
9fb333a73f test-api: minor refactors
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
f0753339dd test-api: change default file path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
7adc356889 conftest: move import to the top of file
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:23:52 +01:00
Kamil Gierszewski
bef461ccc2 conftest: add TestRun cache/core list
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:43:18 +01:00
Kamil Gierszewski
e1f8426527 conftest: fix typo
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:43:18 +01:00
Kamil Gierszewski
76336c3ef5 conftest: add cleanup after drbd test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:42:20 +01:00
Kamil Gierszewski
f7539b46a1 conftest: add create/destroy temporary directory in conftest
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-02 03:08:50 +01:00
Katarzyna Treder
1934e801e7 Merge pull request #1599 from katlapinka/kasiat/tf-refactor
Tests and CAS API fixes after TF refactor
2024-12-31 12:44:52 +01:00
Katarzyna Treder
d4ccac3e75 Refactor trim stress test to use fio instead of vdbench
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-31 12:06:27 +01:00
Katarzyna Treder
e740ce377f Fix imports
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-31 12:06:18 +01:00
Katarzyna Treder
f7e7d3aa7f Disk tools and fs tools refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:58:26 +01:00
Katarzyna Treder
940990e37a Iostat refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:12:54 +01:00
Katarzyna Treder
70defbdf0d Move is_kernel_module_loaded to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:12:16 +01:00
Katarzyna Treder
58d89121ad Fix names: rename types to type_def
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:10:46 +01:00
Katarzyna Treder
e0f6d58d80 Disk finder refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 09:04:46 +01:00
Katarzyna Treder
8a5d531a32 OS tools refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 07:58:50 +01:00
Katarzyna Treder
3e67a8c0f5 Rename systemd to systemctl and move it to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:46:53 +01:00
Katarzyna Treder
a11e3ca890 Remove kedr and kedr tests
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:39:56 +01:00
Katarzyna Treder
c8ce05617d Move scsi_debug to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:36:35 +01:00
Katarzyna Treder
b724419a4f Move git to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:32:50 +01:00
Katarzyna Treder
4e8ea659da Move fstab to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:28:35 +01:00
Katarzyna Treder
241a0c545a Remove generator from test utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:25:27 +01:00
Katarzyna Treder
0cc3b3270d Move dmesg to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:06:08 +01:00
Katarzyna Treder
4dca1c3c00 Move linux command and wait method to common tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:58:04 +01:00
Katarzyna Treder
cde7a3af16 Move error device to storage devices
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:47:18 +01:00
Katarzyna Treder
0be330ac1d Move checksec to scripts
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:39:16 +01:00
Katarzyna Treder
5121831bd8 Move singleton to common utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:24:57 +01:00
Katarzyna Treder
ee8b7b757f Move retry to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:54:44 +01:00
Katarzyna Treder
4a6d6d39cd Move asynchronous to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:50:43 +01:00
Katarzyna Treder
9460151ee5 Move output to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:31:14 +01:00
Katarzyna Treder
81e792be99 Move Time to types
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:20:31 +01:00
Katarzyna Treder
d4e562caf9 Move size.py to types
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:18:38 +01:00
Robert Baldyga
75038692cd Revert "Disable cache attach and detach"
This reverts commit f34328adf2.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-11-27 13:41:00 +01:00
Katarzyna Treder
baa1f37432 Merge pull request #1589 from katlapinka/kasiat/initramfs-tests-update
Add initramfs update to LVM tests and conftest
2024-11-27 10:57:30 +01:00
Katarzyna Treder
809a9e407e TF update
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-27 10:26:35 +01:00
Daniel Madej
f15d3238ad Informative error for incorrect IO class 0 name
Instead of generic 'Invalid input parameter'

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-11-26 15:08:08 +01:00
Daniel Madej
0461de9e24 Fix typos
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-11-26 15:08:07 +01:00
Katarzyna Treder
3953e8b0f8 Add initramfs update to LVM tests and conftest
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-25 14:46:37 +01:00
Katarzyna Treder
c3bb599f0e Merge pull request #1576 from Kamoppl/kamilg/speed_up_TF
speed up tests/conftest
2024-11-25 14:23:08 +01:00
Kamil Gierszewski
e54732ef81 test-conftest: move dict creation outside loop function
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
677a5019fb test-conftest: Don't clean-up drives that won't be used
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
bf7711354d test-conftest: More readable RAID teardown
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
b8ccf403f0 test-conftest: Kill IO faster in prepare/teardown
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
720475f85c tests: update_test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
ed85411750 test-conftest: Use cached device_ids + fix posix path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
4626d87471 test-conftest: Don't prepare disks if test doesn't use them
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
92a8424dd0 test-conftest: reformat conftest
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Katarzyna Treder
c142610174 Merge pull request #1580 from katlapinka/kasiat/fix-lvm-tests
Fix tests after LVM API refactor
2024-11-13 13:29:00 +01:00
Katarzyna Treder
422a027f82 TF update
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-13 13:28:16 +01:00
Katarzyna Treder
6e3ac806b7 Fix tests after LVM API refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-13 13:28:16 +01:00
Katarzyna Treder
eb50ee5f5d Merge pull request #1524 from katlapinka/kasiat/loading-corrupted-metadata
Add test for loading corrupted metadata
2024-11-12 12:33:42 +01:00
Katarzyna Treder
cc0f4b1c8f Add test for loading corrupted metadata
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-12 12:17:26 +01:00
Robert Baldyga
aafc6b49a6 Merge pull request #1510 from Kamoppl/kamilg/add_checkpatch
github-actions: add checkpatch
2024-11-05 12:51:03 +01:00
Kamil Gierszewski
c7601847a1 github-actions: add checkpatch
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-05 11:46:54 +01:00
Katarzyna Treder
37d91fdbc2 Merge pull request #1578 from Deixx/mtab-fix
Fix for mtab changes
2024-10-31 11:04:26 +01:00
Daniel Madej
545a07098c Fix for mtab changes
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-31 11:02:31 +01:00
Katarzyna Treder
82ce9342a2 Merge pull request #1573 from Kamoppl/kamilg/fix_bugs
Kamilg/fix bugs
2024-10-30 14:16:16 +01:00
Kamil Gierszewski
c15b4d580b tests: Fix after changing function name
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:38 +01:00
Kamil Gierszewski
35850c7d9a test-api: adjust api to handle inactive core devices + add detached/inactive cores getter
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:38 +01:00
Kamil Gierszewski
908672fd66 test-api: add string representation of SeqCutOffPolicy
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:37 +01:00
Kamil Gierszewski
4ebc00bac8 tests: fix fault injestion interrupt test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:37 +01:00
Kamil Gierszewski
9ab60fe679 tests: change path type in test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:28:29 +01:00
Kamil Gierszewski
421c0e4641 test-api: fix stat type
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:28:29 +01:00
Robert Baldyga
e8b1c3ce81 Merge pull request #1514 from Deixx/mtab-check-optional
Handle missing /etc/mtab and modify output
2024-10-29 10:47:00 +01:00
Daniel Madej
0c0b10535e [tests] Update CLI messages and test
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
Daniel Madej
f11f14d31a Refactor mounted device checks
Calling functions now print error messages.
All the mounted devices are printed (not just the first one).

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
Daniel Madej
a2f3cc1f4a Mtab check optional
There are situations when /etc/mtab is not present in the
system (e.g. in certain container images). This blocks
stop/remove operations. With making this check optional
the duty of checking mounts falls to kernel.
Test modified to check operations with and without mtab.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
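The optional-mtab behavior this commit describes can be sketched as follows. This is a hypothetical illustration (function names are invented): when /etc/mtab is absent, the userspace mount check is skipped and the kernel is left to refuse stopping a busy device.

```python
# Hypothetical sketch of the optional mtab check described above.
import os

def mounted_devices(mtab_path: str = "/etc/mtab"):
    """Return devices listed in mtab, or None when the check must be skipped."""
    if not os.path.exists(mtab_path):
        return None  # mtab absent (e.g. container image): defer to the kernel
    with open(mtab_path) as f:
        return [line.split()[0] for line in f if line.strip()]

def parse_mtab(text: str):
    """Parse mtab-format text into (device, mountpoint) pairs."""
    return [tuple(line.split()[:2]) for line in text.splitlines() if line.strip()]
```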
Robert Baldyga
588b7756a9 Merge pull request #1574 from robertbaldyga/exp-obj-serial
Introduce exp_obj serial
2024-10-25 14:58:47 +02:00
Robert Baldyga
b6f604d4a9 Introduce exp_obj serial
This is meant to be used by lvm2 to recognize which one of the stacked
devices should be used (be it backend device, or one of the bottom levels
in multi-level cache configuration).

Signed-off-by: Robert Baldyga <robert.baldyga@open-cas.com>
2024-10-19 21:53:43 +02:00
Katarzyna Treder
7a3b0672f2 Merge pull request #1572 from katlapinka/kasiat/update-tf
Update TF submodule
2024-10-15 12:03:10 +02:00
Katarzyna Treder
7c9c9a54e2 Update TF submodule
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-15 11:15:26 +02:00
Katarzyna Treder
dfa36541f6 Merge pull request #1562 from Deixx/concurrent-flush
Small update to test_concurrent_caches_flush
2024-10-15 09:51:23 +02:00
Daniel Madej
75fd39ed7b Update/fix to test_concurrent_caches_flush
No need to run fio in background. This fixes the issue that
one of the tests didn't wait for fio to finish before
checking stats.
More informative error messages.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-15 09:46:36 +02:00
Katarzyna Treder
bffe87d071 Merge pull request #1560 from katlapinka/kasiat/test-security-fixes
Small fixes for security tests
2024-10-15 09:37:55 +02:00
Katarzyna Treder
20ee2fda1f Small fixes in security tests
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-15 09:36:36 +02:00
Katarzyna Treder
e7f14f7d00 Merge pull request #1538 from Kamoppl/kamilg/fix_scope_bugs_v4
Kamilg/fix scope bugs v4
2024-10-11 11:26:58 +02:00
Kamil Gierszewski
5cada7a0ec tests: add disabling udev in fault injection test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:53 +02:00
Kamil Gierszewski
1c26de3e7f tests: update getting metadata size on device in memory consumption test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:53 +02:00
Kamil Gierszewski
a70500ee44 tests: fix init test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:52 +02:00
Kamil Gierszewski
2f188f9766 tests: add dirty data check to acp test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:52 +02:00
Kamil Gierszewski
0fdd4933a2 tests-api: add statistics parse for metadata in GiB
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:51 +02:00
Kamil Gierszewski
6ce978f317 tests: fix io class tests
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:51 +02:00
Kamil Gierszewski
cf68fb226b tests: fix dmesg getting in test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:50 +02:00
Kamil Gierszewski
004062d9fd tests: fix test file path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:50 +02:00
Kamil Gierszewski
4b74c65969 tests: fix checksec permissions
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:49 +02:00
Kamil Gierszewski
51962e4684 tests: refactor test_inactive_cores
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:49 +02:00
Kamil Gierszewski
daea1a433a tests: fix test_simulation_startup
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Kamil Gierszewski
c32650af0b tests: fix test recovery
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Kamil Gierszewski
39afdaa6c1 test-api: fix cli help message
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Katarzyna Treder
c1ad2a8584 Merge pull request #1526 from katlapinka/kasiat/ioclass-file-size-core
Add tests for io classification statistics per core
2024-10-10 20:46:21 +02:00
Katarzyna Treder
375fce5a19 Add tests for io classification statistics per core
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-10 20:45:24 +02:00
Katarzyna Treder
df5c0c7d4c Merge pull request #1501 from Kamoppl/kamilg/add_old_tests
tests: add tests for read hit errors
2024-10-09 14:52:22 +02:00
Kamil Gierszewski
625cec7838 tests: update tests
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-09 14:47:48 +02:00
Robert Baldyga
f5ee206fb9 Merge pull request #1564 from robertbaldyga/readme-v24.9
README: Recommend the latest release
2024-10-08 14:47:09 +02:00
Robert Baldyga
0e46d30281 README: Recommend the latest release
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-08 14:38:19 +02:00
224 changed files with 5917 additions and 2556 deletions

.checkpatch.conf Normal file

@@ -0,0 +1,28 @@
--max-line-length=80
--no-tree
--ignore AVOID_BUG
--ignore COMMIT_MESSAGE
--ignore FILE_PATH_CHANGES
--ignore PREFER_PR_LEVEL
--ignore SPDX_LICENSE_TAG
--ignore SPLIT_STRING
--ignore MEMORY_BARRIER
--exclude .github
--exclude casadm
--exclude configure.d
--exclude doc
--exclude ocf
--exclude test
--exclude tools
--exclude utils
--exclude .gitignore
--exclude .gitmodules
--exclude .pep8speaks.yml
--exclude LICENSE
--exclude Makefile
--exclude README.md
--exclude configure
--exclude requirements.txt
--exclude version

.gitattributes vendored Normal file

@@ -0,0 +1 @@
test/** -linguist-detectable

.github/workflows/checkpatch.yml vendored Normal file

@@ -0,0 +1,15 @@
name: checkpatch review
on: [pull_request]
jobs:
my_review:
name: checkpatch review
runs-on: ubuntu-latest
steps:
- name: 'Calculate PR commits + 1'
run: echo "PR_FETCH_DEPTH=$(( ${{ github.event.pull_request.commits }} + 1 ))" >> $GITHUB_ENV
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: ${{ env.PR_FETCH_DEPTH }}
- name: Run checkpatch review
uses: webispy/checkpatch-action@v9

.gitignore vendored

@@ -11,8 +11,15 @@
tags
Module.symvers
Module.markers
*.mod
*.mod.c
*.out
modules.order
__pycache__/
*.py[cod]
*$py.class
*.gz
casadm/casadm
modules/include/ocf
modules/generated_defines.h

View File

@@ -25,14 +25,14 @@ Open CAS uses Safe string library (safeclib) that is MIT licensed.
We recommend using the latest version, which contains all the important fixes
and performance improvements. Bugfix releases are guaranteed only for the
latest major release line (currently 22.6.x).
latest major release line (currently 24.9.x).
To download the latest Open CAS Linux release run following commands:
```
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v22.6.3/open-cas-linux-22.06.3.0725.release.tar.gz
tar -xf open-cas-linux-22.06.3.0725.release.tar.gz
cd open-cas-linux-22.06.3.0725.release/
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v24.9/open-cas-linux-24.09.0.0900.release.tar.gz
tar -xf open-cas-linux-24.09.0.0900.release.tar.gz
cd open-cas-linux-24.09.0.0900.release/
```
Alternatively, if you want recent development (unstable) version, you can clone GitHub repository:

View File

@@ -1,5 +1,6 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -121,7 +122,7 @@ LDFLAGS = -z noexecstack -z relro -z now -pie -pthread -lm
# Targets
#
all: sync
all: sync manpage
$(MAKE) build
build: $(TARGETS)
@@ -156,10 +157,14 @@ endif
-include $(addprefix $(OBJDIR),$(OBJS:.o=.d))
manpage:
gzip -k -f $(TARGET).8
clean:
@echo " CLEAN "
@rm -f *.a $(TARGETS)
@rm -f $(shell find -name \*.d) $(shell find -name \*.o)
@rm -f $(TARGET).8.gz
distclean: clean
@@ -168,11 +173,12 @@ install: install_files
install_files:
@echo "Installing casadm"
@install -m 755 -D $(TARGET) $(DESTDIR)$(BINARY_PATH)/$(TARGET)
@install -m 644 -D $(TARGET).8 $(DESTDIR)/usr/share/man/man8/$(TARGET).8
@install -m 644 -D $(TARGET).8.gz $(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz
@mandb -q
uninstall:
@echo "Uninstalling casadm"
$(call remove-file,$(DESTDIR)$(BINARY_PATH)/$(TARGET))
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8)
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz)
.PHONY: clean distclean all sync build install uninstall

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -45,8 +45,8 @@
#define CORE_ADD_MAX_TIMEOUT 30
int is_cache_mounted(int cache_id);
int is_core_mounted(int cache_id, int core_id);
bool device_mounts_detected(const char *pattern, int cmplen);
void print_mounted_devices(const char *pattern, int cmplen);
/* KCAS_IOCTL_CACHE_CHECK_DEVICE wrapper */
int _check_cache_device(const char *device_path,
@@ -70,7 +70,7 @@ static const char *core_states_name[] = {
#define STANDBY_DETACHED_STATE "Standby detached"
#define CACHE_STATE_LENGHT 20
#define CACHE_STATE_LENGTH 20
#define CAS_LOG_FILE "/var/log/opencas.log"
#define CAS_LOG_LEVEL LOG_INFO
@@ -1025,6 +1025,22 @@ static int _start_cache(uint16_t cache_id, unsigned int cache_init,
cache_device);
} else {
print_err(cmd.ext_err_code);
if (OCF_ERR_METADATA_FOUND == cmd.ext_err_code) {
/* print instructions specific for start/attach */
if (start) {
cas_printf(LOG_ERR,
"Please load cache metadata using --load"
" option or use --force to\n discard on-disk"
" metadata and start fresh cache instance.\n"
);
} else {
cas_printf(LOG_ERR,
"Please attach another device or use --force"
" to discard on-disk metadata\n"
" and attach this device to cache instance.\n"
);
}
}
}
return FAILURE;
}
@@ -1119,8 +1135,16 @@ int stop_cache(uint16_t cache_id, int flush)
int status;
/* Don't stop instance with mounted filesystem */
if (is_cache_mounted(cache_id) == FAILURE)
int cmplen = 0;
char pattern[80];
/* verify if any core (or core partition) for this cache is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-", cache_id) - 1;
if (device_mounts_detected(pattern, cmplen)) {
cas_printf(LOG_ERR, "Can't stop cache instance %d due to mounted devices:\n", cache_id);
print_mounted_devices(pattern, cmplen);
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
@@ -1803,58 +1827,52 @@ int add_core(unsigned int cache_id, unsigned int core_id, const char *core_devic
return SUCCESS;
}
int _check_if_mounted(int cache_id, int core_id)
bool device_mounts_detected(const char *pattern, int cmplen)
{
FILE *mtab;
struct mntent *mstruct;
char dev_buf[80];
int difference = 0, error = 0;
if (core_id >= 0) {
/* verify if specific core is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-%d", cache_id, core_id);
} else {
/* verify if any core from given cache is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-", cache_id);
}
int no_match = 0, error = 0;
mtab = setmntent("/etc/mtab", "r");
if (!mtab)
{
cas_printf(LOG_ERR, "Error while accessing /etc/mtab\n");
return FAILURE;
if (!mtab) {
/* if /etc/mtab not found then the kernel will check for mounts */
return false;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, PATH_MAX, dev_buf, &difference);
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK)
return FAILURE;
if (!difference) {
if (core_id<0) {
cas_printf(LOG_ERR,
"Can't stop cache instance %d. Device %s is mounted!\n",
cache_id, mstruct->mnt_fsname);
} else {
cas_printf(LOG_ERR,
"Can't remove core %d from cache %d."
" Device %s is mounted!\n",
core_id, cache_id, mstruct->mnt_fsname);
}
return FAILURE;
}
return false;
if (no_match)
continue;
return true;
}
return SUCCESS;
return false;
}
int is_cache_mounted(int cache_id)
void print_mounted_devices(const char *pattern, int cmplen)
{
return _check_if_mounted(cache_id, -1);
}
FILE *mtab;
struct mntent *mstruct;
int no_match = 0, error = 0;
int is_core_mounted(int cache_id, int core_id)
{
return _check_if_mounted(cache_id, core_id);
mtab = setmntent("/etc/mtab", "r");
if (!mtab) {
/* should exist, but if /etc/mtab not found we cannot print mounted devices */
return;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK || no_match)
continue;
cas_printf(LOG_ERR, "%s\n", mstruct->mnt_fsname);
}
}
int remove_core(unsigned int cache_id, unsigned int core_id,
@@ -1864,7 +1882,23 @@ int remove_core(unsigned int cache_id, unsigned int core_id,
struct kcas_remove_core cmd;
/* don't even attempt ioctl if filesystem is mounted */
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
bool mounts_detected = false;
int cmplen = 0;
char pattern[80];
/* verify if specific core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%d", cache_id, core_id);
mounts_detected = device_mounts_detected(pattern, cmplen);
if (!mounts_detected) {
/* verify if any partition of the core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%dp", cache_id, core_id) - 1;
mounts_detected = device_mounts_detected(pattern, cmplen);
}
if (mounts_detected) {
cas_printf(LOG_ERR, "Can't remove core %d from "
"cache %d due to mounted devices:\n",
core_id, cache_id);
print_mounted_devices(pattern, cmplen);
return FAILURE;
}
@@ -1929,11 +1963,6 @@ int remove_inactive_core(unsigned int cache_id, unsigned int core_id,
int fd = 0;
struct kcas_remove_inactive cmd;
/* don't even attempt ioctl if filesystem is mounted */
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
return FAILURE;
@@ -2189,7 +2218,7 @@ int partition_list(unsigned int cache_id, unsigned int output_format)
fclose(intermediate_file[1]);
if (!result && stat_format_output(intermediate_file[0], stdout,
use_csv?RAW_CSV:TEXT)) {
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
result = FAILURE;
}
fclose(intermediate_file[0]);
@@ -2314,6 +2343,10 @@ static inline int partition_get_line(CSVFILE *csv,
}
strncpy_s(cnfg->info[part_id].name, sizeof(cnfg->info[part_id].name),
name, strnlen_s(name, sizeof(cnfg->info[part_id].name)));
if (0 == part_id && strcmp(name, "unclassified")) {
cas_printf(LOG_ERR, "IO class 0 must have the default name 'unclassified'\n");
return FAILURE;
}
/* Validate Priority*/
*error_col = part_csv_coll_prio;
@@ -2401,7 +2434,7 @@ int partition_get_config(CSVFILE *csv, struct kcas_io_classes *cnfg,
return FAILURE;
} else {
cas_printf(LOG_ERR,
"I/O error occured while reading"
"I/O error occurred while reading"
" IO Classes configuration file"
" supplied.\n");
return FAILURE;
@@ -2648,7 +2681,7 @@ void *list_printout(void *ctx)
struct list_printout_ctx *spc = ctx;
if (stat_format_output(spc->intermediate,
spc->out, spc->type)) {
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
spc->result = FAILURE;
} else {
spc->result = SUCCESS;
@@ -2787,20 +2820,24 @@ int list_caches(unsigned int list_format, bool by_id_path)
for (i = 0; i < caches_count; ++i) {
curr_cache = caches[i];
char status_buf[CACHE_STATE_LENGHT];
char status_buf[CACHE_STATE_LENGTH];
const char *tmp_status;
char mode_string[12];
char exp_obj[32];
char cache_ctrl_dev[MAX_STR_LEN] = "-";
float cache_flush_prog;
float core_flush_prog;
bool cache_device_detached;
bool cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)));
if (!by_id_path && !curr_cache->standby_detached) {
if (!by_id_path && !cache_device_detached) {
if (get_dev_path(curr_cache->device, curr_cache->device,
sizeof(curr_cache->device))) {
cas_printf(LOG_WARNING, "WARNING: Cannot resolve path "
"to cache. By-id path will be shown for that cache.\n");
cas_printf(LOG_WARNING,
"WARNING: Cannot resolve path to "
"cache %d. By-id path will be shown "
"for that cache.\n", curr_cache->id);
}
}
@@ -2826,11 +2863,6 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
}
cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)))
;
fprintf(intermediate_file[1], TAG(TREE_BRANCH)
"%s,%u,%s,%s,%s,%s\n",
"cache", /* type */
@@ -2854,7 +2886,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
if (core_flush_prog || cache_flush_prog) {
snprintf(status_buf, CACHE_STATE_LENGHT,
snprintf(status_buf, CACHE_STATE_LENGTH,
"%s (%3.1f %%)", "Flushing", core_flush_prog);
tmp_status = status_buf;
} else {
@@ -2882,7 +2914,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
pthread_join(thread, 0);
if (printout_ctx.result) {
result = 1;
cas_printf(LOG_ERR, "An error occured during list formatting.\n");
cas_printf(LOG_ERR, "An error occurred during list formatting.\n");
}
fclose(intermediate_file[0]);
@@ -3016,7 +3048,7 @@ int zero_md(const char *cache_device, bool force)
}
close(fd);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped succesfully from device '%s'.\n", cache_device);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped successfully from device '%s'.\n", cache_device);
return SUCCESS;
}
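The refactor above replaces the per-core `_check_if_mounted()` with a prefix-based scan of `/etc/mtab`, so that mounted partitions of a cached volume are caught as well as the volume itself. A minimal standalone sketch of the same idea (hypothetical helper name and a parameterized mtab path for illustration; plain `strncmp` stands in for the safeclib `strcmp_s` used by casadm):

```c
#include <mntent.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical userspace analogue of device_mounts_detected():
 * returns true if any mounted device listed in the mtab-format file
 * at `mtab_path` starts with the first `cmplen` bytes of `pattern`. */
static bool mounts_with_prefix(const char *mtab_path,
                               const char *pattern, size_t cmplen)
{
    FILE *mtab = setmntent(mtab_path, "r");
    struct mntent *ent;
    bool found = false;

    /* No mtab: skip the userspace check (the kernel still refuses
     * to tear down a device with a mounted filesystem). */
    if (!mtab)
        return false;

    while ((ent = getmntent(mtab)) != NULL) {
        /* mnt_fsname is the /dev/... block device path, not the
         * mountpoint, so a prefix match covers all partitions. */
        if (strncmp(ent->mnt_fsname, pattern, cmplen) == 0) {
            found = true;
            break;
        }
    }
    endmntent(mtab);
    return found;
}
```

The callers in the diff build the prefix with `snprintf()` and derive the comparison length from its return value, e.g. `snprintf(pattern, sizeof(pattern), "/dev/cas%d-", cache_id)` when stopping a whole cache instance.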

View File

@@ -2237,7 +2237,7 @@ static cli_command cas_commands[] = {
.options = attach_cache_options,
.command_handle_opts = start_cache_command_handle_option,
.handle = handle_cache_attach,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.flags = CLI_SU_REQUIRED,
.help = NULL,
},
{
@@ -2247,7 +2247,7 @@ static cli_command cas_commands[] = {
.options = detach_options,
.command_handle_opts = command_handle_option,
.handle = handle_cache_detach,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.flags = CLI_SU_REQUIRED,
.help = NULL,
},
{

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -151,9 +151,7 @@ struct {
},
{
OCF_ERR_METADATA_FOUND,
"Old metadata found on device.\nPlease load cache metadata using --load"
" option or use --force to\n discard on-disk metadata and"
" start fresh cache instance.\n"
"Old metadata found on device"
},
{
OCF_ERR_SUPERBLOCK_MISMATCH,

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -10,16 +10,19 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/blkdev.h"
if compile_module $cur_name "struct block_device bd; bdev_partno;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct block_device bd; bd = *disk_part_iter_next(NULL);" "linux/blk_types.h" "linux/genhd.h"
elif compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct hd_struct hd; hd = *disk_part_iter_next(NULL);" "linux/genhd.h"
elif compile_module $cur_name "struct block_device bd; bd = *disk_part_iter_next(NULL);" "linux/blk_types.h" "linux/genhd.h"
then
echo $cur_name "3" >> $config_file_path
elif compile_module $cur_name "struct hd_struct hd; hd = *disk_part_iter_next(NULL);" "linux/genhd.h"
then
echo $cur_name "4" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@@ -37,7 +40,7 @@ apply() {
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = part->bd_partno)) {
if ((part_no = bdev_partno(part))) {
break;
}
}
@@ -47,6 +50,23 @@ apply() {
"2")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
int part_no = 0;
struct gendisk *disk = bd->bd_disk;
struct block_device *part;
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = part->bd_partno)) {
break;
}
}
return part_no;
}" ;;
"3")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
int part_no = 0;
struct gendisk *disk = bd->bd_disk;
@@ -66,7 +86,7 @@ apply() {
return part_no;
}" ;;
"3")
"4")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{

View File

@@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -22,7 +23,7 @@ apply() {
case "$1" in
"1")
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void _cas_cleanup_disk(struct gendisk *gd)
{
blk_cleanup_disk(gd);
}"
@@ -31,7 +32,7 @@ apply() {
"2")
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void _cas_cleanup_disk(struct gendisk *gd)
{
put_disk(gd);
}"

View File

@@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -33,7 +34,8 @@ apply() {
add_function "
static inline void cas_cleanup_queue(struct request_queue *q)
{
blk_mq_destroy_queue(q);
if (queue_is_mq(q))
blk_mq_destroy_queue(q);
}"
;;

View File

@@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -22,6 +23,11 @@ check() {
output=$((output+2))
fi
if compile_module $cur_name "BLK_MQ_F_SHOULD_MERGE ;" "linux/blk-mq.h"
then
output=$((output+4))
fi
echo $cur_name $output >> $config_file_path
}
@@ -42,6 +48,14 @@ apply() {
else
add_define "CAS_BLK_MQ_F_BLOCKING 0"
fi
if ((arg & 4))
then
add_define "CAS_BLK_MQ_F_SHOULD_MERGE \\
BLK_MQ_F_SHOULD_MERGE"
else
add_define "CAS_BLK_MQ_F_SHOULD_MERGE 0"
fi
}
conf_run $@

View File

@@ -0,0 +1,45 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "page_folio((struct page *)NULL);" "linux/page-flags.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
struct folio *folio = page_folio(page);
return folio->mapping;
}" ;;
"2")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
if (PageCompound(page))
return NULL;
return page->mapping;
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -0,0 +1,52 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "preempt_model_voluntary();" "linux/preempt.h" &&
compile_module $cur_name "preempt_model_none();" "linux/preempt.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return preempt_model_voluntary();
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return preempt_model_none();
}" ;;
"2")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return IS_ENABLED(CONFIG_PREEMPT_NONE);
}" ;;
*)
exit 1
esac
}
conf_run $@
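The fallback branch ("2") leans on `IS_ENABLED()`, which evaluates to 1 or 0 even when the `CONFIG_*` macro is not defined at all. A simplified reimplementation of the preprocessor trick behind it (the real macros live in the kernel's include/linux/kconfig.h; this sketch covers only the builtin `=y` case, not `=m`):

```c
/* Simplified version of the kernel's IS_ENABLED() machinery.
 * When `option` is defined as 1, __ARG_PLACEHOLDER_1 expands to "0,"
 * so __take_second_arg() picks the 1; when `option` is undefined,
 * the pasted placeholder token is not a macro and 0 is picked. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
/* Extra indirection so the argument is macro-expanded before pasting: */
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

/* Example config symbol, as it would appear in autoconf.h: */
#define CONFIG_DEMO_FEATURE 1
```

This is why `IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)` is a safe drop-in on kernels that predate `preempt_model_voluntary()`: it compiles to a constant regardless of which preemption model the kernel was configured with.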

View File

@@ -0,0 +1,48 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_queue_max_discard_sectors(NULL, 0);" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
blk_queue_max_discard_sectors(q, max_discard_sectors);
}" ;;
"2")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
struct queue_limits *lim = &q->limits;
lim->max_hw_discard_sectors = max_discard_sectors;
lim->max_discard_sectors =
min(max_discard_sectors, lim->max_user_discard_sectors);
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -12,18 +12,18 @@ check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
if compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
then
if compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "3" >> $config_file_path
echo $cur_name "2" >> $config_file_path
fi
elif compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "4" >> $config_file_path
else
@@ -37,6 +37,55 @@ apply() {
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
struct queue_limits_aux *l_aux = exp_q->limits.limits_aux;
exp_q->limits = *cache_q_limits;
@@ -63,55 +112,6 @@ apply() {
if (queue_virt_boundary(cache_q))
queue_flag_set(QUEUE_FLAG_NOMERGES, cache_q);
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
*)

View File

@@ -0,0 +1,42 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.misaligned;" "linux/blkdev.h"
then
echo $cur_name 1 >> $config_file_path
else
echo $cur_name 2 >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->misaligned;
}" ;;
"2")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->features & BLK_FLAG_MISALIGNED;
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -0,0 +1,39 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "(int)QUEUE_FLAG_NONROT;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
}" ;;
"2")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -11,15 +11,18 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
if compile_module $cur_name "blk_alloc_disk(NULL, 0);" "linux/blkdev.h"
then
echo $cur_name 1 >> $config_file_path
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL);" "linux/blk-mq.h"
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 2 >> $config_file_path
elif compile_module $cur_name "alloc_disk(0);" "linux/genhd.h"
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 3 >> $config_file_path
elif compile_module $cur_name "alloc_disk(0);" "linux/genhd.h"
then
echo $cur_name 4 >> $config_file_path
else
echo $cur_name X >> $config_file_path
fi
@@ -28,50 +31,73 @@ check() {
apply() {
case "$1" in
"1")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL, NULL);
if (!(*gd))
return -ENOMEM;
*gd = blk_alloc_disk(lim, NUMA_NO_NODE);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
cas_cleanup_disk(gd);
_cas_cleanup_disk(gd);
}"
;;
"2")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (!(*gd))
return -ENOMEM;
*gd = blk_mq_alloc_disk(tag_set, lim, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
cas_cleanup_disk(gd);
_cas_cleanup_disk(gd);
}"
;;
"3")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
_cas_cleanup_disk(gd);
}"
;;
"4")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = alloc_disk(1);
if (!(*gd))
@@ -88,7 +114,7 @@ apply() {
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
blk_cleanup_queue(gd->queue);
gd->queue = NULL;
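Note the error-handling change in the updated variants above: `blk_alloc_disk()` and current `blk_mq_alloc_disk()` return an ERR_PTR-encoded pointer rather than NULL on failure, hence the switch from `if (!(*gd))` to `if (IS_ERR(*gd))` with `PTR_ERR()` propagation. A simplified userspace sketch of the kernel's error-pointer convention (the real macros are in include/linux/err.h):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Simplified analogues of the kernel's ERR_PTR()/IS_ERR()/PTR_ERR():
 * a negative errno is encoded as an address in the top (invalid) page
 * of the address space, so one return value carries pointer-or-error. */
static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline bool IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)(intptr_t)ptr;
}
```

With this convention a NULL check would silently miss allocation failures, which is exactly what the `cas_alloc_disk()` rework guards against.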

View File

@@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -9,12 +10,15 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
if compile_module $cur_name "BLK_FEAT_WRITE_CACHE;" "linux/blk-mq.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct request_queue rq; rq.flush_flags;" "linux/blkdev.h"
elif compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct request_queue rq; rq.flush_flags;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@@ -23,21 +27,39 @@ check() {
apply() {
case "$1" in
"1")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
(q->limits.features & BLK_FEAT_WRITE_CACHE)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
(q->limits.features & BLK_FEAT_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE BLK_FEAT_WRITE_CACHE"
add_define "CAS_BLK_FEAT_FUA BLK_FEAT_FUA"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag) \\
({ lim->features |= flag; })"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua) {}" ;;
"2")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
test_bit(QUEUE_FLAG_WC, &(q)->queue_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{
blk_queue_write_cache(q, flush, fua);
}" ;;
"2")
"3")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
CAS_IS_SET_FLUSH((q)->flush_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
((q)->flush_flags & REQ_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{


@@ -1,5 +1,6 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
# If $(M) is defined, we've been invoked from the
@@ -52,7 +53,11 @@ distclean: clean distsync
install: install_files
@$(DEPMOD)
@$(MODPROBE) $(CACHE_MODULE)
@$(MODPROBE) $(CACHE_MODULE) || ( \
echo "See dmesg for more information" >&2 && \
rm -f $(DESTDIR)$(MODULES_DIR)/$(CACHE_MODULE).ko && exit 1 \
)
install_files:
@echo "Installing Open-CAS modules"


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -29,8 +29,8 @@
trace_printk(format, ##__VA_ARGS__)
#else
#define CAS_CLS_DEBUG_MSG(format, ...)
#define CAS_CLS_DEBUG_TRACE(format, ...)
#define CAS_CLS_DEBUG_MSG(format, ...) ({})
#define CAS_CLS_DEBUG_TRACE(format, ...) ({})
#endif
/* Done condition test - always accepts and stops evaluation */
@@ -53,7 +53,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
if (PageAnon(io->page))
return cas_cls_eval_no;
if (PageSlab(io->page) || PageCompound(io->page)) {
if (PageSlab(io->page)) {
/* A filesystem issues IO on pages that do not belong
 * to the file page cache. It means that it is a
 * part of metadata
@@ -61,7 +61,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
return cas_cls_eval_yes;
}
if (!io->page->mapping) {
if (!cas_page_mapping(io->page)) {
/* XFS case: pages are allocated internally and do not
 * have references into the inode
 */
@@ -221,6 +221,42 @@ static int _cas_cls_string_ctr(struct cas_classifier *cls,
return 0;
}
/* IO direction condition constructor. @data is expected to contain string
* translated to IO direction.
*/
static int _cas_cls_direction_ctr(struct cas_classifier *cls,
struct cas_cls_condition *c, char *data)
{
uint64_t direction;
struct cas_cls_numeric *ctx;
if (!data) {
CAS_CLS_MSG(KERN_ERR, "Missing IO direction specifier\n");
return -EINVAL;
}
if (strncmp("read", data, 5) == 0) {
direction = READ;
} else if (strncmp("write", data, 6) == 0) {
direction = WRITE;
} else {
CAS_CLS_MSG(KERN_ERR, "Invalid IO direction specifier '%s'\n"
" allowed specifiers: 'read', 'write'\n", data);
return -EINVAL;
}
ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->operator = cas_cls_numeric_eq;
ctx->v_u64 = direction;
c->context = ctx;
return 0;
}
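The new `_cas_cls_direction_ctr` above accepts only the exact tokens `read` and `write` (the `strncmp` lengths cover the terminating NUL, so partial matches are rejected). A minimal Python sketch of the same parsing, assuming the usual kernel convention READ=0/WRITE=1:

```python
def parse_io_direction(data):
    """Sketch of _cas_cls_direction_ctr's token parsing (not the kernel code).

    Maps the rule token to the numeric direction later compared against
    bio_data_dir(); READ=0 and WRITE=1 are assumed here."""
    READ, WRITE = 0, 1
    if data is None:
        raise ValueError("Missing IO direction specifier")
    if data == "read":       # strncmp("read", data, 5) == 0 matches exactly
        return READ
    if data == "write":      # strncmp("write", data, 6) == 0 matches exactly
        return WRITE
    raise ValueError(f"Invalid IO direction specifier '{data}'; "
                     "allowed specifiers: 'read', 'write'")
```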
/* Unsigned int numeric test function */
static cas_cls_eval_t _cas_cls_numeric_test_u(
struct cas_cls_condition *c, uint64_t val)
@@ -664,6 +700,14 @@ static cas_cls_eval_t _cas_cls_request_size_test(
return _cas_cls_numeric_test_u(c, CAS_BIO_BISIZE(io->bio));
}
/* Request IO direction test function */
static cas_cls_eval_t _cas_cls_request_direction_test(
struct cas_classifier *cls, struct cas_cls_condition *c,
struct cas_cls_io *io, ocf_part_id_t part_id)
{
return _cas_cls_numeric_test_u(c, bio_data_dir(io->bio));
}
/* Array of condition handlers */
static struct cas_cls_condition_handler _handlers[] = {
{ "done", _cas_cls_done_test, _cas_cls_generic_ctr },
@@ -689,6 +733,8 @@ static struct cas_cls_condition_handler _handlers[] = {
_cas_cls_generic_dtr },
{ "request_size", _cas_cls_request_size_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr },
{ "io_direction", _cas_cls_request_direction_test,
_cas_cls_direction_ctr, _cas_cls_generic_dtr },
#ifdef CAS_WLTH_SUPPORT
{ "wlth", _cas_cls_wlth_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr},
@@ -757,7 +803,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
return c;
}
/* Read single codnition from text input and return cas_cls_condition
/* Read single condition from text input and return cas_cls_condition
* representation. *rule pointer is advanced to point to next condition.
* Input @rule string is modified to speed up parsing (selected bytes are
* overwritten with 0).
@@ -765,7 +811,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
* *l_op contains logical operator from previous condition and gets overwritten
* with operator read from currently parsed condition.
*
* Returns pointer to condition if successfull.
* Returns pointer to condition if successful.
* Returns NULL if no more conditions in string.
* Returns error pointer in case of syntax or runtime error.
*/
@@ -1050,9 +1096,11 @@ int cas_cls_rule_create(ocf_cache_t cache,
return -ENOMEM;
r = _cas_cls_rule_create(cls, part_id, _rule);
if (IS_ERR(r))
if (IS_ERR(r)) {
CAS_CLS_DEBUG_MSG(
"Cannot create rule: %s => %d\n", rule, part_id);
ret = _cas_cls_rule_err_to_cass_err(PTR_ERR(r));
else {
} else {
CAS_CLS_DEBUG_MSG("Created rule: %s => %d\n", rule, part_id);
*cls_rule = r;
ret = 0;
@@ -1181,6 +1229,7 @@ static void _cas_cls_get_bio_context(struct bio *bio,
struct cas_cls_io *ctx)
{
struct page *page = NULL;
struct address_space *mapping;
if (!bio)
return;
@@ -1198,13 +1247,14 @@ static void _cas_cls_get_bio_context(struct bio *bio,
if (PageAnon(page))
return;
if (PageSlab(page) || PageCompound(page))
if (PageSlab(page))
return;
if (!page->mapping)
mapping = cas_page_mapping(page);
if (!mapping)
return;
ctx->inode = page->mapping->host;
ctx->inode = mapping->host;
return;
}


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <linux/module.h>
@@ -351,7 +351,8 @@ static int _cas_init_tag_set(struct cas_disk *dsk, struct blk_mq_tag_set *set)
set->queue_depth = CAS_BLKDEV_DEFAULT_RQ;
set->cmd_size = 0;
set->flags = BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING | CAS_BLK_MQ_F_BLOCKING;
set->flags = CAS_BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING |
CAS_BLK_MQ_F_BLOCKING;
set->driver_data = dsk;
@@ -388,12 +389,36 @@ static int _cas_exp_obj_check_path(const char *dev_name)
return result;
}
static ssize_t device_attr_serial_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct gendisk *gd = dev_to_disk(dev);
struct cas_disk *dsk = gd->private_data;
struct cas_exp_obj *exp_obj = dsk->exp_obj;
return sysfs_emit(buf, "opencas-%s", exp_obj->dev_name);
}
static struct device_attribute device_attr_serial =
__ATTR(serial, 0444, device_attr_serial_show, NULL);
static struct attribute *device_attrs[] = {
&device_attr_serial.attr,
NULL,
};
static const struct attribute_group device_attr_group = {
.attrs = device_attrs,
.name = "device",
};
int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
struct module *owner, struct cas_exp_obj_ops *ops, void *priv)
{
struct cas_exp_obj *exp_obj;
struct request_queue *queue;
struct gendisk *gd;
cas_queue_limits_t queue_limits;
int result = 0;
BUG_ON(!owner);
@@ -442,7 +467,15 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_init_tag_set;
}
result = cas_alloc_mq_disk(&gd, &queue, &exp_obj->tag_set);
if (exp_obj->ops->set_queue_limits) {
result = exp_obj->ops->set_queue_limits(dsk, priv,
&queue_limits);
if (result)
goto error_set_queue_limits;
}
result = cas_alloc_disk(&gd, &queue, &exp_obj->tag_set,
&queue_limits);
if (result) {
goto error_alloc_mq_disk;
}
@@ -473,9 +506,14 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_set_geometry;
}
if (cas_add_disk(gd))
result = cas_add_disk(gd);
if (result)
goto error_add_disk;
result = sysfs_create_group(&disk_to_dev(gd)->kobj, &device_attr_group);
if (result)
goto error_sysfs;
result = bd_claim_by_disk(cas_disk_get_blkdev(dsk), dsk, gd);
if (result)
goto error_bd_claim;
@@ -483,15 +521,18 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
return 0;
error_bd_claim:
sysfs_remove_group(&disk_to_dev(gd)->kobj, &device_attr_group);
error_sysfs:
del_gendisk(dsk->exp_obj->gd);
error_add_disk:
error_set_geometry:
exp_obj->private = NULL;
_cas_exp_obj_clear_dev_t(dsk);
error_exp_obj_set_dev_t:
cas_cleanup_mq_disk(gd);
cas_cleanup_disk(gd);
exp_obj->gd = NULL;
error_alloc_mq_disk:
error_set_queue_limits:
blk_mq_free_tag_set(&exp_obj->tag_set);
error_init_tag_set:
module_put(owner);


@@ -1,11 +1,12 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __CASDISK_EXP_OBJ_H__
#define __CASDISK_EXP_OBJ_H__
#include "linux_kernel_version.h"
#include <linux/fs.h>
struct cas_disk;
@@ -17,6 +18,12 @@ struct cas_exp_obj_ops {
*/
int (*set_geometry)(struct cas_disk *dsk, void *private);
/**
* @brief Set queue limits of exported object (top) block device.
*/
int (*set_queue_limits)(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim);
/**
* @brief submit_bio of exported object (top) block device.
*


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -2405,7 +2405,8 @@ static int cache_mngt_check_bdev(struct ocf_mngt_cache_device_config *cfg,
printk(KERN_WARNING "New cache device block properties "
"differ from the previous one.\n");
}
if (tmp_limits.misaligned) {
if (cas_queue_limits_is_misaligned(&tmp_limits)) {
reattach_properties_diff = true;
printk(KERN_WARNING "New cache device block interval "
"doesn't line up with the previous one.\n");


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -40,7 +40,6 @@
#include <linux/mm.h>
#include <linux/blk-mq.h>
#include <linux/ktime.h>
#include "exp_obj.h"
#include "generated_defines.h"


@@ -1,5 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -42,10 +43,56 @@ MODULE_PARM_DESC(seq_cut_off_mb,
ocf_ctx_t cas_ctx;
struct cas_module cas_module;
static inline uint32_t involuntary_preemption_enabled(void)
{
bool config_dynamic = IS_ENABLED(CONFIG_PREEMPT_DYNAMIC);
bool config_rt = IS_ENABLED(CONFIG_PREEMPT_RT);
bool config_preempt = IS_ENABLED(CONFIG_PREEMPT);
bool config_lazy = IS_ENABLED(CONFIG_PREEMPT_LAZY);
bool config_none = IS_ENABLED(CONFIG_PREEMPT_NONE);
if (!config_dynamic && !config_rt && !config_preempt && !config_lazy)
return false;
if (config_none)
return false;
if (config_rt || config_preempt || config_lazy) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been built with involuntary preemption "
"enabled.\nFailed to load Open CAS kernel module.\n");
return true;
}
#ifdef CONFIG_PREEMPT_DYNAMIC
printk(KERN_WARNING OCF_PREFIX_SHORT
"The kernel has been compiled with preemption configurable\n"
"at boot time (PREEMPT_DYNAMIC=y). Open CAS doesn't support\n"
"kernels with involuntary preemption so make sure to set\n"
"\"preempt=\" to \"none\" or \"voluntary\" in the kernel"
" command line\n");
if (!cas_preempt_model_none() && !cas_preempt_model_voluntary()) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been booted with involuntary "
"preemption enabled.\nFailed to load Open CAS kernel "
"module.\n");
return true;
} else {
return false;
}
#endif
return false;
}
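The decision table implemented by `involuntary_preemption_enabled()` can be modeled outside the kernel. In this hedged Python sketch, `config` stands for the set of enabled `CONFIG_PREEMPT_*` options and `boot_model` for the `preempt=` boot parameter on `PREEMPT_DYNAMIC` kernels (both names are illustrative, not kernel APIs):

```python
def involuntary_preemption_blocked(config, boot_model=None):
    """Model of the load-time preemption check: returns True when the
    module load should fail.  `config` is a set of CONFIG_PREEMPT_*
    option names; `boot_model` mimics cas_preempt_model_*()."""
    relevant = {"PREEMPT_DYNAMIC", "PREEMPT_RT", "PREEMPT", "PREEMPT_LAZY"}
    if not (config & relevant):
        return False                      # no preemption options built in
    if "PREEMPT_NONE" in config:
        return False                      # explicitly non-preemptible
    if config & {"PREEMPT_RT", "PREEMPT", "PREEMPT_LAZY"}:
        return True                       # built with involuntary preemption
    # PREEMPT_DYNAMIC: decided by the boot-time preempt= parameter
    return boot_model not in ("none", "voluntary")
```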
static int __init cas_init_module(void)
{
int result = 0;
if (involuntary_preemption_enabled())
return -ENOTSUP;
if (!writeback_queue_unblock_size || !max_writeback_queue_size) {
printk(KERN_ERR OCF_PREFIX_SHORT
"Invalid module parameter.\n");


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -407,6 +407,11 @@ static inline u64 env_atomic64_inc_return(env_atomic64 *a)
return atomic64_inc_return(a);
}
static inline u64 env_atomic64_dec_return(env_atomic64 *a)
{
return atomic64_dec_return(a);
}
static inline u64 env_atomic64_cmpxchg(atomic64_t *a, u64 old, u64 new)
{
return atomic64_cmpxchg(a, old, new);


@@ -0,0 +1,345 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "ocf_env_refcnt.h"
#include "ocf/ocf_err.h"
#include "ocf_env.h"
#define ENV_REFCNT_CB_ARMING 1
#define ENV_REFCNT_CB_ARMED 2
static void _env_refcnt_do_on_cpus_cb(struct work_struct *work)
{
struct notify_cpu_work *ctx =
container_of(work, struct notify_cpu_work, work);
ctx->cb(ctx->priv);
env_atomic_dec(&ctx->rc->notify.to_notify);
wake_up(&ctx->rc->notify.notify_wait_queue);
}
static void _env_refcnt_do_on_cpus(struct env_refcnt *rc,
env_refcnt_do_on_cpu_cb_t cb, void *priv)
{
int cpu_no;
struct notify_cpu_work *work;
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
work = rc->notify.notify_work_items[cpu_no];
env_atomic_inc(&rc->notify.to_notify);
work->cb = cb;
work->rc = rc;
work->priv = priv;
INIT_WORK(&work->work, _env_refcnt_do_on_cpus_cb);
queue_work_on(cpu_no, rc->notify.notify_work_queue,
&work->work);
}
wait_event(rc->notify.notify_wait_queue,
!env_atomic_read(&rc->notify.to_notify));
}
static void _env_refcnt_init_pcpu(void *ctx)
{
struct env_refcnt *rc = ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(rc->pcpu);
pcpu->freeze = false;
env_atomic64_set(&pcpu->counter, 0);
}
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len)
{
int cpu_no, result;
env_memset(rc, sizeof(*rc), 0);
env_strncpy(rc->name, sizeof(rc->name), name, name_len);
rc->pcpu = alloc_percpu(struct env_refcnt_pcpu);
if (!rc->pcpu)
return -OCF_ERR_NO_MEM;
init_waitqueue_head(&rc->notify.notify_wait_queue);
rc->notify.notify_work_queue = alloc_workqueue("refcnt_%s", 0,
0, rc->name);
if (!rc->notify.notify_work_queue) {
result = -OCF_ERR_NO_MEM;
goto cleanup_pcpu;
}
rc->notify.notify_work_items = env_vzalloc(
sizeof(*rc->notify.notify_work_items) * num_online_cpus());
if (!rc->notify.notify_work_items) {
result = -OCF_ERR_NO_MEM;
goto cleanup_wq;
}
for_each_online_cpu(cpu_no) {
rc->notify.notify_work_items[cpu_no] = env_vmalloc(
sizeof(*rc->notify.notify_work_items[cpu_no]));
if (!rc->notify.notify_work_items[cpu_no]) {
result = -OCF_ERR_NO_MEM;
goto cleanup_work;
}
}
result = env_spinlock_init(&rc->freeze.lock);
if (result)
goto cleanup_work;
_env_refcnt_do_on_cpus(rc, _env_refcnt_init_pcpu, rc);
rc->callback.pfn = NULL;
rc->callback.priv = NULL;
return 0;
cleanup_work:
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
cleanup_wq:
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
cleanup_pcpu:
free_percpu(rc->pcpu);
rc->pcpu = NULL;
return result;
}
void env_refcnt_deinit(struct env_refcnt *rc)
{
int cpu_no;
env_spinlock_destroy(&rc->freeze.lock);
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
free_percpu(rc->pcpu);
rc->pcpu = NULL;
}
static inline void _env_refcnt_call_freeze_cb(struct env_refcnt *rc)
{
bool fire;
fire = (env_atomic_cmpxchg(&rc->callback.armed, ENV_REFCNT_CB_ARMED, 0)
== ENV_REFCNT_CB_ARMED);
smp_mb();
if (fire)
rc->callback.pfn(rc->callback.priv);
}
void env_refcnt_dec(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
int64_t countdown = 0;
bool callback;
unsigned long flags;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_dec(&pcpu->counter);
put_cpu_ptr(pcpu);
if (freeze) {
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
countdown = env_atomic64_dec_return(&rc->freeze.countdown);
callback = !rc->freeze.initializing && countdown == 0;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
}
bool env_refcnt_inc(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_inc(&pcpu->counter);
put_cpu_ptr(pcpu);
return !freeze;
}
struct env_refcnt_freeze_ctx {
struct env_refcnt *rc;
env_atomic64 sum;
};
static void _env_refcnt_freeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
pcpu->freeze = true;
env_atomic64_add(env_atomic64_read(&pcpu->counter), &ctx->sum);
}
void env_refcnt_freeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
bool callback;
unsigned long flags;
ctx.rc = rc;
env_atomic64_set(&ctx.sum, 0);
/* initiate freeze */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = ++(rc->freeze.counter);
if (freeze_cnt > 1) {
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return;
}
rc->freeze.initializing = true;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* notify CPUs about freeze */
_env_refcnt_do_on_cpus(rc, _env_refcnt_freeze_pcpu, &ctx);
/* update countdown */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
env_atomic64_add(env_atomic64_read(&ctx.sum), &rc->freeze.countdown);
rc->freeze.initializing = false;
callback = (env_atomic64_read(&rc->freeze.countdown) == 0);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* if countdown finished trigger callback */
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv)
{
bool callback;
bool concurrent_arming;
unsigned long flags;
concurrent_arming = (env_atomic_inc_return(&rc->callback.armed)
> ENV_REFCNT_CB_ARMING);
ENV_BUG_ON(concurrent_arming);
/* arm callback */
rc->callback.pfn = cb;
rc->callback.priv = priv;
smp_wmb();
env_atomic_set(&rc->callback.armed, ENV_REFCNT_CB_ARMED);
/* fire callback in case countdown finished */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
callback = (
env_atomic64_read(&rc->freeze.countdown) == 0 &&
!rc->freeze.initializing
);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
static void _env_refcnt_unfreeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
ENV_BUG_ON(!pcpu->freeze);
env_atomic64_set(&pcpu->counter, 0);
pcpu->freeze = false;
}
void env_refcnt_unfreeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = --(rc->freeze.counter);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
ENV_BUG_ON(freeze_cnt < 0);
if (freeze_cnt > 0)
return;
ENV_BUG_ON(env_atomic64_read(&rc->freeze.countdown));
/* disarm callback */
env_atomic_set(&rc->callback.armed, 0);
smp_wmb();
/* notify CPUs about unfreeze */
ctx.rc = rc;
_env_refcnt_do_on_cpus(rc, _env_refcnt_unfreeze_pcpu, &ctx);
}
bool env_refcnt_frozen(struct env_refcnt *rc)
{
bool frozen;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen;
}
bool env_refcnt_zeroed(struct env_refcnt *rc)
{
bool frozen;
bool initializing;
int64_t countdown;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
initializing = rc->freeze.initializing;
countdown = env_atomic64_read(&rc->freeze.countdown);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen && !initializing && countdown == 0;
}
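The freeze/countdown protocol above can be summarized with a single-threaded model. This Python sketch deliberately ignores the per-CPU counters, spinlock, and memory barriers and only mirrors the observable semantics: increments fail while frozen, freezes nest, and the zero callback fires once when the count drains to zero:

```python
class RefcntModel:
    """Single-threaded model of env_refcnt's freeze semantics (a sketch,
    not the real per-CPU implementation)."""

    def __init__(self):
        self.count = 0        # stands in for the summed per-CPU counters
        self.freeze_cnt = 0   # nested env_refcnt_freeze() calls
        self.zero_cb = None

    def inc(self):
        if self.freeze_cnt:
            return False      # increments fail while frozen
        self.count += 1
        return True

    def dec(self):
        self.count -= 1
        if self.freeze_cnt and self.count == 0 and self.zero_cb:
            cb, self.zero_cb = self.zero_cb, None   # fire at most once
            cb()

    def freeze(self):
        self.freeze_cnt += 1

    def unfreeze(self):
        assert self.freeze_cnt > 0
        self.freeze_cnt -= 1

    def register_zero_cb(self, cb):
        # must be called after freeze(); fires immediately if already drained
        if self.count == 0:
            cb()
        else:
            self.zero_cb = cb
```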


@@ -0,0 +1,104 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __OCF_ENV_REFCNT_H__
#define __OCF_ENV_REFCNT_H__
#include "ocf_env.h"
typedef void (*env_refcnt_cb_t)(void *priv);
struct env_refcnt_pcpu {
env_atomic64 counter;
bool freeze;
};
typedef void (*env_refcnt_do_on_cpu_cb_t)(void *priv);
struct notify_cpu_work {
struct work_struct work;
/* function to call on each cpu */
env_refcnt_do_on_cpu_cb_t cb;
/* priv passed to cb */
void *priv;
/* refcnt instance */
struct env_refcnt *rc;
};
struct env_refcnt {
struct env_refcnt_pcpu __percpu *pcpu __aligned(64);
struct {
/* freeze counter */
int counter;
/* global counter used instead of per-CPU ones after
* freeze
*/
env_atomic64 countdown;
/* freeze initializing - freeze was requested but not all
* CPUs were notified.
*/
bool initializing;
env_spinlock lock;
} freeze;
struct {
struct notify_cpu_work **notify_work_items;
env_atomic to_notify;
wait_queue_head_t notify_wait_queue;
struct workqueue_struct *notify_work_queue;
} notify;
struct {
env_atomic armed;
env_refcnt_cb_t pfn;
void *priv;
} callback;
char name[32];
};
/* Initialize reference counter */
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len);
void env_refcnt_deinit(struct env_refcnt *rc);
/* Try to increment counter. Returns true if successful, false
 * if counter is frozen
 */
bool env_refcnt_inc(struct env_refcnt *rc);
/* Decrement reference counter */
void env_refcnt_dec(struct env_refcnt *rc);
/* Disallow incrementing of underlying counter - attempts to increment counter
* will be failing until env_refcnt_unfreeze is called.
* It's ok to call freeze multiple times, in which case counter is frozen
* until all freeze calls are offset by a corresponding unfreeze.
*/
void env_refcnt_freeze(struct env_refcnt *rc);
/* Cancel the effect of single env_refcnt_freeze call */
void env_refcnt_unfreeze(struct env_refcnt *rc);
bool env_refcnt_frozen(struct env_refcnt *rc);
bool env_refcnt_zeroed(struct env_refcnt *rc);
/* Register callback to be called when reference counter drops to 0.
* Must be called after counter is frozen.
* Cannot be called until the previously registered callback has fired.
*/
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv);
#endif // __OCF_ENV_REFCNT_H__


@@ -86,10 +86,6 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache attach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);
retval = cache_mngt_attach_cache_cfg(cache_name, OCF_CACHE_NAME_SIZE,
@@ -108,9 +104,6 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
char cache_name[OCF_CACHE_NAME_SIZE];
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache detach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -73,6 +73,7 @@ static int _cas_cleaner_thread(void *data)
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
struct cas_thread_info *info;
uint32_t ms;
ocf_queue_t queue;
BUG_ON(!c);
@@ -94,7 +95,10 @@ static int _cas_cleaner_thread(void *data)
atomic_set(&info->kicked, 0);
init_completion(&info->sync_compl);
ocf_cleaner_run(c, cache_priv->io_queues[smp_processor_id()]);
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
ocf_cleaner_run(c, queue);
wait_for_completion(&info->sync_compl);
/*


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -208,9 +208,13 @@ void *cas_rpool_try_get(struct cas_reserve_pool *rpool_master, int *cpu)
CAS_DEBUG_TRACE();
get_cpu();
*cpu = smp_processor_id();
current_rpool = &rpool_master->rpools[*cpu];
put_cpu();
spin_lock_irqsave(&current_rpool->lock, flags);
if (!list_empty(&current_rpool->list)) {


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies Co., Ltd.
* Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -63,13 +63,14 @@ static void blkdev_set_discard_properties(ocf_cache_t cache,
CAS_SET_DISCARD_ZEROES_DATA(exp_q->limits, 0);
if (core_q && cas_has_discard_support(core_bd)) {
blk_queue_max_discard_sectors(exp_q, core_q->limits.max_discard_sectors);
cas_queue_max_discard_sectors(exp_q,
core_q->limits.max_discard_sectors);
exp_q->limits.discard_alignment =
bdev_discard_alignment(core_bd);
exp_q->limits.discard_granularity =
core_q->limits.discard_granularity;
} else {
blk_queue_max_discard_sectors(exp_q,
cas_queue_max_discard_sectors(exp_q,
min((uint64_t)core_sectors, (uint64_t)UINT_MAX));
exp_q->limits.discard_granularity = ocf_cache_get_line_size(cache);
exp_q->limits.discard_alignment = 0;
@@ -129,7 +130,37 @@ static int blkdev_core_set_geometry(struct cas_disk *dsk, void *private)
blkdev_set_discard_properties(cache, exp_q, core_bd, sectors);
exp_q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
cas_queue_set_nonrot(exp_q);
return 0;
}
static int blkdev_core_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_core_t core = private;
ocf_cache_t cache = ocf_core_get_cache(core);
ocf_volume_t core_vol = ocf_core_get_volume(core);
struct bd_object *bd_core_vol;
struct request_queue *core_q;
bool flush, fua;
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
bd_core_vol = bd_object(core_vol);
core_q = cas_disk_get_queue(bd_core_vol->dsk);
flush = (CAS_CHECK_QUEUE_FLUSH(core_q) ||
cache_priv->device_properties.flush);
fua = (CAS_CHECK_QUEUE_FUA(core_q) ||
cache_priv->device_properties.fua);
memset(lim, 0, sizeof(cas_queue_limits_t));
if (flush)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (fua)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
return 0;
}
@@ -217,12 +248,16 @@ static int blkdev_handle_data_single(struct bd_object *bvol, struct bio *bio,
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
struct blk_data *data;
uint64_t flags = CAS_BIO_OP_FLAGS(bio);
int ret;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
data = cas_alloc_blk_data(bio_segments(bio), GFP_NOIO);
if (!data) {
CAS_PRINT_RL(KERN_CRIT "BIO data vector allocation error\n");
@@ -332,9 +367,13 @@ static void blkdev_handle_discard(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue,
CAS_BIO_BISECTOR(bio) << SECTOR_SHIFT,
CAS_BIO_BISIZE(bio), OCF_WRITE, 0, 0);
@@ -380,9 +419,13 @@ static void blkdev_handle_flush(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue, 0, 0, OCF_WRITE, 0,
CAS_SET_FLUSH(0));
if (!io) {
@@ -428,6 +471,7 @@ static void blkdev_core_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_core_exp_obj_ops = {
.set_geometry = blkdev_core_set_geometry,
.set_queue_limits = blkdev_core_set_queue_limits,
.submit_bio = blkdev_core_submit_bio,
};
@@ -470,6 +514,37 @@ static int blkdev_cache_set_geometry(struct cas_disk *dsk, void *private)
return 0;
}
static int blkdev_cache_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_cache_t cache;
ocf_volume_t volume;
struct bd_object *bvol;
struct request_queue *cache_q;
struct block_device *bd;
BUG_ON(!private);
cache = private;
volume = ocf_cache_get_volume(cache);
bvol = bd_object(volume);
bd = cas_disk_get_blkdev(bvol->dsk);
BUG_ON(!bd);
cache_q = bd->bd_disk->queue;
memset(lim, 0, sizeof(cas_queue_limits_t));
if (CAS_CHECK_QUEUE_FLUSH(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (CAS_CHECK_QUEUE_FUA(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
return 0;
}
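Both `set_queue_limits` callbacks reduce to the same pattern: probe the bottom queue for flush/FUA support and translate the result into feature bits in the limits structure handed to `cas_alloc_disk`. A sketch with placeholder bit values (not the real `BLK_FEAT_*` constants):

```python
# Illustrative stand-ins for CAS_BLK_FEAT_WRITE_CACHE / CAS_BLK_FEAT_FUA;
# the actual kernel values differ.
FEAT_WRITE_CACHE = 1 << 0
FEAT_FUA = 1 << 1

def build_queue_limits(flush, fua):
    """Mirror of blkdev_*_set_queue_limits: start from a zeroed limits
    struct and OR in a feature bit per supported capability."""
    features = 0
    if flush:
        features |= FEAT_WRITE_CACHE
    if fua:
        features |= FEAT_FUA
    return features
```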
static void blkdev_cache_submit_bio(struct cas_disk *dsk,
struct bio *bio, void *private)
{
@@ -485,6 +560,7 @@ static void blkdev_cache_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_cache_exp_obj_ops = {
.set_geometry = blkdev_cache_set_geometry,
.set_queue_limits = blkdev_cache_set_queue_limits,
.submit_bio = blkdev_cache_submit_bio,
};


Submodule ocf updated: 6ad1007e6f...a63479c7cd


@@ -1,36 +1,59 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from api.cas.casadm_parser import *
from datetime import timedelta
from typing import List
from api.cas import casadm
from api.cas.cache_config import (
CacheLineSize,
CleaningPolicy,
CacheStatus,
CacheMode,
FlushParametersAlru,
FlushParametersAcp,
SeqCutOffParameters,
SeqCutOffPolicy,
PromotionPolicy,
PromotionParametersNhit,
CacheConfig,
)
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import (get_cas_devices_dict, get_cores, get_flush_parameters_alru,
get_flush_parameters_acp, get_io_class_list)
from api.cas.core import Core
from api.cas.dmesg import get_metadata_size_on_device
from api.cas.statistics import CacheStats, CacheIoClassStats
from test_utils.os_utils import *
from test_utils.output import Output
from connection.utils.output import Output
from storage_devices.device import Device
from test_tools.os_tools import sync
from type_def.size import Size
class Cache:
def __init__(self, device: Device, cache_id: int = None) -> None:
self.cache_device = device
self.cache_id = cache_id if cache_id else self.__get_cache_id()
self.__cache_line_size = None
def __get_cache_id(self) -> int:
device_path = self.__get_cache_device_path()
def __init__(
self, cache_id: int, device: Device = None, cache_line_size: CacheLineSize = None
) -> None:
self.cache_id = cache_id
self.cache_device = device if device else self.__get_cache_device()
self.__cache_line_size = cache_line_size
def __get_cache_device(self) -> Device | None:
caches_dict = get_cas_devices_dict()["caches"]
cache = next(
    (cache for cache in caches_dict.values() if cache["id"] == self.cache_id), None
)
for cache in caches_dict.values():
if cache["device_path"] == device_path:
return int(cache["id"])
if not cache:
return None
raise Exception(f"There is no cache started on {device_path}")
if cache["device_path"] is "-":
return None
def __get_cache_device_path(self) -> str:
return self.cache_device.path if self.cache_device is not None else "-"
return Device(path=cache["device_path"])
def get_core_devices(self) -> list:
return get_cores(self.cache_id)
@@ -194,8 +217,8 @@ class Cache:
def set_params_nhit(self, promotion_params_nhit: PromotionParametersNhit) -> Output:
return casadm.set_param_promotion_nhit(
self.cache_id,
threshold=promotion_params_nhit.threshold.get_value(),
trigger=promotion_params_nhit.trigger
threshold=promotion_params_nhit.threshold,
trigger=promotion_params_nhit.trigger,
)
def get_cache_config(self) -> CacheConfig:
@@ -208,10 +231,18 @@ class Cache:
def standby_detach(self, shortcut: bool = False) -> Output:
return casadm.standby_detach_cache(cache_id=self.cache_id, shortcut=shortcut)
def standby_activate(self, device, shortcut: bool = False) -> Output:
def standby_activate(self, device: Device, shortcut: bool = False) -> Output:
return casadm.standby_activate_cache(
cache_id=self.cache_id, cache_dev=device, shortcut=shortcut
)
def attach(self, device: Device, force: bool = False) -> Output:
cmd_output = casadm.attach_cache(cache_id=self.cache_id, device=device, force=force)
return cmd_output
def detach(self) -> Output:
cmd_output = casadm.detach_cache(cache_id=self.cache_id)
return cmd_output
def has_volatile_metadata(self) -> bool:
return self.get_metadata_size_on_disk() == Size.zero()


@@ -1,14 +1,14 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum, IntFlag
from test_utils.os_utils import get_kernel_module_parameter
from test_utils.size import Size, Unit
from test_utils.time import Time
from test_tools.os_tools import get_kernel_module_parameter
from type_def.size import Size, Unit
from type_def.time import Time
class CacheLineSize(Enum):
@@ -72,9 +72,9 @@ class CacheMode(Enum):
class SeqCutOffPolicy(Enum):
full = 0
always = 1
never = 2
full = "full"
always = "always"
never = "never"
DEFAULT = full
@classmethod
@@ -85,6 +85,9 @@ class SeqCutOffPolicy(Enum):
raise ValueError(f"{name} is not a valid sequential cut off name")
def __str__(self):
return self.value
class MetadataMode(Enum):
normal = "normal"
@@ -122,6 +125,7 @@ class CacheStatus(Enum):
incomplete = "incomplete"
standby = "standby"
standby_detached = "standby detached"
detached = "detached"
def __str__(self):
return self.value
@@ -240,7 +244,7 @@ class SeqCutOffParameters:
class PromotionParametersNhit:
def __init__(self, threshold: Size = None, trigger: int = None):
def __init__(self, threshold: int = None, trigger: int = None):
self.threshold = threshold
self.trigger = trigger


@@ -6,8 +6,7 @@
from enum import Enum
from core.test_run import TestRun
from test_utils import os_utils
from test_utils.os_utils import ModuleRemoveMethod
from test_tools.os_tools import unload_kernel_module, load_kernel_module
class CasModule(Enum):
@@ -15,12 +14,12 @@ class CasModule(Enum):
def reload_all_cas_modules():
os_utils.unload_kernel_module(CasModule.cache.value, ModuleRemoveMethod.modprobe)
os_utils.load_kernel_module(CasModule.cache.value)
unload_kernel_module(CasModule.cache.value)
load_kernel_module(CasModule.cache.value)
def unload_all_cas_modules():
os_utils.unload_kernel_module(CasModule.cache.value, os_utils.ModuleRemoveMethod.rmmod)
unload_kernel_module(CasModule.cache.value)
def is_cas_management_dev_present():


@@ -9,7 +9,7 @@ import os
import re
from core.test_run import TestRun
from test_tools.fs_utils import check_if_directory_exists, find_all_files
from test_tools.fs_tools import check_if_directory_exists, find_all_files
from test_tools.linux_packaging import DebSet, RpmSet


@@ -9,13 +9,13 @@ from datetime import timedelta
from string import Template
from textwrap import dedent
from test_tools.fs_utils import (
from test_tools.fs_tools import (
check_if_directory_exists,
create_directory,
write_file,
remove,
)
from test_utils.systemd import reload_daemon
from test_tools.systemctl import reload_daemon
opencas_drop_in_directory = Path("/etc/systemd/system/open-cas.service.d/")
test_drop_in_file = Path("10-modified-timeout.conf")


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -20,9 +20,9 @@ from api.cas.cli import *
from api.cas.core import Core
from core.test_run import TestRun
from storage_devices.device import Device
from test_utils.os_utils import reload_kernel_module
from test_utils.output import CmdException, Output
from test_utils.size import Size, Unit
from test_tools.os_tools import reload_kernel_module
from connection.utils.output import CmdException, Output
from type_def.size import Size, Unit
# casadm commands
@@ -48,6 +48,7 @@ def start_cache(
)
_cache_id = str(cache_id) if cache_id is not None else None
_cache_mode = cache_mode.name.lower() if cache_mode else None
output = TestRun.executor.run(
start_cmd(
cache_dev=cache_dev.path,
@@ -59,33 +60,71 @@ def start_cache(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to start cache.", output)
return Cache(cache_dev)
if not _cache_id:
from api.cas.casadm_parser import get_caches
cache_list = get_caches()
attached_cache_list = [cache for cache in cache_list if cache.cache_device is not None]
# Compare paths of old and new caches to find the only one created just now.
# Needed when cache_id was not passed in the CLI command.
new_cache = next(
cache for cache in attached_cache_list if cache.cache_device.path == cache_dev.path
)
_cache_id = new_cache.cache_id
cache = Cache(cache_id=int(_cache_id), device=cache_dev, cache_line_size=_cache_line_size)
TestRun.dut.cache_list.append(cache)
return cache
def load_cache(device: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(load_cmd(cache_dev=device.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load cache.", output)
return Cache(device)
caches_after_load = get_caches()
new_cache = next(cache for cache in caches_after_load if cache.cache_id not in
[cache.cache_id for cache in caches_before_load])
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
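The new load path above identifies the freshly started cache by diffing the cache list taken before the command against the one taken after it. A minimal, self-contained sketch of that pattern (the `CacheInfo` class and all values here are illustrative, not the framework's own types):

```python
from dataclasses import dataclass


@dataclass
class CacheInfo:
    cache_id: int
    device_path: str


def find_new_cache(before: list, after: list) -> CacheInfo:
    # The cache created by the command is the one whose id was unknown
    # before the command ran.
    known_ids = {c.cache_id for c in before}
    return next(c for c in after if c.cache_id not in known_ids)


before = [CacheInfo(1, "/dev/nvme0n1")]
after = [CacheInfo(1, "/dev/nvme0n1"), CacheInfo(2, "/dev/nvme1n1")]
print(find_new_cache(before, after).cache_id)  # → 2
```

`next()` without a default raises `StopIteration` if no new cache appears, which surfaces a failed start immediately instead of returning a bogus object.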
def attach_cache(cache_id: int, device: Device, force: bool, shortcut: bool = False) -> Output:
def attach_cache(
cache_id: int, device: Device, force: bool = False, shortcut: bool = False
) -> Output:
output = TestRun.executor.run(
attach_cache_cmd(
cache_dev=device.path, cache_id=str(cache_id), force=force, shortcut=shortcut
)
)
if output.exit_code != 0:
raise CmdException("Failed to attach cache.", output)
attached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
attached_cache.cache_device = device
return output
def detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(detach_cache_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@@ -93,8 +132,16 @@ def stop_cache(cache_id: int, no_data_flush: bool = False, shortcut: bool = Fals
output = TestRun.executor.run(
stop_cmd(cache_id=str(cache_id), no_data_flush=no_data_flush, shortcut=shortcut)
)
if output.exit_code != 0:
raise CmdException("Failed to stop cache.", output)
TestRun.dut.cache_list = [
cache for cache in TestRun.dut.cache_list if cache.cache_id != cache_id
]
TestRun.dut.core_list = [core for core in TestRun.dut.core_list if core.cache_id != cache_id]
return output
@@ -192,7 +239,7 @@ def set_param_promotion(cache_id: int, policy: PromotionPolicy, shortcut: bool =
def set_param_promotion_nhit(
cache_id: int, threshold: int = None, trigger: int = None, shortcut: bool = False
cache_id: int, threshold: int = None, trigger: int = None, shortcut: bool = False
) -> Output:
_threshold = str(threshold) if threshold is not None else None
_trigger = str(trigger) if trigger is not None else None
@@ -267,7 +314,7 @@ def get_param_cleaning_acp(
def get_param_promotion(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@@ -281,7 +328,7 @@ def get_param_promotion(
def get_param_promotion_nhit(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@@ -325,7 +372,11 @@ def add_core(cache: Cache, core_dev: Device, core_id: int = None, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to add core.", output)
return Core(core_dev.path, cache.cache_id)
core = Core(core_dev.path, cache.cache_id)
TestRun.dut.core_list.append(core)
return core
def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool = False) -> Output:
@@ -336,6 +387,12 @@ def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to remove core.", output)
TestRun.dut.core_list = [
core
for core in TestRun.dut.core_list
if core.cache_id != cache_id or core.core_id != core_id
]
return output
@@ -485,22 +542,41 @@ def standby_init(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to init standby cache.", output)
return Cache(cache_dev)
return Cache(cache_id=cache_id, device=cache_dev)
def standby_load(cache_dev: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(standby_load_cmd(cache_dev=cache_dev.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load standby cache.", output)
return Cache(cache_dev)
raise CmdException("Failed to load cache.", output)
caches_after_load = get_caches()
# Compare IDs of old and new caches to find the only one created just now
new_cache = next(
cache
for cache in caches_after_load
if cache.cache_id not in [cache.cache_id for cache in caches_before_load]
)
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
def standby_detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(standby_detach_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach standby cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@@ -510,6 +586,10 @@ def standby_activate_cache(cache_dev: Device, cache_id: int, shortcut: bool = Fa
)
if output.exit_code != 0:
raise CmdException("Failed to activate standby cache.", output)
activated_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
activated_cache.cache_device = cache_dev
return output


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -26,7 +26,7 @@ class OutputFormat(Enum):
class StatsFilter(Enum):
all = "all"
conf = "configuration"
conf = "config"
usage = "usage"
req = "request"
blk = "block"


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,11 +14,12 @@ from typing import List
from api.cas import casadm
from api.cas.cache_config import *
from api.cas.casadm_params import *
from api.cas.core_config import CoreStatus
from api.cas.ioclass_config import IoClass
from api.cas.version import CasVersion
from core.test_run_utils import TestRun
from storage_devices.device import Device
from test_utils.output import CmdException
from connection.utils.output import CmdException
class Stats(dict):
@@ -54,12 +55,12 @@ def get_caches() -> list:
def get_cores(cache_id: int) -> list:
from api.cas.core import Core, CoreStatus
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_active(core):
return CoreStatus[core["status"].lower()] == CoreStatus.active
return core["status"] == CoreStatus.active
return [
Core(core["device_path"], core["cache_id"])
@@ -68,6 +69,36 @@ def get_cores(cache_id: int) -> list:
]
def get_inactive_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_inactive(core):
return core["status"] == CoreStatus.inactive
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_inactive(core) and core["cache_id"] == cache_id
]
def get_detached_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_detached(core):
return core["status"] == CoreStatus.detached
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_detached(core) and core["cache_id"] == cache_id
]
def get_cas_devices_dict() -> dict:
device_list = list(csv.DictReader(casadm.list_caches(OutputFormat.csv).stdout.split("\n")))
devices = {"caches": {}, "cores": {}, "core_pool": {}}
@@ -80,21 +111,21 @@ def get_cas_devices_dict() -> dict:
params = [
("id", cache_id),
("device_path", device["disk"]),
("status", device["status"]),
("status", CacheStatus(device["status"].lower())),
]
devices["caches"][cache_id] = dict([(key, value) for key, value in params])
elif device["type"] == "core":
params = [
("cache_id", cache_id),
("core_id", (int(device["id"]) if device["id"] != "-" else device["id"])),
("device_path", device["disk"]),
("status", device["status"]),
("status", CoreStatus(device["status"].lower())),
("exp_obj", device["device"]),
]
if core_pool:
params.append(("core_pool", device))
devices["core_pool"][device["disk"]] = dict(
[(key, value) for key, value in params]
)
devices["core_pool"][device["disk"]] = dict([(key, value) for key, value in params])
else:
devices["cores"][(cache_id, int(device["id"]))] = dict(
[(key, value) for key, value in params]
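`get_cas_devices_dict()` builds these entries by feeding casadm's CSV listing into `csv.DictReader`, tracking the most recent cache row so cores can be keyed by `(cache_id, core_id)`. A standalone sketch of that parsing loop, with an invented sample listing (column names mirror what the parser reads; the rows themselves are made up):

```python
import csv
import io

# Hypothetical `casadm --list-caches --output-format csv` output.
stdout = """type,id,disk,status,device
cache,1,/dev/nvme0n1,Running,-
core,1,/dev/sdb,Active,/dev/cas1-1
"""

devices = {"caches": {}, "cores": {}}
cache_id = None
for row in csv.DictReader(io.StringIO(stdout)):
    if row["type"] == "cache":
        # Remember the current cache; its cores follow in subsequent rows.
        cache_id = int(row["id"])
        devices["caches"][cache_id] = {
            "device_path": row["disk"],
            "status": row["status"].lower(),
        }
    elif row["type"] == "core":
        devices["cores"][(cache_id, int(row["id"]))] = {
            "device_path": row["disk"],
            "status": row["status"].lower(),
        }

print(devices["cores"][(1, 1)]["status"])  # → active
```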
@@ -205,11 +236,14 @@ def get_io_class_list(cache_id: int) -> list:
return ret
def get_core_info_by_path(core_disk_path) -> dict | None:
def get_core_info_for_cache_by_path(core_disk_path: str, target_cache_id: int) -> dict | None:
output = casadm.list_caches(OutputFormat.csv, by_id_path=True)
reader = csv.DictReader(io.StringIO(output.stdout))
cache_id = -1
for row in reader:
if row["type"] == "core" and row["disk"] == core_disk_path:
if row["type"] == "cache":
cache_id = int(row["id"])
if row["type"] == "core" and row["disk"] == core_disk_path and target_cache_id == cache_id:
return {
"core_id": row["id"],
"core_device": row["disk"],


@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -192,7 +192,7 @@ remove_core_help = [
remove_inactive_help = [
r"casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"Usage: casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"Remove inactive core device from cache instance",
r"Options that are valid with --remove-inactive are:",
r"-i --cache-id \<ID\> Identifier of cache instance \<1-16384\>",
@@ -285,7 +285,7 @@ standby_help = [
]
zero_metadata_help = [
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]]",
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]",
r"Clear metadata from caching device",
r"Options that are valid with --zero-metadata are:",
r"-d --device \<DEVICE\> Path to device on which metadata would be cleared",


@@ -1,13 +1,27 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import re
from connection.utils.output import Output
from core.test_run import TestRun
from test_utils.output import Output
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
attach_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\."
]
load_inactive_core_missing = [
r"WARNING: Can not resolve path to core \d+ from cache \d+\. By-id path will be shown for that "
@@ -17,11 +31,18 @@ load_inactive_core_missing = [
start_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device\.",
r"Old metadata found on device",
r"Please load cache metadata using --load option or use --force to",
r" discard on-disk metadata and start fresh cache instance\.",
]
attach_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\.",
]
start_cache_on_already_used_dev = [
r"Error inserting cache \d+",
r"Cache device \'\/dev\/\S+\' is already used as cache\.",
@@ -84,11 +105,20 @@ already_cached_core = [
]
remove_mounted_core = [
r"Can\'t remove core \d+ from cache \d+\. Device /dev/cas\d+-\d+ is mounted\!"
r"Can\'t remove core \d+ from cache \d+ due to mounted devices:"
]
remove_mounted_core_kernel = [
r"Error while removing core device \d+ from cache instance \d+",
r"Device opens or mount are pending to this cache",
]
stop_cache_mounted_core = [
r"Error while removing cache \d+",
r"Can\'t stop cache instance \d+ due to mounted devices:"
]
stop_cache_mounted_core_kernel = [
r"Error while stopping cache \d+",
r"Device opens or mount are pending to this cache",
]
@@ -224,6 +254,12 @@ malformed_io_class_header = [
unexpected_cls_option = [r"Option '--cache-line-size \(-x\)' is not allowed"]
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
def check_stderr_msg(output: Output, expected_messages, negate=False):
return __check_string_msg(output.stderr, expected_messages, negate)
@@ -242,7 +278,7 @@ def __check_string_msg(text: str, expected_messages, negate=False):
msg_ok = False
elif matches and negate:
TestRun.LOGGER.error(
f"Message is incorrect, expected to not find: {msg}\n " f"actual: {text}."
f"Message is incorrect, expected to not find: {msg}\n actual: {text}."
)
msg_ok = False
return msg_ok
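`__check_string_msg()` above requires every expected regex to match somewhere in the output (or, when negated, to match nowhere). A minimal sketch of that check, with the logging stripped out and a hypothetical stderr sample:

```python
import re


def check_msg(text: str, expected_patterns, negate: bool = False) -> bool:
    # All patterns must match (negate=False) or all must be absent (negate=True).
    ok = True
    for pattern in expected_patterns:
        matches = re.search(pattern, text)
        if not matches and not negate:
            ok = False
        elif matches and negate:
            ok = False
    return ok


stderr = "Error inserting cache 1\nOld metadata found on device"
print(check_msg(stderr, [r"Error inserting cache \d+", r"Old metadata found"]))  # → True
print(check_msg(stderr, [r"Old metadata"], negate=True))  # → False
```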


@@ -1,30 +1,24 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from datetime import timedelta
from typing import List
from enum import Enum
from api.cas import casadm
from api.cas.cache_config import SeqCutOffParameters, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_core_info_by_path
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_cas_devices_dict
from api.cas.core_config import CoreStatus
from api.cas.statistics import CoreStats, CoreIoClassStats
from core.test_run_utils import TestRun
from storage_devices.device import Device
from test_tools import fs_utils, disk_utils
from test_utils.os_utils import wait, sync
from test_utils.size import Unit, Size
class CoreStatus(Enum):
empty = 0
active = 1
inactive = 2
detached = 3
from test_tools.fs_tools import Filesystem, ls_item
from test_tools.os_tools import sync
from test_tools.common.wait import wait
from type_def.size import Unit, Size
SEQ_CUTOFF_THRESHOLD_MAX = Size(4194181, Unit.KibiByte)
@@ -35,20 +29,35 @@ class Core(Device):
def __init__(self, core_device: str, cache_id: int):
self.core_device = Device(core_device)
self.path = None
self.cache_id = cache_id
core_info = self.__get_core_info()
# "-" is special case for cores in core pool
if core_info["core_id"] != "-":
self.core_id = int(core_info["core_id"])
if core_info["exp_obj"] != "-":
Device.__init__(self, core_info["exp_obj"])
self.cache_id = cache_id
self.partitions = []
self.block_size = None
def __get_core_info(self):
return get_core_info_by_path(self.core_device.path)
def __get_core_info(self) -> dict | None:
core_dicts = get_cas_devices_dict()["cores"].values()
# First look among cores attached to this cache
core_device = [
core
for core in core_dicts
if core["cache_id"] == self.cache_id and core["device_path"] == self.core_device.path
]
if core_device:
return core_device[0]
def create_filesystem(self, fs_type: disk_utils.Filesystem, force=True, blocksize=None):
# Fall back to the core pool
core_pool_dicts = get_cas_devices_dict()["core_pool"].values()
core_pool_device = [
core for core in core_pool_dicts if core["device_path"] == self.core_device.path
]
return core_pool_device[0]
def create_filesystem(self, fs_type: Filesystem, force=True, blocksize=None):
super().create_filesystem(fs_type, force, blocksize)
self.core_device.filesystem = self.filesystem
@@ -76,8 +85,8 @@ class Core(Device):
percentage_val=percentage_val,
)
def get_status(self):
return CoreStatus[self.__get_core_info()["status"].lower()]
def get_status(self) -> CoreStatus:
return self.__get_core_info()["status"]
def get_seq_cut_off_parameters(self):
return get_seq_cut_off_parameters(self.cache_id, self.core_id)
@@ -137,7 +146,7 @@ class Core(Device):
def check_if_is_present_in_os(self, should_be_visible=True):
device_in_system_message = "CAS device exists in OS."
device_not_in_system_message = "CAS device does not exist in OS."
item = fs_utils.ls_item(f"{self.path}")
item = ls_item(self.path)
if item is not None:
if should_be_visible:
TestRun.LOGGER.info(device_in_system_message)


@@ -0,0 +1,16 @@
#
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum
class CoreStatus(Enum):
empty = "empty"
active = "active"
inactive = "inactive"
detached = "detached"
def __str__(self):
return self.value
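The new `CoreStatus` above follows the string-valued-enum pattern the framework is converging on: the enum value equals the text casadm prints, so parsing output is just `CoreStatus(text.lower())` and printing is `str(status)`. A standalone sketch of the pattern (the `Status` name here is illustrative):

```python
from enum import Enum


class Status(Enum):
    active = "active"
    inactive = "inactive"
    detached = "detached"

    def __str__(self):
        # Print the raw status text, matching what the CLI reports.
        return self.value


print(str(Status.active))                      # → active
print(Status("detached") is Status.detached)   # → True
```

Because `Enum` value lookup raises `ValueError` on unknown strings, an unexpected status in casadm output fails loudly during parsing instead of propagating silently.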


@@ -6,8 +6,8 @@
import re
from test_utils.dmesg import get_dmesg
from test_utils.size import Size, Unit
from test_tools.dmesg import get_dmesg
from type_def.size import Size, Unit
def get_metadata_size_on_device(cache_id: int) -> Size:


@@ -7,8 +7,7 @@
from api.cas import casadm_parser
from api.cas.cache_config import CacheMode
from storage_devices.device import Device
from test_tools import fs_utils
from test_tools.fs_tools import remove, write_file
opencas_conf_path = "/etc/opencas/opencas.conf"
@@ -34,7 +33,7 @@ class InitConfig:
@staticmethod
def remove_config_file():
fs_utils.remove(opencas_conf_path, force=False)
remove(opencas_conf_path, force=False)
def save_config_file(self):
config_lines = []
@@ -47,7 +46,7 @@ class InitConfig:
config_lines.append(CoreConfigLine.header)
for c in self.core_config_lines:
config_lines.append(str(c))
fs_utils.write_file(opencas_conf_path, "\n".join(config_lines), False)
write_file(opencas_conf_path, "\n".join(config_lines), False)
@classmethod
def create_init_config_from_running_configuration(
@@ -69,7 +68,7 @@ class InitConfig:
@classmethod
def create_default_init_config(cls):
cas_version = casadm_parser.get_casadm_version()
fs_utils.write_file(opencas_conf_path, f"version={cas_version.base}")
write_file(opencas_conf_path, f"version={cas_version.base}")
class CacheConfigLine:


@@ -9,8 +9,9 @@ import os
from core.test_run import TestRun
from api.cas import cas_module
from api.cas.version import get_installed_cas_version
from test_utils import os_utils, git
from test_utils.output import CmdException
from test_tools import git
from connection.utils.output import CmdException
from test_tools.os_tools import is_kernel_module_loaded
def rsync_opencas_sources():
@@ -98,7 +99,7 @@ def reinstall_opencas(version: str = ""):
def check_if_installed(version: str = ""):
TestRun.LOGGER.info("Check if Open CAS Linux is installed")
output = TestRun.executor.run("which casadm")
modules_loaded = os_utils.is_kernel_module_loaded(cas_module.CasModule.cache.value)
modules_loaded = is_kernel_module_loaded(cas_module.CasModule.cache.value)
if output.exit_code != 0 or not modules_loaded:
TestRun.LOGGER.info("CAS is not installed")


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,11 +14,10 @@ from datetime import timedelta
from packaging import version
from core.test_run import TestRun
from test_tools import fs_utils
from test_utils import os_utils
from test_utils.generator import random_string
from test_tools.fs_tools import write_file
from test_tools.os_tools import get_kernel_version
default_config_file_path = "/tmp/opencas_ioclass.conf"
default_config_file_path = TestRun.TEST_RUN_DATA_PATH + "/opencas_ioclass.conf"
MAX_IO_CLASS_ID = 32
MAX_IO_CLASS_PRIORITY = 255
@@ -109,7 +108,7 @@ class IoClass:
ioclass_config_path: str = default_config_file_path,
):
TestRun.LOGGER.info(f"Creating config file {ioclass_config_path}")
fs_utils.write_file(
write_file(
ioclass_config_path, IoClass.list_to_csv(ioclass_list, add_default_rule)
)
@@ -167,7 +166,7 @@ class IoClass:
"file_offset",
"request_size",
]
if os_utils.get_kernel_version() >= version.Version("4.13"):
if get_kernel_version() >= version.Version("4.13"):
rules.append("wlth")
rule = random.choice(rules)
@@ -178,13 +177,17 @@ class IoClass:
def add_random_params(rule: str):
if rule == "directory":
allowed_chars = string.ascii_letters + string.digits + "/"
rule += f":/{random_string(random.randint(1, 40), allowed_chars)}"
rule += f":/{''.join(random.choices(allowed_chars, k=random.randint(1, 40)))}"
elif rule in ["file_size", "lba", "pid", "file_offset", "request_size", "wlth"]:
rule += f":{Operator(random.randrange(len(Operator))).name}:{random.randrange(1000000)}"
elif rule == "io_class":
rule += f":{random.randrange(MAX_IO_CLASS_PRIORITY + 1)}"
elif rule in ["extension", "process_name", "file_name_prefix"]:
rule += f":{random_string(random.randint(1, 10))}"
allowed_chars = string.ascii_letters + string.digits
rule += f":{''.join(random.choices(allowed_chars, k=random.randint(1, 10)))}"
elif rule == "io_direction":
direction = random.choice(["read", "write"])
rule += f":{direction}"
if random.randrange(2):
rule += "&done"
return rule


@@ -10,7 +10,7 @@ from datetime import timedelta
import paramiko
from core.test_run import TestRun
from test_utils.os_utils import wait
from test_tools.common.wait import wait
def check_progress_bar(command: str, progress_bar_expected: bool = True):


@@ -1,17 +1,18 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import csv
from datetime import timedelta
from enum import Enum
from typing import List
from api.cas import casadm
from api.cas.casadm_params import StatsFilter
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
class UnitType(Enum):
@@ -22,6 +23,7 @@ class UnitType(Enum):
kibibyte = "[KiB]"
gibibyte = "[GiB]"
seconds = "[s]"
byte = "[B]"
def __str__(self):
return self.value
@@ -57,6 +59,9 @@ class CacheStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output\n{stats_dict}")
def __str__(self):
# stats_list contains the __str__ output of every stats section initialized in CacheStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@@ -68,6 +73,9 @@ class CacheStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreStats:
def __init__(
@@ -92,6 +100,9 @@ class CoreStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output\n{stats_dict}")
def __str__(self):
# stats_list contains the __str__ output of every stats section initialized in CoreStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@@ -103,6 +114,9 @@ class CoreStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreIoClassStats:
def __init__(
@@ -128,6 +142,9 @@ class CoreIoClassStats:
case StatsFilter.blk:
self.block_stats = BlockStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output\n{stats_dict}")
def __eq__(self, other):
# Check whether all initialized variables in self match those in other
return [getattr(self, stats_item) for stats_item in self.__dict__] == [
@@ -139,6 +156,9 @@ class CoreIoClassStats:
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
return "\n".join(stats_list)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CacheIoClassStats(CoreIoClassStats):
def __init__(
@@ -173,12 +193,31 @@ class CacheConfigStats:
self.cache_line_size = parse_value(
value=stats_dict["Cache line size [KiB]"], unit_type=UnitType.kibibyte
)
footprint_prefix = "Metadata Memory Footprint "
footprint_key = next(k for k in stats_dict if k.startswith(footprint_prefix))
self.metadata_memory_footprint = parse_value(
value=stats_dict["Metadata Memory Footprint [MiB]"], unit_type=UnitType.mebibyte
value=stats_dict[footprint_key],
unit_type=UnitType(footprint_key[len(footprint_prefix) :]),
)
self.dirty_for = parse_value(value=stats_dict["Dirty for [s]"], unit_type=UnitType.seconds)
self.status = stats_dict["Status"]
del stats_dict["Cache Id"]
del stats_dict["Cache Size [4KiB Blocks]"]
del stats_dict["Cache Size [GiB]"]
del stats_dict["Cache Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Devices"]
del stats_dict["Inactive Core Devices"]
del stats_dict["Write Policy"]
del stats_dict["Cleaning Policy"]
del stats_dict["Promotion Policy"]
del stats_dict["Cache line size [KiB]"]
del stats_dict[footprint_key]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
def __str__(self):
return (
f"Config stats:\n"
@@ -216,10 +255,13 @@ class CacheConfigStats:
and self.status == other.status
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreConfigStats:
def __init__(self, stats_dict):
self.core_id = stats_dict["Core Id"]
self.core_id = int(stats_dict["Core Id"])
self.core_dev = stats_dict["Core Device"]
self.exp_obj = stats_dict["Exported Object"]
self.core_size = parse_value(
@@ -232,6 +274,17 @@ class CoreConfigStats:
)
self.seq_cutoff_policy = stats_dict["Seq cutoff policy"]
del stats_dict["Core Id"]
del stats_dict["Core Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Size [4KiB Blocks]"]
del stats_dict["Core Size [GiB]"]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
del stats_dict["Seq cutoff threshold [KiB]"]
del stats_dict["Seq cutoff policy"]
def __str__(self):
return (
f"Config stats:\n"
@@ -259,6 +312,9 @@ class CoreConfigStats:
and self.seq_cutoff_policy == other.seq_cutoff_policy
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class IoClassConfigStats:
def __init__(self, stats_dict):
@@ -267,6 +323,11 @@ class IoClassConfigStats:
self.eviction_priority = stats_dict["Eviction priority"]
self.max_size = stats_dict["Max size"]
del stats_dict["IO class ID"]
del stats_dict["IO class name"]
del stats_dict["Eviction priority"]
del stats_dict["Max size"]
def __str__(self):
return (
f"Config stats:\n"
@@ -286,6 +347,9 @@ class IoClassConfigStats:
and self.max_size == other.max_size
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class UsageStats:
def __init__(self, stats_dict, percentage_val):
@@ -307,6 +371,18 @@ class UsageStats:
value=stats_dict[f"Inactive Dirty {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Free {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
if f"Inactive Occupancy {unit}" in stats_dict:
del stats_dict[f"Inactive Occupancy {unit}"]
if f"Inactive Clean {unit}" in stats_dict:
del stats_dict[f"Inactive Clean {unit}"]
if f"Inactive Dirty {unit}" in stats_dict:
del stats_dict[f"Inactive Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@@ -332,6 +408,9 @@ class UsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
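The usage, request, block, and error stats classes above share one pattern: parse the values they understand, then delete the consumed keys from `stats_dict`, so any keys left over reveal fields the parser does not handle. A minimal standalone sketch of that pattern (the helper name and sample keys are hypothetical, not part of the framework):

```python
# Sketch of the parse-then-prune idiom used by the stats classes:
# copy out the values we know how to parse, delete the consumed keys,
# and let leftover keys signal new/unknown fields in casadm output.
def parse_and_prune(stats_dict: dict, consumed_keys: list) -> dict:
    parsed = {}
    for key in consumed_keys:
        if key in stats_dict:
            parsed[key] = stats_dict[key]
            del stats_dict[key]
    return parsed


stats = {"Occupancy [%]": "51.2", "Free [%]": "48.8", "New field [%]": "0.0"}
parsed = parse_and_prune(stats, ["Occupancy [%]", "Free [%]"])
leftover = list(stats)  # unconsumed keys remain for inspection
```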
class IoClassUsageStats:
def __init__(self, stats_dict, percentage_val):
@@ -340,6 +419,11 @@ class IoClassUsageStats:
self.clean = parse_value(value=stats_dict[f"Clean {unit}"], unit_type=unit)
self.dirty = parse_value(value=stats_dict[f"Dirty {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@@ -363,15 +447,22 @@ class IoClassUsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStats:
def __init__(self, stats_dict, percentage_val):
unit = UnitType.percentage if percentage_val else UnitType.requests
self.read = RequestStatsChunk(
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.read
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.read,
)
self.write = RequestStatsChunk(
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.write
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.write,
)
self.pass_through_reads = parse_value(
value=stats_dict[f"Pass-Through reads {unit}"], unit_type=unit
@@ -386,6 +477,17 @@ class RequestStats:
value=stats_dict[f"Total requests {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.requests]:
for operation in [OperationType.read, OperationType.write]:
del stats_dict[f"{operation} hits {unit}"]
del stats_dict[f"{operation} partial misses {unit}"]
del stats_dict[f"{operation} full misses {unit}"]
del stats_dict[f"{operation} total {unit}"]
del stats_dict[f"Pass-Through reads {unit}"]
del stats_dict[f"Pass-Through writes {unit}"]
del stats_dict[f"Serviced requests {unit}"]
del stats_dict[f"Total requests {unit}"]
def __str__(self):
return (
f"Request stats:\n"
@@ -409,6 +511,9 @@ class RequestStats:
and self.requests_total == other.requests_total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStatsChunk:
def __init__(self, stats_dict, percentage_val: bool, operation: OperationType):
@@ -440,6 +545,9 @@ class RequestStatsChunk:
and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BlockStats:
def __init__(self, stats_dict, percentage_val):
@@ -455,6 +563,12 @@ class BlockStats:
device="exported object",
)
for unit in [UnitType.percentage, UnitType.block_4k]:
for device in ["core", "cache", "exported object"]:
del stats_dict[f"Reads from {device} {unit}"]
del stats_dict[f"Writes to {device} {unit}"]
del stats_dict[f"Total to/from {device} {unit}"]
def __str__(self):
return (
f"Block stats:\n"
@@ -470,6 +584,9 @@ class BlockStats:
self.core == other.core and self.cache == other.cache and self.exp_obj == other.exp_obj
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class ErrorStats:
def __init__(self, stats_dict, percentage_val):
@@ -482,6 +599,13 @@ class ErrorStats:
)
self.total_errors = parse_value(value=stats_dict[f"Total errors {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.requests]:
for device in ["Core", "Cache"]:
del stats_dict[f"{device} read errors {unit}"]
del stats_dict[f"{device} write errors {unit}"]
del stats_dict[f"{device} total errors {unit}"]
del stats_dict[f"Total errors {unit}"]
def __str__(self):
return (
f"Error stats:\n"
@@ -499,6 +623,9 @@ class ErrorStats:
and self.total_errors == other.total_errors
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunk:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@@ -517,6 +644,9 @@ class BasicStatsChunk:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunkError:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@@ -535,6 +665,9 @@ class BasicStatsChunkError:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
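All of the stats chunks above implement equality and iteration by walking `self.__dict__`, which preserves attribute insertion order (guaranteed since Python 3.7), so iterating yields values in the order they were assigned in `__init__`. A self-contained illustration of the idiom; the class below is an example, not part of the framework:

```python
# Example of comparing/iterating instances via __dict__ attribute order.
class ChunkExample:
    def __init__(self, reads, writes):
        self.reads = reads
        self.writes = writes
        self.total = reads + writes

    def __eq__(self, other):
        # Compare attribute values positionally, in insertion order.
        return [getattr(self, a) for a in self.__dict__] == [
            getattr(other, a) for a in other.__dict__
        ]

    def __iter__(self):
        return iter([getattr(self, a) for a in self.__dict__])


a = ChunkExample(3, 4)
b = ChunkExample(3, 4)
values = list(a)  # values in __init__ assignment order
```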
def get_stat_value(stat_dict: dict, key: str):
idx = key.index("[")
@@ -580,10 +713,10 @@ def _get_section_filters(filter: List[StatsFilter], io_class_stats: bool = False
def get_stats_dict(
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None,
):
csv_stats = casadm.print_statistics(
cache_id=cache_id,

View File

@@ -6,9 +6,9 @@
import re
from test_utils import git
from test_tools import git
from core.test_run import TestRun
from test_utils.output import CmdException
from connection.utils.output import CmdException
class CasVersion:
@@ -43,7 +43,7 @@ class CasVersion:
def get_available_cas_versions():
release_tags = git.get_release_tags()
release_tags = git.get_tags()
versions = [CasVersion.from_git_tag(tag) for tag in release_tags]
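A hedged sketch of what a version-from-tag parser like `CasVersion.from_git_tag` might do, assuming release tags of the form `v<major>.<minor>[.<sub>]` (e.g. `v22.6.2`); the real implementation may differ, this only illustrates the idea:

```python
import re

# Hypothetical parser: turn a release tag into a sortable version tuple.
def version_from_tag(tag: str) -> tuple:
    match = re.fullmatch(r"v(\d+)\.(\d+)(?:\.(\d+))?", tag.strip())
    if match is None:
        raise ValueError(f"Not a release tag: {tag!r}")
    major, minor, sub = match.groups()
    return int(major), int(minor), int(sub or 0)


# Tuples sort naturally, so a list of versions can be ordered directly.
versions = sorted(version_from_tag(t) for t in ["v22.6", "v21.3.1"])
```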

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -12,9 +12,8 @@ from core.test_run import TestRun
from api.cas import casadm
from storage_devices.disk import DiskType, DiskTypeSet
from api.cas.cache_config import CacheMode
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_utils.size import Size, Unit
from test_tools.fs_tools import Filesystem, remove, create_directory
from type_def.size import Size, Unit
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@@ -29,11 +28,10 @@ block_sizes = [1, 2, 4, 5, 8, 16, 32, 64, 128]
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.nand]))
def test_support_different_io_size(cache_mode):
"""
title: OpenCAS supports different IO sizes
description: |
OpenCAS supports IO of size in rage from 512b to 128K
title: Support for different I/O sizes
description: Verify support for I/O of sizes in range from 512B to 128KiB
pass_criteria:
- No IO errors
- No I/O errors
"""
with TestRun.step("Prepare cache and core devices"):
@@ -48,12 +46,12 @@ def test_support_different_io_size(cache_mode):
)
core = cache.add_core(core_disk.partitions[0])
with TestRun.step("Load the default ioclass config file"):
with TestRun.step("Load the default io class config file"):
cache.load_io_class(opencas_ioclass_conf_path)
with TestRun.step("Create a filesystem on the core device and mount it"):
fs_utils.remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
fs_utils.create_directory(path=mountpoint)
remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -17,12 +17,11 @@ from api.cas.cli_messages import (
)
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools import fs_utils
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem, read_file
from test_utils.filesystem.file import File
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
version_file_path = r"/var/lib/opencas/cas_version"
mountpoint = "/mnt"
@@ -31,46 +30,45 @@ mountpoint = "/mnt"
@pytest.mark.CI
def test_cas_version():
"""
title: Test for CAS version
title: Test for version number
description:
Check if CAS print version cmd returns consistent version with version file
Check if the version printed by the command is consistent with the version file
pass criteria:
- casadm version command succeeds
- versions from cmd and file in /var/lib/opencas/cas_version are consistent
- Version command succeeds
- Versions from cmd and file in /var/lib/opencas/cas_version are consistent
"""
with TestRun.step("Read cas version using casadm cmd"):
with TestRun.step("Read version using casadm cmd"):
output = casadm.print_version(output_format=OutputFormat.csv)
cmd_version = output.stdout
cmd_cas_versions = [version.split(",")[1] for version in cmd_version.split("\n")[1:]]
with TestRun.step(f"Read cas version from {version_file_path} location"):
file_read = fs_utils.read_file(version_file_path).split("\n")
with TestRun.step(f"Read version from {version_file_path} location"):
file_read = read_file(version_file_path).split("\n")
file_cas_version = next(
(line.split("=")[1] for line in file_read if "CAS_VERSION=" in line)
)
with TestRun.step("Compare cmd and file versions"):
if not all(file_cas_version == cmd_cas_version for cmd_cas_version in cmd_cas_versions):
TestRun.LOGGER.error(f"Cmd and file versions doesn`t match")
TestRun.LOGGER.error("Cmd and file versions don't match")
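The comparison above can be exercised in isolation: the CSV output from `casadm` carries one row per component with the version in the second column, and every row must agree with the `CAS_VERSION` line in the version file. The sample strings below are made up for demonstration:

```python
# Illustrative data mimicking the casadm CSV output and the version file.
csv_stdout = "Name,Version\nCAS Cache Kernel Module,22.6.2\nCAS CLI Utility,22.6.2"
file_lines = ["CAS_VERSION=22.6.2", "LAST_COMMIT_HASH=abc123"]

# Skip the CSV header row, take the second column of each data row.
cmd_versions = [row.split(",")[1] for row in csv_stdout.split("\n")[1:]]
# Extract the value of the CAS_VERSION= line from the file contents.
file_version = next(line.split("=")[1] for line in file_lines if "CAS_VERSION=" in line)
consistent = all(file_version == v for v in cmd_versions)
```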
@pytest.mark.CI
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
def test_negative_start_cache():
"""
title: Test start cache negative on cache device
title: Negative test for starting cache
description:
Check for negative cache start scenarios
Check starting cache using the same device or cache ID twice
pass criteria:
- Cache start succeeds
- Fails to start cache on the same device with another id
- Fails to start cache on another partition with the same id
- Starting cache on the same device with another ID fails
- Starting cache on another partition with the same ID fails
"""
with TestRun.step("Prepare cache device"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
cache_dev_1 = cache_dev.partitions[0]

View File

@@ -9,7 +9,7 @@ import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
@pytest.mark.CI

View File

@@ -0,0 +1,262 @@
#
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import posixpath
import random
import time
import pytest
from api.cas import casadm_parser, casadm
from api.cas.cache_config import CacheLineSize, CacheMode
from api.cas.cli import attach_cache_cmd
from api.cas.cli_messages import check_stderr_msg, attach_with_existing_metadata
from connection.utils.output import CmdException
from core.test_run import TestRun
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from storage_devices.nullblk import NullBlk
from test_tools.dmesg import clear_dmesg
from test_tools.fs_tools import Filesystem, create_directory, create_random_test_file, \
check_if_directory_exists, remove
from type_def.size import Size, Unit
mountpoint = "/mnt/cas"
test_file_path = f"{mountpoint}/test_file"
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.require_disk("core2", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
def test_attach_device_with_existing_metadata(cache_mode, cache_line_size):
"""
title: Test attaching cache with valid and relevant metadata.
description: |
Attach disk with valid and relevant metadata and verify whether the running configuration
wasn't affected by the values from the old metadata.
pass_criteria:
- No cache crash during attach and detach.
- Old metadata doesn't affect the running cache.
- No kernel panic.
"""
with TestRun.step("Prepare random cache line size and cache mode (different than tested)"):
random_cache_mode = _get_random_uniq_cache_mode(cache_mode)
cache_mode1, cache_mode2 = cache_mode, random_cache_mode
random_cache_line_size = _get_random_uniq_cache_line_size(cache_line_size)
cache_line_size1, cache_line_size2 = cache_line_size, random_cache_line_size
with TestRun.step("Clear dmesg log"):
clear_dmesg()
with TestRun.step("Prepare devices for caches and cores"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(2, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev1 = TestRun.disks["core"]
core_dev2 = TestRun.disks["core2"]
core_dev1.create_partitions([Size(2, Unit.GibiByte)] * 2)
core_dev2.create_partitions([Size(2, Unit.GibiByte)] * 2)
with TestRun.step("Start 2 caches with different parameters and add core to each"):
cache1 = casadm.start_cache(
cache_dev, force=True, cache_line_size=cache_line_size1
)
if cache1.has_volatile_metadata():
pytest.skip("Non-volatile metadata needed to run this test")
for core in core_dev1.partitions:
cache1.add_core(core)
cache2 = casadm.start_cache(
cache_dev2, force=True, cache_line_size=cache_line_size2
)
for core in core_dev2.partitions:
cache2.add_core(core)
cores_in_cache1_before = {
core.core_device.path for core in casadm_parser.get_cores(cache_id=cache1.cache_id)
}
with TestRun.step(f"Set cache modes for caches to {cache_mode1} and {cache_mode2}"):
cache1.set_cache_mode(cache_mode1)
cache2.set_cache_mode(cache_mode2)
with TestRun.step("Stop second cache"):
cache2.stop()
with TestRun.step("Detach first cache device"):
cache1.detach()
with TestRun.step("Try to attach the other cache device to first cache without force flag"):
try:
cache1.attach(device=cache_dev2)
TestRun.fail("Cache attached successfully. "
"Expected: attach to fail")
except CmdException as exc:
check_stderr_msg(exc.output, attach_with_existing_metadata)
TestRun.LOGGER.info("Cache attach failed as expected")
with TestRun.step("Attach the other cache device to first cache with force flag"):
cache1.attach(device=cache_dev2, force=True)
cores_after_attach = casadm_parser.get_cores(cache_id=cache1.cache_id)
with TestRun.step("Verify that the old configuration doesn't affect the new cache"):
cores_in_cache1 = {core.core_device.path for core in cores_after_attach}
if cores_in_cache1 != cores_in_cache1_before:
TestRun.fail(
f"After attaching cache device, core list has changed:"
f"\nUsed {cores_in_cache1}"
f"\nShould use {cores_in_cache1_before}."
)
if cache1.get_cache_line_size() == cache_line_size2:
TestRun.fail(
f"After attaching cache device, cache line size changed:"
f"\nUsed {cache_line_size2}"
f"\nShould use {cache_line_size1}."
)
if cache1.get_cache_mode() != cache_mode1:
TestRun.fail(
f"After attaching cache device, cache mode changed:"
f"\nUsed {cache1.get_cache_mode()}"
f"\nShould use {cache_mode1}."
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", [CacheMode.WB, CacheMode.WT])
def test_attach_detach_md5sum(cache_mode):
"""
title: Test for md5sum of file after attach/detach operation.
description: |
Test data integrity after detach/attach operations
pass_criteria:
- CAS doesn't crash during attach and detach.
- md5sums before and after operations match each other
"""
with TestRun.step("Prepare cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(3, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(6, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
core = cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Create a filesystem on the core device and mount it"):
if check_if_directory_exists(mountpoint):
remove(mountpoint, force=True, recursive=True)
create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)
with TestRun.step("Write data to the exported object"):
test_file_main = create_random_test_file(
target_file_path=posixpath.join(mountpoint, "test_file"),
file_size=Size(5, Unit.GibiByte),
)
with TestRun.step("Calculate test file md5sums before detach"):
test_file_md5sum_before = test_file_main.md5sum()
with TestRun.step("Detach cache device"):
cache.detach()
with TestRun.step("Attach different cache device"):
cache.attach(device=cache_dev2, force=True)
with TestRun.step("Calculate cache test file md5sums after cache attach"):
test_file_md5sum_after = test_file_main.md5sum()
with TestRun.step("Compare test file md5sums"):
if test_file_md5sum_before != test_file_md5sum_after:
TestRun.fail(
"MD5 sums of core before and after do not match.\n"
f"Expected: {test_file_md5sum_before}\n"
f"Actual: {test_file_md5sum_after}"
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
def test_stop_cache_during_attach(cache_mode):
"""
title: Test cache stop during attach.
description: Test for handling concurrent cache attach and stop.
pass_criteria:
- No system crash.
- Stop operation completed successfully.
"""
with TestRun.step("Create null_blk device for cache"):
nullblk = NullBlk.create(size_gb=1500)
with TestRun.step("Prepare cache and core devices"):
cache_dev = nullblk[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step(f"Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Start cache re-attach in background"):
TestRun.executor.run_in_background(
attach_cache_cmd(str(cache.cache_id), cache_dev.path)
)
time.sleep(1)
with TestRun.step("Stop cache"):
cache.stop()
with TestRun.step("Verify if cache stopped"):
caches = casadm_parser.get_caches()
if caches:
TestRun.fail(
"Cache is still running despite stop operation.\n"
"Expected behaviour: cache stopped\n"
"Actual behaviour: cache running"
)
def _get_random_uniq_cache_line_size(cache_line_size) -> CacheLineSize:
return random.choice([c for c in list(CacheLineSize) if c is not cache_line_size])
def _get_random_uniq_cache_mode(cache_mode) -> CacheMode:
return random.choice([c for c in list(CacheMode) if c is not cache_mode])
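The two helpers above follow a small "random but different" pattern: pick a random enum member excluding the one already under test, so the second cache is guaranteed to use a distinct configuration. A self-contained illustration with a stand-in enum (`Mode` is an example, not the framework's `CacheMode`):

```python
import random
from enum import Enum


class Mode(Enum):
    WT = "wt"
    WB = "wb"
    WA = "wa"


def random_other(current: Mode) -> Mode:
    # Exclude the current member by identity, then pick uniformly.
    return random.choice([m for m in Mode if m is not current])


other = random_other(Mode.WB)  # always a member different from Mode.WB
```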

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -17,8 +17,8 @@ from api.cas.cache_config import (
)
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from test_utils.size import Size, Unit
from test_utils.os_utils import Udev
from type_def.size import Size, Unit
from test_tools.udev import Udev
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@@ -65,10 +65,10 @@ def test_cleaning_policies_in_write_back(cleaning_policy: CleaningPolicy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running CAS cleaner"):
with TestRun.step("Check for running cleaner process"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("CAS cleaner process is not running!")
TestRun.fail("Cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@@ -133,10 +133,10 @@ def test_cleaning_policies_in_write_through(cleaning_policy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running CAS cleaner"):
with TestRun.step("Check for running cleaner process"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("CAS cleaner process is not running!")
TestRun.fail("Cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@@ -193,12 +193,12 @@ def set_cleaning_policy_params(cache, cleaning_policy):
if current_acp_params.wake_up_time != acp_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_acp_params.wake_up_time}, "
f"Wake up time is {current_acp_params.wake_up_time}, "
f"should be {acp_params.wake_up_time}\n"
)
if current_acp_params.flush_max_buffers != acp_params.flush_max_buffers:
failed_params += (
f"Flush Max Buffers is {current_acp_params.flush_max_buffers}, "
f"Flush max buffers is {current_acp_params.flush_max_buffers}, "
f"should be {acp_params.flush_max_buffers}\n"
)
TestRun.LOGGER.error(f"ACP parameters did not switch properly:\n{failed_params}")
@@ -215,22 +215,22 @@ def set_cleaning_policy_params(cache, cleaning_policy):
failed_params = ""
if current_alru_params.wake_up_time != alru_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_alru_params.wake_up_time}, "
f"Wake up time is {current_alru_params.wake_up_time}, "
f"should be {alru_params.wake_up_time}\n"
)
if current_alru_params.staleness_time != alru_params.staleness_time:
failed_params += (
f"Staleness Time is {current_alru_params.staleness_time}, "
f"Staleness time is {current_alru_params.staleness_time}, "
f"should be {alru_params.staleness_time}\n"
)
if current_alru_params.flush_max_buffers != alru_params.flush_max_buffers:
failed_params += (
f"Flush Max Buffers is {current_alru_params.flush_max_buffers}, "
f"Flush max buffers is {current_alru_params.flush_max_buffers}, "
f"should be {alru_params.flush_max_buffers}\n"
)
if current_alru_params.activity_threshold != alru_params.activity_threshold:
failed_params += (
f"Activity Threshold is {current_alru_params.activity_threshold}, "
f"Activity threshold is {current_alru_params.activity_threshold}, "
f"should be {alru_params.activity_threshold}\n"
)
TestRun.LOGGER.error(f"ALRU parameters did not switch properly:\n{failed_params}")
@@ -245,9 +245,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.alru:
if core_writes_before_wait_for_cleaning != Size.zero():
TestRun.LOGGER.error(
"CAS cleaner started to clean dirty data right after IO! "
"Cleaner process started to clean dirty data right after I/O! "
"According to ALRU parameters set in this test cleaner should "
"wait 10 seconds after IO before cleaning dirty data"
"wait 10 seconds after I/O before cleaning dirty data"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(
@@ -266,9 +266,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.acp:
if core_writes_before_wait_for_cleaning == Size.zero():
TestRun.LOGGER.error(
"CAS cleaner did not start cleaning dirty data right after IO! "
"Cleaner process did not start cleaning dirty data right after I/O! "
"According to ACP policy cleaner should start "
"cleaning dirty data right after IO"
"cleaning dirty data right after I/O"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(

View File

@@ -1,21 +1,22 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from time import sleep
import pytest
from api.cas import casadm, casadm_parser, cli
from api.cas.cache_config import CacheMode, CleaningPolicy, CacheModeTrait, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@@ -23,10 +24,10 @@ from test_utils.size import Size, Unit
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.hdd4k]))
def test_concurrent_cores_flush(cache_mode: CacheMode):
"""
title: Fail to flush two cores simultaneously.
title: Flush two cores simultaneously - negative.
description: |
CAS should return an error on attempt to flush second core if there is already
one flush in progress.
Validate that the attempt to flush another core when there is already one flush in
progress on the same cache will fail.
pass_criteria:
- No system crash.
- First core flushing should finish successfully.
@@ -39,7 +40,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
core_dev = TestRun.disks["core"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev.create_partitions([Size(5, Unit.GibiByte)] * 2)
core_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
cache_part = cache_dev.partitions[0]
core_part1 = core_dev.partitions[0]
@@ -48,7 +49,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_part, cache_mode, force=True)
with TestRun.step(f"Add both core devices to cache"):
with TestRun.step("Add both core devices to cache"):
core1 = cache.add_core(core_part1)
core2 = cache.add_core(core_part2)
@@ -56,37 +57,34 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Run concurrent fio on both cores"):
fio_pids = []
with TestRun.step("Run fio on both cores"):
data_per_core = cache.size / 2
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.size(data_per_core)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
for core in [core1, core2]:
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.target(core.path)
.size(core.size)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
fio_pid = fio.run_in_background()
fio_pids.append(fio_pid)
for fio_pid in fio_pids:
if not TestRun.executor.check_if_process_exists(fio_pid):
TestRun.fail("Fio failed to start")
with TestRun.step("Wait for fio to finish"):
for fio_pid in fio_pids:
while TestRun.executor.check_if_process_exists(fio_pid):
sleep(1)
fio.add_job().target(core.path)
fio.run()
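A hedged sketch of what the single multi-job fio invocation above amounts to at the fio level: one global section with the shared parameters and one job section per core target, so a single fio process drives all targets concurrently. The device paths below are illustrative:

```python
# Build a fio-style job file: shared options in [global], one [jobN]
# section per target device. This mirrors the add_job().target() pattern.
def build_fio_jobfile(targets, size_bytes, bs=4 << 20):
    lines = [
        "[global]",
        "ioengine=libaio",
        f"size={size_bytes}",
        f"bs={bs}",
        "rw=write",
        "direct=1",
    ]
    for i, target in enumerate(targets):
        lines += [f"[job{i}]", f"filename={target}"]
    return "\n".join(lines)


jobfile = build_fio_jobfile(["/dev/cas1-1", "/dev/cas1-2"], 1 << 30)
```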
with TestRun.step("Check if both cores contain dirty blocks"):
if core1.get_dirty_blocks() == Size.zero():
TestRun.fail("The first core does not contain dirty blocks")
if core2.get_dirty_blocks() == Size.zero():
TestRun.fail("The second core does not contain dirty blocks")
core2_dirty_blocks_before = core2.get_dirty_blocks()
required_dirty_data = (
(data_per_core * 0.9).align_down(Unit.Blocks4096.value).set_unit(Unit.Blocks4096)
)
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data < required_dirty_data:
TestRun.fail(f"Core {core1.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, actual {core1_dirty_data}.")
core2_dirty_data_before = core2.get_dirty_blocks()
if core2_dirty_data_before < required_dirty_data:
TestRun.fail(f"Core {core2.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, actual "
f"{core2_dirty_data_before}.")
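The 90% threshold above can be checked with plain integers: take 90% of the per-core data volume and align it down to a whole number of 4 KiB blocks, mirroring what `align_down(Unit.Blocks4096.value)` does in the framework. The sizes below are illustrative:

```python
# Plain-integer version of the dirty-data threshold computation.
BLOCK_4K = 4096


def align_down(size_bytes: int, alignment: int) -> int:
    # Round down to the nearest multiple of the alignment.
    return (size_bytes // alignment) * alignment


data_per_core = 1 * 1024**3  # e.g. 1 GiB written per core
required = align_down(int(data_per_core * 0.9), BLOCK_4K)
required_blocks = required // BLOCK_4K
```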
with TestRun.step("Start flushing the first core in background"):
output_pid = TestRun.executor.run_in_background(
@@ -104,7 +102,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
pass
with TestRun.step(
"Wait until first core reach 40% flush and start flush operation on the second core"
"Wait until first core reaches 40% flush and start flush operation on the second core"
):
percentage = 0
while percentage < 40:
@@ -131,18 +129,20 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
except CmdException:
TestRun.LOGGER.info("The first core is not flushing dirty data anymore")
with TestRun.step("Check number of dirty data on both cores"):
if core1.get_dirty_blocks() > Size.zero():
with TestRun.step("Check the size of dirty data on both cores"):
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data > Size.zero():
TestRun.LOGGER.error(
"The quantity of dirty cache lines on the first core "
"after completed flush should be zero"
"There should not be any dirty data on the first core after completed flush.\n"
f"Dirty data: {core1_dirty_data}."
)
core2_dirty_blocks_after = core2.get_dirty_blocks()
if core2_dirty_blocks_before != core2_dirty_blocks_after:
core2_dirty_data_after = core2.get_dirty_blocks()
if core2_dirty_data_after != core2_dirty_data_before:
TestRun.LOGGER.error(
"The quantity of dirty cache lines on the second core "
"after failed flush should not change"
"Dirty data on the second core after failed flush should not change.\n"
f"Dirty data before flush: {core2_dirty_data_before}, "
f"after: {core2_dirty_data_after}"
)
@@ -151,9 +151,9 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_concurrent_caches_flush(cache_mode: CacheMode):
"""
title: Success to flush two caches simultaneously.
title: Flush multiple caches simultaneously.
description: |
CAS should successfully flush multiple caches if there is already other flush in progress.
Check for flushing multiple caches if there is already other flush in progress.
pass_criteria:
- No system crash.
- Flush for each cache should finish successfully.
@@ -178,28 +178,29 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step(f"Add core devices to caches"):
with TestRun.step("Add cores to caches"):
cores = [cache.add_core(core_dev=core_dev.partitions[i]) for i, cache in enumerate(caches)]
with TestRun.step("Run fio on all cores"):
fio_pids = []
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(cache.size)
.read_write(ReadWrite.write)
.direct(1)
)
for core in cores:
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(core.size)
.read_write(ReadWrite.write)
.direct(1)
)
fio_pids.append(fio.run_in_background())
fio.add_job().target(core)
fio.run()
with TestRun.step("Check if each cache is full of dirty blocks"):
for cache in caches:
if not cache.get_dirty_blocks() != core.size:
TestRun.fail(f"The cache {cache.cache_id} does not contain dirty blocks")
cache_stats = cache.get_statistics(stat_filter=[StatsFilter.usage], percentage_val=True)
if cache_stats.usage_stats.dirty < 90:
TestRun.fail(f"Cache {cache.cache_id} should contain at least 90% of dirty data, "
f"actual dirty data: {cache_stats.usage_stats.dirty}%")
with TestRun.step("Start flush operation on all caches simultaneously"):
flush_pids = [
@@ -214,8 +215,9 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
with TestRun.step("Check number of dirty data on each cache"):
for cache in caches:
if cache.get_dirty_blocks() > Size.zero():
dirty_blocks = cache.get_dirty_blocks()
if dirty_blocks > Size.zero():
TestRun.LOGGER.error(
f"The quantity of dirty cache lines on the cache "
f"{str(cache.cache_id)} after complete flush should be zero"
f"The quantity of dirty data on cache {cache.cache_id} after complete "
f"flush should be zero, is: {dirty_blocks.set_unit(Unit.Blocks4096)}"
)

@@ -5,15 +5,14 @@
#
import random
import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeLowerThan, DiskTypeSet
from test_tools.disk_utils import Filesystem
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.fs_tools import Filesystem
from connection.utils.output import CmdException
from type_def.size import Size, Unit
mount_point = "/mnt/cas"
cores_amount = 3

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -15,8 +15,9 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, VerifyMethod
from test_utils.os_utils import Udev, sync
from test_utils.size import Size, Unit
from test_tools.os_tools import sync
from test_tools.udev import Udev
from type_def.size import Size, Unit
io_size = Size(10000, Unit.Blocks4096)
@@ -45,7 +46,7 @@ def test_cache_stop_and_load(cache_mode):
"""
title: Test for stopping and loading cache back with dynamic cache mode switching.
description: |
Validate the ability of the CAS to switch cache modes at runtime and
Validate the ability to switch cache modes at runtime and
check if all of them are working properly after switching and
after stopping and reloading cache back.
Also check the consistency of other parameters after reload.
@@ -137,10 +138,8 @@ def test_cache_stop_and_load(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mode):
"""
title: Test for dynamic cache mode switching during IO.
description: |
Validate the ability of CAS to switch cache modes
during working IO on CAS device.
title: Test for dynamic cache mode switching during I/O.
description: Validate the ability to switch cache modes during I/O on exported object.
pass_criteria:
- Cache mode is switched without errors.
"""
@@ -181,7 +180,7 @@ def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mo
):
cache.set_cache_mode(cache_mode=cache_mode_2, flush=flush)
with TestRun.step(f"Check if cache mode has switched properly during IO"):
with TestRun.step("Check if cache mode has switched properly during I/O"):
cache_mode_after_switch = cache.get_cache_mode()
if cache_mode_after_switch != cache_mode_2:
TestRun.fail(
@@ -228,7 +227,7 @@ def run_io_and_verify(cache, core, io_mode):
):
TestRun.fail(
"Write-Back cache mode is not working properly! "
"There should be some writes to CAS device and none to the core"
"There should be some writes to exported object and none to the core"
)
case CacheMode.PT:
if (

@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -9,7 +9,7 @@ import pytest
from api.cas import casadm, cli, cli_messages
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@@ -18,11 +18,11 @@ def test_remove_multilevel_core():
"""
title: Test of the ability to remove a core used in a multilevel cache.
description: |
Negative test if OpenCAS does not allow to remove a core when the related exported object
Negative test for removing a core when the related exported object
is used as a core device for another cache instance.
pass_criteria:
- No system crash.
- OpenCAS does not allow removing a core used in a multilevel cache instance.
- Removing a core used in a multilevel cache instance is forbidden.
"""
with TestRun.step("Prepare cache and core devices"):

@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -21,12 +21,12 @@ from api.cas.casadm_params import StatsFilter
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskTypeLowerThan, DiskType
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_utils.os_utils import Udev
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.udev import Udev
from connection.utils.output import CmdException
from type_def.size import Size, Unit
random_thresholds = random.sample(range(1028, 1024**2, 4), 3)
random_stream_numbers = random.sample(range(2, 128), 3)
@@ -57,7 +57,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache in Write-Back"):
with TestRun.step(f"Start cache in Write-Back cache mode"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
cache = casadm.start_cache(cache_disk, CacheMode.WB, force=True)
@@ -105,7 +105,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step(
"Write random number of 4k block requests to each stream and check if all "
"writes were sent in pass-through mode"
"writes were sent in pass-through"
):
core_statistics_before = core.get_statistics([StatsFilter.req, StatsFilter.blk])
random.shuffle(offsets)
@@ -170,7 +170,7 @@ def test_multistream_seq_cutoff_stress_raw(streams_seq_rand):
with TestRun.step("Reset core statistics counters"):
core.reset_counters()
with TestRun.step("Run FIO on core device"):
with TestRun.step("Run fio on core device"):
stream_size = min(core_disk.size / 256, Size(256, Unit.MebiByte))
sequential_streams = streams_seq_rand[0]
random_streams = streams_seq_rand[1]
@@ -216,12 +216,14 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
- No system crash
"""
with TestRun.step(f"Disable udev"):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Create filesystem on core device"):
with TestRun.step("Prepare cache and core devices"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
with TestRun.step("Create filesystem on core device"):
core_disk.create_filesystem(filesystem)
with TestRun.step("Start cache and add core"):
@@ -231,7 +233,7 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
with TestRun.step("Mount core"):
core.mount(mount_point=mount_point)
with TestRun.step(f"Set seq-cutoff policy to always and threshold to 20MiB"):
with TestRun.step("Set sequential cutoff policy to always and threshold to 20MiB"):
core.set_seq_cutoff_policy(policy=SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold=Size(20, Unit.MebiByte))
@@ -279,7 +281,7 @@ def run_dd(target_path, count, seek):
TestRun.LOGGER.info(f"dd command:\n{dd}")
output = dd.run()
if output.exit_code != 0:
raise CmdException("Error during IO", output)
raise CmdException("Error during I/O", output)
def check_statistics(stats_before, stats_after, expected_pt_writes, expected_writes_to_cache):

@@ -0,0 +1,263 @@
#
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import math
import random
import pytest
from api.cas import casadm
from api.cas.cache_config import SeqCutOffPolicy, CleaningPolicy, PromotionPolicy, \
PromotionParametersNhit, CacheMode
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_tools.dd import Dd
from test_tools.udev import Udev
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_threshold():
"""
title: Functional test for promotion policy nhit - threshold
description: |
Test checking if data is cached only after the number of hits to a given cache line
reaches the specified promotion nhit threshold.
pass_criteria:
- Promotion policy and nhit parameters are set properly
- Data is cached only after the number of hits to a given cache line reaches the
threshold param
- Data is written in pass-through before the number of hits to a given cache line
reaches the threshold param
- After reaching the specified number of hits to a given cache line, writes to other
cache lines are handled in pass-through
"""
random_thresholds = random.sample(range(2, 1000), 10)
additional_writes_count = 10
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=5, unit=Unit.GibiByte)])
core_device.create_partitions([Size(value=10, unit=Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
for iteration, threshold in enumerate(
TestRun.iteration(
random_thresholds,
"Set and validate nhit promotion policy threshold"
)
):
with TestRun.step(f"Set threshold to {threshold} and trigger to 0%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=0
)
)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step(
"Run dd and check if number of writes to cache and writes to core increase "
"accordingly to nhit parameters"
):
# dd_seek is calculated this way so that each iteration uses a different part of the cache
dd_seek = int(
cache.size.get_value(Unit.Blocks4096) // len(random_thresholds) * iteration
)
for count in range(1, threshold + additional_writes_count):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(dd_seek) \
.run()
if count < threshold:
expected_writes_to_cache = Size.zero()
expected_writes_to_core = Size(count, Unit.Blocks4096)
else:
expected_writes_to_cache = Size(count - threshold + 1, Unit.Blocks4096)
expected_writes_to_core = Size(threshold - 1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
with TestRun.step("Write to other cache line and check if it was handled in pass-through"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(int(dd_seek + Unit.Blocks4096.value)) \
.run()
expected_writes_to_core = expected_writes_to_core + Size(1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
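The expected-statistics arithmetic used in the loop above can be captured in a small standalone model (an illustrative sketch; the function name is hypothetical):

```python
def expected_nhit_stats(count: int, threshold: int) -> tuple:
    """Model of nhit promotion accounting for repeated writes to one cache line.

    Returns (writes_to_cache, writes_to_core) in 4KiB blocks after `count`
    single-block writes: the first `threshold - 1` writes are pass-through,
    and every write from the threshold-th hit onward is inserted into the cache.
    """
    if count < threshold:
        return 0, count
    return count - threshold + 1, threshold - 1
```

For a threshold of 3, writes 1 and 2 land on the core only; from write 3 onward each write is counted against the cache while the pass-through count stays frozen at 2.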
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_trigger():
"""
title: Functional test for promotion policy nhit - trigger
description: |
Test checking if data is cached according to the nhit threshold parameter only after
cache occupancy reaches the nhit trigger value.
pass_criteria:
- Promotion policy and nhit parameters are set properly
- Data is cached according to the nhit threshold parameter only after cache occupancy
reaches the nhit trigger value
- Data is cached without the nhit policy before reaching the trigger
"""
random_triggers = random.sample(range(0, 100), 10)
threshold = 2
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=50, unit=Unit.MebiByte)])
core_device.create_partitions([Size(value=100, unit=Unit.MebiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
for trigger in TestRun.iteration(
random_triggers,
"Validate nhit promotion policy trigger"
):
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB, force=True)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
with TestRun.step(f"Set threshold to {threshold} and trigger to {trigger}%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=trigger
)
)
with TestRun.step(f"Run dd to fill {trigger}% of cache size with data"):
blocks_count = math.ceil(cache.size.get_value(Unit.Blocks4096) * trigger / 100)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(blocks_count) \
.seek(0) \
.run()
with TestRun.step("Check if all written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Write to free cached volume sectors"):
free_seek = (blocks_count + 1)
pt_blocks_count = int(cache.size.get_value(Unit.Blocks4096) - blocks_count)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was written in pass-through"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Write to recently written sectors one more time"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count + pt_blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Stop cache"):
cache.stop(no_data_flush=True)
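The three phases the trigger test walks through can be modeled in a few lines (illustrative sketch only; the block counts mirror the dd steps above, and the function name is hypothetical):

```python
import math

def nhit_trigger_phases(cache_blocks: int, trigger_pct: int):
    """Model expected (writes_to_cache, writes_to_core) after each phase.

    Phase 1: fill trigger_pct% of the cache - cached (nhit not yet active).
    Phase 2: first write to the remaining blocks - pass-through (1 hit < threshold of 2).
    Phase 3: second write to those blocks - cached (threshold of 2 reached).
    """
    fill = math.ceil(cache_blocks * trigger_pct / 100)
    rest = cache_blocks - fill
    return [
        (fill, 0),            # after phase 1
        (fill, rest),         # after phase 2
        (fill + rest, rest),  # after phase 3
    ]
```

This makes explicit why the test fixes the threshold at 2: one extra pass over the free sectors is enough to flip them from pass-through to cached once the trigger occupancy has been reached.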
def check_statistics(cache, expected_writes_to_cache, expected_writes_to_core):
cache_stats = cache.get_statistics()
writes_to_cache = cache_stats.block_stats.cache.writes
writes_to_core = cache_stats.block_stats.core.writes
if writes_to_cache != expected_writes_to_cache:
TestRun.LOGGER.error(
f"Number of writes to cache should be "
f"{expected_writes_to_cache.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_cache.get_value(Unit.Blocks4096)}")
if writes_to_core != expected_writes_to_core:
TestRun.LOGGER.error(
f"Number of writes to core should be: "
f"{expected_writes_to_core.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_core.get_value(Unit.Blocks4096)}")

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -15,8 +15,9 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, CpusAllowedPolicy
from test_utils.os_utils import Udev, sync, get_dut_cpu_physical_cores
from test_utils.size import Size, Unit
from test_tools.os_tools import sync, get_dut_cpu_physical_cores
from test_tools.udev import Udev
from type_def.size import Size, Unit
class VerifyType(Enum):
@@ -39,15 +40,14 @@ class VerifyType(Enum):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
title: Functional sequential cutoff test with multiple cores
description: |
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO on 3 out of 4
cores and random IO against the last core, is correct.
Test checking if data is cached properly with sequential cutoff "always" policy
when sequential and random I/O is running to multiple cores.
pass_criteria:
- Amount of written blocks to cache is less than or equal to the amount set
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
with sequential cutoff threshold for three first cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
"""
with TestRun.step("Prepare cache and core devices"):
@@ -75,7 +75,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step("Set sequential cut-off parameters for all cores"):
with TestRun.step("Set sequential cutoff parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@@ -95,7 +95,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against first three cores"):
with TestRun.step("Prepare sequential I/O against first three cores"):
block_size = Size(4, Unit.KibiByte)
fio = Fio().create_command().io_engine(IoEngine.libaio).block_size(block_size).direct(True)
@@ -106,7 +106,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
fio_job.target(core.path)
writes_before_list.append(core.get_statistics().block_stats.cache.writes)
with TestRun.step("Prepare random IO against the last core"):
with TestRun.step("Prepare random I/O against the last core"):
fio_job = fio.add_job(f"core_{core_list[-1].core_id}")
fio_job.size(io_sizes_list[-1])
fio_job.read_write(io_type_last)
@@ -116,7 +116,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
with TestRun.step("Run fio against all cores"):
fio.run()
with TestRun.step("Verify writes to cache count after IO"):
with TestRun.step("Verify writes to cache count after I/O"):
margins = [
min(block_size * (core.get_seq_cut_off_parameters().promotion_count - 1), threshold)
for core, threshold in zip(core_list[:-1], thresholds_list[:-1])
@@ -158,17 +158,16 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cache_line_size):
def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
title: Functional sequential cutoff test with multiple cores and CPU-pinned I/O
description: |
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO, pinned,
on 3 out of 4 cores and random IO against the last core, is correct.
Test checking if data is cached properly with sequential cutoff "always" policy
when sequential and random CPU-pinned I/O is running to multiple cores.
pass_criteria:
- Amount of written blocks to cache is less than or equal to the amount set
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
with sequential cutoff threshold for three first cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
"""
with TestRun.step("Partition cache and core devices"):
@@ -197,7 +196,7 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step(f"Set sequential cut-off parameters for all cores"):
with TestRun.step("Set sequential cutoff parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@@ -217,7 +216,9 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against first three cores"):
with TestRun.step(
"Prepare sequential I/O against first three cores and random I/O against the last one"
):
fio = (
Fio()
.create_command()
@@ -243,10 +244,10 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
fio_job.target(core_list[-1].path)
writes_before_list.append(core_list[-1].get_statistics().block_stats.cache.writes)
with TestRun.step("Running IO against all cores"):
with TestRun.step("Running I/O against all cores"):
fio.run()
with TestRun.step("Verifying writes to cache count after IO"):
with TestRun.step("Verifying writes to cache count after I/O"):
for core, writes, threshold, io_size in zip(
core_list[:-1], writes_before_list[:-1], thresholds_list[:-1], io_sizes_list[:-1]
):
@@ -281,16 +282,14 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
"""
title: Sequential cut-off tests for writes and reads for 'never', 'always' and 'full' policies
title: Functional test for sequential cutoff threshold parameter
description: |
Testing if amount of data written to cache after sequential writes and reads for different
sequential cut-off policies with cache configured with different cache line size
is valid for sequential cut-off threshold parameter, assuming that cache occupancy
doesn't reach 100% during test.
Check if data is cached properly according to sequential cutoff policy and
threshold parameter.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off parameter in case of 'always' policy.
- Amount of written blocks to cache is at least equal io size in case of 'never' and 'full'
- Amount of blocks written to cache is less than or equal to amount set
with sequential cutoff parameter in case of 'always' policy.
- Amount of blocks written to cache is at least equal to io size in case of 'never' and 'full'
policy.
"""
@@ -325,13 +324,13 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cut off policy mode to {policy}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {policy}"):
cache.set_seq_cutoff_policy(policy)
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against core"):
with TestRun.step("Prepare sequential I/O against core"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (
@@ -363,16 +362,15 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
"""
title: Sequential cut-off tests during writes and reads on full cache for 'full' policy
title: Functional test for sequential cutoff threshold parameter and 'full' policy
description: |
Testing if amount of data written to cache after sequential io against fully occupied
cache for 'full' sequential cut-off policy with cache configured with different cache
line sizes is valid for sequential cut-off threshold parameter.
Check if data is cached properly according to sequential cutoff 'full' policy and given
threshold parameter.
pass_criteria:
- Amount of written blocks to cache is big enough to fill cache when 'never' sequential
cut-off policy is set
cutoff policy is set
- Amount of written blocks to cache is less than or equal to the amount set
with sequential cut-off parameter in case of 'full' policy.
with sequential cutoff parameter in case of 'full' policy.
"""
with TestRun.step("Partition cache and core devices"):
@@ -406,10 +404,10 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.never}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.never}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Prepare sequential IO against core"):
with TestRun.step("Prepare sequential I/O against core"):
sync()
fio = (
Fio()
@@ -431,13 +429,13 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
f"Cache occupancy is too small: {occupancy_percentage}, expected at least 95%"
)
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.full}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.full}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.full)
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step(f"Running sequential IO ({io_dir})"):
with TestRun.step(f"Running sequential I/O ({io_dir})"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (

@@ -1,16 +1,17 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cache_config import CacheMode
from api.cas.cache_config import CacheMode, CacheModeTrait
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.os_utils import Udev
from test_utils.size import Unit, Size
from test_tools.udev import Udev
from type_def.size import Unit, Size
from test_tools.dd import Dd
from test_tools.iostat import IOstatBasic
@@ -19,19 +20,17 @@ dd_count = 100
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrize("cache_mode", [CacheMode.WT, CacheMode.WA, CacheMode.WB])
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.InsertRead))
@pytest.mark.CI()
def test_ci_read(cache_mode):
"""
title: Verification test for write mode: write around
description: Verify if write mode: write around, works as expected and cache only reads
and does not cache write
title: Verification test for caching reads in various cache modes
description: Check if reads are properly cached in various cache modes
pass_criteria:
- writes are not cached
- reads are cached
- Reads are cached
"""
with TestRun.step("Prepare partitions"):
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -44,7 +43,7 @@ def test_ci_read(cache_mode):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache with cache_mode={cache_mode}"):
with TestRun.step(f"Start cache in {cache_mode} cache mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=cache_mode)
casadm.add_core(cache, core_device)
@@ -62,7 +61,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_1 = iostat[0].total_reads
with TestRun.step("Generate cache hits using reads"):
@@ -77,7 +76,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat[0].total_reads
with TestRun.step("Stop cache"):
@@ -98,7 +97,14 @@ def test_ci_read(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_around_write():
with TestRun.step("Prepare partitions"):
"""
title: Verification test for writes in Write-Around cache mode
description: Validate I/O statistics after writing to exported object in Write-Around cache mode
pass criteria:
- Writes are not cached
- After inserting writes to core, data is read from core and not from cache
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -111,16 +117,16 @@ def test_ci_write_around_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start CAS Linux in Write Around mode"):
with TestRun.step("Start cache in Write-Around mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WA)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Submit writes to exported object"):
@@ -136,11 +142,11 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -156,10 +162,10 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):
@@ -182,14 +188,14 @@ def test_ci_write_around_write():
else:
TestRun.LOGGER.error(f"Writes to cache: {write_cache_delta_1} != 0")
with TestRun.step("Verify that reads propagated to core"):
with TestRun.step("Verify that data was read from core"):
read_core_delta_2 = read_core_2 - read_core_1
if read_core_delta_2 == data_write:
TestRun.LOGGER.info(f"Reads from core: {read_core_delta_2} == {data_write}")
else:
TestRun.LOGGER.error(f"Reads from core: {read_core_delta_2} != {data_write}")
with TestRun.step("Verify that reads did not occur on cache"):
with TestRun.step("Verify that data was not read from cache"):
read_cache_delta_2 = read_cache_2 - read_cache_1
if read_cache_delta_2.value == 0:
TestRun.LOGGER.info(f"Reads from cache: {read_cache_delta_2} == 0")
@@ -202,7 +208,15 @@ def test_ci_write_around_write():
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_through_write():
with TestRun.step("Prepare partitions"):
"""
title: Verification test for Write-Through cache mode
description: |
Validate if reads and writes are cached properly for cache in Write-Through mode
pass criteria:
- Writes are inserted to cache and core
- Reads are not cached
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -215,16 +229,16 @@ def test_ci_write_through_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start CAS Linux in Write Through mode"):
with TestRun.step("Start cache in Write-Through mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WT)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Insert data into the cache using writes"):
@@ -241,11 +255,11 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -262,10 +276,10 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):
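The three CI tests above all rely on the same iostat-delta check: sample a device counter before and after I/O, then compare the delta against the amount of data transferred. A minimal standalone sketch of that pattern (the counter values and the `delta_matches` helper are hypothetical, not part of the test framework's API):

```python
# Sketch of the iostat-delta verification pattern used by the CI tests.
# Helper and values are illustrative only.

def delta_matches(before, after, expected):
    """Return True if the counter grew by exactly the expected amount."""
    return (after - before) == expected

# Example: writes in Write-Around mode should bypass the cache device,
# so the cache write counter stays flat while the core write counter
# grows by exactly the amount written.
cache_writes_before, cache_writes_after = 1000, 1000
core_writes_before, core_writes_after = 5000, 5100

assert delta_matches(cache_writes_before, cache_writes_after, 0)
assert delta_matches(core_writes_before, core_writes_after, 100)
```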

View File

@@ -1,69 +1,121 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cas_module import CasModule
from api.cas.cli_messages import check_stderr_msg, attach_not_enough_memory
from connection.utils.output import CmdException
from core.test_run import TestRun
from test_utils.size import Unit
from test_utils.os_utils import (allocate_memory,
disable_memory_affecting_functions,
drop_caches,
get_mem_free,
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from type_def.size import Unit, Size
from test_tools.os_tools import (drop_caches,
is_kernel_module_loaded,
load_kernel_module,
unload_kernel_module,
)
from test_tools.memory import disable_memory_affecting_functions, get_mem_free, allocate_memory, \
get_mem_available, unmount_ramfs
@pytest.mark.os_dependent
def test_insufficient_memory_for_cas_module():
"""
title: Negative test for the ability of CAS to load the kernel module with insufficient memory.
title: Load CAS kernel module with insufficient memory
description: |
Check that the CAS kernel module won’t be loaded if enough memory is not available
Negative test for the ability to load the CAS kernel module with insufficient memory.
pass_criteria:
- CAS module cannot be loaded with not enough memory.
- Loading CAS with not enough memory returns error.
- CAS kernel module cannot be loaded with not enough memory.
- Loading CAS kernel module with not enough memory returns error.
"""
with TestRun.step("Disable caching and memory over-committing"):
disable_memory_affecting_functions()
drop_caches()
with TestRun.step("Measure memory usage without OpenCAS module"):
with TestRun.step("Measure memory usage without CAS kernel module"):
if is_kernel_module_loaded(CasModule.cache.value):
unload_kernel_module(CasModule.cache.value)
available_mem_before_cas = get_mem_free()
with TestRun.step("Load CAS module"):
with TestRun.step("Load CAS kernel module"):
load_kernel_module(CasModule.cache.value)
with TestRun.step("Measure memory usage with CAS module"):
with TestRun.step("Measure memory usage with CAS kernel module"):
available_mem_with_cas = get_mem_free()
memory_used_by_cas = available_mem_before_cas - available_mem_with_cas
TestRun.LOGGER.info(
f"OpenCAS module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
f"CAS kernel module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
)
with TestRun.step("Unload CAS module"):
with TestRun.step("Unload CAS kernel module"):
unload_kernel_module(CasModule.cache.value)
with TestRun.step("Allocate memory, leaving not enough memory for CAS module"):
memory_to_leave = get_mem_free() - (memory_used_by_cas * (3 / 4))
allocate_memory(memory_to_leave)
TestRun.LOGGER.info(
f"Memory left for OpenCAS module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
f"Memory left for CAS kernel module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
)
with TestRun.step(
"Try to load OpenCAS module and check if correct error message is printed on failure"
"Try to load CAS kernel module and check if correct error message is printed on failure"
):
output = load_kernel_module(CasModule.cache.value)
if output.stderr and output.exit_code != 0:
TestRun.LOGGER.info(f"Cannot load OpenCAS module as expected.\n{output.stderr}")
TestRun.LOGGER.info(f"Cannot load CAS kernel module as expected.\n{output.stderr}")
else:
TestRun.LOGGER.error("Loading OpenCAS module successfully finished, but should fail.")
TestRun.LOGGER.error("Loading CAS kernel module successfully finished, but should fail.")
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_attach_cache_min_ram():
"""
title: Test attach cache with insufficient memory.
description: |
Check for valid message when attaching cache with insufficient memory.
pass_criteria:
- CAS attach operation fails due to insufficient RAM.
- No system crash.
"""
with TestRun.step("Prepare devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
core_dev = TestRun.disks["core"]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True)
cache.add_core(core_dev)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Set RAM workload"):
disable_memory_affecting_functions()
allocate_memory(get_mem_available() - Size(100, Unit.MegaByte))
with TestRun.step("Try to attach cache"):
try:
TestRun.LOGGER.info(
f"There is {get_mem_available().get_value(Unit.MebiByte):.2f} MiB of available memory left"
)
cache.attach(device=cache_dev2, force=True)
TestRun.LOGGER.error(
f"Cache attached unexpectedly. "
f"{get_mem_available()} is enough memory to complete the operation")
except CmdException as exc:
check_stderr_msg(exc.output, attach_not_enough_memory)
with TestRun.step("Unlock RAM memory"):
unmount_ramfs()
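The memory-pressure setup in test_attach_cache_min_ram boils down to allocating everything except a small margin. A standalone sketch of that arithmetic, assuming plain byte-count inputs (the real test delegates to the framework's `get_mem_available()`/`allocate_memory()` helpers):

```python
# Sketch of the "leave only N MiB free" setup used before the attach attempt.
# Pure byte arithmetic; the framework helpers are not used here.

MIB = 1024 * 1024

def amount_to_allocate(available_bytes, leave_bytes):
    """How much to allocate so that only `leave_bytes` remain free."""
    if leave_bytes > available_bytes:
        raise ValueError("cannot leave more memory than is available")
    return available_bytes - leave_bytes

# Leave 100 MiB free out of 8 GiB available, mirroring the test's margin.
to_alloc = amount_to_allocate(8 * 1024 * MIB, 100 * MIB)
assert to_alloc == (8 * 1024 - 100) * MIB
```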

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -8,14 +8,14 @@ import pytest
import time
from core.test_run_utils import TestRun
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
from api.cas import casadm
from api.cas.cache_config import CacheMode, CleaningPolicy
from test_utils.os_utils import Udev
from test_tools.udev import Udev
@pytest.mark.CI
@@ -23,14 +23,14 @@ from test_utils.os_utils import Udev
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cleaning_policy():
"""
Title: test_cleaning_policy
Title: Basic test for cleaning policy
description: |
The test is to see if dirty data will be removed from the Cache after changing the
cleaning policy from NOP to one that expects a flush.
Verify cleaning behaviour after changing cleaning policy from NOP
to one that expects a flush.
pass_criteria:
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
"""
wait_time = 60

View File

@@ -0,0 +1,61 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""
title: Test for 'help' command.
description: |
Verifies that running command with 'help' param displays correct message for each
available command.
pass_criteria:
- Proper help message is displayed for every command.
- Proper help message is displayed after running command with wrong param.
"""
check_list_cmd = [
(" -S", " --start-cache", start_cache_help),
(None, " --attach-cache", attach_cache_help),
(None, " --detach-cache", detach_cache_help),
(" -T", " --stop-cache", stop_cache_help),
(" -X", " --set-param", set_params_help),
(" -G", " --get-param", get_params_help),
(" -Q", " --set-cache-mode", set_cache_mode_help),
(" -A", " --add-core", add_core_help),
(" -R", " --remove-core", remove_core_help),
(None, " --remove-inactive", remove_inactive_help),
(None, " --remove-detached", remove_detached_help),
(" -L", " --list-caches", list_caches_help),
(" -P", " --stats", stats_help),
(" -Z", " --reset-counters", reset_counters_help),
(" -F", " --flush-cache", flush_cache_help),
(" -C", " --io-class", ioclass_help),
(" -V", " --version", version_help),
# (None, " --standby", standby_help),
(" -H", " --help", help_help),
(None, " --zero-metadata", zero_metadata_help),
]
help = " -H" if shortcut else " --help"
with TestRun.step("Run 'help' for every 'casadm' command and check output"):
for cmds in check_list_cmd:
cmd = cmds[0] if shortcut else cmds[1]
if cmd:
output = TestRun.executor.run("casadm" + cmd + help)
check_stdout_msg(output, cmds[-1])
with TestRun.step("Run 'help' for command that doesn't exist and check output"):
cmd = " -Y" if shortcut else " --yell"
output = TestRun.executor.run("casadm" + cmd + help)
check_stderr_msg(output, unrecognized_stderr)
check_stdout_msg(output, unrecognized_stdout)
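The loop in test_cli_help reduces to a small table-driven command builder: pick the short or long option per entry, skip entries without a short form, and append the help flag. A sketch with hypothetical tuple contents mirroring the `check_list_cmd` layout:

```python
# Table-driven construction of 'casadm <option> <help-flag>' commands,
# mirroring the (short, long, expected_help) tuple layout of check_list_cmd.

check_list = [
    (" -S", " --start-cache", "start cache help"),
    (None, " --attach-cache", "attach cache help"),  # no short form
]

def build_cmds(entries, shortcut):
    """Build help commands, skipping entries with no short form
    when shortcut=True (matching the test's `if cmd:` guard)."""
    help_flag = " -H" if shortcut else " --help"
    cmds = []
    for entry in entries:
        opt = entry[0] if shortcut else entry[1]
        if opt:
            cmds.append("casadm" + opt + help_flag)
    return cmds

assert build_cmds(check_list, shortcut=True) == ["casadm -S -H"]
assert build_cmds(check_list, shortcut=False) == [
    "casadm --start-cache --help",
    "casadm --attach-cache --help",
]
```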

View File

@@ -1,127 +0,0 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import re
import pytest
from api.cas import casadm
from api.cas.casadm_params import OutputFormat
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""