Compare commits

...

201 Commits

Author SHA1 Message Date
Katarzyna Treder
f73a209371
Merge pull request #1644 from katlapinka/kasiat/fuzzy-start-device-fix
Make test_fuzzy_start_cache_device use only required disks
2025-04-14 08:12:07 +02:00
Katarzyna Treder
56ded4c7fd Make test_fuzzy_start_cache_device use only required disks
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-14 08:11:29 +02:00
Katarzyna Treder
3a5df70abe
Merge pull request #1643 from katlapinka/kasiat/di-unplug-fix
Fix data integrity unplug test to work with fio newer than 3.30
2025-04-14 08:10:57 +02:00
Katarzyna Treder
289355b83a Fix data integrity unplug test to work with fio newer than 3.30
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-14 08:10:18 +02:00
Robert Baldyga
99af7ee9b5
Merge pull request #1642 from robertbaldyga/xfs-ioclass-fix
Fix io classification for XFS
2025-04-10 09:02:18 +02:00
Katarzyna Treder
b239bdb624
Merge pull request #1594 from katlapinka/kasiat/promotion-tests
Add tests for promotion policy
2025-04-09 13:12:01 +02:00
Katarzyna Treder
e189584557 Add tests for promotion policy
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-09 13:11:37 +02:00
Robert Baldyga
3c19caae1e
Merge pull request #1646 from mmichal10/configure-preempt
configure: add preemption_model_*() functions
2025-04-09 11:20:05 +02:00
Michal Mielewczyk
f46de38db0 configure: add preemption_model_*() functions
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-04-09 10:49:31 +02:00
Robert Baldyga
73cd065bfb
Merge pull request #1645 from jfckm/fix-linguist
fix: github-linguist still detects test directory
2025-04-08 13:59:45 +02:00
Jan Musial
46a486a442 fix: github-linguist still detects test directory
Signed-off-by: Jan Musial <jan.musial@huawei.com>
2025-04-08 13:14:36 +02:00
Katarzyna Treder
eee15d9ca4
Merge pull request #1613 from katlapinka/kasiat/test-data-path
Move tests data path to TF
2025-04-08 10:19:16 +02:00
Katarzyna Treder
b290fddceb Move tests data path to TF
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-04-08 09:38:05 +02:00
Katarzyna Treder
ede64a64f5
Merge pull request #1627 from Kamoppl/kamilg/update_api_march
test-api: api fixes
2025-04-07 15:10:07 +02:00
Kamil Gierszewski
d17157f9dd
test-api: api fixes
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-04-07 15:01:35 +02:00
Katarzyna Treder
1e546d664c
Merge pull request #1639 from robertbaldyga/fix-fault-injection-test
tests: Fix fault injection test
2025-04-07 14:27:59 +02:00
Robert Baldyga
779d9e96b4 tests: fault_injection: Fix block to request calculation
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-07 13:12:57 +02:00
Robert Baldyga
ceb208eb78 Fix io classification for XFS
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-04 19:46:12 +02:00
Robert Baldyga
0c6a3f699a
Merge pull request #1641 from robertbaldyga/update-ocf-20250402
Update OCF submodule
2025-04-02 15:41:14 +02:00
Robert Baldyga
94677ad6bf Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 15:34:30 +02:00
Robert Baldyga
767eea8f1a
Merge pull request #1640 from robertbaldyga/kernel-6.14-bdev-fix
Fix bdev handling on kernel v6.14
2025-04-02 14:03:53 +02:00
Robert Baldyga
72ae9b8161 Allocate bdev suitable for submit_bio()
Starting from kernel 6.14, submit_bio() is supported only for non-mq bdevs.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 12:38:27 +02:00
Robert Baldyga
c4a1923215 exp_obj: Add missing error handling
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-02 12:37:08 +02:00
Robert Baldyga
783e0229a5 tests: fault_injection: Disable udev, purge cache and reset stats
Improve accounting precision by eliminating noise.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-04-01 23:14:08 +02:00
Robert Baldyga
1f89ce7cfc
Merge pull request #1636 from robertbaldyga/update-version-v25.3
Update version to v25.3
2025-03-28 08:50:45 +01:00
Robert Baldyga
7cc1091a6a Update version to v25.3
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-27 20:23:39 +01:00
Robert Baldyga
8c6bf2c117
Merge pull request #1635 from robertbaldyga/kernel-6.14
Support kernel 6.14
2025-03-27 20:15:37 +01:00
Robert Baldyga
6aac52ed22 Support kernel 6.14
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-27 19:57:10 +01:00
Robert Baldyga
dad1e5af16
Merge pull request #1634 from mmichal10/upcate-ocf
Update OCF
2025-03-27 12:30:08 +01:00
Michal Mielewczyk
786651dea8 Update OCF
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-27 10:41:18 +01:00
Robert Baldyga
45a5d8a700
Merge pull request #1633 from robertbaldyga/update-ocf-20250326
Update OCF submodule
2025-03-26 08:27:41 +01:00
Robert Baldyga
84235350a0 Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-26 08:23:07 +01:00
Robert Baldyga
21d017d60b
Merge pull request #1632 from mmichal10/preemption
Disable preemption when accessing current cpu id
2025-03-26 08:19:35 +01:00
Michal Mielewczyk
b1f61580fc Disable preemption when accessing current cpu id
Currently Open CAS doesn't support kernels with involuntary preemption
anyway; once we add that support, we can get rid of this workaround.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-26 07:57:53 +01:00
Robert Baldyga
debbfcc0d1
Merge pull request #1631 from robertbaldyga/update-ocf-20250324
Update OCF submodule
2025-03-25 10:16:39 +01:00
Robert Baldyga
d4877904e4 Update OCF submodule
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-03-25 09:56:55 +01:00
Robert Baldyga
9ca6d79035
Merge pull request #1626 from mmichal10/duplicated_warning
Fix duplicated warning
2025-03-19 19:20:42 +01:00
Robert Baldyga
9d0a6762c0
Merge pull request #1623 from mmichal10/preemption
Involuntary preemption check
2025-03-19 12:49:17 +01:00
Michal Mielewczyk
0f23ae6950 Makefile: Error handling for failed modprobe
Print an additional error message and remove the installed kernel module

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-19 12:21:57 +01:00
Michal Mielewczyk
aa660ca0a5 Implement involuntary preemption check
Prevent loading the kernel module if the kernel can be involuntarily
preempted.

CAS will work if the kernel has been compiled with either
CONFIG_PREEMPT_NONE, CONFIG_PREEMPT_VOLUNTARY, or CONFIG_PREEMPT_DYNAMIC.
If the dynamic configuration is enabled, the kernel must be booted with
preempt=none or preempt=voluntary.

Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-19 12:21:57 +01:00
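The policy stated in this commit message can be sketched as a small shell function. This is illustrative only, not the actual configure-time check from PR #1623; it applies the stated rules to a kernel config option and a boot command line passed in as strings:

```shell
#!/bin/sh
# Hypothetical sketch of the load policy described above: CAS may load
# under CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY, and under
# CONFIG_PREEMPT_DYNAMIC only if booted with preempt=none or
# preempt=voluntary.
preemption_ok() {
    config_opt="$1"   # e.g. "CONFIG_PREEMPT_VOLUNTARY=y"
    cmdline="$2"      # e.g. "root=/dev/sda1 preempt=voluntary"
    case "$config_opt" in
        CONFIG_PREEMPT_NONE=y|CONFIG_PREEMPT_VOLUNTARY=y)
            return 0 ;;
        CONFIG_PREEMPT_DYNAMIC=y)
            # Dynamic preemption: the boot parameter decides the model.
            case "$cmdline" in
                *preempt=none*|*preempt=voluntary*) return 0 ;;
                *) return 1 ;;
            esac ;;
        *)
            return 1 ;;
    esac
}

preemption_ok "CONFIG_PREEMPT_VOLUNTARY=y" "" && echo "load allowed"
preemption_ok "CONFIG_PREEMPT_DYNAMIC=y" "preempt=full" || echo "load refused"
```

The real check also has to read the running kernel's configuration and /proc/cmdline; the sketch leaves that input gathering out.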
Katarzyna Treder
a135a00627
Merge pull request #1602 from katlapinka/kasiat/test-identifier
Add unique test identifier to be able to manage logs
2025-03-19 11:27:20 +01:00
Katarzyna Treder
99b731d180 Add unique test identifier to be able to manage logs
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-19 10:12:08 +01:00
Michal Mielewczyk
c6f2371aea casadm: More specific warn for irresolvable cache
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-18 09:15:13 +01:00
Michal Mielewczyk
973023c459 casadm: Don't try to resolve detached cache path
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-03-18 09:13:25 +01:00
Robert Baldyga
2f827e2ff0
Merge pull request #1614 from Deixx/gitignore-update-gz
Update .gitignore after manpage installation fix
2025-03-11 11:08:08 +01:00
Katarzyna Treder
4d23c5f586
Merge pull request #1618 from katlapinka/kasiat/refactor-tests-description
Cleanup tests descriptions, prepare steps and values naming PART-1
2025-03-10 14:22:03 +01:00
Katarzyna Treder
476f62b2db Add separate steps for preparing devices, fix indent and move constants
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-10 14:21:15 +01:00
Katarzyna Treder
ba7d907775 Minor test description and names refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-03-10 14:21:15 +01:00
Robert Baldyga
d4de219fec
Merge pull request #1619 from Deixx/io-direction-classifier
New IO class rule `io_direction`
2025-03-06 12:12:05 +01:00
Daniel Madej
4cc7a74534 Add io_direction to random params for IoClass
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
1445982b91 Add io_direction to fuzzy test
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
d3be9444e7 Add test for io_direction IO class rule
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:26 +01:00
Daniel Madej
df813d9978 New IO class rule io_direction
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-03-06 09:08:19 +01:00
Katarzyna Treder
f37f5afd7b
Merge pull request #1596 from Kamoppl/kamilg/update_tests_dec
Update cli help test and remove duplicated test
2025-03-05 12:14:49 +01:00
Kamil Gierszewski
7f2b8fb229
tests: refactor test_cli_help test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Kamil Gierszewski
4c78a9f067
test-api: fix cli msg
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Kamil Gierszewski
f6545f2b06
tests: remove duplicated test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-03-05 12:12:43 +01:00
Robert Baldyga
ed113fd6da
Merge pull request #1612 from Open-CAS/jfckm-patch-1
chore(GH): Make GH ignore the test/ dir while detecting repo languages
2025-03-03 21:04:02 +01:00
Robert Baldyga
372a29d562
Merge pull request #1549 from robertbaldyga/kernel-6.11
Support kernel 6.13
2025-02-28 16:26:19 +01:00
Katarzyna Treder
69fd4a3872
Merge pull request #1617 from Deixx/rebuild-gz-fix
Add force to gzip commands
2025-02-28 12:39:19 +01:00
Daniel Madej
d562602556 Add force to gzip commands
Without force, make shows errors when .gz files already exist.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-28 12:25:09 +01:00
Katarzyna Treder
2cc49a1cd0
Merge pull request #1615 from katlapinka/kasiat/attach-detach-tests
Introduce tests for cache attach/detach feature
2025-02-28 12:18:44 +01:00
Katarzyna Treder
d973b3850e Introduce tests for cache attach/detach feature
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2025-02-28 12:18:02 +01:00
Katarzyna Treder
3893fc2aa7
Merge pull request #1616 from Kamoppl/kamilg/update_checksec_path
Kamilg/update checksec path
2025-02-28 09:44:16 +01:00
Kamil Gierszewski
cef43f7778
tests: fix checksec test formatting
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-28 02:27:55 +01:00
Kamil Gierszewski
8544e28788
tests: update test script path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-28 02:27:55 +01:00
Robert Baldyga
dd20fcbc8e
Merge pull request #1590 from robertbaldyga/enable-attach-detach
Revert "Disable cache attach and detach"
2025-02-27 15:50:07 +01:00
Robert Baldyga
30d0cd0df0
Merge pull request #1565 from mmichal10/percpu-refcnt
Percpu refcnt
2025-02-27 15:14:22 +01:00
Daniel Madej
3e1dd26909 Update .gitignore after manpage installation fix
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-27 09:45:46 +01:00
Jan Musiał
78be601b1b
chore(GH): Make GH ignore the test/ dir while detecting repo languages
Signed-off-by: Jan Musial <jfckm@pm.me>
2025-02-25 18:28:31 +01:00
Michal Mielewczyk
5acc1a3cf2 update ocf: refcnt
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:23:41 +01:00
Jan Musial
27eed48976 Per-cpu reference counters
Signed-off-by: Adam Rutkowski <adam.j.rutkowski@intel.com>
Signed-off-by: Jan Musial <jan.musial@huawei.com>
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Ian Levine <ian.levine@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:21:02 +01:00
Jan Musial
4f43829e91 Implement env_atomic64_dec_return
Signed-off-by: Jan Musial <jan.musial@huawei.com>
Signed-off-by: Michal Mielewczyk <michal.mielewczyk@huawei.com>
2025-02-25 16:19:21 +01:00
Robert Baldyga
690cebae65
Merge pull request #1603 from Deixx/attach-error-msg
Fix error messages for metadata found during attach
2025-02-25 16:01:12 +01:00
Katarzyna Treder
d4f709ab9d
Merge pull request #1611 from Kamoppl/kamilg/remove_memory_barrier
Kamilg/remove memory barrier check
2025-02-25 12:42:41 +01:00
Kamil Gierszewski
8c32742f8c
github-actions: remove memory barrier warning
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-02-25 11:53:09 +01:00
Daniel Madej
37431273ea Add error message in test api
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-24 12:00:06 +01:00
Daniel Madej
69cdb458d2 Error msg for metadata found during attach
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-24 12:00:06 +01:00
Robert Baldyga
bafd1e79c4
Merge pull request #1608 from Deixx/gitignore-update
Added build/configuration output files to .gitignore
2025-02-21 11:00:26 +01:00
Robert Baldyga
c4b862a3e0
Merge pull request #1607 from robertbaldyga/fix-manpage
Fix manpage installation
2025-02-06 11:32:53 +01:00
Daniel Madej
4b411f837e Added build/configuration output files to .gitignore
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2025-02-06 10:40:33 +01:00
Katarzyna Treder
69a4da4b38
Merge pull request #1595 from Kamoppl/kamilg/update_api_dec
Few api fixes/improvements
2025-02-06 07:17:32 +01:00
Rafal Stefanowski
7ee78ac51e Kernel 6.13: Add setting queue limits of exported object
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
dbaeb21cb3 Kernel 6.13: Introduce cas_queue_limits_is_misaligned()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
6a275773ce Kernel 6.13: Introduce cas_queue_max_discard_sectors()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
e5607fe9dd Kernel 6.13: Introduce cas_queue_set_nonrot()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
3badd8b621 Kernel 6.13: Add another definition of cas_set_queue_flush_fua()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
efa2c0ad5e Kernel 6.13: Add another definition of cas_bd_get_next_part()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
eff9ad3c9d Kernel 6.13: Rearrange definitions of cas_copy_queue_limits()
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
14f375f135 Kernel 6.13: Expand debug macros
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Rafal Stefanowski
7dcb9c92fe Fix checking for NULL instead of error pointer
Signed-off-by: Rafal Stefanowski <rafal.stefanowski@huawei.com>
2025-02-05 17:29:45 +01:00
Robert Baldyga
52d0ff4c7b
Merge pull request #1587 from Deixx/ioclass-0
Informative error for incorrect IO class 0 name
2025-02-04 16:46:40 +01:00
Robert Baldyga
0f6c122e17 Fix manpage installation
Gzip the manpage properly and update mandb after its installation.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2025-01-27 22:20:13 +01:00
Kamil Gierszewski
bf7c72ccba
test-api: add a check for each stat parsing
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
7b52a2fc00
test-api: add attach cli msg
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
8ad0193a84
test-api: refactor cache imports
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
0ab7c2ca36
test-api: fix get status method
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
d3f4d80612
test-api: add attach/detach methods to Cache
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
c82d52bb47
test-api: add methods to statistics
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:46 +01:00
Kamil Gierszewski
537c9656b8
test-api: rename stat filter
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
1b52345732
test-api: fix core pool init
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
3606472e60
test-api: refactor to fix circular dependencies
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
0e24e52686
test-api: update parser
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
8cd3f4a631
test-api: add Byte unit
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
db3dc068f8
test-api: refactor casadm to use TestRun cache/core list
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:45 +01:00
Kamil Gierszewski
0f645ac10b
test-api: Change Cache init to force use of the cache_id instead of cache_device
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
9fb333a73f
test-api: minor refactors
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
f0753339dd
test-api: change default file path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:54:44 +01:00
Kamil Gierszewski
7adc356889
conftest: move import to the top of file
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 14:23:52 +01:00
Kamil Gierszewski
bef461ccc2
conftest: add TestRun cache/core list
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:43:18 +01:00
Kamil Gierszewski
e1f8426527
conftest: fix typo
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:43:18 +01:00
Kamil Gierszewski
76336c3ef5
conftest: add cleanup after drbd test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-10 12:42:20 +01:00
Kamil Gierszewski
f7539b46a1
conftest: add create/destroy temporary directory in conftest
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2025-01-02 03:08:50 +01:00
Katarzyna Treder
1934e801e7
Merge pull request #1599 from katlapinka/kasiat/tf-refactor
Tests and CAS API fixes after TF refactor
2024-12-31 12:44:52 +01:00
Katarzyna Treder
d4ccac3e75 Refactor trim stress test to use fio instead of vdbench
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-31 12:06:27 +01:00
Katarzyna Treder
e740ce377f Fix imports
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-31 12:06:18 +01:00
Katarzyna Treder
f7e7d3aa7f Disk tools and fs tools refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:58:26 +01:00
Katarzyna Treder
940990e37a Iostat refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:12:54 +01:00
Katarzyna Treder
70defbdf0d Move is_kernel_module_loaded to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:12:16 +01:00
Katarzyna Treder
58d89121ad Fix names: rename types to type_def
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 18:10:46 +01:00
Katarzyna Treder
e0f6d58d80 Disk finder refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 09:04:46 +01:00
Katarzyna Treder
8a5d531a32 OS tools refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-11 07:58:50 +01:00
Katarzyna Treder
3e67a8c0f5 Rename systemd to systemctl and move it to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:46:53 +01:00
Katarzyna Treder
a11e3ca890 Remove kedr and kedr tests
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:39:56 +01:00
Katarzyna Treder
c8ce05617d Move scsi_debug to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:36:35 +01:00
Katarzyna Treder
b724419a4f Move git to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:32:50 +01:00
Katarzyna Treder
4e8ea659da Move fstab to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:28:35 +01:00
Katarzyna Treder
241a0c545a Remove generator from test utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:25:27 +01:00
Katarzyna Treder
0cc3b3270d Move dmesg to tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 14:06:08 +01:00
Katarzyna Treder
4dca1c3c00 Move linux command and wait method to common tools
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:58:04 +01:00
Katarzyna Treder
cde7a3af16 Move error device to storage devices
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:47:18 +01:00
Katarzyna Treder
0be330ac1d Move checksec to scripts
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:39:16 +01:00
Katarzyna Treder
5121831bd8 Move singleton to common utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 13:24:57 +01:00
Katarzyna Treder
ee8b7b757f Move retry to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:54:44 +01:00
Katarzyna Treder
4a6d6d39cd Move asynchronous to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:50:43 +01:00
Katarzyna Treder
9460151ee5 Move output to connection utils
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:31:14 +01:00
Katarzyna Treder
81e792be99 Move Time to types
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:20:31 +01:00
Katarzyna Treder
d4e562caf9 Move size.py to types
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-12-10 12:18:38 +01:00
Robert Baldyga
75038692cd Revert "Disable cache attach and detach"
This reverts commit f34328adf2.

Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-11-27 13:41:00 +01:00
Katarzyna Treder
baa1f37432
Merge pull request #1589 from katlapinka/kasiat/initramfs-tests-update
Add initramfs update to LVM tests and conftest
2024-11-27 10:57:30 +01:00
Katarzyna Treder
809a9e407e TF update
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-27 10:26:35 +01:00
Daniel Madej
f15d3238ad Informative error for incorrect IO class 0 name
Instead of generic 'Invalid input parameter'

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-11-26 15:08:08 +01:00
Daniel Madej
0461de9e24 Fix typos
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-11-26 15:08:07 +01:00
Katarzyna Treder
3953e8b0f8 Add initramfs update to LVM tests and conftest
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-25 14:46:37 +01:00
Katarzyna Treder
c3bb599f0e
Merge pull request #1576 from Kamoppl/kamilg/speed_up_TF
speed up tests/conftest
2024-11-25 14:23:08 +01:00
Kamil Gierszewski
e54732ef81
test-conftest: move dict creation outside loop function
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
677a5019fb
test-conftest: Don't clean-up drives that won't be used
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
bf7711354d
test-conftest: More readable RAID teardown
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
b8ccf403f0
test-conftest: Kill IO faster in prepare/teardown
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:10 +01:00
Kamil Gierszewski
720475f85c
tests: update_test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
ed85411750
test-conftest: Use cached device_ids + fix posix path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
4626d87471
test-conftest: Don't prepare disks if test doesn't use them
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Kamil Gierszewski
92a8424dd0
test-conftest: reformat conftest
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-25 14:20:09 +01:00
Katarzyna Treder
c142610174
Merge pull request #1580 from katlapinka/kasiat/fix-lvm-tests
Fix tests after LVM API refactor
2024-11-13 13:29:00 +01:00
Katarzyna Treder
422a027f82 TF update
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-13 13:28:16 +01:00
Katarzyna Treder
6e3ac806b7 Fix tests after LVM API refactor
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-13 13:28:16 +01:00
Katarzyna Treder
eb50ee5f5d
Merge pull request #1524 from katlapinka/kasiat/loading-corrupted-metadata
Add test for loading corrupted metadata
2024-11-12 12:33:42 +01:00
Katarzyna Treder
cc0f4b1c8f Add test for loading corrupted metadata
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-11-12 12:17:26 +01:00
Robert Baldyga
aafc6b49a6
Merge pull request #1510 from Kamoppl/kamilg/add_checkpatch
github-actions: add checkpatch
2024-11-05 12:51:03 +01:00
Kamil Gierszewski
c7601847a1
github-actions: add checkpatch
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-11-05 11:46:54 +01:00
Katarzyna Treder
37d91fdbc2
Merge pull request #1578 from Deixx/mtab-fix
Fix for mtab changes
2024-10-31 11:04:26 +01:00
Daniel Madej
545a07098c Fix for mtab changes
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-31 11:02:31 +01:00
Katarzyna Treder
82ce9342a2
Merge pull request #1573 from Kamoppl/kamilg/fix_bugs
Kamilg/fix bugs
2024-10-30 14:16:16 +01:00
Kamil Gierszewski
c15b4d580b
tests: Fix after changing function name
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:38 +01:00
Kamil Gierszewski
35850c7d9a
test-api: adjust api to handle inactive core devices + add detached/inactive cores getter
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:38 +01:00
Kamil Gierszewski
908672fd66
test-api: add string representation of SeqCutOffPolicy
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:37 +01:00
Kamil Gierszewski
4ebc00bac8
tests: fix fault injection interrupt test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:32:37 +01:00
Kamil Gierszewski
9ab60fe679
tests: change path type in test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:28:29 +01:00
Kamil Gierszewski
421c0e4641
test-api: fix stat type
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-29 15:28:29 +01:00
Robert Baldyga
e8b1c3ce81
Merge pull request #1514 from Deixx/mtab-check-optional
Handle missing /etc/mtab and modify output
2024-10-29 10:47:00 +01:00
Daniel Madej
0c0b10535e [tests] Update CLI messages and test
Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
Daniel Madej
f11f14d31a Refactor mounted device checks
Calling functions now print error messages.
All the mounted devices are printed (not just the first one).

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
Daniel Madej
a2f3cc1f4a Mtab check optional
There are situations when /etc/mtab is not present in the
system (e.g. in certain container images). This blocks
stop/remove operations. Making this check optional leaves
the duty of checking mounts to the kernel.
The test was modified to check operations with and without mtab.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-28 16:07:33 +01:00
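As a rough illustration of the behavior this commit describes (not casadm's actual code), a shell sketch that treats a missing mtab-style file as "no information" rather than an error, and otherwise prints every mounted partition of a device, not just the first:

```shell
#!/bin/sh
# Hypothetical sketch: list mounted partitions of a device from an
# mtab-style file. A missing file is not an error -- the check is
# skipped and the kernel is left to reject busy devices on its own.
mounted_parts() {
    dev="$1"; mtab="$2"
    [ -e "$mtab" ] || return 0
    # Print every entry whose device name starts with $dev.
    awk -v d="$dev" 'index($1, d) == 1 { print $1, "mounted at", $2 }' "$mtab"
}

# Demo with a fabricated mtab file.
tmp=$(mktemp)
printf '/dev/cas1-1 /mnt/cache ext4 rw 0 0\n/dev/sda1 / ext4 rw 0 0\n' > "$tmp"
mounted_parts /dev/cas1 "$tmp"
# → /dev/cas1-1 mounted at /mnt/cache
mounted_parts /dev/cas1 /nonexistent/mtab   # prints nothing, exits 0
rm -f "$tmp"
```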
Robert Baldyga
588b7756a9
Merge pull request #1574 from robertbaldyga/exp-obj-serial
Introduce exp_obj serial
2024-10-25 14:58:47 +02:00
Robert Baldyga
b6f604d4a9 Introduce exp_obj serial
This is meant to be used by lvm2 to recognize which one of the stacked
devices should be used (be it backend device, or one of the bottom levels
in multi-level cache configuration).

Signed-off-by: Robert Baldyga <robert.baldyga@open-cas.com>
2024-10-19 21:53:43 +02:00
Katarzyna Treder
7a3b0672f2
Merge pull request #1572 from katlapinka/kasiat/update-tf
Update TF submodule
2024-10-15 12:03:10 +02:00
Katarzyna Treder
7c9c9a54e2 Update TF submodule
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-15 11:15:26 +02:00
Katarzyna Treder
dfa36541f6
Merge pull request #1562 from Deixx/concurrent-flush
Small update to test_concurrent_caches_flush
2024-10-15 09:51:23 +02:00
Daniel Madej
75fd39ed7b Update/fix to test_concurrent_caches_flush
No need to run fio in the background. This fixes an issue where
one of the tests didn't wait for fio to finish before
checking stats. Also adds more informative error messages.

Signed-off-by: Daniel Madej <daniel.madej@huawei.com>
2024-10-15 09:46:36 +02:00
Katarzyna Treder
bffe87d071
Merge pull request #1560 from katlapinka/kasiat/test-security-fixes
Small fixes for security tests
2024-10-15 09:37:55 +02:00
Katarzyna Treder
20ee2fda1f Small fixes in security tests
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-15 09:36:36 +02:00
Katarzyna Treder
e7f14f7d00
Merge pull request #1538 from Kamoppl/kamilg/fix_scope_bugs_v4
Kamilg/fix scope bugs v4
2024-10-11 11:26:58 +02:00
Kamil Gierszewski
5cada7a0ec
tests: add disabling udev in fault injection test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:53 +02:00
Kamil Gierszewski
1c26de3e7f
tests: update getting metadata size on device in memory consumption test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:53 +02:00
Kamil Gierszewski
a70500ee44
tests: fix init test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:52 +02:00
Kamil Gierszewski
2f188f9766
tests: add dirty data check to acp test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:52 +02:00
Kamil Gierszewski
0fdd4933a2
tests-api: add statistics parse for metadata in GiB
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:51 +02:00
Kamil Gierszewski
6ce978f317
tests: fix io class tests
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:51 +02:00
Kamil Gierszewski
cf68fb226b
tests: fix dmesg getting in test
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:50 +02:00
Kamil Gierszewski
004062d9fd
tests: fix test file path
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:50 +02:00
Kamil Gierszewski
4b74c65969
tests: fix checksec permissions
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:49 +02:00
Kamil Gierszewski
51962e4684
tests: refactor test_inactive_cores
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 10:36:49 +02:00
Kamil Gierszewski
daea1a433a
tests: fix test_simulation_startup
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Kamil Gierszewski
c32650af0b
tests: fix test recovery
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Kamil Gierszewski
39afdaa6c1
test-api: fix cli help message
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-11 02:42:56 +02:00
Katarzyna Treder
c1ad2a8584
Merge pull request #1526 from katlapinka/kasiat/ioclass-file-size-core
Add tests for io classification statistics per core
2024-10-10 20:46:21 +02:00
Katarzyna Treder
375fce5a19 Add tests for io classification statistics per core
Signed-off-by: Katarzyna Treder <katarzyna.treder@h-partners.com>
2024-10-10 20:45:24 +02:00
Katarzyna Treder
df5c0c7d4c
Merge pull request #1501 from Kamoppl/kamilg/add_old_tests
tests: add tests for read hit errors
2024-10-09 14:52:22 +02:00
Kamil Gierszewski
625cec7838
tests: update tests
Signed-off-by: Kamil Gierszewski <kamil.gierszewski@huawei.com>
2024-10-09 14:47:48 +02:00
Robert Baldyga
f5ee206fb9
Merge pull request #1564 from robertbaldyga/readme-v24.9
README: Recommend the latest release
2024-10-08 14:47:09 +02:00
Robert Baldyga
0e46d30281 README: Recommend the latest release
Signed-off-by: Robert Baldyga <robert.baldyga@huawei.com>
2024-10-08 14:38:19 +02:00
224 changed files with 5917 additions and 2556 deletions

.checkpatch.conf (new file, +28)

@ -0,0 +1,28 @@
--max-line-length=80
--no-tree
--ignore AVOID_BUG
--ignore COMMIT_MESSAGE
--ignore FILE_PATH_CHANGES
--ignore PREFER_PR_LEVEL
--ignore SPDX_LICENSE_TAG
--ignore SPLIT_STRING
--ignore MEMORY_BARRIER
--exclude .github
--exclude casadm
--exclude configure.d
--exclude doc
--exclude ocf
--exclude test
--exclude tools
--exclude utils
--exclude .gitignore
--exclude .gitmodules
--exclude .pep8speaks.yml
--exclude LICENSE
--exclude Makefile
--exclude README.md
--exclude configure
--exclude requirements.txt
--exclude version

.gitattributes (new file, +1)

@ -0,0 +1 @@
test/** -linguist-detectable

.github/workflows/checkpatch.yml (new file, +15)

@ -0,0 +1,15 @@
name: checkpatch review
on: [pull_request]
jobs:
  my_review:
    name: checkpatch review
    runs-on: ubuntu-latest
    steps:
      - name: 'Calculate PR commits + 1'
        run: echo "PR_FETCH_DEPTH=$(( ${{ github.event.pull_request.commits }} + 1 ))" >> $GITHUB_ENV
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: ${{ env.PR_FETCH_DEPTH }}
      - name: Run checkpatch review
        uses: webispy/checkpatch-action@v9

.gitignore (+7)

@ -11,8 +11,15 @@
tags
Module.symvers
Module.markers
*.mod
*.mod.c
*.out
modules.order
__pycache__/
*.py[cod]
*$py.class
*.gz
casadm/casadm
modules/include/ocf
modules/generated_defines.h

@ -25,14 +25,14 @@ Open CAS uses Safe string library (safeclib) that is MIT licensed.
We recommend using the latest version, which contains all the important fixes
and performance improvements. Bugfix releases are guaranteed only for the
latest major release line (currently 22.6.x).
latest major release line (currently 24.9.x).
To download the latest Open CAS Linux release run following commands:
```
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v22.6.3/open-cas-linux-22.06.3.0725.release.tar.gz
tar -xf open-cas-linux-22.06.3.0725.release.tar.gz
cd open-cas-linux-22.06.3.0725.release/
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v24.9/open-cas-linux-24.09.0.0900.release.tar.gz
tar -xf open-cas-linux-24.09.0.0900.release.tar.gz
cd open-cas-linux-24.09.0.0900.release/
```
Alternatively, if you want recent development (unstable) version, you can clone GitHub repository:

@ -1,5 +1,6 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -121,7 +122,7 @@ LDFLAGS = -z noexecstack -z relro -z now -pie -pthread -lm
# Targets
#
all: sync
all: sync manpage
$(MAKE) build
build: $(TARGETS)
@ -156,10 +157,14 @@ endif
-include $(addprefix $(OBJDIR),$(OBJS:.o=.d))
manpage:
gzip -k -f $(TARGET).8
clean:
@echo " CLEAN "
@rm -f *.a $(TARGETS)
@rm -f $(shell find -name \*.d) $(shell find -name \*.o)
@rm -f $(TARGET).8.gz
distclean: clean
@ -168,11 +173,12 @@ install: install_files
install_files:
@echo "Installing casadm"
@install -m 755 -D $(TARGET) $(DESTDIR)$(BINARY_PATH)/$(TARGET)
@install -m 644 -D $(TARGET).8 $(DESTDIR)/usr/share/man/man8/$(TARGET).8
@install -m 644 -D $(TARGET).8.gz $(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz
@mandb -q
uninstall:
@echo "Uninstalling casadm"
$(call remove-file,$(DESTDIR)$(BINARY_PATH)/$(TARGET))
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8)
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz)
.PHONY: clean distclean all sync build install uninstall

@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@ -45,8 +45,8 @@
#define CORE_ADD_MAX_TIMEOUT 30
int is_cache_mounted(int cache_id);
int is_core_mounted(int cache_id, int core_id);
bool device_mounts_detected(const char *pattern, int cmplen);
void print_mounted_devices(const char *pattern, int cmplen);
/* KCAS_IOCTL_CACHE_CHECK_DEVICE wrapper */
int _check_cache_device(const char *device_path,
@ -70,7 +70,7 @@ static const char *core_states_name[] = {
#define STANDBY_DETACHED_STATE "Standby detached"
#define CACHE_STATE_LENGHT 20
#define CACHE_STATE_LENGTH 20
#define CAS_LOG_FILE "/var/log/opencas.log"
#define CAS_LOG_LEVEL LOG_INFO
@ -1025,6 +1025,22 @@ static int _start_cache(uint16_t cache_id, unsigned int cache_init,
cache_device);
} else {
print_err(cmd.ext_err_code);
if (OCF_ERR_METADATA_FOUND == cmd.ext_err_code) {
/* print instructions specific for start/attach */
if (start) {
cas_printf(LOG_ERR,
"Please load cache metadata using --load"
" option or use --force to\n discard on-disk"
" metadata and start fresh cache instance.\n"
);
} else {
cas_printf(LOG_ERR,
"Please attach another device or use --force"
" to discard on-disk metadata\n"
" and attach this device to cache instance.\n"
);
}
}
}
return FAILURE;
}
@ -1119,8 +1135,16 @@ int stop_cache(uint16_t cache_id, int flush)
int status;
/* Don't stop instance with mounted filesystem */
if (is_cache_mounted(cache_id) == FAILURE)
int cmplen = 0;
char pattern[80];
/* verify if any core (or core partition) for this cache is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-", cache_id) - 1;
if (device_mounts_detected(pattern, cmplen)) {
cas_printf(LOG_ERR, "Can't stop cache instance %d due to mounted devices:\n", cache_id);
print_mounted_devices(pattern, cmplen);
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
@ -1803,58 +1827,52 @@ int add_core(unsigned int cache_id, unsigned int core_id, const char *core_devic
return SUCCESS;
}
int _check_if_mounted(int cache_id, int core_id)
bool device_mounts_detected(const char *pattern, int cmplen)
{
FILE *mtab;
struct mntent *mstruct;
char dev_buf[80];
int difference = 0, error = 0;
if (core_id >= 0) {
/* verify if specific core is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-%d", cache_id, core_id);
} else {
/* verify if any core from given cache is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-", cache_id);
}
int no_match = 0, error = 0;
mtab = setmntent("/etc/mtab", "r");
if (!mtab)
{
cas_printf(LOG_ERR, "Error while accessing /etc/mtab\n");
return FAILURE;
if (!mtab) {
/* if /etc/mtab not found then the kernel will check for mounts */
return false;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, PATH_MAX, dev_buf, &difference);
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK)
return FAILURE;
if (!difference) {
if (core_id<0) {
cas_printf(LOG_ERR,
"Can't stop cache instance %d. Device %s is mounted!\n",
cache_id, mstruct->mnt_fsname);
} else {
cas_printf(LOG_ERR,
"Can't remove core %d from cache %d."
" Device %s is mounted!\n",
core_id, cache_id, mstruct->mnt_fsname);
}
return FAILURE;
}
return false;
if (no_match)
continue;
return true;
}
return SUCCESS;
return false;
}
int is_cache_mounted(int cache_id)
void print_mounted_devices(const char *pattern, int cmplen)
{
return _check_if_mounted(cache_id, -1);
}
FILE *mtab;
struct mntent *mstruct;
int no_match = 0, error = 0;
int is_core_mounted(int cache_id, int core_id)
{
return _check_if_mounted(cache_id, core_id);
mtab = setmntent("/etc/mtab", "r");
if (!mtab) {
/* should exist, but if /etc/mtab not found we cannot print mounted devices */
return;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK || no_match)
continue;
cas_printf(LOG_ERR, "%s\n", mstruct->mnt_fsname);
}
}
int remove_core(unsigned int cache_id, unsigned int core_id,
@ -1864,7 +1882,23 @@ int remove_core(unsigned int cache_id, unsigned int core_id,
struct kcas_remove_core cmd;
/* don't even attempt ioctl if filesystem is mounted */
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
bool mounts_detected = false;
int cmplen = 0;
char pattern[80];
/* verify if specific core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%d", cache_id, core_id);
mounts_detected = device_mounts_detected(pattern, cmplen);
if (!mounts_detected) {
/* verify if any partition of the core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%dp", cache_id, core_id) - 1;
mounts_detected = device_mounts_detected(pattern, cmplen);
}
if (mounts_detected) {
cas_printf(LOG_ERR, "Can't remove core %d from "
"cache %d due to mounted devices:\n",
core_id, cache_id);
print_mounted_devices(pattern, cmplen);
return FAILURE;
}
@ -1929,11 +1963,6 @@ int remove_inactive_core(unsigned int cache_id, unsigned int core_id,
int fd = 0;
struct kcas_remove_inactive cmd;
/* don't even attempt ioctl if filesystem is mounted */
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
return FAILURE;
@ -2189,7 +2218,7 @@ int partition_list(unsigned int cache_id, unsigned int output_format)
fclose(intermediate_file[1]);
if (!result && stat_format_output(intermediate_file[0], stdout,
use_csv?RAW_CSV:TEXT)) {
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
result = FAILURE;
}
fclose(intermediate_file[0]);
@ -2314,6 +2343,10 @@ static inline int partition_get_line(CSVFILE *csv,
}
strncpy_s(cnfg->info[part_id].name, sizeof(cnfg->info[part_id].name),
name, strnlen_s(name, sizeof(cnfg->info[part_id].name)));
if (0 == part_id && strcmp(name, "unclassified")) {
cas_printf(LOG_ERR, "IO class 0 must have the default name 'unclassified'\n");
return FAILURE;
}
/* Validate Priority*/
*error_col = part_csv_coll_prio;
@ -2401,7 +2434,7 @@ int partition_get_config(CSVFILE *csv, struct kcas_io_classes *cnfg,
return FAILURE;
} else {
cas_printf(LOG_ERR,
"I/O error occured while reading"
"I/O error occurred while reading"
" IO Classes configuration file"
" supplied.\n");
return FAILURE;
@ -2648,7 +2681,7 @@ void *list_printout(void *ctx)
struct list_printout_ctx *spc = ctx;
if (stat_format_output(spc->intermediate,
spc->out, spc->type)) {
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
spc->result = FAILURE;
} else {
spc->result = SUCCESS;
@ -2787,20 +2820,24 @@ int list_caches(unsigned int list_format, bool by_id_path)
for (i = 0; i < caches_count; ++i) {
curr_cache = caches[i];
char status_buf[CACHE_STATE_LENGHT];
char status_buf[CACHE_STATE_LENGTH];
const char *tmp_status;
char mode_string[12];
char exp_obj[32];
char cache_ctrl_dev[MAX_STR_LEN] = "-";
float cache_flush_prog;
float core_flush_prog;
bool cache_device_detached;
bool cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)));
if (!by_id_path && !curr_cache->standby_detached) {
if (!by_id_path && !cache_device_detached) {
if (get_dev_path(curr_cache->device, curr_cache->device,
sizeof(curr_cache->device))) {
cas_printf(LOG_WARNING, "WARNING: Cannot resolve path "
"to cache. By-id path will be shown for that cache.\n");
cas_printf(LOG_WARNING,
"WARNING: Cannot resolve path to "
"cache %d. By-id path will be shown "
"for that cache.\n", curr_cache->id);
}
}
@ -2826,11 +2863,6 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
}
cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)))
;
fprintf(intermediate_file[1], TAG(TREE_BRANCH)
"%s,%u,%s,%s,%s,%s\n",
"cache", /* type */
@ -2854,7 +2886,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
if (core_flush_prog || cache_flush_prog) {
snprintf(status_buf, CACHE_STATE_LENGHT,
snprintf(status_buf, CACHE_STATE_LENGTH,
"%s (%3.1f %%)", "Flushing", core_flush_prog);
tmp_status = status_buf;
} else {
@ -2882,7 +2914,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
pthread_join(thread, 0);
if (printout_ctx.result) {
result = 1;
cas_printf(LOG_ERR, "An error occured during list formatting.\n");
cas_printf(LOG_ERR, "An error occurred during list formatting.\n");
}
fclose(intermediate_file[0]);
@ -3016,7 +3048,7 @@ int zero_md(const char *cache_device, bool force)
}
close(fd);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped succesfully from device '%s'.\n", cache_device);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped successfully from device '%s'.\n", cache_device);
return SUCCESS;
}

@ -2237,7 +2237,7 @@ static cli_command cas_commands[] = {
.options = attach_cache_options,
.command_handle_opts = start_cache_command_handle_option,
.handle = handle_cache_attach,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.flags = CLI_SU_REQUIRED,
.help = NULL,
},
{
@ -2247,7 +2247,7 @@ static cli_command cas_commands[] = {
.options = detach_options,
.command_handle_opts = command_handle_option,
.handle = handle_cache_detach,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.flags = CLI_SU_REQUIRED,
.help = NULL,
},
{

@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@ -151,9 +151,7 @@ struct {
},
{
OCF_ERR_METADATA_FOUND,
"Old metadata found on device.\nPlease load cache metadata using --load"
" option or use --force to\n discard on-disk metadata and"
" start fresh cache instance.\n"
"Old metadata found on device"
},
{
OCF_ERR_SUPERBLOCK_MISMATCH,

@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -10,16 +10,19 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/blkdev.h"
if compile_module $cur_name "struct block_device bd; bdev_partno;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct block_device bd; bd = *disk_part_iter_next(NULL);" "linux/blk_types.h" "linux/genhd.h"
elif compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct hd_struct hd; hd = *disk_part_iter_next(NULL);" "linux/genhd.h"
elif compile_module $cur_name "struct block_device bd; bd = *disk_part_iter_next(NULL);" "linux/blk_types.h" "linux/genhd.h"
then
echo $cur_name "3" >> $config_file_path
elif compile_module $cur_name "struct hd_struct hd; hd = *disk_part_iter_next(NULL);" "linux/genhd.h"
then
echo $cur_name "4" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@ -37,7 +40,7 @@ apply() {
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = part->bd_partno)) {
if ((part_no = bdev_partno(part))) {
break;
}
}
@ -47,6 +50,23 @@ apply() {
"2")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
int part_no = 0;
struct gendisk *disk = bd->bd_disk;
struct block_device *part;
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = part->bd_partno)) {
break;
}
}
return part_no;
}" ;;
"3")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
int part_no = 0;
struct gendisk *disk = bd->bd_disk;
@ -66,7 +86,7 @@ apply() {
return part_no;
}" ;;
"3")
"4")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{

@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -22,7 +23,7 @@ apply() {
case "$1" in
"1")
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void _cas_cleanup_disk(struct gendisk *gd)
{
blk_cleanup_disk(gd);
}"
@ -31,7 +32,7 @@ apply() {
"2")
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void _cas_cleanup_disk(struct gendisk *gd)
{
put_disk(gd);
}"

@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -33,7 +34,8 @@ apply() {
add_function "
static inline void cas_cleanup_queue(struct request_queue *q)
{
blk_mq_destroy_queue(q);
if (queue_is_mq(q))
blk_mq_destroy_queue(q);
}"
;;

@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -22,6 +23,11 @@ check() {
output=$((output+2))
fi
if compile_module $cur_name "BLK_MQ_F_SHOULD_MERGE ;" "linux/blk-mq.h"
then
output=$((output+4))
fi
echo $cur_name $output >> $config_file_path
}
@ -42,6 +48,14 @@ apply() {
else
add_define "CAS_BLK_MQ_F_BLOCKING 0"
fi
if ((arg & 4))
then
add_define "CAS_BLK_MQ_F_SHOULD_MERGE \\
BLK_MQ_F_SHOULD_MERGE"
else
add_define "CAS_BLK_MQ_F_SHOULD_MERGE 0"
fi
}
conf_run $@

@ -0,0 +1,45 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "page_folio((struct page *)NULL);" "linux/page-flags.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
struct folio *folio = page_folio(page);
return folio->mapping;
}" ;;
"2")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
if (PageCompound(page))
return NULL;
return page->mapping;
}" ;;
*)
exit 1
esac
}
conf_run $@

@ -0,0 +1,52 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "preempt_model_voluntary();" "linux/preempt.h" &&
compile_module $cur_name "preempt_model_none();" "linux/preempt.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return preempt_model_voluntary();
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return preempt_model_none();
}" ;;
"2")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return IS_ENABLED(CONFIG_PREEMPT_NONE);
}" ;;
*)
exit 1
esac
}
conf_run $@

@ -0,0 +1,48 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_queue_max_discard_sectors(NULL, 0);" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
blk_queue_max_discard_sectors(q, max_discard_sectors);
}" ;;
"2")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
struct queue_limits *lim = &q->limits;
lim->max_hw_discard_sectors = max_discard_sectors;
lim->max_discard_sectors =
min(max_discard_sectors, lim->max_user_discard_sectors);
}" ;;
*)
exit 1
esac
}
conf_run $@

@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -12,18 +12,18 @@ check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
if compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
then
if compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "3" >> $config_file_path
echo $cur_name "2" >> $config_file_path
fi
elif compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "4" >> $config_file_path
else
@ -37,6 +37,55 @@ apply() {
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
struct queue_limits_aux *l_aux = exp_q->limits.limits_aux;
exp_q->limits = *cache_q_limits;
@ -63,55 +112,6 @@ apply() {
if (queue_virt_boundary(cache_q))
queue_flag_set(QUEUE_FLAG_NOMERGES, cache_q);
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
*)

@ -0,0 +1,42 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.misaligned;" "linux/blkdev.h"
then
echo $cur_name 1 >> $config_file_path
else
echo $cur_name 2 >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->misaligned;
}" ;;
"2")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->features & BLK_FLAG_MISALIGNED;
}" ;;
*)
exit 1
esac
}
conf_run $@

@ -0,0 +1,39 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "(int)QUEUE_FLAG_NONROT;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
}" ;;
"2")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
}" ;;
*)
exit 1
esac
}
conf_run $@

@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -11,15 +11,18 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
if compile_module $cur_name "blk_alloc_disk(NULL, 0);" "linux/blkdev.h"
then
echo $cur_name 1 >> $config_file_path
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL);" "linux/blk-mq.h"
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 2 >> $config_file_path
elif compile_module $cur_name "alloc_disk(0);" "linux/genhd.h"
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 3 >> $config_file_path
elif compile_module $cur_name "alloc_disk(0);" "linux/genhd.h"
then
echo $cur_name 4 >> $config_file_path
else
echo $cur_name X >> $config_file_path
fi
@ -28,50 +31,73 @@ check() {
apply() {
case "$1" in
"1")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL, NULL);
if (!(*gd))
return -ENOMEM;
*gd = blk_alloc_disk(lim, NUMA_NO_NODE);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
cas_cleanup_disk(gd);
_cas_cleanup_disk(gd);
}"
;;
"2")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (!(*gd))
return -ENOMEM;
*gd = blk_mq_alloc_disk(tag_set, lim, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
cas_cleanup_disk(gd);
_cas_cleanup_disk(gd);
}"
;;
"3")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
_cas_cleanup_disk(gd);
}"
;;
"4")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = alloc_disk(1);
if (!(*gd))
@ -88,7 +114,7 @@ apply() {
}"
add_function "
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
blk_cleanup_queue(gd->queue);
gd->queue = NULL;

@ -1,6 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -9,12 +10,15 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
if compile_module $cur_name "BLK_FEAT_WRITE_CACHE;" "linux/blk-mq.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct request_queue rq; rq.flush_flags;" "linux/blkdev.h"
elif compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct request_queue rq; rq.flush_flags;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@ -23,21 +27,39 @@ check() {
apply() {
case "$1" in
"1")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
(q->limits.features & BLK_FEAT_WRITE_CACHE)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
(q->limits.features & BLK_FEAT_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE BLK_FEAT_WRITE_CACHE"
add_define "CAS_BLK_FEAT_FUA BLK_FEAT_FUA"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag) \\
({ lim->features |= flag; })"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua) {}" ;;
"2")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
test_bit(QUEUE_FLAG_WC, &(q)->queue_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{
blk_queue_write_cache(q, flush, fua);
}" ;;
"2")
"3")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
CAS_IS_SET_FLUSH((q)->flush_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
((q)->flush_flags & REQ_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{


@@ -1,5 +1,6 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
# If $(M) is defined, we've been invoked from the
@@ -52,7 +53,11 @@ distclean: clean distsync
install: install_files
@$(DEPMOD)
@$(MODPROBE) $(CACHE_MODULE)
@$(MODPROBE) $(CACHE_MODULE) || ( \
echo "See dmesg for more information" >&2 && \
rm -f $(DESTDIR)$(MODULES_DIR)/$(CACHE_MODULE).ko && exit 1 \
)
install_files:
@echo "Installing Open-CAS modules"


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -29,8 +29,8 @@
trace_printk(format, ##__VA_ARGS__)
#else
#define CAS_CLS_DEBUG_MSG(format, ...)
#define CAS_CLS_DEBUG_TRACE(format, ...)
#define CAS_CLS_DEBUG_MSG(format, ...) ({})
#define CAS_CLS_DEBUG_TRACE(format, ...) ({})
#endif
/* Done condition test - always accepts and stops evaluation */
@@ -53,7 +53,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
if (PageAnon(io->page))
return cas_cls_eval_no;
if (PageSlab(io->page) || PageCompound(io->page)) {
if (PageSlab(io->page)) {
/* A filesystem issues IO on pages that do not belong
* to the file page cache. It means that it is a
* part of metadata
@@ -61,7 +61,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
return cas_cls_eval_yes;
}
if (!io->page->mapping) {
if (!cas_page_mapping(io->page)) {
/* XFS case, pages are allocated internally and do not
* have references into inode
*/
@@ -221,6 +221,42 @@ static int _cas_cls_string_ctr(struct cas_classifier *cls,
return 0;
}
/* IO direction condition constructor. @data is expected to contain string
* translated to IO direction.
*/
static int _cas_cls_direction_ctr(struct cas_classifier *cls,
struct cas_cls_condition *c, char *data)
{
uint64_t direction;
struct cas_cls_numeric *ctx;
if (!data) {
CAS_CLS_MSG(KERN_ERR, "Missing IO direction specifier\n");
return -EINVAL;
}
if (strncmp("read", data, 5) == 0) {
direction = READ;
} else if (strncmp("write", data, 6) == 0) {
direction = WRITE;
} else {
CAS_CLS_MSG(KERN_ERR, "Invalid IO direction specifier '%s'\n"
" allowed specifiers: 'read', 'write'\n", data);
return -EINVAL;
}
ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->operator = cas_cls_numeric_eq;
ctx->v_u64 = direction;
c->context = ctx;
return 0;
}
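The `io_direction` constructor maps the specifier strings "read" and "write" onto the kernel's READ/WRITE direction values; note that `strncmp("read", data, 5)` compares the terminating NUL byte as well, so it behaves as an exact match. A minimal userspace sketch of this parsing (the 0/1 values mirror the kernel's READ/WRITE convention, and -1 stands in for -EINVAL):

```c
#include <string.h>

/* Userspace sketch of the io_direction specifier parser. */
enum { DIR_READ = 0, DIR_WRITE = 1 };

static int parse_io_direction(const char *data)
{
	if (!data)
		return -1;		/* missing specifier */
	/* length includes the NUL, so these are exact matches */
	if (strncmp("read", data, 5) == 0)
		return DIR_READ;
	if (strncmp("write", data, 6) == 0)
		return DIR_WRITE;
	return -1;			/* invalid specifier */
}
```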
/* Unsigned int numeric test function */
static cas_cls_eval_t _cas_cls_numeric_test_u(
struct cas_cls_condition *c, uint64_t val)
@@ -664,6 +700,14 @@ static cas_cls_eval_t _cas_cls_request_size_test(
return _cas_cls_numeric_test_u(c, CAS_BIO_BISIZE(io->bio));
}
/* Request IO direction test function */
static cas_cls_eval_t _cas_cls_request_direction_test(
struct cas_classifier *cls, struct cas_cls_condition *c,
struct cas_cls_io *io, ocf_part_id_t part_id)
{
return _cas_cls_numeric_test_u(c, bio_data_dir(io->bio));
}
/* Array of condition handlers */
static struct cas_cls_condition_handler _handlers[] = {
{ "done", _cas_cls_done_test, _cas_cls_generic_ctr },
@@ -689,6 +733,8 @@ static struct cas_cls_condition_handler _handlers[] = {
_cas_cls_generic_dtr },
{ "request_size", _cas_cls_request_size_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr },
{ "io_direction", _cas_cls_request_direction_test,
_cas_cls_direction_ctr, _cas_cls_generic_dtr },
#ifdef CAS_WLTH_SUPPORT
{ "wlth", _cas_cls_wlth_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr},
@@ -757,7 +803,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
return c;
}
/* Read single codnition from text input and return cas_cls_condition
/* Read single condition from text input and return cas_cls_condition
* representation. *rule pointer is advanced to point to next condition.
* Input @rule string is modified to speed up parsing (selected bytes are
* overwritten with 0).
@@ -765,7 +811,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
* *l_op contains logical operator from previous condition and gets overwritten
* with operator read from currently parsed condition.
*
* Returns pointer to condition if successfull.
* Returns pointer to condition if successful.
* Returns NULL if no more conditions in string.
* Returns error pointer in case of syntax or runtime error.
*/
@@ -1050,9 +1096,11 @@ int cas_cls_rule_create(ocf_cache_t cache,
return -ENOMEM;
r = _cas_cls_rule_create(cls, part_id, _rule);
if (IS_ERR(r))
if (IS_ERR(r)) {
CAS_CLS_DEBUG_MSG(
"Cannot create rule: %s => %d\n", rule, part_id);
ret = _cas_cls_rule_err_to_cass_err(PTR_ERR(r));
else {
} else {
CAS_CLS_DEBUG_MSG("Created rule: %s => %d\n", rule, part_id);
*cls_rule = r;
ret = 0;
@@ -1181,6 +1229,7 @@ static void _cas_cls_get_bio_context(struct bio *bio,
struct cas_cls_io *ctx)
{
struct page *page = NULL;
struct address_space *mapping;
if (!bio)
return;
@@ -1198,13 +1247,14 @@ static void _cas_cls_get_bio_context(struct bio *bio,
if (PageAnon(page))
return;
if (PageSlab(page) || PageCompound(page))
if (PageSlab(page))
return;
if (!page->mapping)
mapping = cas_page_mapping(page);
if (!mapping)
return;
ctx->inode = page->mapping->host;
ctx->inode = mapping->host;
return;
}


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <linux/module.h>
@@ -351,7 +351,8 @@ static int _cas_init_tag_set(struct cas_disk *dsk, struct blk_mq_tag_set *set)
set->queue_depth = CAS_BLKDEV_DEFAULT_RQ;
set->cmd_size = 0;
set->flags = BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING | CAS_BLK_MQ_F_BLOCKING;
set->flags = CAS_BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING |
CAS_BLK_MQ_F_BLOCKING;
set->driver_data = dsk;
@@ -388,12 +389,36 @@ static int _cas_exp_obj_check_path(const char *dev_name)
return result;
}
static ssize_t device_attr_serial_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct gendisk *gd = dev_to_disk(dev);
struct cas_disk *dsk = gd->private_data;
struct cas_exp_obj *exp_obj = dsk->exp_obj;
return sysfs_emit(buf, "opencas-%s", exp_obj->dev_name);
}
static struct device_attribute device_attr_serial =
__ATTR(serial, 0444, device_attr_serial_show, NULL);
static struct attribute *device_attrs[] = {
&device_attr_serial.attr,
NULL,
};
static const struct attribute_group device_attr_group = {
.attrs = device_attrs,
.name = "device",
};
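The new `device/serial` sysfs attribute reports `opencas-<exported object name>`. The formatting itself is a one-liner; a userspace sketch with `snprintf` standing in for the kernel's `sysfs_emit()` (the device name argument is a placeholder):

```c
#include <stdio.h>

/* Sketch of the serial attribute formatting; snprintf stands in for
 * sysfs_emit(), and dev_name is whatever name the exported object
 * was created with. */
static int format_serial(char *buf, size_t len, const char *dev_name)
{
	return snprintf(buf, len, "opencas-%s", dev_name);
}
```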
int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
struct module *owner, struct cas_exp_obj_ops *ops, void *priv)
{
struct cas_exp_obj *exp_obj;
struct request_queue *queue;
struct gendisk *gd;
cas_queue_limits_t queue_limits;
int result = 0;
BUG_ON(!owner);
@@ -442,7 +467,15 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_init_tag_set;
}
result = cas_alloc_mq_disk(&gd, &queue, &exp_obj->tag_set);
if (exp_obj->ops->set_queue_limits) {
result = exp_obj->ops->set_queue_limits(dsk, priv,
&queue_limits);
if (result)
goto error_set_queue_limits;
}
result = cas_alloc_disk(&gd, &queue, &exp_obj->tag_set,
&queue_limits);
if (result) {
goto error_alloc_mq_disk;
}
@@ -473,9 +506,14 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_set_geometry;
}
if (cas_add_disk(gd))
result = cas_add_disk(gd);
if (result)
goto error_add_disk;
result = sysfs_create_group(&disk_to_dev(gd)->kobj, &device_attr_group);
if (result)
goto error_sysfs;
result = bd_claim_by_disk(cas_disk_get_blkdev(dsk), dsk, gd);
if (result)
goto error_bd_claim;
@@ -483,15 +521,18 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
return 0;
error_bd_claim:
sysfs_remove_group(&disk_to_dev(gd)->kobj, &device_attr_group);
error_sysfs:
del_gendisk(dsk->exp_obj->gd);
error_add_disk:
error_set_geometry:
exp_obj->private = NULL;
_cas_exp_obj_clear_dev_t(dsk);
error_exp_obj_set_dev_t:
cas_cleanup_mq_disk(gd);
cas_cleanup_disk(gd);
exp_obj->gd = NULL;
error_alloc_mq_disk:
error_set_queue_limits:
blk_mq_free_tag_set(&exp_obj->tag_set);
error_init_tag_set:
module_put(owner);
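The error path above follows the kernel's stacked-goto unwind pattern: each acquisition step gets a label, and a failure jumps to the label that releases everything acquired so far, in reverse order. A generic userspace sketch of the pattern (labels and steps are illustrative, not the function's actual resources):

```c
#include <stdbool.h>

/* Sketch of the stacked-goto error-unwind pattern used in
 * cas_exp_obj_create(): cleanups run in reverse acquisition order,
 * starting from the label matching the step that failed. */
static int cleanups;		/* counts cleanup steps actually run */

static int create(bool fail_second, bool fail_third)
{
	/* step 1 acquire (always succeeds in this sketch) */
	if (fail_second)	/* step 2 acquire */
		goto error_second;
	if (fail_third)		/* step 3 acquire */
		goto error_third;
	return 0;

error_third:
	cleanups++;		/* undo step 2 */
error_second:
	cleanups++;		/* undo step 1 */
	return -1;
}
```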


@@ -1,11 +1,12 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __CASDISK_EXP_OBJ_H__
#define __CASDISK_EXP_OBJ_H__
#include "linux_kernel_version.h"
#include <linux/fs.h>
struct cas_disk;
@@ -17,6 +18,12 @@ struct cas_exp_obj_ops {
*/
int (*set_geometry)(struct cas_disk *dsk, void *private);
/**
* @brief Set queue limits of exported object (top) block device.
*/
int (*set_queue_limits)(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim);
/**
* @brief submit_bio of exported object (top) block device.
*

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -2405,7 +2405,8 @@ static int cache_mngt_check_bdev(struct ocf_mngt_cache_device_config *cfg,
printk(KERN_WARNING "New cache device block properties "
"differ from the previous one.\n");
}
if (tmp_limits.misaligned) {
if (cas_queue_limits_is_misaligned(&tmp_limits)) {
reattach_properties_diff = true;
printk(KERN_WARNING "New cache device block interval "
"doesn't line up with the previous one.\n");
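The check now goes through a `cas_queue_limits_is_misaligned()` wrapper instead of reading `tmp_limits.misaligned` directly, keeping kernel-version differences behind one accessor. A sketch of such an accessor (the struct below is a stand-in, not the kernel's `struct queue_limits`):

```c
#include <stdbool.h>

/* Stand-in for struct queue_limits; only the field the check needs. */
struct queue_limits_sketch {
	unsigned char misaligned;
};

/* Mirrors cas_queue_limits_is_misaligned(): hides where the flag
 * lives so callers do not depend on the struct layout. */
static bool queue_limits_is_misaligned(const struct queue_limits_sketch *lim)
{
	return lim->misaligned != 0;
}
```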


@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -40,7 +40,6 @@
#include <linux/mm.h>
#include <linux/blk-mq.h>
#include <linux/ktime.h>
#include "exp_obj.h"
#include "generated_defines.h"


@@ -1,5 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -42,10 +43,56 @@ MODULE_PARM_DESC(seq_cut_off_mb,
ocf_ctx_t cas_ctx;
struct cas_module cas_module;
static inline uint32_t involuntary_preemption_enabled(void)
{
bool config_dynamic = IS_ENABLED(CONFIG_PREEMPT_DYNAMIC);
bool config_rt = IS_ENABLED(CONFIG_PREEMPT_RT);
bool config_preempt = IS_ENABLED(CONFIG_PREEMPT);
bool config_lazy = IS_ENABLED(CONFIG_PREEMPT_LAZY);
bool config_none = IS_ENABLED(CONFIG_PREEMPT_NONE);
if (!config_dynamic && !config_rt && !config_preempt && !config_lazy)
return false;
if (config_none)
return false;
if (config_rt || config_preempt || config_lazy) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been built with involuntary preemption "
"enabled.\nFailed to load Open CAS kernel module.\n");
return true;
}
#ifdef CONFIG_PREEMPT_DYNAMIC
printk(KERN_WARNING OCF_PREFIX_SHORT
"The kernel has been compiled with preemption configurable\n"
"at boot time (PREEMPT_DYNAMIC=y). Open CAS doesn't support\n"
"kernels with involuntary preemption so make sure to set\n"
"\"preempt=\" to \"none\" or \"voluntary\" in the kernel"
" command line\n");
if (!cas_preempt_model_none() && !cas_preempt_model_voluntary()) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been booted with involuntary "
"preemption enabled.\nFailed to load Open CAS kernel "
"module.\n");
return true;
} else {
return false;
}
#endif
return false;
}
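The load-time check above refuses involuntary preemption: it passes when no preempt option is configured or PREEMPT_NONE is set, fails outright for PREEMPT, PREEMPT_RT, or PREEMPT_LAZY, and for PREEMPT_DYNAMIC defers to the boot-time model. A table-style userspace sketch of that decision, where `runtime_involuntary` stands in for `!cas_preempt_model_none() && !cas_preempt_model_voluntary()`:

```c
#include <stdbool.h>

/* Decision sketch for involuntary_preemption_enabled(); each flag
 * mirrors one CONFIG_PREEMPT* option. Returns true when the module
 * would refuse to load. */
static bool refuses_to_load(bool dynamic, bool rt, bool preempt,
			    bool lazy, bool none, bool runtime_involuntary)
{
	if (!dynamic && !rt && !preempt && !lazy)
		return false;		/* no preemption configured */
	if (none)
		return false;		/* PREEMPT_NONE wins */
	if (rt || preempt || lazy)
		return true;		/* hard involuntary preemption */
	/* PREEMPT_DYNAMIC: decided by the boot-time model */
	return runtime_involuntary;
}
```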
static int __init cas_init_module(void)
{
int result = 0;
if (involuntary_preemption_enabled())
return -ENOTSUP;
if (!writeback_queue_unblock_size || !max_writeback_queue_size) {
printk(KERN_ERR OCF_PREFIX_SHORT
"Invalid module parameter.\n");


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -407,6 +407,11 @@ static inline u64 env_atomic64_inc_return(env_atomic64 *a)
return atomic64_inc_return(a);
}
static inline u64 env_atomic64_dec_return(env_atomic64 *a)
{
return atomic64_dec_return(a);
}
static inline u64 env_atomic64_cmpxchg(atomic64_t *a, u64 old, u64 new)
{
return atomic64_cmpxchg(a, old, new);
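The new `env_atomic64_dec_return()` wraps the kernel's `atomic64_dec_return()`, which decrements and returns the *new* value. The same semantics can be sketched with C11 atomics, where `atomic_fetch_sub` returns the old value:

```c
#include <stdatomic.h>
#include <stdint.h>

/* C11 sketch of atomic64_dec_return(): atomic_fetch_sub returns the
 * pre-decrement value, so subtract 1 to get the new value. */
static int64_t dec_return(_Atomic int64_t *a)
{
	return atomic_fetch_sub(a, 1) - 1;
}
```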


@@ -0,0 +1,345 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "ocf_env_refcnt.h"
#include "ocf/ocf_err.h"
#include "ocf_env.h"
#define ENV_REFCNT_CB_ARMING 1
#define ENV_REFCNT_CB_ARMED 2
static void _env_refcnt_do_on_cpus_cb(struct work_struct *work)
{
struct notify_cpu_work *ctx =
container_of(work, struct notify_cpu_work, work);
ctx->cb(ctx->priv);
env_atomic_dec(&ctx->rc->notify.to_notify);
wake_up(&ctx->rc->notify.notify_wait_queue);
}
static void _env_refcnt_do_on_cpus(struct env_refcnt *rc,
env_refcnt_do_on_cpu_cb_t cb, void *priv)
{
int cpu_no;
struct notify_cpu_work *work;
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
work = rc->notify.notify_work_items[cpu_no];
env_atomic_inc(&rc->notify.to_notify);
work->cb = cb;
work->rc = rc;
work->priv = priv;
INIT_WORK(&work->work, _env_refcnt_do_on_cpus_cb);
queue_work_on(cpu_no, rc->notify.notify_work_queue,
&work->work);
}
wait_event(rc->notify.notify_wait_queue,
!env_atomic_read(&rc->notify.to_notify));
}
static void _env_refcnt_init_pcpu(void *ctx)
{
struct env_refcnt *rc = ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(rc->pcpu);
pcpu->freeze = false;
env_atomic64_set(&pcpu->counter, 0);
}
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len)
{
int cpu_no, result;
env_memset(rc, sizeof(*rc), 0);
env_strncpy(rc->name, sizeof(rc->name), name, name_len);
rc->pcpu = alloc_percpu(struct env_refcnt_pcpu);
if (!rc->pcpu)
return -OCF_ERR_NO_MEM;
init_waitqueue_head(&rc->notify.notify_wait_queue);
rc->notify.notify_work_queue = alloc_workqueue("refcnt_%s", 0,
0, rc->name);
if (!rc->notify.notify_work_queue) {
result = -OCF_ERR_NO_MEM;
goto cleanup_pcpu;
}
rc->notify.notify_work_items = env_vzalloc(
sizeof(*rc->notify.notify_work_items) * num_online_cpus());
if (!rc->notify.notify_work_items) {
result = -OCF_ERR_NO_MEM;
goto cleanup_wq;
}
for_each_online_cpu(cpu_no) {
rc->notify.notify_work_items[cpu_no] = env_vmalloc(
sizeof(*rc->notify.notify_work_items[cpu_no]));
if (!rc->notify.notify_work_items[cpu_no]) {
result = -OCF_ERR_NO_MEM;
goto cleanup_work;
}
}
result = env_spinlock_init(&rc->freeze.lock);
if (result)
goto cleanup_work;
_env_refcnt_do_on_cpus(rc, _env_refcnt_init_pcpu, rc);
rc->callback.pfn = NULL;
rc->callback.priv = NULL;
return 0;
cleanup_work:
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
cleanup_wq:
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
cleanup_pcpu:
free_percpu(rc->pcpu);
rc->pcpu = NULL;
return result;
}
void env_refcnt_deinit(struct env_refcnt *rc)
{
int cpu_no;
env_spinlock_destroy(&rc->freeze.lock);
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
free_percpu(rc->pcpu);
rc->pcpu = NULL;
}
static inline void _env_refcnt_call_freeze_cb(struct env_refcnt *rc)
{
bool fire;
fire = (env_atomic_cmpxchg(&rc->callback.armed, ENV_REFCNT_CB_ARMED, 0)
== ENV_REFCNT_CB_ARMED);
smp_mb();
if (fire)
rc->callback.pfn(rc->callback.priv);
}
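`_env_refcnt_call_freeze_cb()` uses a compare-and-swap on the armed flag so the zero callback fires at most once even if two paths reach countdown zero concurrently: only the caller that swaps ARMED to 0 runs it. The same fire-once idiom with C11 atomics:

```c
#include <stdatomic.h>

#define CB_ARMED 2

/* Fire-once sketch: only the caller that wins the ARMED -> 0 swap
 * runs the callback; later callers see 0 and do nothing. */
static int fired;	/* counts callback invocations */

static void try_fire(_Atomic int *armed)
{
	int expected = CB_ARMED;

	if (atomic_compare_exchange_strong(armed, &expected, 0))
		fired++;	/* we won the swap; run the callback */
}
```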
void env_refcnt_dec(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
int64_t countdown = 0;
bool callback;
unsigned long flags;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_dec(&pcpu->counter);
put_cpu_ptr(pcpu);
if (freeze) {
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
countdown = env_atomic64_dec_return(&rc->freeze.countdown);
callback = !rc->freeze.initializing && countdown == 0;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
}
bool env_refcnt_inc(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_inc(&pcpu->counter);
put_cpu_ptr(pcpu);
return !freeze;
}
struct env_refcnt_freeze_ctx {
struct env_refcnt *rc;
env_atomic64 sum;
};
static void _env_refcnt_freeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
pcpu->freeze = true;
env_atomic64_add(env_atomic64_read(&pcpu->counter), &ctx->sum);
}
void env_refcnt_freeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
bool callback;
unsigned long flags;
ctx.rc = rc;
env_atomic64_set(&ctx.sum, 0);
/* initiate freeze */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = ++(rc->freeze.counter);
if (freeze_cnt > 1) {
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return;
}
rc->freeze.initializing = true;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* notify CPUs about freeze */
_env_refcnt_do_on_cpus(rc, _env_refcnt_freeze_pcpu, &ctx);
/* update countdown */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
env_atomic64_add(env_atomic64_read(&ctx.sum), &rc->freeze.countdown);
rc->freeze.initializing = false;
callback = (env_atomic64_read(&rc->freeze.countdown) == 0);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* if countdown finished trigger callback */
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv)
{
bool callback;
bool concurrent_arming;
unsigned long flags;
concurrent_arming = (env_atomic_inc_return(&rc->callback.armed)
> ENV_REFCNT_CB_ARMING);
ENV_BUG_ON(concurrent_arming);
/* arm callback */
rc->callback.pfn = cb;
rc->callback.priv = priv;
smp_wmb();
env_atomic_set(&rc->callback.armed, ENV_REFCNT_CB_ARMED);
/* fire callback in case countdown finished */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
callback = (
env_atomic64_read(&rc->freeze.countdown) == 0 &&
!rc->freeze.initializing
);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
static void _env_refcnt_unfreeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
ENV_BUG_ON(!pcpu->freeze);
env_atomic64_set(&pcpu->counter, 0);
pcpu->freeze = false;
}
void env_refcnt_unfreeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = --(rc->freeze.counter);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
ENV_BUG_ON(freeze_cnt < 0);
if (freeze_cnt > 0)
return;
ENV_BUG_ON(env_atomic64_read(&rc->freeze.countdown));
/* disarm callback */
env_atomic_set(&rc->callback.armed, 0);
smp_wmb();
/* notify CPUs about unfreeze */
ctx.rc = rc;
_env_refcnt_do_on_cpus(rc, _env_refcnt_unfreeze_pcpu, &ctx);
}
bool env_refcnt_frozen(struct env_refcnt *rc)
{
bool frozen;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen;
}
bool env_refcnt_zeroed(struct env_refcnt *rc)
{
bool frozen;
bool initializing;
int64_t countdown;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
initializing = rc->freeze.initializing;
countdown = env_atomic64_read(&rc->freeze.countdown);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen && !initializing && countdown == 0;
}
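The freeze protocol above in a nutshell: while unfrozen, each CPU counts increments and decrements in its local counter; freezing sums the per-CPU counters into a single global countdown, after which increments fail and every decrement lowers the countdown until it hits zero (the point where the zero callback may fire). A single-threaded sketch of that counter state machine, with the per-CPU counters collapsed into a plain array and locking omitted:

```c
#include <stdbool.h>
#include <stdint.h>

#define NCPUS 4

/* Single-threaded sketch of the env_refcnt freeze protocol:
 * per-CPU counters while unfrozen, one global countdown after. */
struct refcnt_sketch {
	int64_t pcpu[NCPUS];
	bool frozen;
	int64_t countdown;
};

static bool rc_inc(struct refcnt_sketch *rc, int cpu)
{
	if (rc->frozen)
		return false;		/* increments fail after freeze */
	rc->pcpu[cpu]++;
	return true;
}

/* Returns true when the countdown hits zero (callback would fire). */
static bool rc_dec(struct refcnt_sketch *rc, int cpu)
{
	if (!rc->frozen) {
		rc->pcpu[cpu]--;
		return false;
	}
	return --rc->countdown == 0;
}

static void rc_freeze(struct refcnt_sketch *rc)
{
	int cpu;

	rc->frozen = true;
	for (cpu = 0; cpu < NCPUS; cpu++) {
		rc->countdown += rc->pcpu[cpu];
		rc->pcpu[cpu] = 0;
	}
}
```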


@@ -0,0 +1,104 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __OCF_ENV_REFCNT_H__
#define __OCF_ENV_REFCNT_H__
#include "ocf_env.h"
typedef void (*env_refcnt_cb_t)(void *priv);
struct env_refcnt_pcpu {
env_atomic64 counter;
bool freeze;
};
typedef void (*env_refcnt_do_on_cpu_cb_t)(void *priv);
struct notify_cpu_work {
struct work_struct work;
/* function to call on each cpu */
env_refcnt_do_on_cpu_cb_t cb;
/* priv passed to cb */
void *priv;
/* refcnt instance */
struct env_refcnt *rc;
};
struct env_refcnt {
struct env_refcnt_pcpu __percpu *pcpu __aligned(64);
struct {
/* freeze counter */
int counter;
/* global counter used instead of per-CPU ones after
* freeze
*/
env_atomic64 countdown;
/* freeze initializing - freeze was requested but not all
* CPUs were notified.
*/
bool initializing;
env_spinlock lock;
} freeze;
struct {
struct notify_cpu_work **notify_work_items;
env_atomic to_notify;
wait_queue_head_t notify_wait_queue;
struct workqueue_struct *notify_work_queue;
} notify;
struct {
env_atomic armed;
env_refcnt_cb_t pfn;
void *priv;
} callback;
char name[32];
};
/* Initialize reference counter */
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len);
void env_refcnt_deinit(struct env_refcnt *rc);
/* Try to increment counter. Returns true if successful, false
 * if counter is frozen
 */
bool env_refcnt_inc(struct env_refcnt *rc);
/* Decrement reference counter */
void env_refcnt_dec(struct env_refcnt *rc);
/* Disallow incrementing of underlying counter - attempts to increment counter
* will be failing until env_refcnt_unfreeze is called.
* It's ok to call freeze multiple times, in which case counter is frozen
* until all freeze calls are offset by a corresponding unfreeze.
*/
void env_refcnt_freeze(struct env_refcnt *rc);
/* Cancel the effect of single env_refcnt_freeze call */
void env_refcnt_unfreeze(struct env_refcnt *rc);
bool env_refcnt_frozen(struct env_refcnt *rc);
bool env_refcnt_zeroed(struct env_refcnt *rc);
/* Register callback to be called when reference counter drops to 0.
* Must be called after counter is frozen.
* Cannot be called until the previously registered callback has fired.
*/
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv);
#endif // __OCF_ENV_REFCNT_H__


@@ -86,10 +86,6 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache attach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);
retval = cache_mngt_attach_cache_cfg(cache_name, OCF_CACHE_NAME_SIZE,
@@ -108,9 +104,6 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
char cache_name[OCF_CACHE_NAME_SIZE];
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache detach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -73,6 +73,7 @@ static int _cas_cleaner_thread(void *data)
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
struct cas_thread_info *info;
uint32_t ms;
ocf_queue_t queue;
BUG_ON(!c);
@@ -94,7 +95,10 @@ static int _cas_cleaner_thread(void *data)
atomic_set(&info->kicked, 0);
init_completion(&info->sync_compl);
ocf_cleaner_run(c, cache_priv->io_queues[smp_processor_id()]);
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
ocf_cleaner_run(c, queue);
wait_for_completion(&info->sync_compl);
/*


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies
* Copyright(c) 2024-2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -208,9 +208,13 @@ void *cas_rpool_try_get(struct cas_reserve_pool *rpool_master, int *cpu)
CAS_DEBUG_TRACE();
get_cpu();
*cpu = smp_processor_id();
current_rpool = &rpool_master->rpools[*cpu];
put_cpu();
spin_lock_irqsave(&current_rpool->lock, flags);
if (!list_empty(&current_rpool->list)) {


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024 Huawei Technologies Co., Ltd.
* Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -63,13 +63,14 @@ static void blkdev_set_discard_properties(ocf_cache_t cache,
CAS_SET_DISCARD_ZEROES_DATA(exp_q->limits, 0);
if (core_q && cas_has_discard_support(core_bd)) {
blk_queue_max_discard_sectors(exp_q, core_q->limits.max_discard_sectors);
cas_queue_max_discard_sectors(exp_q,
core_q->limits.max_discard_sectors);
exp_q->limits.discard_alignment =
bdev_discard_alignment(core_bd);
exp_q->limits.discard_granularity =
core_q->limits.discard_granularity;
} else {
blk_queue_max_discard_sectors(exp_q,
cas_queue_max_discard_sectors(exp_q,
min((uint64_t)core_sectors, (uint64_t)UINT_MAX));
exp_q->limits.discard_granularity = ocf_cache_get_line_size(cache);
exp_q->limits.discard_alignment = 0;
@@ -129,7 +130,37 @@ static int blkdev_core_set_geometry(struct cas_disk *dsk, void *private)
blkdev_set_discard_properties(cache, exp_q, core_bd, sectors);
exp_q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
cas_queue_set_nonrot(exp_q);
return 0;
}
static int blkdev_core_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_core_t core = private;
ocf_cache_t cache = ocf_core_get_cache(core);
ocf_volume_t core_vol = ocf_core_get_volume(core);
struct bd_object *bd_core_vol;
struct request_queue *core_q;
bool flush, fua;
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
bd_core_vol = bd_object(core_vol);
core_q = cas_disk_get_queue(bd_core_vol->dsk);
flush = (CAS_CHECK_QUEUE_FLUSH(core_q) ||
cache_priv->device_properties.flush);
fua = (CAS_CHECK_QUEUE_FUA(core_q) ||
cache_priv->device_properties.fua);
memset(lim, 0, sizeof(cas_queue_limits_t));
if (flush)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (fua)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
return 0;
}
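On kernels with `queue_limits` features, `CAS_SET_QUEUE_LIMIT` ORs `BLK_FEAT_*` bits into `lim->features` (and expands to nothing on kernels configured through `blk_queue_write_cache()`). A sketch of the features-based variant as used by the set_queue_limits callbacks above; the flag values below are illustrative, not the kernel's:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the features-based CAS_SET_QUEUE_LIMIT variant. */
#define SK_FEAT_WRITE_CACHE (1u << 0)	/* illustrative bit values */
#define SK_FEAT_FUA         (1u << 1)

struct limits_sketch {
	uint32_t features;
};

#define SET_QUEUE_LIMIT(lim, flag) ((lim)->features |= (flag))

/* Mirrors blkdev_core_set_queue_limits(): start from zeroed limits,
 * then OR in flush/FUA when required. */
static void set_limits(struct limits_sketch *lim, bool flush, bool fua)
{
	lim->features = 0;
	if (flush)
		SET_QUEUE_LIMIT(lim, SK_FEAT_WRITE_CACHE);
	if (fua)
		SET_QUEUE_LIMIT(lim, SK_FEAT_FUA);
}
```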
@@ -217,12 +248,16 @@ static int blkdev_handle_data_single(struct bd_object *bvol, struct bio *bio,
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
struct blk_data *data;
uint64_t flags = CAS_BIO_OP_FLAGS(bio);
int ret;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
data = cas_alloc_blk_data(bio_segments(bio), GFP_NOIO);
if (!data) {
CAS_PRINT_RL(KERN_CRIT "BIO data vector allocation error\n");
@@ -332,9 +367,13 @@ static void blkdev_handle_discard(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue,
CAS_BIO_BISECTOR(bio) << SECTOR_SHIFT,
CAS_BIO_BISIZE(bio), OCF_WRITE, 0, 0);
@@ -380,9 +419,13 @@ static void blkdev_handle_flush(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_queue_t queue;
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue, 0, 0, OCF_WRITE, 0,
CAS_SET_FLUSH(0));
if (!io) {
@@ -428,6 +471,7 @@ static void blkdev_core_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_core_exp_obj_ops = {
.set_geometry = blkdev_core_set_geometry,
.set_queue_limits = blkdev_core_set_queue_limits,
.submit_bio = blkdev_core_submit_bio,
};
@@ -470,6 +514,37 @@ static int blkdev_cache_set_geometry(struct cas_disk *dsk, void *private)
return 0;
}
static int blkdev_cache_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_cache_t cache;
ocf_volume_t volume;
struct bd_object *bvol;
struct request_queue *cache_q;
struct block_device *bd;
BUG_ON(!private);
cache = private;
volume = ocf_cache_get_volume(cache);
bvol = bd_object(volume);
bd = cas_disk_get_blkdev(bvol->dsk);
BUG_ON(!bd);
cache_q = bd->bd_disk->queue;
memset(lim, 0, sizeof(cas_queue_limits_t));
if (CAS_CHECK_QUEUE_FLUSH(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (CAS_CHECK_QUEUE_FUA(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
return 0;
}
static void blkdev_cache_submit_bio(struct cas_disk *dsk,
struct bio *bio, void *private)
{
@@ -485,6 +560,7 @@ static void blkdev_cache_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_cache_exp_obj_ops = {
.set_geometry = blkdev_cache_set_geometry,
.set_queue_limits = blkdev_cache_set_queue_limits,
.submit_bio = blkdev_cache_submit_bio,
};
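Several submit paths in this file now wrap `smp_processor_id()` in `get_cpu()`/`put_cpu()`: reading the CPU id with preemption enabled can race a migration, so preemption is disabled just around the queue lookup and released immediately after. A userspace sketch of the pattern with stubbed primitives (all names below are stand-ins for the kernel calls):

```c
/* Sketch of the get_cpu()/put_cpu() queue-selection pattern; the
 * stubs stand in for the kernel primitives. */
#define NQUEUES 4

static int preempt_disabled;		/* tracks the stubbed state */

static int stub_get_cpu(void)  { preempt_disabled++; return 2; }
static void stub_put_cpu(void) { preempt_disabled--; }
static int stub_smp_processor_id(void) { return 2; }

static int pick_queue(void)
{
	int queue;

	stub_get_cpu();			/* disable preemption */
	queue = stub_smp_processor_id() % NQUEUES;
	stub_put_cpu();			/* re-enable preemption */
	return queue;
}
```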

ocf (submodule)

@@ -1 +1 @@
Subproject commit 6ad1007e6fa47ff6468eaba0b304d77334c34d4a
Subproject commit a63479c7cdcc631a9c6e27f4dc7ee3cadc14bf83


@@ -1,36 +1,59 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from api.cas.casadm_parser import *
from datetime import timedelta
from typing import List
from api.cas import casadm
from api.cas.cache_config import (
CacheLineSize,
CleaningPolicy,
CacheStatus,
CacheMode,
FlushParametersAlru,
FlushParametersAcp,
SeqCutOffParameters,
SeqCutOffPolicy,
PromotionPolicy,
PromotionParametersNhit,
CacheConfig,
)
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import (get_cas_devices_dict, get_cores, get_flush_parameters_alru,
get_flush_parameters_acp, get_io_class_list)
from api.cas.core import Core
from api.cas.dmesg import get_metadata_size_on_device
from api.cas.statistics import CacheStats, CacheIoClassStats
from test_utils.os_utils import *
from test_utils.output import Output
from connection.utils.output import Output
from storage_devices.device import Device
from test_tools.os_tools import sync
from type_def.size import Size
class Cache:
def __init__(self, device: Device, cache_id: int = None) -> None:
self.cache_device = device
self.cache_id = cache_id if cache_id else self.__get_cache_id()
self.__cache_line_size = None
def __get_cache_id(self) -> int:
device_path = self.__get_cache_device_path()
def __init__(
self, cache_id: int, device: Device = None, cache_line_size: CacheLineSize = None
) -> None:
self.cache_id = cache_id
self.cache_device = device if device else self.__get_cache_device()
self.__cache_line_size = cache_line_size
def __get_cache_device(self) -> Device | None:
caches_dict = get_cas_devices_dict()["caches"]
cache = next(
    (cache for cache in caches_dict.values() if cache["id"] == self.cache_id), None
)
for cache in caches_dict.values():
if cache["device_path"] == device_path:
return int(cache["id"])
if not cache:
return None
raise Exception(f"There is no cache started on {device_path}")
if cache["device_path"] == "-":
return None
def __get_cache_device_path(self) -> str:
return self.cache_device.path if self.cache_device is not None else "-"
return Device(path=cache["device_path"])
def get_core_devices(self) -> list:
return get_cores(self.cache_id)
@ -194,8 +217,8 @@ class Cache:
def set_params_nhit(self, promotion_params_nhit: PromotionParametersNhit) -> Output:
return casadm.set_param_promotion_nhit(
self.cache_id,
threshold=promotion_params_nhit.threshold.get_value(),
trigger=promotion_params_nhit.trigger
threshold=promotion_params_nhit.threshold,
trigger=promotion_params_nhit.trigger,
)
def get_cache_config(self) -> CacheConfig:
@ -208,10 +231,18 @@ class Cache:
def standby_detach(self, shortcut: bool = False) -> Output:
return casadm.standby_detach_cache(cache_id=self.cache_id, shortcut=shortcut)
def standby_activate(self, device, shortcut: bool = False) -> Output:
def standby_activate(self, device: Device, shortcut: bool = False) -> Output:
return casadm.standby_activate_cache(
cache_id=self.cache_id, cache_dev=device, shortcut=shortcut
)
def attach(self, device: Device, force: bool = False) -> Output:
cmd_output = casadm.attach_cache(cache_id=self.cache_id, device=device, force=force)
return cmd_output
def detach(self) -> Output:
cmd_output = casadm.detach_cache(cache_id=self.cache_id)
return cmd_output
def has_volatile_metadata(self) -> bool:
return self.get_metadata_size_on_disk() == Size.zero()


@ -1,14 +1,14 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum, IntFlag
from test_utils.os_utils import get_kernel_module_parameter
from test_utils.size import Size, Unit
from test_utils.time import Time
from test_tools.os_tools import get_kernel_module_parameter
from type_def.size import Size, Unit
from type_def.time import Time
class CacheLineSize(Enum):
@ -72,9 +72,9 @@ class CacheMode(Enum):
class SeqCutOffPolicy(Enum):
full = 0
always = 1
never = 2
full = "full"
always = "always"
never = "never"
DEFAULT = full
@classmethod
@ -85,6 +85,9 @@ class SeqCutOffPolicy(Enum):
raise ValueError(f"{name} is not a valid sequential cut off name")
def __str__(self):
return self.value
class MetadataMode(Enum):
normal = "normal"
@ -122,6 +125,7 @@ class CacheStatus(Enum):
incomplete = "incomplete"
standby = "standby"
standby_detached = "standby detached"
detached = "detached"
def __str__(self):
return self.value
@ -240,7 +244,7 @@ class SeqCutOffParameters:
class PromotionParametersNhit:
def __init__(self, threshold: Size = None, trigger: int = None):
def __init__(self, threshold: int = None, trigger: int = None):
self.threshold = threshold
self.trigger = trigger


@ -6,8 +6,7 @@
from enum import Enum
from core.test_run import TestRun
from test_utils import os_utils
from test_utils.os_utils import ModuleRemoveMethod
from test_tools.os_tools import unload_kernel_module, load_kernel_module
class CasModule(Enum):
@ -15,12 +14,12 @@ class CasModule(Enum):
def reload_all_cas_modules():
os_utils.unload_kernel_module(CasModule.cache.value, ModuleRemoveMethod.modprobe)
os_utils.load_kernel_module(CasModule.cache.value)
unload_kernel_module(CasModule.cache.value)
load_kernel_module(CasModule.cache.value)
def unload_all_cas_modules():
os_utils.unload_kernel_module(CasModule.cache.value, os_utils.ModuleRemoveMethod.rmmod)
unload_kernel_module(CasModule.cache.value)
def is_cas_management_dev_present():


@ -9,7 +9,7 @@ import os
import re
from core.test_run import TestRun
from test_tools.fs_utils import check_if_directory_exists, find_all_files
from test_tools.fs_tools import check_if_directory_exists, find_all_files
from test_tools.linux_packaging import DebSet, RpmSet


@ -9,13 +9,13 @@ from datetime import timedelta
from string import Template
from textwrap import dedent
from test_tools.fs_utils import (
from test_tools.fs_tools import (
check_if_directory_exists,
create_directory,
write_file,
remove,
)
from test_utils.systemd import reload_daemon
from test_tools.systemctl import reload_daemon
opencas_drop_in_directory = Path("/etc/systemd/system/open-cas.service.d/")
test_drop_in_file = Path("10-modified-timeout.conf")


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -20,9 +20,9 @@ from api.cas.cli import *
from api.cas.core import Core
from core.test_run import TestRun
from storage_devices.device import Device
from test_utils.os_utils import reload_kernel_module
from test_utils.output import CmdException, Output
from test_utils.size import Size, Unit
from test_tools.os_tools import reload_kernel_module
from connection.utils.output import CmdException, Output
from type_def.size import Size, Unit
# casadm commands
@ -48,6 +48,7 @@ def start_cache(
)
_cache_id = str(cache_id) if cache_id is not None else None
_cache_mode = cache_mode.name.lower() if cache_mode else None
output = TestRun.executor.run(
start_cmd(
cache_dev=cache_dev.path,
@ -59,33 +60,71 @@ def start_cache(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to start cache.", output)
return Cache(cache_dev)
if not _cache_id:
from api.cas.casadm_parser import get_caches
cache_list = get_caches()
attached_cache_list = [cache for cache in cache_list if cache.cache_device is not None]
# Compare device paths of existing caches to find the one just created.
# Needed when cache_id was not passed in the CLI command.
new_cache = next(
cache for cache in attached_cache_list if cache.cache_device.path == cache_dev.path
)
_cache_id = new_cache.cache_id
cache = Cache(cache_id=int(_cache_id), device=cache_dev, cache_line_size=_cache_line_size)
TestRun.dut.cache_list.append(cache)
return cache
def load_cache(device: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(load_cmd(cache_dev=device.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load cache.", output)
return Cache(device)
caches_after_load = get_caches()
new_cache = next(
    cache
    for cache in caches_after_load
    if cache.cache_id not in [cache.cache_id for cache in caches_before_load]
)
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
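Both `load_cache` above and `standby_load` later in this file identify the newly created cache the same way: snapshot the running caches before the command, run it, and diff the id sets afterwards. A small sketch of that "snapshot before, diff after" pattern (the function name and plain-int ids are illustrative stand-ins for `get_caches()` results):

```python
# Sketch of the snapshot-diff pattern load_cache uses to identify a newly
# created object when the command itself does not report its id.

def find_new_id(ids_before: list[int], ids_after: list[int]) -> int:
    # exactly one new id is expected; next() raises StopIteration otherwise
    return next(i for i in ids_after if i not in ids_before)

print(find_new_id([1, 2], [1, 2, 5]))  # 5
```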
def attach_cache(cache_id: int, device: Device, force: bool, shortcut: bool = False) -> Output:
def attach_cache(
cache_id: int, device: Device, force: bool = False, shortcut: bool = False
) -> Output:
output = TestRun.executor.run(
attach_cache_cmd(
cache_dev=device.path, cache_id=str(cache_id), force=force, shortcut=shortcut
)
)
if output.exit_code != 0:
raise CmdException("Failed to attach cache.", output)
attached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
attached_cache.cache_device = device
return output
def detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(detach_cache_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@ -93,8 +132,16 @@ def stop_cache(cache_id: int, no_data_flush: bool = False, shortcut: bool = Fals
output = TestRun.executor.run(
stop_cmd(cache_id=str(cache_id), no_data_flush=no_data_flush, shortcut=shortcut)
)
if output.exit_code != 0:
raise CmdException("Failed to stop cache.", output)
TestRun.dut.cache_list = [
cache for cache in TestRun.dut.cache_list if cache.cache_id != cache_id
]
TestRun.dut.core_list = [core for core in TestRun.dut.core_list if core.cache_id != cache_id]
return output
@ -192,7 +239,7 @@ def set_param_promotion(cache_id: int, policy: PromotionPolicy, shortcut: bool =
def set_param_promotion_nhit(
cache_id: int, threshold: int = None, trigger: int = None, shortcut: bool = False
) -> Output:
_threshold = str(threshold) if threshold is not None else None
_trigger = str(trigger) if trigger is not None else None
@ -267,7 +314,7 @@ def get_param_cleaning_acp(
def get_param_promotion(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@ -281,7 +328,7 @@ def get_param_promotion(
def get_param_promotion_nhit(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@ -325,7 +372,11 @@ def add_core(cache: Cache, core_dev: Device, core_id: int = None, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to add core.", output)
return Core(core_dev.path, cache.cache_id)
core = Core(core_dev.path, cache.cache_id)
TestRun.dut.core_list.append(core)
return core
def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool = False) -> Output:
@ -336,6 +387,12 @@ def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to remove core.", output)
TestRun.dut.core_list = [
core
for core in TestRun.dut.core_list
if core.cache_id != cache_id or core.core_id != core_id
]
return output
@ -485,22 +542,41 @@ def standby_init(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to init standby cache.", output)
return Cache(cache_dev)
return Cache(cache_id=cache_id, device=cache_dev)
def standby_load(cache_dev: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(standby_load_cmd(cache_dev=cache_dev.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load standby cache.", output)
return Cache(cache_dev)
raise CmdException("Failed to load cache.", output)
caches_after_load = get_caches()
# compare ids of old and new caches, returning the only one created now
new_cache = next(
cache
for cache in caches_after_load
if cache.cache_id not in [cache.cache_id for cache in caches_before_load]
)
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
def standby_detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(standby_detach_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach standby cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@ -510,6 +586,10 @@ def standby_activate_cache(cache_dev: Device, cache_id: int, shortcut: bool = Fa
)
if output.exit_code != 0:
raise CmdException("Failed to activate standby cache.", output)
activated_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
activated_cache.cache_device = cache_dev
return output


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -26,7 +26,7 @@ class OutputFormat(Enum):
class StatsFilter(Enum):
all = "all"
conf = "configuration"
conf = "config"
usage = "usage"
req = "request"
blk = "block"


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -14,11 +14,12 @@ from typing import List
from api.cas import casadm
from api.cas.cache_config import *
from api.cas.casadm_params import *
from api.cas.core_config import CoreStatus
from api.cas.ioclass_config import IoClass
from api.cas.version import CasVersion
from core.test_run_utils import TestRun
from storage_devices.device import Device
from test_utils.output import CmdException
from connection.utils.output import CmdException
class Stats(dict):
@ -54,12 +55,12 @@ def get_caches() -> list:
def get_cores(cache_id: int) -> list:
from api.cas.core import Core, CoreStatus
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_active(core):
return CoreStatus[core["status"].lower()] == CoreStatus.active
return core["status"] == CoreStatus.active
return [
Core(core["device_path"], core["cache_id"])
@ -68,6 +69,36 @@ def get_cores(cache_id: int) -> list:
]
def get_inactive_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_inactive(core):
return core["status"] == CoreStatus.inactive
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_inactive(core) and core["cache_id"] == cache_id
]
def get_detached_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_detached(core):
return core["status"] == CoreStatus.detached
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_detached(core) and core["cache_id"] == cache_id
]
def get_cas_devices_dict() -> dict:
device_list = list(csv.DictReader(casadm.list_caches(OutputFormat.csv).stdout.split("\n")))
devices = {"caches": {}, "cores": {}, "core_pool": {}}
@ -80,21 +111,21 @@ def get_cas_devices_dict() -> dict:
params = [
("id", cache_id),
("device_path", device["disk"]),
("status", device["status"]),
("status", CacheStatus(device["status"].lower())),
]
devices["caches"][cache_id] = dict([(key, value) for key, value in params])
elif device["type"] == "core":
params = [
("cache_id", cache_id),
("core_id", (int(device["id"]) if device["id"] != "-" else device["id"])),
("device_path", device["disk"]),
("status", device["status"]),
("status", CoreStatus(device["status"].lower())),
("exp_obj", device["device"]),
]
if core_pool:
params.append(("core_pool", device))
devices["core_pool"][device["disk"]] = dict(
[(key, value) for key, value in params]
)
devices["core_pool"][device["disk"]] = dict([(key, value) for key, value in params])
else:
devices["cores"][(cache_id, int(device["id"]))] = dict(
[(key, value) for key, value in params]
@ -205,11 +236,14 @@ def get_io_class_list(cache_id: int) -> list:
return ret
def get_core_info_by_path(core_disk_path) -> dict | None:
def get_core_info_for_cache_by_path(core_disk_path: str, target_cache_id: int) -> dict | None:
output = casadm.list_caches(OutputFormat.csv, by_id_path=True)
reader = csv.DictReader(io.StringIO(output.stdout))
cache_id = -1
for row in reader:
if row["type"] == "core" and row["disk"] == core_disk_path:
if row["type"] == "cache":
cache_id = int(row["id"])
if row["type"] == "core" and row["disk"] == core_disk_path and target_cache_id == cache_id:
return {
"core_id": row["id"],
"core_device": row["disk"],


@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@ -192,7 +192,7 @@ remove_core_help = [
remove_inactive_help = [
r"casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"Usage: casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"Remove inactive core device from cache instance",
r"Options that are valid with --remove-inactive are:",
r"-i --cache-id \<ID\> Identifier of cache instance \<1-16384\>",
@ -285,7 +285,7 @@ standby_help = [
]
zero_metadata_help = [
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]]",
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]",
r"Clear metadata from caching device",
r"Options that are valid with --zero-metadata are:",
r"-d --device \<DEVICE\> Path to device on which metadata would be cleared",


@ -1,13 +1,27 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import re
from connection.utils.output import Output
from core.test_run import TestRun
from test_utils.output import Output
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
attach_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\."
]
load_inactive_core_missing = [
r"WARNING: Can not resolve path to core \d+ from cache \d+\. By-id path will be shown for that "
@ -17,11 +31,18 @@ load_inactive_core_missing = [
start_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device\.",
r"Old metadata found on device",
r"Please load cache metadata using --load option or use --force to",
r" discard on-disk metadata and start fresh cache instance\.",
]
attach_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\.",
]
start_cache_on_already_used_dev = [
r"Error inserting cache \d+",
r"Cache device \'\/dev\/\S+\' is already used as cache\.",
@ -84,11 +105,20 @@ already_cached_core = [
]
remove_mounted_core = [
r"Can\'t remove core \d+ from cache \d+\. Device /dev/cas\d+-\d+ is mounted\!"
r"Can\'t remove core \d+ from cache \d+ due to mounted devices:"
]
remove_mounted_core_kernel = [
r"Error while removing core device \d+ from cache instance \d+",
r"Device opens or mount are pending to this cache",
]
stop_cache_mounted_core = [
r"Error while removing cache \d+",
r"Can\'t stop cache instance \d+ due to mounted devices:"
]
stop_cache_mounted_core_kernel = [
r"Error while stopping cache \d+",
r"Device opens or mount are pending to this cache",
]
@ -224,6 +254,12 @@ malformed_io_class_header = [
unexpected_cls_option = [r"Option '--cache-line-size \(-x\)' is not allowed"]
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
def check_stderr_msg(output: Output, expected_messages, negate=False):
return __check_string_msg(output.stderr, expected_messages, negate)
@ -242,7 +278,7 @@ def __check_string_msg(text: str, expected_messages, negate=False):
msg_ok = False
elif matches and negate:
TestRun.LOGGER.error(
f"Message is incorrect, expected to not find: {msg}\n " f"actual: {text}."
f"Message is incorrect, expected to not find: {msg}\n actual: {text}."
)
msg_ok = False
return msg_ok
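The message lists in this file feed `__check_string_msg`: every regex in the expected list must match somewhere in the command output (or must not, when negated). A condensed sketch of that check, using shortened stand-in patterns rather than the real ones:

```python
import re

# Sketch of the logic behind check_stderr_msg/__check_string_msg: all expected
# patterns must be found in the text; with negate=True, none may be found.

def check_msg(text: str, expected_patterns: list[str], negate: bool = False) -> bool:
    for pattern in expected_patterns:
        found = re.search(pattern, text) is not None
        if found == negate:  # missing when required, or present when forbidden
            return False
    return True

stderr = "Error inserting cache 1\nOld metadata found on device"
print(check_msg(stderr, [r"Error inserting cache \d+", r"Old metadata found"]))  # True
print(check_msg(stderr, [r"Cache device .* is already used"], negate=True))      # True
```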


@ -1,30 +1,24 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from datetime import timedelta
from typing import List
from enum import Enum
from api.cas import casadm
from api.cas.cache_config import SeqCutOffParameters, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_core_info_by_path
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_cas_devices_dict
from api.cas.core_config import CoreStatus
from api.cas.statistics import CoreStats, CoreIoClassStats
from core.test_run_utils import TestRun
from storage_devices.device import Device
from test_tools import fs_utils, disk_utils
from test_utils.os_utils import wait, sync
from test_utils.size import Unit, Size
class CoreStatus(Enum):
empty = 0
active = 1
inactive = 2
detached = 3
from test_tools.fs_tools import Filesystem, ls_item
from test_tools.os_tools import sync
from test_tools.common.wait import wait
from type_def.size import Unit, Size
SEQ_CUTOFF_THRESHOLD_MAX = Size(4194181, Unit.KibiByte)
@ -35,20 +29,35 @@ class Core(Device):
def __init__(self, core_device: str, cache_id: int):
self.core_device = Device(core_device)
self.path = None
self.cache_id = cache_id
core_info = self.__get_core_info()
# "-" is a special case for cores in the core pool
if core_info["core_id"] != "-":
self.core_id = int(core_info["core_id"])
if core_info["exp_obj"] != "-":
Device.__init__(self, core_info["exp_obj"])
self.cache_id = cache_id
self.partitions = []
self.block_size = None
def __get_core_info(self):
return get_core_info_by_path(self.core_device.path)
def __get_core_info(self) -> dict | None:
core_dicts = get_cas_devices_dict()["cores"].values()
# look among attached cores first
core_device = [
core
for core in core_dicts
if core["cache_id"] == self.cache_id and core["device_path"] == self.core_device.path
]
if core_device:
return core_device[0]
def create_filesystem(self, fs_type: disk_utils.Filesystem, force=True, blocksize=None):
# otherwise, look for the device in the core pool
core_pool_dicts = get_cas_devices_dict()["core_pool"].values()
core_pool_device = [
    core for core in core_pool_dicts if core["device_path"] == self.core_device.path
]
return core_pool_device[0] if core_pool_device else None
def create_filesystem(self, fs_type: Filesystem, force=True, blocksize=None):
super().create_filesystem(fs_type, force, blocksize)
self.core_device.filesystem = self.filesystem
@ -76,8 +85,8 @@ class Core(Device):
percentage_val=percentage_val,
)
def get_status(self):
return CoreStatus[self.__get_core_info()["status"].lower()]
def get_status(self) -> CoreStatus:
return self.__get_core_info()["status"]
def get_seq_cut_off_parameters(self):
return get_seq_cut_off_parameters(self.cache_id, self.core_id)
@ -137,7 +146,7 @@ class Core(Device):
def check_if_is_present_in_os(self, should_be_visible=True):
device_in_system_message = "CAS device exists in OS."
device_not_in_system_message = "CAS device does not exist in OS."
item = fs_utils.ls_item(f"{self.path}")
item = ls_item(self.path)
if item is not None:
if should_be_visible:
TestRun.LOGGER.info(device_in_system_message)


@ -0,0 +1,16 @@
#
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum
class CoreStatus(Enum):
empty = "empty"
active = "active"
inactive = "inactive"
detached = "detached"
def __str__(self):
return self.value
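Because the new `CoreStatus` (like the reworked `SeqCutOffPolicy`) uses string values, a raw status column from `casadm` output can be converted with the Enum constructor instead of a name lookup — mirroring the `CoreStatus(device["status"].lower())` call added in `get_cas_devices_dict`. A self-contained copy of the enum with an example conversion:

```python
from enum import Enum

# String-valued status enum: CoreStatus("active") maps a raw casadm status
# string directly to a member, so no CoreStatus[name] indexing is needed.

class CoreStatus(Enum):
    empty = "empty"
    active = "active"
    inactive = "inactive"
    detached = "detached"

    def __str__(self):
        return self.value

status = CoreStatus("Active".lower())
print(status is CoreStatus.active)  # True
print(str(status))                  # active
```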


@ -6,8 +6,8 @@
import re
from test_utils.dmesg import get_dmesg
from test_utils.size import Size, Unit
from test_tools.dmesg import get_dmesg
from type_def.size import Size, Unit
def get_metadata_size_on_device(cache_id: int) -> Size:


@ -7,8 +7,7 @@
from api.cas import casadm_parser
from api.cas.cache_config import CacheMode
from storage_devices.device import Device
from test_tools import fs_utils
from test_tools.fs_tools import remove, write_file
opencas_conf_path = "/etc/opencas/opencas.conf"
@ -34,7 +33,7 @@ class InitConfig:
@staticmethod
def remove_config_file():
fs_utils.remove(opencas_conf_path, force=False)
remove(opencas_conf_path, force=False)
def save_config_file(self):
config_lines = []
@ -47,7 +46,7 @@ class InitConfig:
config_lines.append(CoreConfigLine.header)
for c in self.core_config_lines:
config_lines.append(str(c))
fs_utils.write_file(opencas_conf_path, "\n".join(config_lines), False)
write_file(opencas_conf_path, "\n".join(config_lines), False)
@classmethod
def create_init_config_from_running_configuration(
@ -69,7 +68,7 @@ class InitConfig:
@classmethod
def create_default_init_config(cls):
cas_version = casadm_parser.get_casadm_version()
fs_utils.write_file(opencas_conf_path, f"version={cas_version.base}")
write_file(opencas_conf_path, f"version={cas_version.base}")
class CacheConfigLine:


@ -9,8 +9,9 @@ import os
from core.test_run import TestRun
from api.cas import cas_module
from api.cas.version import get_installed_cas_version
from test_utils import os_utils, git
from test_utils.output import CmdException
from test_tools import git
from connection.utils.output import CmdException
from test_tools.os_tools import is_kernel_module_loaded
def rsync_opencas_sources():
@ -98,7 +99,7 @@ def reinstall_opencas(version: str = ""):
def check_if_installed(version: str = ""):
TestRun.LOGGER.info("Check if Open CAS Linux is installed")
output = TestRun.executor.run("which casadm")
modules_loaded = os_utils.is_kernel_module_loaded(cas_module.CasModule.cache.value)
modules_loaded = is_kernel_module_loaded(cas_module.CasModule.cache.value)
if output.exit_code != 0 or not modules_loaded:
TestRun.LOGGER.info("CAS is not installed")


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -14,11 +14,10 @@ from datetime import timedelta
from packaging import version
from core.test_run import TestRun
from test_tools import fs_utils
from test_utils import os_utils
from test_utils.generator import random_string
from test_tools.fs_tools import write_file
from test_tools.os_tools import get_kernel_version
default_config_file_path = "/tmp/opencas_ioclass.conf"
default_config_file_path = TestRun.TEST_RUN_DATA_PATH + "/opencas_ioclass.conf"
MAX_IO_CLASS_ID = 32
MAX_IO_CLASS_PRIORITY = 255
@ -109,7 +108,7 @@ class IoClass:
ioclass_config_path: str = default_config_file_path,
):
TestRun.LOGGER.info(f"Creating config file {ioclass_config_path}")
fs_utils.write_file(
write_file(
ioclass_config_path, IoClass.list_to_csv(ioclass_list, add_default_rule)
)
@ -167,7 +166,7 @@ class IoClass:
"file_offset",
"request_size",
]
if os_utils.get_kernel_version() >= version.Version("4.13"):
if get_kernel_version() >= version.Version("4.13"):
rules.append("wlth")
rule = random.choice(rules)
@ -178,13 +177,17 @@ class IoClass:
def add_random_params(rule: str):
if rule == "directory":
allowed_chars = string.ascii_letters + string.digits + "/"
rule += f":/{random_string(random.randint(1, 40), allowed_chars)}"
rule += f":/{''.join(random.choices(allowed_chars, k=random.randint(1, 40)))}"
elif rule in ["file_size", "lba", "pid", "file_offset", "request_size", "wlth"]:
rule += f":{Operator(random.randrange(len(Operator))).name}:{random.randrange(1000000)}"
elif rule == "io_class":
rule += f":{random.randrange(MAX_IO_CLASS_PRIORITY + 1)}"
elif rule in ["extension", "process_name", "file_name_prefix"]:
rule += f":{random_string(random.randint(1, 10))}"
allowed_chars = string.ascii_letters + string.digits
rule += f":{''.join(random.choices(allowed_chars, k=random.randint(1, 10)))}"
elif rule == "io_direction":
direction = random.choice(["read", "write"])
rule += f":{direction}"
if random.randrange(2):
rule += "&done"
return rule


@ -10,7 +10,7 @@ from datetime import timedelta
import paramiko
from core.test_run import TestRun
from test_utils.os_utils import wait
from test_tools.common.wait import wait
def check_progress_bar(command: str, progress_bar_expected: bool = True):


@ -1,17 +1,18 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import csv
from datetime import timedelta
from enum import Enum
from typing import List
from api.cas import casadm
from api.cas.casadm_params import StatsFilter
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
class UnitType(Enum):
@ -22,6 +23,7 @@ class UnitType(Enum):
kibibyte = "[KiB]"
gibibyte = "[GiB]"
seconds = "[s]"
byte = "[B]"
def __str__(self):
return self.value
@ -57,6 +59,9 @@ class CacheStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output:\n{stats_dict}")
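The stats classes in this file now follow a consume-and-verify scheme: each section parser deletes the keys it understood from the parsed dict, and any key left over means `casadm` printed a stat the parser does not know about. A tiny sketch of the idea (using `pop` for brevity and a hypothetical two-field section):

```python
# Sketch of the consume-and-verify parsing added to the stats classes: read
# known fields out of the dict, then fail loudly if unknown keys remain
# instead of silently dropping them.

def parse_config(stats: dict) -> dict:
    parsed = {"cache_id": int(stats.pop("Cache Id")), "status": stats.pop("Status")}
    if stats:  # unknown keys remain -> the parser is out of date
        raise ValueError(f"Unknown stat(s) left after parsing: {stats}")
    return parsed

print(parse_config({"Cache Id": "1", "Status": "Running"}))  # {'cache_id': 1, 'status': 'Running'}
```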
def __str__(self):
# stats_list contains all Class.__str__ methods initialized in CacheStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@ -68,6 +73,9 @@ class CacheStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreStats:
def __init__(
@ -92,6 +100,9 @@ class CoreStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output:\n{stats_dict}")
def __str__(self):
# stats_list contains all Class.__str__ methods initialized in CacheStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@ -103,6 +114,9 @@ class CoreStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreIoClassStats:
def __init__(
@ -128,6 +142,9 @@ class CoreIoClassStats:
case StatsFilter.blk:
self.block_stats = BlockStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing command output:\n{stats_dict}")
def __eq__(self, other):
# check if all initialized variable in self(CacheStats) match other(CacheStats)
return [getattr(self, stats_item) for stats_item in self.__dict__] == [
@ -139,6 +156,9 @@ class CoreIoClassStats:
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
return "\n".join(stats_list)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CacheIoClassStats(CoreIoClassStats):
def __init__(
@ -173,12 +193,31 @@ class CacheConfigStats:
self.cache_line_size = parse_value(
value=stats_dict["Cache line size [KiB]"], unit_type=UnitType.kibibyte
)
footprint_prefix = "Metadata Memory Footprint "
footprint_key = next(k for k in stats_dict if k.startswith(footprint_prefix))
self.metadata_memory_footprint = parse_value(
value=stats_dict["Metadata Memory Footprint [MiB]"], unit_type=UnitType.mebibyte
value=stats_dict[footprint_key],
unit_type=UnitType(footprint_key[len(footprint_prefix) :]),
)
self.dirty_for = parse_value(value=stats_dict["Dirty for [s]"], unit_type=UnitType.seconds)
self.status = stats_dict["Status"]
del stats_dict["Cache Id"]
del stats_dict["Cache Size [4KiB Blocks]"]
del stats_dict["Cache Size [GiB]"]
del stats_dict["Cache Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Devices"]
del stats_dict["Inactive Core Devices"]
del stats_dict["Write Policy"]
del stats_dict["Cleaning Policy"]
del stats_dict["Promotion Policy"]
del stats_dict["Cache line size [KiB]"]
del stats_dict[footprint_key]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
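The `footprint_prefix` lookup above exists because `casadm` may print the metadata footprint with different units (for example `[MiB]` or `[GiB]`), so the exact key is discovered by prefix and the unit recovered from its suffix. A stripped-down sketch of that pattern, with `parse_value` replaced by a plain `float` conversion for illustration:

```python
# Sketch of the prefix-based key lookup used for "Metadata Memory Footprint":
# the unit suffix varies, so find the key by prefix and slice the unit off it.

def read_footprint(stats: dict) -> tuple[float, str]:
    prefix = "Metadata Memory Footprint "
    key = next(k for k in stats if k.startswith(prefix))
    unit = key[len(prefix):]  # e.g. "[MiB]"
    return float(stats[key]), unit

print(read_footprint({"Metadata Memory Footprint [MiB]": "41.9"}))  # (41.9, '[MiB]')
```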
def __str__(self):
return (
f"Config stats:\n"
@ -216,10 +255,13 @@ class CacheConfigStats:
and self.status == other.status
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreConfigStats:
def __init__(self, stats_dict):
self.core_id = stats_dict["Core Id"]
self.core_id = int(stats_dict["Core Id"])
self.core_dev = stats_dict["Core Device"]
self.exp_obj = stats_dict["Exported Object"]
self.core_size = parse_value(
@ -232,6 +274,17 @@ class CoreConfigStats:
)
self.seq_cutoff_policy = stats_dict["Seq cutoff policy"]
del stats_dict["Core Id"]
del stats_dict["Core Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Size [4KiB Blocks]"]
del stats_dict["Core Size [GiB]"]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
del stats_dict["Seq cutoff threshold [KiB]"]
del stats_dict["Seq cutoff policy"]
def __str__(self):
return (
f"Config stats:\n"
@ -259,6 +312,9 @@ class CoreConfigStats:
and self.seq_cutoff_policy == other.seq_cutoff_policy
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
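The blocks of `del` statements added across these classes follow a consume-and-delete convention: each parser removes every key it has read, so any key still left in `stats_dict` afterwards is an unrecognized stat. A hedged, generic sketch of that idea (the `consume` helper is hypothetical, not part of the framework):

```python
def consume(stats: dict, key: str):
    # Read a value and drop the key, so leftovers signal unparsed stats.
    value = stats[key]
    del stats[key]
    return value

stats = {"Core Id": "1", "Status": "Active"}
core_id = int(consume(stats, "Core Id"))
status = consume(stats, "Status")
assert not stats  # every key was accounted for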
class IoClassConfigStats:
def __init__(self, stats_dict):
@ -267,6 +323,11 @@ class IoClassConfigStats:
self.eviction_priority = stats_dict["Eviction priority"]
self.max_size = stats_dict["Max size"]
del stats_dict["IO class ID"]
del stats_dict["IO class name"]
del stats_dict["Eviction priority"]
del stats_dict["Max size"]
def __str__(self):
return (
f"Config stats:\n"
@ -286,6 +347,9 @@ class IoClassConfigStats:
and self.max_size == other.max_size
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class UsageStats:
def __init__(self, stats_dict, percentage_val):
@ -307,6 +371,18 @@ class UsageStats:
value=stats_dict[f"Inactive Dirty {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Free {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
if f"Inactive Dirty {unit}" in stats_dict:
del stats_dict[f"Inactive Occupancy {unit}"]
if f"Inactive Clean {unit}" in stats_dict:
del stats_dict[f"Inactive Clean {unit}"]
if f"Inactive Dirty {unit}" in stats_dict:
del stats_dict[f"Inactive Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@ -332,6 +408,9 @@ class UsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class IoClassUsageStats:
def __init__(self, stats_dict, percentage_val):
@ -340,6 +419,11 @@ class IoClassUsageStats:
self.clean = parse_value(value=stats_dict[f"Clean {unit}"], unit_type=unit)
self.dirty = parse_value(value=stats_dict[f"Dirty {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@ -363,15 +447,22 @@ class IoClassUsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStats:
def __init__(self, stats_dict, percentage_val):
unit = UnitType.percentage if percentage_val else UnitType.requests
self.read = RequestStatsChunk(
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.read
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.read,
)
self.write = RequestStatsChunk(
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.write
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.write,
)
self.pass_through_reads = parse_value(
value=stats_dict[f"Pass-Through reads {unit}"], unit_type=unit
@ -386,6 +477,17 @@ class RequestStats:
value=stats_dict[f"Total requests {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.requests]:
for operation in [OperationType.read, OperationType.write]:
del stats_dict[f"{operation} hits {unit}"]
del stats_dict[f"{operation} partial misses {unit}"]
del stats_dict[f"{operation} full misses {unit}"]
del stats_dict[f"{operation} total {unit}"]
del stats_dict[f"Pass-Through reads {unit}"]
del stats_dict[f"Pass-Through writes {unit}"]
del stats_dict[f"Serviced requests {unit}"]
del stats_dict[f"Total requests {unit}"]
def __str__(self):
return (
f"Request stats:\n"
@ -409,6 +511,9 @@ class RequestStats:
and self.requests_total == other.requests_total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStatsChunk:
def __init__(self, stats_dict, percentage_val: bool, operation: OperationType):
@ -440,6 +545,9 @@ class RequestStatsChunk:
and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BlockStats:
def __init__(self, stats_dict, percentage_val):
@ -455,6 +563,12 @@ class BlockStats:
device="exported object",
)
for unit in [UnitType.percentage, UnitType.block_4k]:
for device in ["core", "cache", "exported object"]:
del stats_dict[f"Reads from {device} {unit}"]
del stats_dict[f"Writes to {device} {unit}"]
del stats_dict[f"Total to/from {device} {unit}"]
def __str__(self):
return (
f"Block stats:\n"
@ -470,6 +584,9 @@ class BlockStats:
self.core == other.core and self.cache == other.cache and self.exp_obj == other.exp_obj
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class ErrorStats:
def __init__(self, stats_dict, percentage_val):
@ -482,6 +599,13 @@ class ErrorStats:
)
self.total_errors = parse_value(value=stats_dict[f"Total errors {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.requests]:
for device in ["Core", "Cache"]:
del stats_dict[f"{device} read errors {unit}"]
del stats_dict[f"{device} write errors {unit}"]
del stats_dict[f"{device} total errors {unit}"]
del stats_dict[f"Total errors {unit}"]
def __str__(self):
return (
f"Error stats:\n"
@ -499,6 +623,9 @@ class ErrorStats:
and self.total_errors == other.total_errors
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunk:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@ -517,6 +644,9 @@ class BasicStatsChunk:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunkError:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@ -535,6 +665,9 @@ class BasicStatsChunkError:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
def get_stat_value(stat_dict: dict, key: str):
idx = key.index("[")
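`get_stat_value` splits a stat key on its first bracket; a standalone sketch of that split, assuming a key format like `"Occupancy [4KiB Blocks]"`:

```python
def split_stat_key(key: str):
    # Split "Occupancy [4KiB Blocks]" into ("Occupancy", "4KiB Blocks").
    idx = key.index("[")
    return key[:idx].strip(), key[idx:].strip("[]")
```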
@ -580,10 +713,10 @@ def _get_section_filters(filter: List[StatsFilter], io_class_stats: bool = False
def get_stats_dict(
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None,
):
csv_stats = casadm.print_statistics(
cache_id=cache_id,


@ -6,9 +6,9 @@
import re
from test_utils import git
from test_tools import git
from core.test_run import TestRun
from test_utils.output import CmdException
from connection.utils.output import CmdException
class CasVersion:
@ -43,7 +43,7 @@ class CasVersion:
def get_available_cas_versions():
release_tags = git.get_release_tags()
release_tags = git.get_tags()
versions = [CasVersion.from_git_tag(tag) for tag in release_tags]
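`get_available_cas_versions` now builds versions from all git tags rather than release tags only. Purely as an illustration (the real `CasVersion.from_git_tag` and the repository's tag scheme may differ), a release-style tag can be parsed like this:

```python
import re

def parse_tag(tag: str):
    # Accept tags such as "v24.9" or "v22.6.1"; the scheme is assumed.
    m = re.match(r"v?(\d+)\.(\d+)(?:\.(\d+))?$", tag)
    if m is None:
        raise ValueError(f"not a release tag: {tag}")
    return tuple(int(g) for g in m.groups() if g is not None)
```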

@ -1 +1 @@
Subproject commit acedafb5afec1ff346d97aa9db5486caf8acc032
Subproject commit 1d7589644a95caeaaeda0eeb99a2985d174f68d0


@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -12,9 +12,8 @@ from core.test_run import TestRun
from api.cas import casadm
from storage_devices.disk import DiskType, DiskTypeSet
from api.cas.cache_config import CacheMode
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_utils.size import Size, Unit
from test_tools.fs_tools import Filesystem, remove, create_directory
from type_def.size import Size, Unit
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@ -29,11 +28,10 @@ block_sizes = [1, 2, 4, 5, 8, 16, 32, 64, 128]
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.nand]))
def test_support_different_io_size(cache_mode):
"""
title: OpenCAS supports different IO sizes
description: |
OpenCAS supports IO of size in rage from 512b to 128K
title: Support for different I/O sizes
description: Verify support for I/O of size in range from 512B to 128KiB
pass_criteria:
- No IO errors
- No I/O errors
"""
with TestRun.step("Prepare cache and core devices"):
@ -48,12 +46,12 @@ def test_support_different_io_size(cache_mode):
)
core = cache.add_core(core_disk.partitions[0])
with TestRun.step("Load the default ioclass config file"):
with TestRun.step("Load the default io class config file"):
cache.load_io_class(opencas_ioclass_conf_path)
with TestRun.step("Create a filesystem on the core device and mount it"):
fs_utils.remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
fs_utils.create_directory(path=mountpoint)
remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)


@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -17,12 +17,11 @@ from api.cas.cli_messages import (
)
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools import fs_utils
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem, read_file
from test_utils.filesystem.file import File
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
version_file_path = r"/var/lib/opencas/cas_version"
mountpoint = "/mnt"
@ -31,46 +30,45 @@ mountpoint = "/mnt"
@pytest.mark.CI
def test_cas_version():
"""
title: Test for CAS version
title: Test for version number
description:
Check if CAS print version cmd returns consistent version with version file
Check if version printed by cmd is consistent with version file
pass criteria:
- casadm version command succeeds
- versions from cmd and file in /var/lib/opencas/cas_version are consistent
- Version command succeeds
- Versions from cmd and file in /var/lib/opencas/cas_version are consistent
"""
with TestRun.step("Read cas version using casadm cmd"):
with TestRun.step("Read version using casadm cmd"):
output = casadm.print_version(output_format=OutputFormat.csv)
cmd_version = output.stdout
cmd_cas_versions = [version.split(",")[1] for version in cmd_version.split("\n")[1:]]
with TestRun.step(f"Read cas version from {version_file_path} location"):
file_read = fs_utils.read_file(version_file_path).split("\n")
with TestRun.step(f"Read version from {version_file_path} location"):
file_read = read_file(version_file_path).split("\n")
file_cas_version = next(
(line.split("=")[1] for line in file_read if "CAS_VERSION=" in line)
)
with TestRun.step("Compare cmd and file versions"):
if not all(file_cas_version == cmd_cas_version for cmd_cas_version in cmd_cas_versions):
TestRun.LOGGER.error(f"Cmd and file versions doesn`t match")
TestRun.LOGGER.error("Cmd and file versions don't match")
@pytest.mark.CI
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
def test_negative_start_cache():
"""
title: Test start cache negative on cache device
title: Negative test for starting cache
description:
Check for negative cache start scenarios
Check starting cache using the same device or cache ID twice
pass criteria:
- Cache start succeeds
- Fails to start cache on the same device with another id
- Fails to start cache on another partition with the same id
- Starting cache on the same device with another ID fails
- Starting cache on another partition with the same ID fails
"""
with TestRun.step("Prepare cache device"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
cache_dev_1 = cache_dev.partitions[0]


@ -9,7 +9,7 @@ import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
@pytest.mark.CI


@ -0,0 +1,262 @@
#
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import posixpath
import random
import time
import pytest
from api.cas import casadm_parser, casadm
from api.cas.cache_config import CacheLineSize, CacheMode
from api.cas.cli import attach_cache_cmd
from api.cas.cli_messages import check_stderr_msg, attach_with_existing_metadata
from connection.utils.output import CmdException
from core.test_run import TestRun
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from storage_devices.nullblk import NullBlk
from test_tools.dmesg import clear_dmesg
from test_tools.fs_tools import Filesystem, create_directory, create_random_test_file, \
check_if_directory_exists, remove
from type_def.size import Size, Unit
mountpoint = "/mnt/cas"
test_file_path = f"{mountpoint}/test_file"
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.require_disk("core2", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
def test_attach_device_with_existing_metadata(cache_mode, cache_line_size):
"""
title: Test attaching cache with valid and relevant metadata.
description: |
Attach disk with valid and relevant metadata and verify whether the running configuration
wasn't affected by the values from the old metadata.
pass_criteria:
- no cache crash during attach and detach.
- old metadata doesn't affect running cache.
- no kernel panic
"""
with TestRun.step("Prepare random cache line size and cache mode (different from the tested ones)"):
random_cache_mode = _get_random_uniq_cache_mode(cache_mode)
cache_mode1, cache_mode2 = cache_mode, random_cache_mode
random_cache_line_size = _get_random_uniq_cache_line_size(cache_line_size)
cache_line_size1, cache_line_size2 = cache_line_size, random_cache_line_size
with TestRun.step("Clear dmesg log"):
clear_dmesg()
with TestRun.step("Prepare devices for caches and cores"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(2, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev1 = TestRun.disks["core"]
core_dev2 = TestRun.disks["core2"]
core_dev1.create_partitions([Size(2, Unit.GibiByte)] * 2)
core_dev2.create_partitions([Size(2, Unit.GibiByte)] * 2)
with TestRun.step("Start 2 caches with different parameters and add core to each"):
cache1 = casadm.start_cache(
cache_dev, force=True, cache_line_size=cache_line_size1
)
if cache1.has_volatile_metadata():
pytest.skip("Non-volatile metadata needed to run this test")
for core in core_dev1.partitions:
cache1.add_core(core)
cache2 = casadm.start_cache(
cache_dev2, force=True, cache_line_size=cache_line_size2
)
for core in core_dev2.partitions:
cache2.add_core(core)
cores_in_cache1_before = {
core.core_device.path for core in casadm_parser.get_cores(cache_id=cache1.cache_id)
}
with TestRun.step(f"Set cache modes for caches to {cache_mode1} and {cache_mode2}"):
cache1.set_cache_mode(cache_mode1)
cache2.set_cache_mode(cache_mode2)
with TestRun.step("Stop second cache"):
cache2.stop()
with TestRun.step("Detach first cache device"):
cache1.detach()
with TestRun.step("Try to attach the other cache device to first cache without force flag"):
try:
cache1.attach(device=cache_dev2)
TestRun.fail("Cache attached successfully. "
"Expected: cache attach failure")
except CmdException as exc:
check_stderr_msg(exc.output, attach_with_existing_metadata)
TestRun.LOGGER.info("Cache attach failed as expected")
with TestRun.step("Attach the other cache device to first cache with force flag"):
cache1.attach(device=cache_dev2, force=True)
cores_after_attach = casadm_parser.get_cores(cache_id=cache1.cache_id)
with TestRun.step("Verify if old configuration doesn't affect new cache"):
cores_in_cache1 = {core.core_device.path for core in cores_after_attach}
if cores_in_cache1 != cores_in_cache1_before:
TestRun.fail(
f"After attaching cache device, core list has changed:"
f"\nUsed {cores_in_cache1}"
f"\nShould use {cores_in_cache1_before}."
)
if cache1.get_cache_line_size() == cache_line_size2:
TestRun.fail(
f"After attaching cache device, cache line size changed:"
f"\nUsed {cache_line_size2}"
f"\nShould use {cache_line_size1}."
)
if cache1.get_cache_mode() != cache_mode1:
TestRun.fail(
f"After attaching cache device, cache mode changed:"
f"\nUsed {cache1.get_cache_mode()}"
f"\nShould use {cache_mode1}."
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", [CacheMode.WB, CacheMode.WT])
def test_attach_detach_md5sum(cache_mode):
"""
title: Test for md5sum of file after attach/detach operation.
description: |
Test data integrity after detach/attach operations
pass_criteria:
- CAS doesn't crash during attach and detach.
- md5sums before and after operations match each other
"""
with TestRun.step("Prepare cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(3, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(6, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
core = cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Create a filesystem on the core device and mount it"):
if check_if_directory_exists(mountpoint):
remove(mountpoint, force=True, recursive=True)
create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)
with TestRun.step("Write data to the exported object"):
test_file_main = create_random_test_file(
target_file_path=posixpath.join(mountpoint, "test_file"),
file_size=Size(5, Unit.GibiByte),
)
with TestRun.step("Calculate test file md5sums before detach"):
test_file_md5sum_before = test_file_main.md5sum()
with TestRun.step("Detach cache device"):
cache.detach()
with TestRun.step("Attach different cache device"):
cache.attach(device=cache_dev2, force=True)
with TestRun.step("Calculate cache test file md5sums after cache attach"):
test_file_md5sum_after = test_file_main.md5sum()
with TestRun.step("Compare test file md5sums"):
if test_file_md5sum_before != test_file_md5sum_after:
TestRun.fail(
f"MD5 sums of core before and after do not match.\n"
f"Expected: {test_file_md5sum_before}\n"
f"Actual: {test_file_md5sum_after}"
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
def test_stop_cache_during_attach(cache_mode):
"""
title: Test cache stop during attach.
description: Test for handling concurrent cache attach and stop.
pass_criteria:
- No system crash.
- Stop operation completed successfully.
"""
with TestRun.step("Create null_blk device for cache"):
nullblk = NullBlk.create(size_gb=1500)
with TestRun.step("Prepare cache and core devices"):
cache_dev = nullblk[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Start cache re-attach in background"):
TestRun.executor.run_in_background(
attach_cache_cmd(str(cache.cache_id), cache_dev.path)
)
time.sleep(1)
with TestRun.step("Stop cache"):
cache.stop()
with TestRun.step("Verify if cache stopped"):
caches = casadm_parser.get_caches()
if caches:
TestRun.fail(
"Cache is still running despite stop operation.\n"
"Expected behaviour: cache stopped\n"
"Actual behaviour: cache running"
)
def _get_random_uniq_cache_line_size(cache_line_size) -> CacheLineSize:
return random.choice([c for c in list(CacheLineSize) if c is not cache_line_size])
def _get_random_uniq_cache_mode(cache_mode) -> CacheMode:
return random.choice([c for c in list(CacheMode) if c is not cache_mode])
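The two helpers above share one pattern: choose a random member that differs from the current value. A generic sketch of the same idea, detached from the framework's enums:

```python
import random

def random_other(current, options):
    # Pick any option except the current one (options needs >= 2 items).
    return random.choice([o for o in options if o is not current])

modes = ["WT", "WB", "WA", "WO", "PT"]
assert random_other("WB", modes) != "WB"
```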


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -17,8 +17,8 @@ from api.cas.cache_config import (
)
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from test_utils.size import Size, Unit
from test_utils.os_utils import Udev
from type_def.size import Size, Unit
from test_tools.udev import Udev
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@ -65,10 +65,10 @@ def test_cleaning_policies_in_write_back(cleaning_policy: CleaningPolicy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running CAS cleaner"):
with TestRun.step("Check for running cleaner process"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("CAS cleaner process is not running!")
TestRun.fail("Cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@ -133,10 +133,10 @@ def test_cleaning_policies_in_write_through(cleaning_policy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running CAS cleaner"):
with TestRun.step("Check for running cleaner process"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("CAS cleaner process is not running!")
TestRun.fail("Cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@ -193,12 +193,12 @@ def set_cleaning_policy_params(cache, cleaning_policy):
if current_acp_params.wake_up_time != acp_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_acp_params.wake_up_time}, "
f"Wake up time is {current_acp_params.wake_up_time}, "
f"should be {acp_params.wake_up_time}\n"
)
if current_acp_params.flush_max_buffers != acp_params.flush_max_buffers:
failed_params += (
f"Flush Max Buffers is {current_acp_params.flush_max_buffers}, "
f"Flush max buffers is {current_acp_params.flush_max_buffers}, "
f"should be {acp_params.flush_max_buffers}\n"
)
TestRun.LOGGER.error(f"ACP parameters did not switch properly:\n{failed_params}")
@ -215,22 +215,22 @@ def set_cleaning_policy_params(cache, cleaning_policy):
failed_params = ""
if current_alru_params.wake_up_time != alru_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_alru_params.wake_up_time}, "
f"Wake up time is {current_alru_params.wake_up_time}, "
f"should be {alru_params.wake_up_time}\n"
)
if current_alru_params.staleness_time != alru_params.staleness_time:
failed_params += (
f"Staleness Time is {current_alru_params.staleness_time}, "
f"Staleness time is {current_alru_params.staleness_time}, "
f"should be {alru_params.staleness_time}\n"
)
if current_alru_params.flush_max_buffers != alru_params.flush_max_buffers:
failed_params += (
f"Flush Max Buffers is {current_alru_params.flush_max_buffers}, "
f"Flush max buffers is {current_alru_params.flush_max_buffers}, "
f"should be {alru_params.flush_max_buffers}\n"
)
if current_alru_params.activity_threshold != alru_params.activity_threshold:
failed_params += (
f"Activity Threshold is {current_alru_params.activity_threshold}, "
f"Activity threshold is {current_alru_params.activity_threshold}, "
f"should be {alru_params.activity_threshold}\n"
)
TestRun.LOGGER.error(f"ALRU parameters did not switch properly:\n{failed_params}")
@ -245,9 +245,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.alru:
if core_writes_before_wait_for_cleaning != Size.zero():
TestRun.LOGGER.error(
"CAS cleaner started to clean dirty data right after IO! "
"Cleaner process started to clean dirty data right after I/O! "
"According to ALRU parameters set in this test cleaner should "
"wait 10 seconds after IO before cleaning dirty data"
"wait 10 seconds after I/O before cleaning dirty data"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(
@ -266,9 +266,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.acp:
if core_writes_before_wait_for_cleaning == Size.zero():
TestRun.LOGGER.error(
"CAS cleaner did not start cleaning dirty data right after IO! "
"Cleaner process did not start cleaning dirty data right after I/O! "
"According to ACP policy cleaner should start "
"cleaning dirty data right after IO"
"cleaning dirty data right after I/O"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(


@ -1,21 +1,22 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from time import sleep
import pytest
from api.cas import casadm, casadm_parser, cli
from api.cas.cache_config import CacheMode, CleaningPolicy, CacheModeTrait, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@ -23,10 +24,10 @@ from test_utils.size import Size, Unit
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.hdd4k]))
def test_concurrent_cores_flush(cache_mode: CacheMode):
"""
title: Fail to flush two cores simultaneously.
title: Flush two cores simultaneously - negative.
description: |
CAS should return an error on attempt to flush second core if there is already
one flush in progress.
Validate that the attempt to flush another core when there is already one flush in
progress on the same cache will fail.
pass_criteria:
- No system crash.
- First core flushing should finish successfully.
@ -39,7 +40,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
core_dev = TestRun.disks["core"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev.create_partitions([Size(5, Unit.GibiByte)] * 2)
core_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
cache_part = cache_dev.partitions[0]
core_part1 = core_dev.partitions[0]
@ -48,7 +49,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_part, cache_mode, force=True)
with TestRun.step(f"Add both core devices to cache"):
with TestRun.step("Add both core devices to cache"):
core1 = cache.add_core(core_part1)
core2 = cache.add_core(core_part2)
@ -56,37 +57,34 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Run concurrent fio on both cores"):
fio_pids = []
with TestRun.step("Run fio on both cores"):
data_per_core = cache.size / 2
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.size(data_per_core)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
for core in [core1, core2]:
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.target(core.path)
.size(core.size)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
fio_pid = fio.run_in_background()
fio_pids.append(fio_pid)
for fio_pid in fio_pids:
if not TestRun.executor.check_if_process_exists(fio_pid):
TestRun.fail("Fio failed to start")
with TestRun.step("Wait for fio to finish"):
for fio_pid in fio_pids:
while TestRun.executor.check_if_process_exists(fio_pid):
sleep(1)
fio.add_job().target(core.path)
fio.run()
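The refactor above replaces one background fio process per core with a single fio invocation that carries shared global options and one job per target. A hedged sketch of the command shape this produces (the option names are standard fio flags; the builder function itself is hypothetical):

```python
def build_fio_cmd(targets, size, bs="4M", rw="write"):
    # Global options apply to all jobs; each job only sets its target.
    global_opts = f"--ioengine=libaio --direct=1 --size={size} --bs={bs} --rw={rw}"
    jobs = " ".join(f"--name=job{i} --filename={t}" for i, t in enumerate(targets))
    return f"fio {global_opts} {jobs}"
```

Running one command instead of several background processes also removes the need to poll for fio PIDs, which is why the `fio_pids` bookkeeping disappears in this diff.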
with TestRun.step("Check if both cores contain dirty blocks"):
if core1.get_dirty_blocks() == Size.zero():
TestRun.fail("The first core does not contain dirty blocks")
if core2.get_dirty_blocks() == Size.zero():
TestRun.fail("The second core does not contain dirty blocks")
core2_dirty_blocks_before = core2.get_dirty_blocks()
required_dirty_data = (
(data_per_core * 0.9).align_down(Unit.Blocks4096.value).set_unit(Unit.Blocks4096)
)
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data < required_dirty_data:
TestRun.fail(f"Core {core1.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, actual {core1_dirty_data}.")
core2_dirty_data_before = core2.get_dirty_blocks()
if core2_dirty_data_before < required_dirty_data:
TestRun.fail(f"Core {core2.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, "
f"actual {core2_dirty_data_before}.")
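The 90% threshold above is aligned down to a 4 KiB boundary before the comparison. In plain integers that step looks like the following (the `align_down` semantics are assumed from the framework's `Size` type):

```python
def align_down(value: int, alignment: int) -> int:
    # Round down to the nearest multiple of `alignment`.
    return value - (value % alignment)

data_per_core = 1024 ** 3  # 1 GiB, illustrative
required = align_down(int(data_per_core * 0.9), 4096)
assert required % 4096 == 0
```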
with TestRun.step("Start flushing the first core in background"):
output_pid = TestRun.executor.run_in_background(
@ -104,7 +102,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
pass
with TestRun.step(
"Wait until first core reach 40% flush and start flush operation on the second core"
"Wait until first core reaches 40% flush and start flush operation on the second core"
):
percentage = 0
while percentage < 40:
@ -131,18 +129,20 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
except CmdException:
TestRun.LOGGER.info("The first core is not flushing dirty data anymore")
with TestRun.step("Check number of dirty data on both cores"):
if core1.get_dirty_blocks() > Size.zero():
with TestRun.step("Check the size of dirty data on both cores"):
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data > Size.zero():
TestRun.LOGGER.error(
"The quantity of dirty cache lines on the first core "
"after completed flush should be zero"
"There should not be any dirty data on the first core after completed flush.\n"
f"Dirty data: {core1_dirty_data}."
)
core2_dirty_blocks_after = core2.get_dirty_blocks()
if core2_dirty_blocks_before != core2_dirty_blocks_after:
core2_dirty_data_after = core2.get_dirty_blocks()
if core2_dirty_data_after != core2_dirty_data_before:
TestRun.LOGGER.error(
"The quantity of dirty cache lines on the second core "
"after failed flush should not change"
"Dirty data on the second core after failed flush should not change.\n"
f"Dirty data before flush: {core2_dirty_data_before}, "
f"after: {core2_dirty_data_after}"
)
@ -151,9 +151,9 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_concurrent_caches_flush(cache_mode: CacheMode):
"""
title: Success to flush two caches simultaneously.
title: Flush multiple caches simultaneously.
description: |
CAS should successfully flush multiple caches if there is already other flush in progress.
Check flushing of multiple caches when another flush is already in progress.
pass_criteria:
- No system crash.
- Flush for each cache should finish successfully.
@ -178,28 +178,29 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step(f"Add core devices to caches"):
with TestRun.step("Add cores to caches"):
cores = [cache.add_core(core_dev=core_dev.partitions[i]) for i, cache in enumerate(caches)]
with TestRun.step("Run fio on all cores"):
fio_pids = []
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(cache.size)
.read_write(ReadWrite.write)
.direct(1)
)
for core in cores:
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(core.size)
.read_write(ReadWrite.write)
.direct(1)
)
fio_pids.append(fio.run_in_background())
fio.add_job().target(core)
fio.run()
with TestRun.step("Check if each cache is full of dirty blocks"):
for cache in caches:
if not cache.get_dirty_blocks() != core.size:
TestRun.fail(f"The cache {cache.cache_id} does not contain dirty blocks")
cache_stats = cache.get_statistics(stat_filter=[StatsFilter.usage], percentage_val=True)
if cache_stats.usage_stats.dirty < 90:
TestRun.fail(f"Cache {cache.cache_id} should contain at least 90% of dirty data, "
f"actual dirty data: {cache_stats.usage_stats.dirty}%")
with TestRun.step("Start flush operation on all caches simultaneously"):
flush_pids = [
@ -214,8 +215,9 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
with TestRun.step("Check number of dirty data on each cache"):
for cache in caches:
if cache.get_dirty_blocks() > Size.zero():
dirty_blocks = cache.get_dirty_blocks()
if dirty_blocks > Size.zero():
TestRun.LOGGER.error(
f"The quantity of dirty cache lines on the cache "
f"{str(cache.cache_id)} after complete flush should be zero"
f"The quantity of dirty data on cache {cache.cache_id} after complete "
f"flush should be zero, is: {dirty_blocks.set_unit(Unit.Blocks4096)}"
)
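A minimal sketch of the simultaneous-flush pattern, using a thread pool in place of the framework's background executor (the `flush_cache` method name mirrors the test API and is assumed here):

```python
from concurrent.futures import ThreadPoolExecutor

def flush_all(caches):
    # Start a flush on every cache concurrently and wait for all of them to finish.
    with ThreadPoolExecutor(max_workers=len(caches)) as executor:
        futures = [executor.submit(cache.flush_cache) for cache in caches]
        return [future.result() for future in futures]
```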


@ -5,15 +5,14 @@
#
import random
import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeLowerThan, DiskTypeSet
from test_tools.disk_utils import Filesystem
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.fs_tools import Filesystem
from connection.utils.output import CmdException
from type_def.size import Size, Unit
mount_point = "/mnt/cas"
cores_amount = 3


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -15,8 +15,9 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, VerifyMethod
from test_utils.os_utils import Udev, sync
from test_utils.size import Size, Unit
from test_tools.os_tools import sync
from test_tools.udev import Udev
from type_def.size import Size, Unit
io_size = Size(10000, Unit.Blocks4096)
@ -45,7 +46,7 @@ def test_cache_stop_and_load(cache_mode):
"""
title: Test for stopping and loading cache back with dynamic cache mode switching.
description: |
Validate the ability of the CAS to switch cache modes at runtime and
Validate the ability to switch cache modes at runtime and
check if all of them are working properly after switching and
after stopping and reloading cache back.
Check also other parameters consistency after reload.
@ -137,10 +138,8 @@ def test_cache_stop_and_load(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mode):
"""
title: Test for dynamic cache mode switching during IO.
description: |
Validate the ability of CAS to switch cache modes
during working IO on CAS device.
title: Test for dynamic cache mode switching during I/O.
description: Validate the ability to switch cache modes during I/O on exported object.
pass_criteria:
- Cache mode is switched without errors.
"""
@ -181,7 +180,7 @@ def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mo
):
cache.set_cache_mode(cache_mode=cache_mode_2, flush=flush)
with TestRun.step(f"Check if cache mode has switched properly during IO"):
with TestRun.step("Check if cache mode has switched properly during I/O"):
cache_mode_after_switch = cache.get_cache_mode()
if cache_mode_after_switch != cache_mode_2:
TestRun.fail(
@ -228,7 +227,7 @@ def run_io_and_verify(cache, core, io_mode):
):
TestRun.fail(
"Write-Back cache mode is not working properly! "
"There should be some writes to CAS device and none to the core"
"There should be some writes to exported object and none to the core"
)
case CacheMode.PT:
if (


@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -9,7 +9,7 @@ import pytest
from api.cas import casadm, cli, cli_messages
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@ -18,11 +18,11 @@ def test_remove_multilevel_core():
"""
title: Test of the ability to remove a core used in a multilevel cache.
description: |
Negative test if OpenCAS does not allow to remove a core when the related exported object
Negative test for removing a core when the related exported object
is used as a core device for another cache instance.
pass_criteria:
- No system crash.
- OpenCAS does not allow removing a core used in a multilevel cache instance.
- Removing a core used in a multilevel cache instance is forbidden.
"""
with TestRun.step("Prepare cache and core devices"):


@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -21,12 +21,12 @@ from api.cas.casadm_params import StatsFilter
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskTypeLowerThan, DiskType
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_utils.os_utils import Udev
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.udev import Udev
from connection.utils.output import CmdException
from type_def.size import Size, Unit
random_thresholds = random.sample(range(1028, 1024**2, 4), 3)
random_stream_numbers = random.sample(range(2, 128), 3)
@ -57,7 +57,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache in Write-Back"):
with TestRun.step(f"Start cache in Write-Back cache mode"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
cache = casadm.start_cache(cache_disk, CacheMode.WB, force=True)
@ -105,7 +105,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step(
"Write random number of 4k block requests to each stream and check if all "
"writes were sent in pass-through mode"
"writes were sent in pass-through"
):
core_statistics_before = core.get_statistics([StatsFilter.req, StatsFilter.blk])
random.shuffle(offsets)
@ -170,7 +170,7 @@ def test_multistream_seq_cutoff_stress_raw(streams_seq_rand):
with TestRun.step("Reset core statistics counters"):
core.reset_counters()
with TestRun.step("Run FIO on core device"):
with TestRun.step("Run fio on core device"):
stream_size = min(core_disk.size / 256, Size(256, Unit.MebiByte))
sequential_streams = streams_seq_rand[0]
random_streams = streams_seq_rand[1]
@ -216,12 +216,14 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
- No system crash
"""
with TestRun.step(f"Disable udev"):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Create filesystem on core device"):
with TestRun.step("Prepare cache and core devices"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
with TestRun.step("Create filesystem on core device"):
core_disk.create_filesystem(filesystem)
with TestRun.step("Start cache and add core"):
@ -231,7 +233,7 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
with TestRun.step("Mount core"):
core.mount(mount_point=mount_point)
with TestRun.step(f"Set seq-cutoff policy to always and threshold to 20MiB"):
with TestRun.step("Set sequential cutoff policy to always and threshold to 20MiB"):
core.set_seq_cutoff_policy(policy=SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold=Size(20, Unit.MebiByte))
@ -279,7 +281,7 @@ def run_dd(target_path, count, seek):
TestRun.LOGGER.info(f"dd command:\n{dd}")
output = dd.run()
if output.exit_code != 0:
raise CmdException("Error during IO", output)
raise CmdException("Error during I/O", output)
def check_statistics(stats_before, stats_after, expected_pt_writes, expected_writes_to_cache):


@ -0,0 +1,263 @@
#
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import math
import random
import pytest
from api.cas import casadm
from api.cas.cache_config import SeqCutOffPolicy, CleaningPolicy, PromotionPolicy, \
PromotionParametersNhit, CacheMode
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_tools.dd import Dd
from test_tools.udev import Udev
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_threshold():
"""
title: Functional test for promotion policy nhit - threshold
description: |
Test checking if data is cached only after the number of hits to a given cache line
reaches the specified promotion nhit threshold.
pass_criteria:
- Promotion policy and hit parameters are set properly
- Data is cached only after the number of hits to a given cache line reaches the
threshold parameter
- Data is written in pass-through before the number of hits to a given cache line
reaches the threshold parameter
- After reaching the specified number of hits to a given cache line, writes to other
cache lines are handled in pass-through
"""
random_thresholds = random.sample(range(2, 1000), 10)
additional_writes_count = 10
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=5, unit=Unit.GibiByte)])
core_device.create_partitions([Size(value=10, unit=Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
for iteration, threshold in enumerate(
TestRun.iteration(
random_thresholds,
"Set and validate nhit promotion policy threshold"
)
):
with TestRun.step(f"Set threshold to {threshold} and trigger to 0%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=0
)
)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step(
"Run dd and check if number of writes to cache and writes to core increase "
"accordingly to nhit parameters"
):
# dd_seek is calculated so that a different part of the cache is used in each iteration
dd_seek = int(
cache.size.get_value(Unit.Blocks4096) // len(random_thresholds) * iteration
)
for count in range(1, threshold + additional_writes_count):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(dd_seek) \
.run()
if count < threshold:
expected_writes_to_cache = Size.zero()
expected_writes_to_core = Size(count, Unit.Blocks4096)
else:
expected_writes_to_cache = Size(count - threshold + 1, Unit.Blocks4096)
expected_writes_to_core = Size(threshold - 1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
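The expected-statistics branches above reduce to a small model of nhit accounting: the first `threshold - 1` writes to a cache line go to the core in pass-through, and every write from the threshold-th onward is cached. A sketch of that model (helper name is illustrative, not part of the test API):

```python
def expected_nhit_stats(count, threshold):
    # Returns (writes to cache, writes to core), in 4 KiB blocks, after `count`
    # single-block writes to the same cache line under nhit with `threshold`.
    if count < threshold:
        return 0, count
    return count - threshold + 1, threshold - 1
```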
with TestRun.step("Write to other cache line and check if it was handled in pass-through"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(int(dd_seek + Unit.Blocks4096.value)) \
.run()
expected_writes_to_core = expected_writes_to_core + Size(1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_trigger():
"""
title: Functional test for promotion policy nhit - trigger
description: |
Test checking if data is cached according to the nhit threshold parameter only after
reaching the cache occupancy specified by the nhit trigger value
pass_criteria:
- Promotion policy and hit parameters are set properly
- Data is cached according to the nhit threshold parameter only after reaching
the cache occupancy specified by the nhit trigger value
- Data is cached without nhit policy before reaching the trigger
"""
random_triggers = random.sample(range(0, 100), 10)
threshold = 2
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=50, unit=Unit.MebiByte)])
core_device.create_partitions([Size(value=100, unit=Unit.MebiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
for trigger in TestRun.iteration(
random_triggers,
"Validate nhit promotion policy trigger"
):
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB, force=True)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
with TestRun.step(f"Set threshold to {threshold} and trigger to {trigger}%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=trigger
)
)
with TestRun.step(f"Run dd to fill {trigger}% of cache size with data"):
blocks_count = math.ceil(cache.size.get_value(Unit.Blocks4096) * trigger / 100)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(blocks_count) \
.seek(0) \
.run()
with TestRun.step("Check if all written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Write to free cached volume sectors"):
free_seek = blocks_count + 1
pt_blocks_count = int(cache.size.get_value(Unit.Blocks4096) - blocks_count)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was written in pass-through"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Write to recently written sectors one more time"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count + pt_blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Stop cache"):
cache.stop(no_data_flush=True)
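The fill step above writes just enough data to reach the trigger occupancy before nhit takes effect; the block count it computes can be sketched as:

```python
import math

def trigger_fill_blocks(cache_blocks, trigger_pct):
    # Number of 4 KiB blocks needed to reach the given occupancy percentage,
    # rounded up as in the test.
    return math.ceil(cache_blocks * trigger_pct / 100)
```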
def check_statistics(cache, expected_writes_to_cache, expected_writes_to_core):
cache_stats = cache.get_statistics()
writes_to_cache = cache_stats.block_stats.cache.writes
writes_to_core = cache_stats.block_stats.core.writes
if writes_to_cache != expected_writes_to_cache:
TestRun.LOGGER.error(
f"Number of writes to cache should be "
f"{expected_writes_to_cache.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_cache.get_value(Unit.Blocks4096)}")
if writes_to_core != expected_writes_to_core:
TestRun.LOGGER.error(
f"Number of writes to core should be: "
f"{expected_writes_to_core.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_core.get_value(Unit.Blocks4096)}")


@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -15,8 +15,9 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, CpusAllowedPolicy
from test_utils.os_utils import Udev, sync, get_dut_cpu_physical_cores
from test_utils.size import Size, Unit
from test_tools.os_tools import sync, get_dut_cpu_physical_cores
from test_tools.udev import Udev
from type_def.size import Size, Unit
class VerifyType(Enum):
@ -39,15 +40,14 @@ class VerifyType(Enum):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
title: Functional sequential cutoff test with multiple cores
description: |
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO on 3 out of 4
cores and random IO against the last core, is correct.
Test checking if data is cached properly with the sequential cutoff "always" policy
when sequential and random I/O is issued to multiple cores.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
with sequential cutoff threshold for the first three cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
"""
with TestRun.step("Prepare cache and core devices"):
@ -75,7 +75,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step("Set sequential cut-off parameters for all cores"):
with TestRun.step("Set sequential cutoff parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@ -95,7 +95,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against first three cores"):
with TestRun.step("Prepare sequential I/O against first three cores"):
block_size = Size(4, Unit.KibiByte)
fio = Fio().create_command().io_engine(IoEngine.libaio).block_size(block_size).direct(True)
@ -106,7 +106,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
fio_job.target(core.path)
writes_before_list.append(core.get_statistics().block_stats.cache.writes)
with TestRun.step("Prepare random IO against the last core"):
with TestRun.step("Prepare random I/O against the last core"):
fio_job = fio.add_job(f"core_{core_list[-1].core_id}")
fio_job.size(io_sizes_list[-1])
fio_job.read_write(io_type_last)
@ -116,7 +116,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
with TestRun.step("Run fio against all cores"):
fio.run()
with TestRun.step("Verify writes to cache count after IO"):
with TestRun.step("Verify writes to cache count after I/O"):
margins = [
min(block_size * (core.get_seq_cut_off_parameters().promotion_count - 1), threshold)
for core, threshold in zip(core_list[:-1], thresholds_list[:-1])
@ -158,17 +158,16 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cache_line_size):
def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
title: Functional sequential cutoff test with multiple cores and CPU-pinned I/O
description: |
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO, pinned,
on 3 out of 4 cores and random IO against the last core, is correct.
Test checking if data is cached properly with the sequential cutoff "always" policy
when sequential and random CPU-pinned I/O is issued to multiple cores.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
with sequential cutoff threshold for the first three cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
"""
with TestRun.step("Partition cache and core devices"):
@ -197,7 +196,7 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step(f"Set sequential cut-off parameters for all cores"):
with TestRun.step("Set sequential cutoff parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@ -217,7 +216,9 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against first three cores"):
with TestRun.step(
"Prepare sequential I/O against first three cores and random I/O against the last one"
):
fio = (
Fio()
.create_command()
@ -243,10 +244,10 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
fio_job.target(core_list[-1].path)
writes_before_list.append(core_list[-1].get_statistics().block_stats.cache.writes)
with TestRun.step("Running IO against all cores"):
with TestRun.step("Running I/O against all cores"):
fio.run()
with TestRun.step("Verifying writes to cache count after IO"):
with TestRun.step("Verifying writes to cache count after I/O"):
for core, writes, threshold, io_size in zip(
core_list[:-1], writes_before_list[:-1], thresholds_list[:-1], io_sizes_list[:-1]
):
@ -281,16 +282,14 @@ def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cach
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
"""
title: Sequential cut-off tests for writes and reads for 'never', 'always' and 'full' policies
title: Functional test for sequential cutoff threshold parameter
description: |
Testing if amount of data written to cache after sequential writes and reads for different
sequential cut-off policies with cache configured with different cache line size
is valid for sequential cut-off threshold parameter, assuming that cache occupancy
doesn't reach 100% during test.
Check if data is cached properly according to the sequential cutoff policy and
threshold parameter.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off parameter in case of 'always' policy.
- Amount of written blocks to cache is at least equal io size in case of 'never' and 'full'
- Amount of blocks written to cache is less than or equal to amount set
with sequential cutoff parameter in case of 'always' policy.
- Amount of blocks written to cache is at least equal to the I/O size in case of 'never' and 'full'
policy.
"""
@ -325,13 +324,13 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cut off policy mode to {policy}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {policy}"):
cache.set_seq_cutoff_policy(policy)
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential IO against core"):
with TestRun.step("Prepare sequential I/O against core"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (
@ -363,16 +362,15 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
"""
title: Sequential cut-off tests during writes and reads on full cache for 'full' policy
title: Functional test for sequential cutoff threshold parameter and 'full' policy
description: |
Testing if amount of data written to cache after sequential io against fully occupied
cache for 'full' sequential cut-off policy with cache configured with different cache
line sizes is valid for sequential cut-off threshold parameter.
Check if data is cached properly according to the sequential cutoff 'full' policy and given
threshold parameter.
pass_criteria:
- Amount of written blocks to cache is big enough to fill cache when 'never' sequential
cut-off policy is set
cutoff policy is set
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off parameter in case of 'full' policy.
with sequential cutoff parameter in case of 'full' policy.
"""
with TestRun.step("Partition cache and core devices"):
@ -406,10 +404,10 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.never}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.never}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Prepare sequential IO against core"):
with TestRun.step("Prepare sequential I/O against core"):
sync()
fio = (
Fio()
@ -431,13 +429,13 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
f"Cache occupancy is too small: {occupancy_percentage}, expected at least 95%"
)
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.full}"):
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.full}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.full)
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step(f"Running sequential IO ({io_dir})"):
with TestRun.step(f"Running sequential I/O ({io_dir})"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (


@ -1,16 +1,17 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cache_config import CacheMode
from api.cas.cache_config import CacheMode, CacheModeTrait
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.os_utils import Udev
from test_utils.size import Unit, Size
from test_tools.udev import Udev
from type_def.size import Unit, Size
from test_tools.dd import Dd
from test_tools.iostat import IOstatBasic
@ -19,19 +20,17 @@ dd_count = 100
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrize("cache_mode", [CacheMode.WT, CacheMode.WA, CacheMode.WB])
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.InsertRead))
@pytest.mark.CI()
def test_ci_read(cache_mode):
"""
title: Verification test for write mode: write around
description: Verify if write mode: write around, works as expected and cache only reads
and does not cache write
title: Verification test for caching reads in various cache modes
description: Check if reads are properly cached in various cache modes
pass_criteria:
- writes are not cached
- reads are cached
- Reads are cached
"""
with TestRun.step("Prepare partitions"):
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@ -44,7 +43,7 @@ def test_ci_read(cache_mode):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache with cache_mode={cache_mode}"):
with TestRun.step(f"Start cache in {cache_mode} cache mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=cache_mode)
casadm.add_core(cache, core_device)
@ -62,7 +61,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_1 = iostat[0].total_reads
with TestRun.step("Generate cache hits using reads"):
@ -77,7 +76,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat[0].total_reads
with TestRun.step("Stop cache"):
@ -98,7 +97,14 @@ def test_ci_read(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_around_write():
with TestRun.step("Prepare partitions"):
"""
title: Verification test for writes in Write-Around cache mode
description: Validate I/O statistics after writing to exported object in Write-Around cache mode
pass criteria:
- Writes are not cached
- After inserting writes to core, data is read from core and not from cache
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -111,16 +117,16 @@ def test_ci_write_around_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start CAS Linux in Write Around mode"):
with TestRun.step("Start cache in Write-Around mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WA)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Submit writes to exported object"):
@@ -136,11 +142,11 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -156,10 +162,10 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):
@@ -182,14 +188,14 @@ def test_ci_write_around_write():
else:
TestRun.LOGGER.error(f"Writes to cache: {write_cache_delta_1} != 0")
with TestRun.step("Verify that reads propagated to core"):
with TestRun.step("Verify that data was read from core"):
read_core_delta_2 = read_core_2 - read_core_1
if read_core_delta_2 == data_write:
TestRun.LOGGER.info(f"Reads from core: {read_core_delta_2} == {data_write}")
else:
TestRun.LOGGER.error(f"Reads from core: {read_core_delta_2} != {data_write}")
with TestRun.step("Verify that reads did not occur on cache"):
with TestRun.step("Verify that data was not read from cache"):
read_cache_delta_2 = read_cache_2 - read_cache_1
if read_cache_delta_2.value == 0:
TestRun.LOGGER.info(f"Reads from cache: {read_cache_delta_2} == 0")
@@ -202,7 +208,15 @@ def test_ci_write_around_write():
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_through_write():
with TestRun.step("Prepare partitions"):
"""
title: Verification test for Write-Through cache mode
description: |
Validate if reads and writes are cached properly for cache in Write-Through mode
pass criteria:
- Writes are inserted to cache and core
- Reads are not cached
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -215,16 +229,16 @@ def test_ci_write_through_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start CAS Linux in Write Through mode"):
with TestRun.step("Start cache in Write-Through mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WT)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Insert data into the cache using writes"):
@@ -241,11 +255,11 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -262,10 +276,10 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):

@@ -1,69 +1,121 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cas_module import CasModule
from api.cas.cli_messages import check_stderr_msg, attach_not_enough_memory
from connection.utils.output import CmdException
from core.test_run import TestRun
from test_utils.size import Unit
from test_utils.os_utils import (allocate_memory,
disable_memory_affecting_functions,
drop_caches,
get_mem_free,
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from type_def.size import Unit, Size
from test_tools.os_tools import (drop_caches,
is_kernel_module_loaded,
load_kernel_module,
unload_kernel_module,
)
from test_tools.memory import disable_memory_affecting_functions, get_mem_free, allocate_memory, \
get_mem_available, unmount_ramfs
@pytest.mark.os_dependent
def test_insufficient_memory_for_cas_module():
"""
title: Negative test for the ability of CAS to load the kernel module with insufficient memory.
title: Load CAS kernel module with insufficient memory
description: |
Check that the CAS kernel module wont be loaded if enough memory is not available
Negative test for the ability to load the CAS kernel module with insufficient memory.
pass_criteria:
- CAS module cannot be loaded with not enough memory.
- Loading CAS with not enough memory returns error.
- CAS kernel module cannot be loaded with not enough memory.
- Loading CAS kernel module with not enough memory returns error.
"""
with TestRun.step("Disable caching and memory over-committing"):
disable_memory_affecting_functions()
drop_caches()
with TestRun.step("Measure memory usage without OpenCAS module"):
with TestRun.step("Measure memory usage without CAS kernel module"):
if is_kernel_module_loaded(CasModule.cache.value):
unload_kernel_module(CasModule.cache.value)
available_mem_before_cas = get_mem_free()
with TestRun.step("Load CAS module"):
with TestRun.step("Load CAS kernel module"):
load_kernel_module(CasModule.cache.value)
with TestRun.step("Measure memory usage with CAS module"):
with TestRun.step("Measure memory usage with CAS kernel module"):
available_mem_with_cas = get_mem_free()
memory_used_by_cas = available_mem_before_cas - available_mem_with_cas
TestRun.LOGGER.info(
f"OpenCAS module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
f"CAS kernel module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
)
with TestRun.step("Unload CAS module"):
with TestRun.step("Unload CAS kernel module"):
unload_kernel_module(CasModule.cache.value)
with TestRun.step("Allocate memory, leaving not enough memory for CAS module"):
memory_to_leave = get_mem_free() - (memory_used_by_cas * (3 / 4))
allocate_memory(memory_to_leave)
TestRun.LOGGER.info(
f"Memory left for OpenCAS module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
f"Memory left for CAS kernel module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
)
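The allocation above leaves only three quarters of the module's measured footprint free, which guarantees the subsequent load runs short of memory. The arithmetic, sketched with plain byte counts (a simplification; the test uses the framework's Size type):

```python
def memory_to_allocate(free_bytes, module_needs_bytes, fraction=0.75):
    # Pin everything except `fraction` of the module's footprint,
    # so only an insufficient remainder stays free.
    return free_bytes - int(module_needs_bytes * fraction)

# e.g. 1 GB free, module needs 200 MB -> allocate 850 MB, leaving 150 MB free
allocated = memory_to_allocate(1_000_000_000, 200_000_000)
left_free = 1_000_000_000 - allocated
```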
with TestRun.step(
"Try to load OpenCAS module and check if correct error message is printed on failure"
"Try to load CAS kernel module and check if correct error message is printed on failure"
):
output = load_kernel_module(CasModule.cache.value)
if output.stderr and output.exit_code != 0:
TestRun.LOGGER.info(f"Cannot load OpenCAS module as expected.\n{output.stderr}")
TestRun.LOGGER.info(f"Cannot load CAS kernel module as expected.\n{output.stderr}")
else:
TestRun.LOGGER.error("Loading OpenCAS module successfully finished, but should fail.")
TestRun.LOGGER.error("Loading CAS kernel module successfully finished, but should fail.")
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_attach_cache_min_ram():
"""
title: Test attach cache with insufficient memory.
description: |
Check for valid message when attaching cache with insufficient memory.
pass_criteria:
- CAS attach operation fails due to insufficient RAM.
- No system crash.
"""
with TestRun.step("Prepare devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
core_dev = TestRun.disks["core"]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True)
cache.add_core(core_dev)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Set RAM workload"):
disable_memory_affecting_functions()
allocate_memory(get_mem_available() - Size(100, Unit.MegaByte))
with TestRun.step("Try to attach cache"):
try:
TestRun.LOGGER.info(
f"There is {get_mem_available().get_value(Unit.MebiByte):.2f} MiB of available memory left"
)
cache.attach(device=cache_dev2, force=True)
TestRun.LOGGER.error(
f"Cache attached, contrary to expectation. "
f"{get_mem_available()} was enough memory to complete the operation")
except CmdException as exc:
check_stderr_msg(exc.output, attach_not_enough_memory)
with TestRun.step("Unlock RAM memory"):
unmount_ramfs()
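The negative-path structure of this test — attempt the operation, treat success as an error, and validate the stderr text on failure — generalizes to a small helper. The sketch below is illustrative, outside the TF framework; `CmdError` stands in for the framework's `CmdException`, and the sample error string is hypothetical:

```python
class CmdError(Exception):
    """Stand-in for the framework's CmdException (illustrative)."""
    def __init__(self, stderr):
        super().__init__(stderr)
        self.stderr = stderr

def expect_cmd_failure(action, expected_msg):
    """Return True only if `action` fails with the expected stderr text."""
    try:
        action()
    except CmdError as exc:
        return expected_msg in exc.stderr
    return False  # the operation unexpectedly succeeded

def attach_with_low_memory():
    # Hypothetical failure mirroring the attach-without-RAM scenario.
    raise CmdError("Not enough free RAM to attach cache device")
```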

@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -8,14 +8,14 @@ import pytest
import time
from core.test_run_utils import TestRun
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
from api.cas import casadm
from api.cas.cache_config import CacheMode, CleaningPolicy
from test_utils.os_utils import Udev
from test_tools.udev import Udev
@pytest.mark.CI
@@ -23,14 +23,14 @@ from test_utils.os_utils import Udev
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cleaning_policy():
"""
Title: test_cleaning_policy
Title: Basic test for cleaning policy
description: |
The test is to see if dirty data will be removed from the Cache after changing the
cleaning policy from NOP to one that expects a flush.
Verify cleaning behaviour after changing cleaning policy from NOP
to one that expects a flush.
pass_criteria:
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
"""
wait_time = 60

@@ -0,0 +1,61 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""
title: Test for 'help' command.
description: |
Verifies that running command with 'help' param displays correct message for each
available command.
pass_criteria:
- Proper help message is displayed for every command.
- Proper help message is displayed after running command with wrong param.
"""
check_list_cmd = [
(" -S", " --start-cache", start_cache_help),
(None, " --attach-cache", attach_cache_help),
(None, " --detach-cache", detach_cache_help),
(" -T", " --stop-cache", stop_cache_help),
(" -X", " --set-param", set_params_help),
(" -G", " --get-param", get_params_help),
(" -Q", " --set-cache-mode", set_cache_mode_help),
(" -A", " --add-core", add_core_help),
(" -R", " --remove-core", remove_core_help),
(None, " --remove-inactive", remove_inactive_help),
(None, " --remove-detached", remove_detached_help),
(" -L", " --list-caches", list_caches_help),
(" -P", " --stats", stats_help),
(" -Z", " --reset-counters", reset_counters_help),
(" -F", " --flush-cache", flush_cache_help),
(" -C", " --io-class", ioclass_help),
(" -V", " --version", version_help),
# (None, " --standby", standby_help),
(" -H", " --help", help_help),
(None, " --zero-metadata", zero_metadata_help),
]
help = " -H" if shortcut else " --help"
with TestRun.step("Run 'help' for every 'casadm' command and check output"):
for cmds in check_list_cmd:
cmd = cmds[0] if shortcut else cmds[1]
if cmd:
output = TestRun.executor.run("casadm" + cmd + help)
check_stdout_msg(output, cmds[-1])
with TestRun.step("Run 'help' for command that doesn't exist and check output"):
cmd = " -Y" if shortcut else " --yell"
output = TestRun.executor.run("casadm" + cmd + help)
check_stderr_msg(output, unrecognized_stderr)
check_stdout_msg(output, unrecognized_stdout)
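The new table-driven form replaces the long chain of near-identical calls from the old test: each entry carries the short flag (or None when no shortcut exists), the long flag, and the expected help text. The selection logic can be sketched standalone (illustrative names, outside the TF framework):

```python
def build_help_cmds(check_list, shortcut):
    """Return the casadm help invocations for either flag style."""
    help_flag = " -H" if shortcut else " --help"
    cmds = []
    for short, long_opt, _expected in check_list:
        flag = short if shortcut else long_opt
        if flag:  # skip entries with no short form when shortcut is True
            cmds.append("casadm" + flag + help_flag)
    return cmds
```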

@@ -1,127 +0,0 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import re
import pytest
from api.cas import casadm
from api.cas.casadm_params import OutputFormat
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""
title: Test for 'help' command.
description: Test if help for commands displays.
pass_criteria:
- Proper help displays for every command.
"""
TestRun.LOGGER.info("Run 'help' for every 'casadm' command.")
output = casadm.help(shortcut)
check_stdout_msg(output, casadm_help)
output = TestRun.executor.run("casadm" + (" -S" if shortcut else " --start-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, start_cache_help)
output = TestRun.executor.run("casadm" + (" -T" if shortcut else " --stop-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, stop_cache_help)
output = TestRun.executor.run("casadm" + (" -X" if shortcut else " --set-param")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, set_params_help)
output = TestRun.executor.run("casadm" + (" -G" if shortcut else " --get-param")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, get_params_help)
output = TestRun.executor.run("casadm" + (" -Q" if shortcut else " --set-cache-mode")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, set_cache_mode_help)
output = TestRun.executor.run("casadm" + (" -A" if shortcut else " --add-core")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, add_core_help)
output = TestRun.executor.run("casadm" + (" -R" if shortcut else " --remove-core")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, remove_core_help)
output = TestRun.executor.run("casadm" + " --remove-detached"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, remove_detached_help)
output = TestRun.executor.run("casadm" + (" -L" if shortcut else " --list-caches")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, list_caches_help)
output = TestRun.executor.run("casadm" + (" -P" if shortcut else " --stats")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, stats_help)
output = TestRun.executor.run("casadm" + (" -Z" if shortcut else " --reset-counters")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, reset_counters_help)
output = TestRun.executor.run("casadm" + (" -F" if shortcut else " --flush-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, flush_cache_help)
output = TestRun.executor.run("casadm" + (" -C" if shortcut else " --io-class")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, ioclass_help)
output = TestRun.executor.run("casadm" + (" -V" if shortcut else " --version")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, version_help)
output = TestRun.executor.run("casadm" + (" -H" if shortcut else " --help")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, help_help)
output = TestRun.executor.run("casadm" + " --standby"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, standby_help)
output = TestRun.executor.run("casadm" + " --zero-metadata"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, zero_metadata_help)
output = TestRun.executor.run("casadm" + (" -Y" if shortcut else " --yell")
+ (" -H" if shortcut else " --help"))
check_stderr_msg(output, unrecognized_stderr)
check_stdout_msg(output, unrecognized_stdout)
@pytest.mark.parametrize("output_format", OutputFormat)
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_version(shortcut, output_format):
"""
title: Test for 'version' command.
description: Test if version displays.
pass_criteria:
- Proper OCL's components names displays in table with its versions.
"""
TestRun.LOGGER.info("Check OCL's version.")
output = casadm.print_version(output_format, shortcut).stdout
TestRun.LOGGER.info(output)
if not names_in_output(output) or not versions_in_output(output):
TestRun.fail("'Version' command failed.")
def names_in_output(output):
return ("CAS Cache Kernel Module" in output
and "CAS CLI Utility" in output)
def versions_in_output(output):
version_pattern = re.compile(r"(\d){2}\.(\d){2}\.(\d)\.(\d){4}.(\S)")
return len(version_pattern.findall(output)) == 2
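The version check in this (now removed) file relies on exactly two components matching the pattern `xx.yy.z.nnnn.tag`. Against a hypothetical casadm output (the version string below is made up for illustration) it behaves like this:

```python
import re

version_pattern = re.compile(r"(\d){2}\.(\d){2}\.(\d)\.(\d){4}.(\S)")

# Hypothetical output; real version strings follow the same xx.yy.z.nnnn.tag shape.
sample = (
    "CAS Cache Kernel Module  22.03.0.0801.master\n"
    "CAS CLI Utility          22.03.0.0801.master\n"
)
# One match per component line -> the check expects exactly two.
assert len(version_pattern.findall(sample)) == 2
```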

@@ -1,5 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,7 +15,7 @@ def test_cli_help_spelling():
title: Spelling test for 'help' command
description: Validates spelling of 'help' in CLI
pass criteria:
- no spelling mistakes are found
- No spelling mistakes are found
"""
cas_dictionary = os.path.join(TestRun.usr.repo_dir, "test", "functional", "resources")

@@ -1,16 +1,17 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm, casadm_parser
from api.cas import casadm
from core.test_run import TestRun
from test_utils.os_utils import sync
from test_tools.os_tools import sync
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.size import Unit, Size
from type_def.size import Unit, Size
from test_tools.dd import Dd
@@ -19,12 +20,11 @@ from test_tools.dd import Dd
@pytest.mark.parametrize("purge_target", ["cache", "core"])
def test_purge(purge_target):
"""
title: Call purge without and with `--script` switch
description: |
Check if purge is called only when `--script` switch is used.
title: Basic test for purge command
description: Check purge command behaviour with and without '--script' flag
pass_criteria:
- casadm returns an error when `--script` is missing
- cache is wiped when purge command is used properly
- Error returned when '--script' is missing
- Cache is wiped when purge command is used properly
"""
with TestRun.step("Prepare devices"):
cache_device = TestRun.disks["cache"]
@@ -40,7 +40,7 @@ def test_purge(purge_target):
cache = casadm.start_cache(cache_device, force=True)
core = casadm.add_core(cache, core_device)
with TestRun.step("Trigger IO to prepared cache instance"):
with TestRun.step("Trigger I/O to prepared cache instance"):
dd = (
Dd()
.input("/dev/zero")
@@ -78,8 +78,3 @@ def test_purge(purge_target):
if cache.get_statistics().usage_stats.occupancy.get_value() != 0:
TestRun.fail(f"{cache.get_statistics().usage_stats.occupancy.get_value()}")
TestRun.fail(f"Purge {purge_target} should invalidate all cache lines!")
with TestRun.step(
f"Stop cache"
):
casadm.stop_all_caches()

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -13,11 +13,11 @@ from core.test_run import TestRun
from storage_devices.device import Device
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem
from test_utils.filesystem.file import File
from test_utils.os_utils import sync
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.os_tools import sync
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from api.cas.cli_messages import (
check_stderr_msg,
missing_param,
@@ -44,8 +44,8 @@ def test_standby_neg_cli_params():
"""
title: Verifying parameters for starting a standby cache instance
description: |
Try executing the standby init command with required arguments missing or
disallowed arguments present.
Try executing the standby init command with required arguments missing or
disallowed arguments present.
pass_criteria:
- The execution is unsuccessful for all improper argument combinations
- A proper error message is displayed for unsuccessful executions
@@ -120,11 +120,12 @@ def test_activate_neg_cli_params():
-The execution is unsuccessful for all improper argument combinations
-A proper error message is displayed for unsuccessful executions
"""
cache_id = 1
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cache_id = 1
with TestRun.step("Init standby cache"):
cache_dev = Device(cache_device.path)
@@ -201,6 +202,8 @@ def test_standby_neg_cli_management():
- The execution is successful for allowed management commands
- A proper error message is displayed for unsuccessful executions
"""
cache_id = 1
with TestRun.step("Prepare the device for the cache."):
device = TestRun.disks["cache"]
device.create_partitions([Size(500, Unit.MebiByte), Size(500, Unit.MebiByte)])
@@ -208,7 +211,6 @@
core_device = device.partitions[1]
with TestRun.step("Prepare the standby instance"):
cache_id = 1
cache = casadm.standby_init(
cache_dev=cache_device, cache_id=cache_id,
cache_line_size=CacheLineSize.LINE_32KiB, force=True
@@ -272,19 +274,19 @@ def test_start_neg_cli_flags():
"""
title: Blocking standby start command with mutually exclusive flags
description: |
Try executing the standby start command with different combinations of mutually
exclusive flags.
Try executing the standby start command with different combinations of mutually
exclusive flags.
pass_criteria:
- The command execution is unsuccessful for commands with mutually exclusive flags
- A proper error message is displayed
"""
cache_id = 1
cache_line_size = 32
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cache_id = 1
cache_line_size = 32
with TestRun.step("Try to start standby cache with mutually exclusive parameters"):
init_required_params = f' --cache-device {cache_device.path}' \
@@ -327,19 +329,19 @@ def test_activate_without_detach():
"""
title: Activate cache without detach command.
description: |
Try activate passive cache without detach command before activation.
Try to activate passive cache without detach command before activation.
pass_criteria:
- The activation is not possible
- The cache remains in Standby state after unsuccessful activation
- The cache exported object is present after an unsuccessful activation
"""
cache_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Prepare the device for the cache."):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(500, Unit.MebiByte)])
cache_dev = cache_dev.partitions[0]
cache_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Start cache instance."):
cache = casadm.start_cache(cache_dev=cache_dev, cache_id=cache_id)
@@ -390,15 +392,18 @@ def test_activate_without_detach():
@pytest.mark.require_disk("standby_cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
def test_activate_neg_cache_line_size():
"""
title: Blocking cache with mismatching cache line size activation.
description: |
Try restoring cache operations from a replicated cache that was initialized
with different cache line size than the original cache.
pass_criteria:
- The activation is cancelled
- The cache remains in Standby detached state after an unsuccessful activation
- A proper error message is displayed
title: Blocking cache with mismatching cache line size activation.
description: |
Try restoring cache operations from a replicated cache that was initialized
with different cache line size than the original cache.
pass_criteria:
- The activation is cancelled
- The cache remains in Standby detached state after an unsuccessful activation
- A proper error message is displayed
"""
cache_id = 1
active_cls, standby_cls = CacheLineSize.LINE_4KiB, CacheLineSize.LINE_16KiB
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Prepare cache devices"):
active_cache_dev = TestRun.disks["active_cache"]
@@ -407,73 +412,69 @@ def test_activate_neg_cache_line_size():
standby_cache_dev = TestRun.disks["standby_cache"]
standby_cache_dev.create_partitions([Size(500, Unit.MebiByte)])
standby_cache_dev = standby_cache_dev.partitions[0]
cache_id = 1
active_cls, standby_cls = CacheLineSize.LINE_4KiB, CacheLineSize.LINE_16KiB
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Start active cache instance."):
active_cache = casadm.start_cache(cache_dev=active_cache_dev, cache_id=cache_id,
cache_line_size=active_cls)
with TestRun.step("Start active cache instance."):
active_cache = casadm.start_cache(cache_dev=active_cache_dev, cache_id=cache_id,
cache_line_size=active_cls)
with TestRun.step("Create dump file with cache metadata"):
with TestRun.step("Get metadata size"):
dmesg_out = TestRun.executor.run_expect_success("dmesg").stdout
md_size = dmesg.get_metadata_size_on_device(dmesg_out)
with TestRun.step("Get metadata size"):
dmesg_out = TestRun.executor.run_expect_success("dmesg").stdout
md_size = dmesg.get_metadata_size_on_device(dmesg_out)
with TestRun.step("Dump the metadata of the cache"):
dump_file_path = "/tmp/test_activate_corrupted.dump"
md_dump = File(dump_file_path)
md_dump.remove(force=True, ignore_errors=True)
dd_count = int(md_size / Size(1, Unit.MebiByte)) + 1
(
Dd().input(active_cache_dev.path)
.output(md_dump.full_path)
.block_size(Size(1, Unit.MebiByte))
.count(dd_count)
.run()
)
md_dump.refresh_item()
with TestRun.step("Dump the metadata of the cache"):
dump_file_path = "/tmp/test_activate_corrupted.dump"
md_dump = File(dump_file_path)
md_dump.remove(force=True, ignore_errors=True)
dd_count = int(md_size / Size(1, Unit.MebiByte)) + 1
(
Dd().input(active_cache_dev.path)
.output(md_dump.full_path)
.block_size(Size(1, Unit.MebiByte))
.count(dd_count)
.run()
)
md_dump.refresh_item()
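The dd count above is derived as `int(md_size / 1 MiB) + 1`, a simple over-approximation that copies one extra block whenever the metadata size is an exact multiple of the block size. An exact ceiling division, for comparison:

```python
def dump_block_count(md_size_bytes, block_size_bytes=1 << 20):
    # Smallest number of blocks that fully covers the metadata region.
    return (md_size_bytes + block_size_bytes - 1) // block_size_bytes
```

The over-approximation is harmless here since the dump only needs to cover at least the whole metadata region.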
with TestRun.step("Stop cache instance."):
active_cache.stop()
with TestRun.step("Stop cache instance."):
active_cache.stop()
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=standby_cls,
force=True)
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=standby_cls,
force=True)
with TestRun.step("Verify if the cache exported object appeared in the system"):
output = TestRun.executor.run_expect_success(
f"ls -la /dev/ | grep {cache_exp_obj_name}"
)
if output.stdout[0] != "b":
TestRun.fail("The cache exported object is not a block device")
with TestRun.step("Verify if the cache exported object appeared in the system"):
output = TestRun.executor.run_expect_success(
f"ls -la /dev/ | grep {cache_exp_obj_name}"
)
if output.stdout[0] != "b":
TestRun.fail("The cache exported object is not a block device")
with TestRun.step("Detach standby cache instance"):
standby_cache.standby_detach()
with TestRun.step("Detach standby cache instance"):
standby_cache.standby_detach()
with TestRun.step(f"Copy changed metadata to the standby instance"):
Dd().input(md_dump.full_path).output(standby_cache_dev.path).run()
sync()
with TestRun.step(f"Copy changed metadata to the standby instance"):
Dd().input(md_dump.full_path).output(standby_cache_dev.path).run()
sync()
with TestRun.step("Try to activate cache instance"):
with pytest.raises(CmdException) as cmdExc:
output = standby_cache.standby_activate(standby_cache_dev)
if not check_stderr_msg(output, cache_line_size_mismatch):
TestRun.LOGGER.error(
f'Expected error message in format '
f'"{cache_line_size_mismatch[0]}"'
f'Got "{output.stderr}" instead.'
)
assert "Failed to activate standby cache." in str(cmdExc.value)
with TestRun.step("Verify if cache is in standby detached state after failed activation"):
cache_status = standby_cache.get_status()
if cache_status != CacheStatus.standby_detached:
with TestRun.step("Try to activate cache instance"):
with pytest.raises(CmdException) as cmdExc:
output = standby_cache.standby_activate(standby_cache_dev)
if not check_stderr_msg(output, cache_line_size_mismatch):
TestRun.LOGGER.error(
f'Expected Cache state: "{CacheStatus.standby.value}" '
f'Got "{cache_status.value}" instead.'
f'Expected error message in format '
f'"{cache_line_size_mismatch[0]}"'
f'Got "{output.stderr}" instead.'
)
assert "Failed to activate standby cache." in str(cmdExc.value)
with TestRun.step("Verify if cache is in standby detached state after failed activation"):
cache_status = standby_cache.get_status()
if cache_status != CacheStatus.standby_detached:
TestRun.LOGGER.error(
f'Expected Cache state: "{CacheStatus.standby.value}" '
f'Got "{cache_status.value}" instead.'
)
@pytest.mark.CI
@@ -489,17 +490,18 @@ def test_standby_init_with_preexisting_metadata():
- initialize cache without force flag fails and informative error message is printed
- initialize cache with force flag succeeds and passive instance is present in system
"""
cache_line_size = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Prepare device for cache"):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(200, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
with TestRun.step("Start standby cache instance"):
cache = casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_id=cache_id,
force=True,
)
@ -512,7 +514,7 @@ def test_standby_init_with_preexisting_metadata():
standby_init_cmd(
cache_dev=cache_device.path,
cache_id=str(cache_id),
cache_line_size=str(int(cache_line_size.value.value / Unit.KibiByte.value)),
)
)
if not check_stderr_msg(output, start_cache_with_existing_metadata):
@ -524,7 +526,7 @@ def test_standby_init_with_preexisting_metadata():
with TestRun.step("Try initialize cache with force flag"):
casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_id=cache_id,
force=True,
)
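The `cache_line_size` argument passed to `standby_init_cmd` is a whole number of KiB; the conversion used above can be sketched like this. The `Unit`/`CacheLineSize` mirrors below are simplified assumptions, storing plain byte counts rather than the framework's `Size` objects:

```python
from enum import Enum

# Simplified mirrors of the framework's Unit and CacheLineSize (assumption:
# members hold plain byte counts, not Size objects as in type_def.size).
class Unit(Enum):
    KibiByte = 1024

class CacheLineSize(Enum):
    LINE_16KiB = 16 * 1024
    LINE_32KiB = 32 * 1024

def line_size_arg(cache_line_size: CacheLineSize) -> str:
    # The CLI expects the cache line size expressed in whole KiB.
    return str(int(cache_line_size.value / Unit.KibiByte.value))

assert line_size_arg(CacheLineSize.LINE_32KiB) == "32"
assert line_size_arg(CacheLineSize.LINE_16KiB) == "16"
```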
@ -549,12 +551,13 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
- initialize cache without force flag fails and informative error message is printed
- initialize cache with force flag succeeds and passive instance is present in system
"""
cache_line_size = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Prepare device for cache"):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(200, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
with TestRun.step("Create filesystem on cache device partition"):
cache_device.create_filesystem(filesystem)
@ -564,7 +567,7 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
standby_init_cmd(
cache_dev=cache_device.path,
cache_id=str(cache_id),
cache_line_size=str(int(cache_line_size.value.value / Unit.KibiByte.value)),
)
)
if not check_stderr_msg(output, standby_init_with_existing_filesystem):
@ -576,7 +579,7 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
with TestRun.step("Try initialize cache with force flag"):
casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_id=cache_id,
force=True,
)
@ -593,13 +596,18 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
@pytest.mark.require_disk("core", DiskTypeLowerThan("caches"))
def test_standby_activate_with_corepool():
"""
title: Activate standby cache instance with core pool
description: |
Activation of standby cache with core taken from core pool
pass_criteria:
- During activation, metadata on the device matches the metadata in DRAM
- Core is in active state after activation
"""
cache_id = 1
core_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
cache_line_size = CacheLineSize.LINE_16KiB
with TestRun.step("Prepare cache and core devices"):
caches_dev = TestRun.disks["caches"]
caches_dev.create_partitions([Size(500, Unit.MebiByte), Size(500, Unit.MebiByte)])
@ -609,13 +617,8 @@ def test_standby_activate_with_corepool():
core_dev.create_partitions([Size(200, Unit.MebiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start regular cache instance"):
cache = casadm.start_cache(cache_dev=active_cache_dev, cache_line_size=cache_line_size,
cache_id=cache_id)
with TestRun.step("Add core to regular cache instance"):
@ -629,7 +632,7 @@ def test_standby_activate_with_corepool():
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=cache_line_size,
force=True)
with TestRun.step("Copy changed metadata to the standby instance"):
@ -652,12 +655,12 @@ def test_standby_activate_with_corepool():
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
def test_standby_start_stop(cache_line_size):
"""
title: Start and stop a standby cache instance.
description: Test if cache can be started in standby state and stopped without activation.
pass_criteria:
- A cache exported object appears after starting a cache in standby state
- The data written to the cache exported object is committed on the underlying cache device
- The cache exported object disappears after stopping the standby cache instance
"""
with TestRun.step("Prepare a cache device"):
cache_size = Size(500, Unit.MebiByte)


@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -10,7 +11,7 @@ from api.cas import casadm, casadm_parser, cli_messages
from api.cas.cli import start_cmd
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Unit, Size
CACHE_ID_RANGE = (1, 16384)
CORE_ID_RANGE = (0, 4095)
@ -20,12 +21,12 @@ CORE_ID_RANGE = (0, 4095)
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_start_stop_default_id(shortcut):
"""
title: Test for starting a cache with a default ID - short and long command
description: |
Start a new cache with a default ID and then stop this cache.
pass_criteria:
- The cache has successfully started with default ID
- The cache has successfully stopped
"""
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks['cache']
@ -61,12 +62,12 @@ def test_cli_start_stop_default_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_start_stop_custom_id(shortcut):
"""
title: Test for starting a cache with a custom ID - short and long command
description: |
Start a new cache with a random ID (from allowed pool) and then stop this cache.
pass_criteria:
- The cache has successfully started with a custom ID
- The cache has successfully stopped
"""
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks['cache']
@ -105,13 +106,13 @@ def test_cli_start_stop_custom_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_add_remove_default_id(shortcut):
"""
title: Test for adding and removing a core with a default ID - short and long command
description: |
Start a new cache and add a core to it without passing a core ID as an argument
and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a default ID
- The core is successfully removed from the cache
"""
with TestRun.step("Prepare the devices."):
cache_disk = TestRun.disks['cache']
@ -156,13 +157,13 @@ def test_cli_add_remove_default_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_add_remove_custom_id(shortcut):
"""
title: Test for adding and removing a core with a custom ID - short and long command
description: |
Start a new cache and add a core to it with passing a random core ID
(from allowed pool) as an argument and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a custom ID
- The core is successfully removed from the cache
"""
with TestRun.step("Prepare the devices."):
cache_disk = TestRun.disks['cache']
@ -208,13 +209,13 @@ def test_cli_add_remove_custom_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_load_and_force(shortcut):
"""
title: Test if it is possible to use start command with 'load' and 'force' flags at once
description: |
Try to start cache with 'load' and 'force' options at the same time
and check that the command is rejected
pass_criteria:
- Start cache command with both 'force' and 'load' options should fail
- Proper message should be received
"""
with TestRun.step("Prepare cache."):
cache_device = TestRun.disks['cache']


@ -1,5 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -8,14 +9,14 @@ import time
from core.test_run_utils import TestRun
from storage_devices.device import Device
from type_def.size import Size, Unit
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
from api.cas import casadm
from api.cas.cache_config import CacheMode, CleaningPolicy
from test_tools.udev import Udev
@pytest.mark.CI
@ -23,13 +24,16 @@ from test_utils.os_utils import Udev
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cleaning_policy():
"""
title: Test for manual cache and core flushing
description: |
The test is to see if dirty data will be removed from the cache
or core after using the casadm command with the corresponding parameter.
pass_criteria:
- Cache and core are filled with dirty data.
- After cache and core flush, dirty data is cleared.
"""
cache_id = 1
with TestRun.step("Prepare devices."):
cache_disk = TestRun.disks["cache"]
cache_disk.create_partitions([Size(1, Unit.GibiByte)])
@ -38,7 +42,8 @@ def test_cleaning_policy():
core_disk = TestRun.disks["core"]
core_disk.create_partitions([Size(1, Unit.GibiByte)])
core_dev = core_disk.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache and set cleaning policy to NOP"):


@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -14,33 +15,44 @@ from api.cas.casadm import set_param_cutoff_cmd
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_default_params():
"""
title: Default sequential cutoff threshold & policy test
description: Test if proper default threshold and policy are set after cache start
pass_criteria:
- "Full" shall be default sequential cutoff policy
- There shall be default 1MiB (1024kiB) value for sequential cutoff threshold
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_part = core_device.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, force=True)
core = cache.add_core(core_dev=core_part)
with TestRun.step("Getting sequential cutoff parameters"):
params = core.get_seq_cut_off_parameters()
with TestRun.step("Check if proper sequential cutoff policy is set as a default"):
if params.policy != SeqCutOffPolicy.DEFAULT:
TestRun.fail(f"Wrong sequential cutoff policy set: {params.policy} "
f"should be {SeqCutOffPolicy.DEFAULT}")
with TestRun.step("Check if proper sequential cutoff threshold is set as a default"):
if params.threshold != SEQ_CUT_OFF_THRESHOLD_DEFAULT:
TestRun.fail(f"Wrong sequential cutoff threshold set: {params.threshold} "
f"should be {SEQ_CUT_OFF_THRESHOLD_DEFAULT}")
@ -49,32 +61,41 @@ def test_seq_cutoff_default_params():
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_set_get_policy_core(policy):
"""
title: Sequential cutoff policy set/get test for core
description: |
Verify if it is possible to set and get a sequential cutoff policy per core
pass_criteria:
- Sequential cutoff policy obtained from get-param command for the first core must be
the same as the one used in set-param command
- Sequential cutoff policy obtained from get-param command for the second core must be
proper default value
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)] * 2)
cache_part = cache_device.partitions[0]
with TestRun.step("Start cache and add cores"):
cache = casadm.start_cache(cache_part, force=True)
cores = [cache.add_core(core_dev=part) for part in core_device.partitions]
with TestRun.step(f"Setting core sequential cutoff policy mode to {policy}"):
cores[0].set_seq_cutoff_policy(policy)
with TestRun.step("Check if proper sequential cutoff policy was set for the first core"):
if cores[0].get_seq_cut_off_policy() != policy:
TestRun.fail(f"Wrong sequential cutoff policy set: "
f"{cores[0].get_seq_cut_off_policy()} "
f"should be {policy}")
with TestRun.step("Check if proper default sequential cutoff policy was set for the "
"second core"):
if cores[1].get_seq_cut_off_policy() != SeqCutOffPolicy.DEFAULT:
TestRun.fail(f"Wrong default sequential cutoff policy: "
f"{cores[1].get_seq_cut_off_policy()} "
f"should be {SeqCutOffPolicy.DEFAULT}")
@ -84,24 +105,33 @@ def test_seq_cutoff_set_get_policy_core(policy):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_set_get_policy_cache(policy):
"""
title: Sequential cutoff policy set/get test for cache
description: |
Verify if it is possible to set and get a sequential cutoff policy for the whole cache
pass_criteria:
- Sequential cutoff policy obtained from get-param command for each of 3 cores must be the
same as the one used in set-param command for cache
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)] * 3)
cache_part = cache_device.partitions[0]
with TestRun.step("Start cache and add cores"):
cache = casadm.start_cache(cache_part, force=True)
cores = [cache.add_core(core_dev=part) for part in core_device.partitions]
with TestRun.step(f"Setting sequential cutoff policy mode {policy} for cache"):
cache.set_seq_cutoff_policy(policy)
for i in TestRun.iteration(range(0, len(cores)), "Verifying if proper policy was set"):
with TestRun.step(f"Check if proper sequential cutoff policy was set for core"):
if cores[i].get_seq_cut_off_policy() != policy:
TestRun.fail(f"Wrong core sequential cutoff policy: "
f"{cores[i].get_seq_cut_off_policy()} "
f"should be {policy}")
@ -110,23 +140,35 @@ def test_seq_cutoff_set_get_policy_cache(policy):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_policy_load():
"""
title: Sequential cutoff policy set/get test with cache load between
description: |
Set each possible policy for different core, stop cache, test if after cache load
sequential cutoff policy value previously set is being loaded correctly for each core.
pass_criteria:
- Sequential cutoff policy obtained from get-param command after cache load
must be the same as the one used in set-param command before cache stop
- Sequential cutoff policy loaded for the last core should be the default one
"""
policies = [policy for policy in SeqCutOffPolicy]
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)] * (len(SeqCutOffPolicy) + 1))
cache_part = cache_device.partitions[0]
with TestRun.step("Start cache and add cores"):
cache = casadm.start_cache(cache_part, force=True)
cores = [cache.add_core(core_dev=part) for part in core_device.partitions]
for i, core in TestRun.iteration(
enumerate(cores[:-1]),
"Set all possible policies except the default one"
):
with TestRun.step(f"Setting cache sequential cutoff policy mode to "
f"{policies[i]}"):
cores[i].set_seq_cutoff_policy(policies[i])
@ -139,18 +181,21 @@ def test_seq_cutoff_policy_load():
with TestRun.step("Getting cores from loaded cache"):
cores = loaded_cache.get_core_devices()
for i, core in TestRun.iteration(
enumerate(cores[:-1]),
"Check if proper policies have been loaded"
):
with TestRun.step(f"Check if proper sequential cutoff policy was loaded"):
if cores[i].get_seq_cut_off_policy() != policies[i]:
TestRun.fail(f"Wrong sequential cutoff policy loaded: "
f"{cores[i].get_seq_cut_off_policy()} "
f"should be {policies[i]}")
with TestRun.step(
"Check if proper (default) sequential cutoff policy was loaded for last core"
):
if cores[len(SeqCutOffPolicy)].get_seq_cut_off_policy() != SeqCutOffPolicy.DEFAULT:
TestRun.fail(f"Wrong sequential cutoff policy loaded: "
f"{cores[len(SeqCutOffPolicy)].get_seq_cut_off_policy()} "
f"should be {SeqCutOffPolicy.DEFAULT}")
@ -162,29 +207,41 @@ def test_seq_cutoff_policy_load():
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_set_invalid_threshold(threshold):
"""
title: Invalid sequential cutoff threshold test
description: Validate setting invalid sequential cutoff threshold
pass_criteria:
- Setting invalid sequential cutoff threshold should be blocked
"""
_threshold = Size(threshold, Unit.KibiByte)
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_part = core_device.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, force=True)
core = cache.add_core(core_dev=core_part)
with TestRun.step(f"Setting cache sequential cutoff threshold to out of range value: "
f"{_threshold}"):
command = set_param_cutoff_cmd(
cache_id=str(cache.cache_id), core_id=str(core.core_id),
threshold=str(int(_threshold.get_value(Unit.KiloByte))))
output = TestRun.executor.run_expect_fail(command)
if "Invalid sequential cutoff threshold, must be in the range 1-4194181"\
not in output.stderr:
TestRun.fail("Command succeeded (should fail)!")
with TestRun.step(f"Setting cache sequential cutoff threshold "
f"to value passed as a float"):
command = set_param_cutoff_cmd(
cache_id=str(cache.cache_id), core_id=str(core.core_id),
threshold=str(_threshold.get_value(Unit.KiloByte)))
output = TestRun.executor.run_expect_fail(command)
if "Invalid sequential cutoff threshold, must be a correct unsigned decimal integer"\
@ -198,26 +255,36 @@ def test_seq_cutoff_set_invalid_threshold(threshold):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_set_get_threshold(threshold):
"""
title: Sequential cutoff threshold set/get test
description: Verify setting and getting value of sequential cutoff threshold
pass_criteria:
- Sequential cutoff threshold obtained from get-param command must be the same as
the one used in set-param command
"""
_threshold = Size(threshold, Unit.KibiByte)
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_part = core_device.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, force=True)
core = cache.add_core(core_dev=core_part)
with TestRun.step(f"Setting cache sequential cutoff threshold to "
f"{_threshold}"):
core.set_seq_cutoff_threshold(_threshold)
with TestRun.step("Check if proper sequential cutoff threshold was set"):
if core.get_seq_cut_off_threshold() != _threshold:
TestRun.fail(f"Wrong sequential cutoff threshold set: "
f"{core.get_seq_cut_off_threshold()} "
f"should be {_threshold}")
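The valid threshold range comes from the CLI error messages quoted earlier ("must be in the range 1-4194181", "must be a correct unsigned decimal integer"); a minimal sketch of that validation, assuming those two messages describe the full rule:

```python
def is_valid_threshold_kib(value: str) -> bool:
    # Rejects floats, negatives and empty strings, then checks the range
    # quoted by the casadm error message (1-4194181 KiB).
    if not value.isdigit():
        return False
    return 1 <= int(value) <= 4194181

assert is_valid_threshold_kib("1")
assert is_valid_threshold_kib("4194181")
assert not is_valid_threshold_kib("4194182")
assert not is_valid_threshold_kib("1024.0")
assert not is_valid_threshold_kib("-1")
```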
@ -227,22 +294,31 @@ def test_seq_cutoff_set_get_threshold(threshold):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_threshold_load(threshold):
"""
title: Sequential cutoff threshold after loading cache
description: Verify sequential cutoff threshold value after reloading the cache.
pass_criteria:
- Sequential cutoff threshold obtained from get-param command after cache load
must be the same as the one used in set-param command before cache stop
"""
_threshold = Size(threshold, Unit.KibiByte)
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks['cache']
core_device = TestRun.disks['core']
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(1, Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_part = core_device.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, force=True)
core = cache.add_core(core_dev=core_part)
with TestRun.step(f"Setting cache sequential cutoff threshold to "
f"{_threshold}"):
core.set_seq_cutoff_threshold(_threshold)
with TestRun.step("Stopping cache"):
cache.stop()
@ -253,28 +329,8 @@ def test_seq_cutoff_threshold_load(threshold):
with TestRun.step("Getting core from loaded cache"):
cores_load = loaded_cache.get_core_devices()
with TestRun.step("Check if proper sequential cutoff policy was loaded"):
if cores_load[0].get_seq_cut_off_threshold() != _threshold:
TestRun.fail(f"Wrong sequential cutoff threshold set: "
f"{cores_load[0].get_seq_cut_off_threshold()} "
f"should be {_threshold}")


@ -1,5 +1,6 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@ -19,7 +20,7 @@ from api.cas.cache_config import (
)
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Size, Unit
# There should be at least 2 cache instances and 2 cores per cache
@ -35,80 +36,96 @@ number_of_checks = 10
@pytest.mark.parametrizex("cache_mode", CacheMode)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_set_get_seq_cutoff_params(cache_mode):
"""
title: Test for setting and reading sequential cutoff parameters.
description: |
Verify that it is possible to set and read all available sequential cutoff
parameters using casadm --set-param and --get-param options.
pass_criteria:
- All sequential cutoff parameters are set to given values.
- All sequential cutoff parameters displays proper values.
"""
with TestRun.step("Partition cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_parts = [Size(1, Unit.GibiByte)] * caches_count
cache_dev.create_partitions(cache_parts)
core_dev = TestRun.disks["core"]
core_parts = [Size(2, Unit.GibiByte)] * cores_per_cache * caches_count
core_dev.create_partitions(core_parts)
with TestRun.step(
f"Start {caches_count} caches in {cache_mode} cache mode "
f"and add {cores_per_cache} cores per cache"
):
caches = [
casadm.start_cache(part, cache_mode, force=True) for part in cache_dev.partitions
]
cores = [
[
caches[i].add_core(
core_dev.partitions[i * cores_per_cache + j]
) for j in range(cores_per_cache)
] for i in range(caches_count)
]
with TestRun.step("Check sequential cutoff default parameters"):
default_seq_cutoff_params = SeqCutOffParameters.default_seq_cut_off_params()
for i in range(caches_count):
for j in range(cores_per_cache):
check_seqcutoff_parameters(cores[i][j], default_seqcutoff_params)
check_seq_cutoff_parameters(cores[i][j], default_seq_cutoff_params)
with TestRun.step(
"Set new random values for sequential cut-off parameters for one core only"
"Set new random values for sequential cutoff parameters for one core only"
):
for check in range(number_of_checks):
random_seqcutoff_params = new_seqcutoff_parameters_random_values()
cores[0][0].set_seq_cutoff_parameters(random_seqcutoff_params)
random_seq_cutoff_params = new_seq_cutoff_parameters_random_values()
cores[0][0].set_seq_cutoff_parameters(random_seq_cutoff_params)
# Check changed parameters for first core:
check_seqcutoff_parameters(cores[0][0], random_seqcutoff_params)
check_seq_cutoff_parameters(cores[0][0], random_seq_cutoff_params)
# Check default parameters for other cores:
for j in range(1, cores_per_cache):
check_seqcutoff_parameters(cores[0][j], default_seqcutoff_params)
check_seq_cutoff_parameters(cores[0][j], default_seq_cutoff_params)
for i in range(1, caches_count):
for j in range(cores_per_cache):
check_seqcutoff_parameters(cores[i][j], default_seqcutoff_params)
check_seq_cutoff_parameters(cores[i][j], default_seq_cutoff_params)
with TestRun.step(
"Set new random values for sequential cut-off parameters "
"Set new random values for sequential cutoff parameters "
"for all cores within given cache instance"
):
for check in range(number_of_checks):
random_seqcutoff_params = new_seqcutoff_parameters_random_values()
caches[0].set_seq_cutoff_parameters(random_seqcutoff_params)
random_seq_cutoff_params = new_seq_cutoff_parameters_random_values()
caches[0].set_seq_cutoff_parameters(random_seq_cutoff_params)
# Check changed parameters for first cache instance:
for j in range(cores_per_cache):
check_seqcutoff_parameters(cores[0][j], random_seqcutoff_params)
check_seq_cutoff_parameters(cores[0][j], random_seq_cutoff_params)
# Check default parameters for other cache instances:
for i in range(1, caches_count):
for j in range(cores_per_cache):
check_seqcutoff_parameters(cores[i][j], default_seqcutoff_params)
check_seq_cutoff_parameters(cores[i][j], default_seq_cutoff_params)
with TestRun.step(
"Set new random values for sequential cut-off parameters for all cores"
"Set new random values for sequential cutoff parameters for all cores"
):
for check in range(number_of_checks):
seqcutoff_params = []
seq_cutoff_params = []
for i in range(caches_count):
for j in range(cores_per_cache):
random_seqcutoff_params = new_seqcutoff_parameters_random_values()
seqcutoff_params.append(random_seqcutoff_params)
cores[i][j].set_seq_cutoff_parameters(random_seqcutoff_params)
random_seq_cutoff_params = new_seq_cutoff_parameters_random_values()
seq_cutoff_params.append(random_seq_cutoff_params)
cores[i][j].set_seq_cutoff_parameters(random_seq_cutoff_params)
for i in range(caches_count):
for j in range(cores_per_cache):
check_seqcutoff_parameters(
cores[i][j], seqcutoff_params[i * cores_per_cache + j]
check_seq_cutoff_parameters(
cores[i][j], seq_cutoff_params[i * cores_per_cache + j]
)
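The test above pairs each core with its stored parameters through a flat index, `i * cores_per_cache + j`. A minimal, self-contained sketch of that mapping (the names below are illustrative, not from the framework):

```python
# Illustrative sketch: mapping a (cache, core) pair onto a flat list index,
# mirroring the i * cores_per_cache + j addressing used in the test.
caches_count = 2
cores_per_cache = 2

# Flat list of per-core parameter stand-ins, filled cache by cache.
params = [f"params-{i}-{j}" for i in range(caches_count) for j in range(cores_per_cache)]

def flat_index(cache_idx: int, core_idx: int) -> int:
    """Return the position of core (cache_idx, core_idx) in the flat list."""
    return cache_idx * cores_per_cache + core_idx

# Every (i, j) pair maps back to the entry appended for it.
for i in range(caches_count):
    for j in range(cores_per_cache):
        assert params[flat_index(i, j)] == f"params-{i}-{j}"
```

The same i-major ordering must be used when filling and when reading the list, which is why the test appends parameters in the same nested-loop order it later checks them.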
@@ -118,24 +135,36 @@ def test_set_get_seqcutoff_params(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_set_get_cleaning_params(cache_mode, cleaning_policy):
"""
title: Test for setting and reading cleaning parameters.
description: |
Verify that it is possible to set and read all available cleaning
parameters for all cleaning policies using casadm --set-param and
--get-param options.
pass_criteria:
- All cleaning parameters are set to given values.
- All cleaning parameters displays proper values.
title: Test for setting and reading cleaning parameters.
description: |
Verify that it is possible to set and read all available cleaning
parameters for all cleaning policies using casadm --set-param and
--get-param options.
pass_criteria:
- All cleaning parameters are set to given values.
- All cleaning parameters display proper values.
"""
with TestRun.step("Partition cache and core devices"):
cache_dev, core_dev = storage_prepare()
cache_dev = TestRun.disks["cache"]
cache_parts = [Size(1, Unit.GibiByte)] * caches_count
cache_dev.create_partitions(cache_parts)
core_dev = TestRun.disks["core"]
core_parts = [Size(2, Unit.GibiByte)] * cores_per_cache * caches_count
core_dev.create_partitions(core_parts)
with TestRun.step(
f"Start {caches_count} caches in {cache_mode} cache mode "
f"and add {cores_per_cache} cores per cache"
):
caches, cores = cache_prepare(cache_mode, cache_dev, core_dev)
caches = [
casadm.start_cache(part, cache_mode, force=True) for part in cache_dev.partitions
]
for i in range(caches_count):
for j in range(cores_per_cache):
caches[i].add_core(core_dev.partitions[i * cores_per_cache + j])
with TestRun.step(f"Set cleaning policy to {cleaning_policy}"):
if cleaning_policy != CleaningPolicy.DEFAULT:
@@ -204,33 +233,7 @@ def test_set_get_cleaning_params(cache_mode, cleaning_policy):
)
def storage_prepare():
cache_dev = TestRun.disks["cache"]
cache_parts = [Size(1, Unit.GibiByte)] * caches_count
cache_dev.create_partitions(cache_parts)
core_dev = TestRun.disks["core"]
core_parts = [Size(2, Unit.GibiByte)] * cores_per_cache * caches_count
core_dev.create_partitions(core_parts)
return cache_dev, core_dev
def cache_prepare(cache_mode, cache_dev, core_dev):
caches = []
for i in range(caches_count):
caches.append(
casadm.start_cache(cache_dev.partitions[i], cache_mode, force=True)
)
cores = [[] for i in range(caches_count)]
for i in range(caches_count):
for j in range(cores_per_cache):
core_partition_nr = i * cores_per_cache + j
cores[i].append(caches[i].add_core(core_dev.partitions[core_partition_nr]))
return caches, cores
def new_seqcutoff_parameters_random_values():
def new_seq_cutoff_parameters_random_values():
return SeqCutOffParameters(
threshold=Size(random.randrange(1, 1000000), Unit.KibiByte),
policy=random.choice(list(SeqCutOffPolicy)),
@@ -274,27 +277,27 @@ def new_cleaning_parameters_random_values(cleaning_policy):
return cleaning_params
def check_seqcutoff_parameters(core, seqcutoff_params):
current_seqcutoff_params = core.get_seq_cut_off_parameters()
def check_seq_cutoff_parameters(core, seq_cutoff_params):
current_seq_cutoff_params = core.get_seq_cut_off_parameters()
failed_params = ""
if current_seqcutoff_params.threshold != seqcutoff_params.threshold:
if current_seq_cutoff_params.threshold != seq_cutoff_params.threshold:
failed_params += (
f"Threshold is {current_seqcutoff_params.threshold}, "
f"should be {seqcutoff_params.threshold}\n"
f"Threshold is {current_seq_cutoff_params.threshold}, "
f"should be {seq_cutoff_params.threshold}\n"
)
if current_seqcutoff_params.policy != seqcutoff_params.policy:
if current_seq_cutoff_params.policy != seq_cutoff_params.policy:
failed_params += (
f"Policy is {current_seqcutoff_params.policy}, "
f"should be {seqcutoff_params.policy}\n"
f"Policy is {current_seq_cutoff_params.policy}, "
f"should be {seq_cutoff_params.policy}\n"
)
if current_seqcutoff_params.promotion_count != seqcutoff_params.promotion_count:
if current_seq_cutoff_params.promotion_count != seq_cutoff_params.promotion_count:
failed_params += (
f"Promotion count is {current_seqcutoff_params.promotion_count}, "
f"should be {seqcutoff_params.promotion_count}\n"
f"Promotion count is {current_seq_cutoff_params.promotion_count}, "
f"should be {seq_cutoff_params.promotion_count}\n"
)
if failed_params:
TestRun.LOGGER.error(
f"Sequential cut-off parameters are not correct "
f"Sequential cutoff parameters are not correct "
f"for {core.path}:\n{failed_params}"
)
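The check helpers above follow a compare-and-accumulate pattern: compare each field, append one line per mismatch, and report everything in a single message. A generic sketch of the pattern (field names here are hypothetical):

```python
# Sketch of the compare-and-accumulate pattern used by the check helpers:
# collect one line per mismatched field and emit a single combined report.
def diff_report(expected: dict, actual: dict) -> str:
    failed = ""
    for name, want in expected.items():
        got = actual.get(name)
        if got != want:
            failed += f"{name} is {got}, should be {want}\n"
    return failed

report = diff_report(
    {"threshold": 1024, "policy": "full"},
    {"threshold": 1024, "policy": "never"},
)
assert report == "policy is never, should be full\n"
```

Accumulating into one string instead of failing on the first mismatch lets a single log entry list every wrong parameter for a device at once.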
@@ -305,12 +308,12 @@ def check_cleaning_parameters(cache, cleaning_policy, cleaning_params):
failed_params = ""
if current_cleaning_params.wake_up_time != cleaning_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_cleaning_params.wake_up_time}, "
f"Wake up time is {current_cleaning_params.wake_up_time}, "
f"should be {cleaning_params.wake_up_time}\n"
)
if current_cleaning_params.staleness_time != cleaning_params.staleness_time:
failed_params += (
f"Staleness Time is {current_cleaning_params.staleness_time}, "
f"Staleness time is {current_cleaning_params.staleness_time}, "
f"should be {cleaning_params.staleness_time}\n"
)
if (
@@ -318,7 +321,7 @@ def check_cleaning_parameters(cache, cleaning_policy, cleaning_params):
!= cleaning_params.flush_max_buffers
):
failed_params += (
f"Flush Max Buffers is {current_cleaning_params.flush_max_buffers}, "
f"Flush max buffers is {current_cleaning_params.flush_max_buffers}, "
f"should be {cleaning_params.flush_max_buffers}\n"
)
if (
@@ -326,7 +329,7 @@ def check_cleaning_parameters(cache, cleaning_policy, cleaning_params):
!= cleaning_params.activity_threshold
):
failed_params += (
f"Activity Threshold is {current_cleaning_params.activity_threshold}, "
f"Activity threshold is {current_cleaning_params.activity_threshold}, "
f"should be {cleaning_params.activity_threshold}\n"
)
if failed_params:
@@ -340,7 +343,7 @@ def check_cleaning_parameters(cache, cleaning_policy, cleaning_params):
failed_params = ""
if current_cleaning_params.wake_up_time != cleaning_params.wake_up_time:
failed_params += (
f"Wake Up time is {current_cleaning_params.wake_up_time}, "
f"Wake up time is {current_cleaning_params.wake_up_time}, "
f"should be {cleaning_params.wake_up_time}\n"
)
if (
@ -348,7 +351,7 @@ def check_cleaning_parameters(cache, cleaning_policy, cleaning_params):
!= cleaning_params.flush_max_buffers
):
failed_params += (
f"Flush Max Buffers is {current_cleaning_params.flush_max_buffers}, "
f"Flush max buffers is {current_cleaning_params.flush_max_buffers}, "
f"should be {cleaning_params.flush_max_buffers}\n"
)
if failed_params:

@@ -1,6 +1,6 @@
#
# Copyright(c) 2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import time
@@ -12,32 +12,35 @@ from api.cas import casadm, cli_messages, cli
from api.cas.cache_config import CacheMode, CleaningPolicy
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_tools.disk_utils import get_device_filesystem_type, Filesystem
from test_tools.fs_tools import Filesystem, get_device_filesystem_type
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_utils.disk_finder import get_system_disks
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from test_tools.disk_finder import get_system_disks
from connection.utils.output import CmdException
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_zero_metadata_negative_cases():
"""
title: Test for '--zero-metadata' negative cases.
description: |
Test for '--zero-metadata' scenarios with expected failures.
pass_criteria:
- Zeroing metadata without '--force' failed when run on cache.
- Zeroing metadata with '--force' failed when run on cache.
- Zeroing metadata failed when run on system drive.
- Load cache command failed after successfully zeroing metadata on the cache device.
title: Test for '--zero-metadata' negative cases.
description: Test for '--zero-metadata' scenarios with expected failures.
pass_criteria:
- Zeroing metadata without '--force' failed when run on cache.
- Zeroing metadata with '--force' failed when run on cache.
- Zeroing metadata failed when run on system drive.
- Load cache command failed after successfully zeroing metadata on the cache device.
"""
with TestRun.step("Prepare cache and core devices."):
cache_dev, core_dev, cache_disk = prepare_devices()
cache_disk = TestRun.disks['cache']
cache_disk.create_partitions([Size(100, Unit.MebiByte)])
cache_dev = cache_disk.partitions[0]
core_disk = TestRun.disks['core']
core_disk.create_partitions([Size(5, Unit.GibiByte)])
with TestRun.step("Start cache."):
cache = casadm.start_cache(cache_dev, force=True)
casadm.start_cache(cache_dev, force=True)
with TestRun.step("Try to zero metadata and validate error message."):
try:
@@ -75,7 +78,7 @@ def test_zero_metadata_negative_cases():
with TestRun.step("Load cache."):
try:
cache = casadm.load_cache(cache_dev)
casadm.load_cache(cache_dev)
TestRun.LOGGER.error("Loading cache should fail.")
except CmdException:
TestRun.LOGGER.info("Loading cache failed as expected.")
@@ -86,16 +89,19 @@
@pytest.mark.parametrizex("filesystem", Filesystem)
def test_zero_metadata_filesystem(filesystem):
"""
title: Test for '--zero-metadata' and filesystem.
description: |
Test for '--zero-metadata' on drive with filesystem.
pass_criteria:
- Zeroing metadata on device with filesystem failed and not removed filesystem.
- Zeroing metadata on mounted device failed.
title: Test for '--zero-metadata' and filesystem.
description: Test for '--zero-metadata' on drive with filesystem.
pass_criteria:
- Zeroing metadata on device with filesystem failed and did not remove the filesystem.
- Zeroing metadata on mounted device failed.
"""
mount_point = "/mnt"
with TestRun.step("Prepare devices."):
cache_dev, core_disk, cache_disk = prepare_devices()
cache_disk = TestRun.disks['cache']
cache_disk.create_partitions([Size(100, Unit.MebiByte)])
cache_dev = cache_disk.partitions[0]
core_disk = TestRun.disks['core']
core_disk.create_partitions([Size(5, Unit.GibiByte)])
with TestRun.step("Create filesystem on core device."):
core_disk.create_filesystem(filesystem)
@@ -131,17 +137,21 @@
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_zero_metadata_dirty_data():
"""
title: Test for '--zero-metadata' and dirty data scenario.
description: |
Test for '--zero-metadata' with and without 'force' option if there are dirty data
on cache.
pass_criteria:
- Zeroing metadata without force failed on cache with dirty data.
- Zeroing metadata with force ran successfully on cache with dirty data.
- Cache started successfully after zeroing metadata on cache with dirty data.
title: Test for '--zero-metadata' and dirty data scenario.
description: |
Test for '--zero-metadata' with and without 'force' option if there is dirty data
on cache.
pass_criteria:
- Zeroing metadata without force failed on cache with dirty data.
- Zeroing metadata with force ran successfully on cache with dirty data.
- Cache started successfully after zeroing metadata on cache with dirty data.
"""
with TestRun.step("Prepare cache and core devices."):
cache_dev, core_disk, cache_disk = prepare_devices()
cache_disk = TestRun.disks['cache']
cache_disk.create_partitions([Size(100, Unit.MebiByte)])
cache_dev = cache_disk.partitions[0]
core_disk = TestRun.disks['core']
core_disk.create_partitions([Size(5, Unit.GibiByte)])
with TestRun.step("Start cache."):
cache = casadm.start_cache(cache_dev, CacheMode.WB, force=True)
@@ -165,7 +175,7 @@ def test_zero_metadata_dirty_data():
with TestRun.step("Start cache (expect to fail)."):
try:
cache = casadm.start_cache(cache_dev, CacheMode.WB)
casadm.start_cache(cache_dev, CacheMode.WB)
except CmdException:
TestRun.LOGGER.info("Start cache failed as expected.")
@@ -186,7 +196,7 @@ def test_zero_metadata_dirty_data():
with TestRun.step("Start cache without 'force' option."):
try:
cache = casadm.start_cache(cache_dev, CacheMode.WB)
casadm.start_cache(cache_dev, CacheMode.WB)
TestRun.LOGGER.info("Cache started successfully.")
except CmdException:
TestRun.LOGGER.error("Start cache failed.")
@@ -196,21 +206,25 @@
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_zero_metadata_dirty_shutdown():
"""
title: Test for '--zero-metadata' and dirty shutdown scenario.
description: |
Test for '--zero-metadata' with and without 'force' option on cache which had been dirty
shut down before.
pass_criteria:
- Zeroing metadata without force failed on cache after dirty shutdown.
- Zeroing metadata with force ran successfully on cache after dirty shutdown.
- Cache started successfully after dirty shutdown and zeroing metadata on cache.
title: Test for '--zero-metadata' and dirty shutdown scenario.
description: |
Test for '--zero-metadata' with and without 'force' option on cache which had been dirty
shut down before.
pass_criteria:
- Zeroing metadata without force failed on cache after dirty shutdown.
- Zeroing metadata with force ran successfully on cache after dirty shutdown.
- Cache started successfully after dirty shutdown and zeroing metadata on cache.
"""
with TestRun.step("Prepare cache and core devices."):
cache_dev, core_disk, cache_disk = prepare_devices()
cache_disk = TestRun.disks['cache']
cache_disk.create_partitions([Size(100, Unit.MebiByte)])
cache_dev = cache_disk.partitions[0]
core_disk = TestRun.disks['core']
core_disk.create_partitions([Size(5, Unit.GibiByte)])
with TestRun.step("Start cache."):
cache = casadm.start_cache(cache_dev, CacheMode.WT, force=True)
core = cache.add_core(core_disk)
cache.add_core(core_disk)
with TestRun.step("Unplug cache device."):
cache_disk.unplug()
@@ -227,7 +241,7 @@ def test_zero_metadata_dirty_shutdown():
with TestRun.step("Start cache (expect to fail)."):
try:
cache = casadm.start_cache(cache_dev, CacheMode.WT)
casadm.start_cache(cache_dev, CacheMode.WT)
TestRun.LOGGER.error("Starting cache should fail!")
except CmdException:
TestRun.LOGGER.info("Start cache failed as expected.")
@@ -249,17 +263,7 @@ def test_zero_metadata_dirty_shutdown():
with TestRun.step("Start cache."):
try:
cache = casadm.start_cache(cache_dev, CacheMode.WT)
casadm.start_cache(cache_dev, CacheMode.WT)
TestRun.LOGGER.info("Cache started successfully.")
except CmdException:
TestRun.LOGGER.error("Start cache failed.")
def prepare_devices():
cache_disk = TestRun.disks['cache']
cache_disk.create_partitions([Size(100, Unit.MebiByte)])
cache_part = cache_disk.partitions[0]
core_disk = TestRun.disks['core']
core_disk.create_partitions([Size(5, Unit.GibiByte)])
return cache_part, core_disk, cache_disk

@@ -1,35 +0,0 @@
#
# Copyright(c) 2021 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
#
import datetime
from storage_devices.lvm import get_block_devices_list
from api.cas.init_config import InitConfig
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite, VerifyMethod
from test_utils.size import Size, Unit
def run_fio_on_lvm(volumes: []):
fio_run = (Fio().create_command()
.read_write(ReadWrite.randrw)
.io_engine(IoEngine.sync)
.io_depth(1)
.time_based()
.run_time(datetime.timedelta(seconds=180))
.do_verify()
.verify(VerifyMethod.md5)
.block_size(Size(1, Unit.Blocks4096)))
for lvm in volumes:
fio_run.add_job().target(lvm).size(lvm.size)
fio_run.run()
def get_test_configuration():
config = InitConfig.create_init_config_from_running_configuration()
devices = get_block_devices_list()
return config, devices

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2023-2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -8,39 +8,52 @@ import os
import posixpath
import sys
import traceback
from datetime import timedelta
import paramiko
import pytest
import yaml
from datetime import timedelta
sys.path.append(os.path.join(os.path.dirname(__file__), "../test-framework"))
from core.test_run import Blocked
from core.test_run_utils import TestRun
from api.cas import installer
from api.cas import casadm
from test_utils import git
from api.cas.cas_service import opencas_drop_in_directory
from storage_devices.raid import Raid
from storage_devices.ramdisk import RamDisk
from test_utils.os_utils import Udev, kill_all_io
from test_utils.disk_finder import get_disk_serial_number
from test_tools.disk_utils import PartitionTable, create_partition_table
from test_tools.os_tools import kill_all_io
from test_tools.udev import Udev
from test_tools.disk_tools import PartitionTable, create_partition_table
from test_tools.device_mapper import DeviceMapper
from test_tools.mdadm import Mdadm
from test_tools.fs_utils import remove
from test_tools.fs_tools import remove, check_if_directory_exists, create_directory
from test_tools import initramfs, git
from log.logger import create_log, Log
from test_utils.singleton import Singleton
from test_utils.common.singleton import Singleton
from storage_devices.lvm import Lvm, LvmConfiguration
from storage_devices.disk import Disk
from storage_devices.drbd import Drbd
class Opencas(metaclass=Singleton):
def __init__(self, repo_dir, working_dir):
self.repo_dir = repo_dir
self.working_dir = working_dir
self.already_updated = False
self.fuzzy_iter_count = 1000
def pytest_addoption(parser):
TestRun.addoption(parser)
parser.addoption("--dut-config", action="append", type=str)
parser.addoption(
"--log-path",
action="store",
default=f"{os.path.join(os.path.dirname(__file__), '../results')}",
)
parser.addoption("--fuzzy-iter-count", action="store")
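pytest's `parser.addoption` mirrors argparse's `add_argument`, and the `--fuzzy-iter-count` flow above (a string option, cast to `int` only when supplied, with the class default kept otherwise) can be sketched with plain argparse:

```python
import argparse

# Sketch of the --fuzzy-iter-count handling using plain argparse; pytest's
# parser.addoption accepts the same keyword arguments. The default of 1000
# mirrors the Opencas.__init__ attribute in the conftest above.
parser = argparse.ArgumentParser()
parser.add_argument("--fuzzy-iter-count", action="store")

fuzzy_iter_count = 1000  # class-level default, used when the flag is absent
args = parser.parse_args(["--fuzzy-iter-count", "50"])
if args.fuzzy_iter_count:
    fuzzy_iter_count = int(args.fuzzy_iter_count)

assert fuzzy_iter_count == 50
```

Keeping the default on the `Opencas` object rather than in `addoption` means the option only overrides the attribute when explicitly passed on the command line.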
def pytest_configure(config):
TestRun.configure(config)
def pytest_generate_tests(metafunc):
TestRun.generate_tests(metafunc)
def pytest_collection_modifyitems(config, items):
@@ -63,15 +76,17 @@ def pytest_runtest_setup(item):
# User can also have own test wrapper, which runs test prepare, cleanup, etc.
# Then it should be placed in plugins package
test_name = item.name.split('[')[0]
TestRun.LOGGER = create_log(item.config.getoption('--log-path'), test_name)
test_name = item.name.split("[")[0]
TestRun.LOGGER = create_log(item.config.getoption("--log-path"), test_name)
TestRun.LOGGER.unique_test_identifier = f"TEST__{item.name}__random_seed_{TestRun.random_seed}"
duts = item.config.getoption('--dut-config')
duts = item.config.getoption("--dut-config")
required_duts = next(item.iter_markers(name="multidut"), None)
required_duts = required_duts.args[0] if required_duts is not None else 1
if required_duts > len(duts):
raise Exception(f"Test requires {required_duts} DUTs, only {len(duts)} DUT configs "
f"provided")
raise Exception(
f"Test requires {required_duts} DUTs, only {len(duts)} DUT configs provided"
)
else:
duts = duts[:required_duts]
@@ -81,12 +96,13 @@ def pytest_runtest_setup(item):
with open(dut) as cfg:
dut_config = yaml.safe_load(cfg)
except Exception as ex:
raise Exception(f"{ex}\n"
f"You need to specify DUT config. See the example_dut_config.py file")
raise Exception(
f"{ex}\nYou need to specify DUT config. See the example_dut_config.py file"
)
dut_config['plugins_dir'] = os.path.join(os.path.dirname(__file__), "../lib")
dut_config['opt_plugins'] = {"test_wrapper": {}, "serial_log": {}, "power_control": {}}
dut_config['extra_logs'] = {"cas": "/var/log/opencas.log"}
dut_config["plugins_dir"] = os.path.join(os.path.dirname(__file__), "../lib")
dut_config["opt_plugins"] = {"test_wrapper": {}, "serial_log": {}, "power_control": {}}
dut_config["extra_logs"] = {"cas": "/var/log/opencas.log"}
try:
TestRun.prepare(item, dut_config)
@@ -98,24 +114,30 @@ def pytest_runtest_setup(item):
raise
except Exception:
try:
TestRun.plugin_manager.get_plugin('power_control').power_cycle()
TestRun.plugin_manager.get_plugin("power_control").power_cycle()
TestRun.executor.wait_for_connection()
except Exception:
raise Exception("Failed to connect to DUT.")
TestRun.setup()
except Exception as ex:
raise Exception(f"Exception occurred during test setup:\n"
f"{str(ex)}\n{traceback.format_exc()}")
raise Exception(
f"Exception occurred during test setup:\n{str(ex)}\n{traceback.format_exc()}"
)
TestRun.LOGGER.print_test_identifier_to_logs()
TestRun.usr = Opencas(
repo_dir=os.path.join(os.path.dirname(__file__), "../../.."),
working_dir=dut_config['working_dir'])
if item.config.getoption('--fuzzy-iter-count'):
TestRun.usr.fuzzy_iter_count = int(item.config.getoption('--fuzzy-iter-count'))
working_dir=dut_config["working_dir"],
)
if item.config.getoption("--fuzzy-iter-count"):
TestRun.usr.fuzzy_iter_count = int(item.config.getoption("--fuzzy-iter-count"))
TestRun.LOGGER.info(f"DUT info: {TestRun.dut}")
TestRun.dut.plugin_manager = TestRun.plugin_manager
TestRun.dut.executor = TestRun.executor
TestRun.dut.cache_list = []
TestRun.dut.core_list = []
TestRun.duts.append(TestRun.dut)
base_prepare(item)
@@ -123,6 +145,80 @@ def pytest_runtest_setup(item):
TestRun.LOGGER.start_group("Test body")
def base_prepare(item):
with TestRun.LOGGER.step("Cleanup before test"):
TestRun.executor.run("pkill --signal=SIGKILL fsck")
Udev.enable()
kill_all_io(graceful=False)
DeviceMapper.remove_all()
if installer.check_if_installed():
try:
from api.cas.init_config import InitConfig
InitConfig.create_default_init_config()
unmount_cas_devices()
casadm.stop_all_caches()
casadm.remove_all_detached_cores()
except Exception:
pass # TODO: Reboot DUT if test is executed remotely
remove(str(opencas_drop_in_directory), recursive=True, ignore_errors=True)
from storage_devices.drbd import Drbd
if Drbd.is_installed():
__drbd_cleanup()
lvms = Lvm.discover()
if lvms:
Lvm.remove_all()
LvmConfiguration.remove_filters_from_config()
initramfs.update()
raids = Raid.discover()
if len(TestRun.disks):
test_run_disk_ids = {dev.device_id for dev in TestRun.disks.values()}
for raid in raids:
# stop only those RAIDs, which are comprised of test disks
if all(dev.device_id in test_run_disk_ids for dev in raid.array_devices):
raid.remove_partitions()
raid.unmount()
raid.stop()
for device in raid.array_devices:
Mdadm.zero_superblock(posixpath.join("/dev", device.get_device_id()))
Udev.settle()
RamDisk.remove_all()
if check_if_directory_exists(path=TestRun.TEST_RUN_DATA_PATH):
remove(
path=posixpath.join(TestRun.TEST_RUN_DATA_PATH, "*"),
force=True,
recursive=True,
)
else:
create_directory(path=TestRun.TEST_RUN_DATA_PATH)
for disk in TestRun.disks.values():
disk_serial = Disk.get_disk_serial_number(disk.path)
if disk.serial_number and disk.serial_number != disk_serial:
raise Exception(
f"Serial for {disk.path} doesn't match the one from the config. "
f"Serial from config {disk.serial_number}, actual serial {disk_serial}"
)
disk.remove_partitions()
disk.unmount()
Mdadm.zero_superblock(posixpath.join("/dev", disk.get_device_id()))
create_partition_table(disk, PartitionTable.gpt)
TestRun.usr.already_updated = True
TestRun.LOGGER.add_build_info("Commit hash:")
TestRun.LOGGER.add_build_info(f"{git.get_current_commit_hash()}")
TestRun.LOGGER.add_build_info("Commit message:")
TestRun.LOGGER.add_build_info(f"{git.get_current_commit_message()}")
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
res = (yield).get_result()
@@ -142,16 +238,18 @@ def pytest_runtest_teardown():
if not TestRun.executor.is_active():
TestRun.executor.wait_for_connection()
Udev.enable()
kill_all_io()
kill_all_io(graceful=False)
unmount_cas_devices()
if installer.check_if_installed():
casadm.remove_all_detached_cores()
casadm.stop_all_caches()
from api.cas.init_config import InitConfig
InitConfig.create_default_init_config()
from storage_devices.drbd import Drbd
if installer.check_if_installed() and Drbd.is_installed():
try:
casadm.stop_all_caches()
@@ -160,41 +258,45 @@ def pytest_runtest_teardown():
elif Drbd.is_installed():
Drbd.down_all()
lvms = Lvm.discover()
if lvms:
Lvm.remove_all()
LvmConfiguration.remove_filters_from_config()
initramfs.update()
DeviceMapper.remove_all()
RamDisk.remove_all()
if check_if_directory_exists(path=TestRun.TEST_RUN_DATA_PATH):
remove(
path=posixpath.join(TestRun.TEST_RUN_DATA_PATH, "*"),
force=True,
recursive=True,
)
except Exception as ex:
TestRun.LOGGER.warning(f"Exception occurred during platform cleanup.\n"
f"{str(ex)}\n{traceback.format_exc()}")
TestRun.LOGGER.warning(
f"Exception occurred during platform cleanup.\n"
f"{str(ex)}\n{traceback.format_exc()}"
)
TestRun.LOGGER.end()
for dut in TestRun.duts:
with TestRun.use_dut(dut):
if TestRun.executor:
os.makedirs(os.path.join(TestRun.LOGGER.base_dir, "dut_info",
dut.ip if dut.ip is not None
else dut.config.get("host")),
exist_ok=True)
os.makedirs(
os.path.join(
TestRun.LOGGER.base_dir,
"dut_info",
dut.ip if dut.ip is not None else dut.config.get("host"),
),
exist_ok=True,
)
TestRun.LOGGER.get_additional_logs()
Log.destroy()
TestRun.teardown()
def pytest_configure(config):
TestRun.configure(config)
def pytest_generate_tests(metafunc):
TestRun.generate_tests(metafunc)
def pytest_addoption(parser):
TestRun.addoption(parser)
parser.addoption("--dut-config", action="append", type=str)
parser.addoption("--log-path", action="store",
default=f"{os.path.join(os.path.dirname(__file__), '../results')}")
parser.addoption("--fuzzy-iter-count", action="store")
def unmount_cas_devices():
output = TestRun.executor.run("cat /proc/mounts | grep cas")
# If exit code is '1' but stdout is empty, there are no mounted cas devices
@@ -218,72 +320,18 @@ def unmount_cas_devices():
def __drbd_cleanup():
from storage_devices.drbd import Drbd
Drbd.down_all()
# If drbd instance had been configured on top of the CAS, the previos attempt to stop
# If drbd instance had been configured on top of the CAS, the previous attempt to stop
# failed. As drbd has been stopped try to stop CAS one more time.
if installer.check_if_installed():
casadm.stop_all_caches()
remove("/etc/drbd.d/*.res", force=True, ignore_errors=True)
def base_prepare(item):
with TestRun.LOGGER.step("Cleanup before test"):
TestRun.executor.run("pkill --signal=SIGKILL fsck")
Udev.enable()
kill_all_io()
DeviceMapper.remove_all()
if installer.check_if_installed():
try:
from api.cas.init_config import InitConfig
InitConfig.create_default_init_config()
unmount_cas_devices()
casadm.stop_all_caches()
casadm.remove_all_detached_cores()
except Exception:
pass # TODO: Reboot DUT if test is executed remotely
remove(str(opencas_drop_in_directory), recursive=True, ignore_errors=True)
from storage_devices.drbd import Drbd
if Drbd.is_installed():
__drbd_cleanup()
lvms = Lvm.discover()
if lvms:
Lvm.remove_all()
LvmConfiguration.remove_filters_from_config()
raids = Raid.discover()
for raid in raids:
# stop only those RAIDs, which are comprised of test disks
if all(map(lambda device:
any(map(lambda disk_path:
disk_path in device.get_device_id(),
[bd.get_device_id() for bd in TestRun.dut.disks])),
raid.array_devices)):
raid.remove_partitions()
raid.unmount()
raid.stop()
for device in raid.array_devices:
Mdadm.zero_superblock(posixpath.join('/dev', device.get_device_id()))
Udev.settle()
RamDisk.remove_all()
for disk in TestRun.dut.disks:
disk_serial = get_disk_serial_number(disk.path)
if disk.serial_number and disk.serial_number != disk_serial:
raise Exception(
f"Serial for {disk.path} doesn't match the one from the config."
f"Serial from config {disk.serial_number}, actual serial {disk_serial}"
)
disk.remove_partitions()
disk.unmount()
Mdadm.zero_superblock(posixpath.join('/dev', disk.get_device_id()))
create_partition_table(disk, PartitionTable.gpt)
TestRun.usr.already_updated = True
TestRun.LOGGER.add_build_info(f'Commit hash:')
TestRun.LOGGER.add_build_info(f"{git.get_current_commit_hash()}")
TestRun.LOGGER.add_build_info(f'Commit message:')
TestRun.LOGGER.add_build_info(f'{git.get_current_commit_message()}')
class Opencas(metaclass=Singleton):
def __init__(self, repo_dir, working_dir):
self.repo_dir = repo_dir
self.working_dir = working_dir
self.already_updated = False
self.fuzzy_iter_count = 1000
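The `Opencas` class is declared with `metaclass=Singleton`, imported from the framework's `test_utils.common.singleton` module. That implementation is not shown in this diff, but a minimal metaclass-based singleton typically looks like this (a sketch, not the framework's actual code):

```python
class Singleton(type):
    """Metaclass that caches the first instance of each class using it."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Construct the instance once; later calls return the cached object.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Config(metaclass=Singleton):
    def __init__(self, value):
        self.value = value


a = Config(1)
b = Config(2)  # constructor arguments ignored; the cached instance is returned
assert a is b and a.value == 1
```

For `Opencas`, this pattern lets code anywhere in the test session retrieve the same `repo_dir`/`working_dir` state set up once in `pytest_runtest_setup`.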

@@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,7 +15,7 @@ from core.test_run import TestRun
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, VerifyMethod
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.size import Unit, Size
from type_def.size import Unit, Size
start_size = int(Size(512, Unit.Byte))


@@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -12,14 +13,12 @@ import pytest
from api.cas import casadm
from api.cas.cache_config import CacheMode
from core.test_run import TestRun
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_tools.fs_tools import Filesystem, create_directory, check_if_directory_exists, md5sum
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, VerifyMethod
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.filesystem.file import File
from test_utils.os_utils import sync
from test_utils.size import Unit, Size
from test_tools.os_tools import sync
from type_def.size import Unit, Size
start_size = Size(512, Unit.Byte).get_value()
@@ -72,8 +71,8 @@ def test_data_integrity_5d_dss(filesystems):
with TestRun.step("Create filesystems and mount cores"):
for i, core in enumerate(cores):
mount_point = core.path.replace('/dev/', '/mnt/')
if not fs_utils.check_if_directory_exists(mount_point):
fs_utils.create_directory(mount_point)
if not check_if_directory_exists(mount_point):
create_directory(mount_point)
TestRun.LOGGER.info(f"Create filesystem {filesystems[i].name} on {core.path}")
core.create_filesystem(filesystems[i])
TestRun.LOGGER.info(f"Mount filesystem {filesystems[i].name} on {core.path} to "
@@ -106,14 +105,14 @@ def test_data_integrity_5d_dss(filesystems):
core.unmount()
with TestRun.step("Calculate md5 for each core"):
core_md5s = [File(core.full_path).md5sum() for core in cores]
core_md5s = [md5sum(core.path) for core in cores]
with TestRun.step("Stop caches"):
for cache in caches:
cache.stop()
with TestRun.step("Calculate md5 for each core"):
dev_md5s = [File(dev.full_path).md5sum() for dev in core_devices]
dev_md5s = [md5sum(dev.full_path) for dev in core_devices]
with TestRun.step("Compare md5 sums for cores and core devices"):
for core_md5, dev_md5, mode, fs in zip(core_md5s, dev_md5s, cache_modes, filesystems):
@@ -171,14 +170,14 @@ def test_data_integrity_5d():
fio_run.run()
with TestRun.step("Calculate md5 for each core"):
core_md5s = [File(core.full_path).md5sum() for core in cores]
core_md5s = [md5sum(core.path) for core in cores]
with TestRun.step("Stop caches"):
for cache in caches:
cache.stop()
with TestRun.step("Calculate md5 for each core"):
dev_md5s = [File(dev.full_path).md5sum() for dev in core_devices]
dev_md5s = [md5sum(dev.full_path) for dev in core_devices]
with TestRun.step("Compare md5 sums for cores and core devices"):
for core_md5, dev_md5, mode in zip(core_md5s, dev_md5s, cache_modes):
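The switch above from `File(...).md5sum()` to a standalone `md5sum()` helper boils down to checksumming device (or file) contents. A minimal sketch using plain `hashlib` — an assumption for illustration, since the framework's actual helper runs a command on the DUT:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    # Stream the file/device in 1 MiB chunks to avoid loading it whole.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```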


@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -19,10 +19,10 @@ from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from storage_devices.ramdisk import RamDisk
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite
from test_utils.asynchronous import start_async_func
from connection.utils.asynchronous import start_async_func
from test_utils.filesystem.directory import Directory
from test_utils.output import CmdException
from test_utils.size import Unit, Size
from connection.utils.output import CmdException
from type_def.size import Unit, Size
ram_disk, tmp_dir, fio_seed = None, None, None
num_jobs = 8
@@ -230,7 +230,7 @@ def gen_log(seqno_max):
.set_param("write_iolog", f"{io_log_path}_{i}")
fio.run()
r = re.compile(r"\S+\s+(read|write)\s+(\d+)\s+(\d+)")
r = re.compile(r"\S+\s+\S+\s+write\s+(\d+)\s+(\d+)")
for j in range(num_jobs):
log = f"{io_log_path}_{j}"
nr = 0
@@ -238,7 +238,7 @@ def gen_log(seqno_max):
m = r.match(line)
if m:
if nr > max_log_seqno:
block = int(m.group(2)) // block_size.value - j * job_workset_blocks
block = int(m.group(1)) // block_size.value - j * job_workset_blocks
g_io_log[j][block] += [nr]
nr += 1
if nr > seqno_max + 1:
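The regex change in this hunk accounts for newer fio versions (per the commit message, newer than 3.30) emitting an extra leading field in write iologs, so the pattern now skips two non-space tokens before the `write` keyword and captures the byte offset in group 1. A sketch with a hypothetical iolog line:

```python
import re

# New pattern: two leading fields (e.g. timestamp and device name),
# then the operation, byte offset and length.
pattern = re.compile(r"\S+\s+\S+\s+write\s+(\d+)\s+(\d+)")

line = "1024 /dev/cas1-1 write 8192 4096"  # hypothetical iolog entry
m = pattern.match(line)
offset, length = int(m.group(1)), int(m.group(2))
print(offset, length)  # 8192 4096
```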


@@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -13,14 +14,13 @@ from storage_devices.ramdisk import RamDisk
from test_utils.drbd import Resource, Node
from storage_devices.drbd import Drbd
from test_tools.drbdadm import Drbdadm
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_tools.fs_utils import copy, check_if_file_exists
from test_tools.fs_tools import copy, Filesystem, replace_in_lines, remove, Permissions, \
PermissionsUsers
from test_utils.filesystem.directory import Directory
from test_utils.filesystem.file import File
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
from test_utils.emergency_escape import EmergencyEscape
from test_utils.fstab import add_mountpoint
from test_tools.fstab import add_mountpoint
from storage_devices.lvm import Lvm
@@ -102,7 +102,7 @@ def test_create_example_files():
content_before_change = file1.read()
TestRun.LOGGER.info(f"File content: {content_before_change}")
with TestRun.step("Replace single line in file"):
fs_utils.replace_in_lines(file1, 'content line', 'replaced line')
replace_in_lines(file1, 'content line', 'replaced line')
with TestRun.step("Read file content and check if it changed"):
content_after_change = file1.read()
if content_before_change == content_after_change:
@@ -115,19 +115,19 @@ def test_create_example_files():
with TestRun.step("Change permissions of second file"):
file2.chmod_numerical(123)
with TestRun.step("Remove second file"):
fs_utils.remove(file2.full_path, True)
remove(file2.full_path, True)
with TestRun.step("List contents of home directory"):
dir1 = Directory("~")
dir_content = dir1.ls()
with TestRun.step("Change permissions of file"):
file1.chmod(fs_utils.Permissions['r'] | fs_utils.Permissions['w'],
fs_utils.PermissionsUsers(7))
file1.chmod(Permissions['r'] | Permissions['w'],
PermissionsUsers(7))
with TestRun.step("Log home directory content"):
for item in dir_content:
TestRun.LOGGER.info(f"Item {str(item)} - {type(item).__name__}")
with TestRun.step("Remove file"):
fs_utils.remove(file1.full_path, True)
remove(file1.full_path, True)
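The `Permissions['r'] | Permissions['w']` call above with `PermissionsUsers(7)` grants read and write to user, group and other, i.e. a numeric `chmod 666`. A sketch of the equivalent with the standard library (this octal mapping is my reading of the helper, not confirmed by the diff):

```python
import os
import stat
import tempfile

def set_rw_for_all(path):
    # rw bits for user, group and other == octal 666
    os.chmod(path, 0o666)
    return stat.S_IMODE(os.stat(path).st_mode)

fd, path = tempfile.mkstemp()
os.close(fd)
print(oct(set_rw_for_all(path)))  # 0o666
os.remove(path)
```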
@pytest.mark.require_disk("cache1", DiskTypeSet([DiskType.optane, DiskType.nand]))


@@ -1,5 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from datetime import timedelta, datetime
@@ -10,8 +11,8 @@ from core.test_run import TestRun
from test_tools.dd import Dd
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
from test_utils.os_utils import Udev
from test_utils.size import Size, Unit
from test_tools.udev import Udev
from type_def.size import Size, Unit
from storage_devices.disk import DiskType, DiskTypeSet
from storage_devices.device import Device
from api.cas import casadm, dmesg


@@ -1,5 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -22,14 +23,12 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet
from storage_devices.drbd import Drbd
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_utils.size import Size, Unit
from test_utils.filesystem.file import File
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite
from test_tools.fs_utils import readlink, create_directory
from test_tools.fs_tools import create_directory, Filesystem
from test_utils.drbd import Resource, Node
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
from test_failover_multihost import check_drbd_installed
@@ -1096,13 +1095,6 @@ def test_failover_io_long(cls, cleaning_policy, num_iterations):
TestRun.executor.wait_for_connection()
def check_drbd_installed(duts):
for dut in duts:
with TestRun.use_dut(dut):
if not Drbd.is_installed():
TestRun.fail(f"DRBD is not installed on DUT {dut.ip}")
def prepare_devices(duts):
for dut in duts:
with TestRun.use_dut(dut):


@@ -4,7 +4,6 @@
# SPDX-License-Identifier: BSD-3-Clause
#
from time import sleep
import pytest
from api.cas import casadm
@@ -22,15 +21,13 @@ from storage_devices.disk import DiskType, DiskTypeSet
from storage_devices.drbd import Drbd
from storage_devices.raid import Raid, RaidConfiguration, MetadataVariant, Level
from test_tools.dd import Dd
from test_tools.drbdadm import Drbdadm
from test_tools.disk_utils import Filesystem
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite
from test_tools.fs_utils import readlink
from test_tools.fs_tools import readlink, Filesystem, create_directory
from test_utils.drbd import Resource, Node
from test_utils.os_utils import sync, Udev
from test_utils.size import Size, Unit
from test_tools import fs_utils
from test_tools.os_tools import sync
from test_tools.udev import Udev
from type_def.size import Size, Unit
cache_id = 5
@@ -147,7 +144,7 @@ def test_functional_activate_twice_round_trip(filesystem):
primary_node.cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
if filesystem:
TestRun.executor.run(f"rm -rf {mountpoint}")
fs_utils.create_directory(path=mountpoint)
create_directory(path=mountpoint)
core.create_filesystem(filesystem)
core.mount(mountpoint)
@@ -318,7 +315,7 @@ def test_functional_activate_twice_new_host(filesystem):
primary_node.cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
if filesystem:
TestRun.executor.run(f"rm -rf {mountpoint}")
fs_utils.create_directory(path=mountpoint)
create_directory(path=mountpoint)
core.create_filesystem(filesystem)
core.mount(mountpoint)
@@ -494,7 +491,7 @@ def failover_sequence(standby_node, drbd_resource, filesystem, core):
if filesystem:
with TestRun.use_dut(standby_node), TestRun.step(f"Mount core"):
TestRun.executor.run(f"rm -rf {mountpoint}")
fs_utils.create_directory(path=mountpoint)
create_directory(path=mountpoint)
core.mount(mountpoint)


@@ -1,16 +1,15 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import time
import pytest
from api.cas import cli, casadm
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache_1", DiskTypeSet([DiskType.optane, DiskType.nand]))


@@ -1,9 +1,11 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
import math
from api.cas import casadm
from api.cas.cache_config import (
CacheMode,
@@ -16,11 +18,12 @@ from api.cas.cache_config import (
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_tools.device_mapper import ErrorDevice, DmTable
from test_tools.device_mapper import DmTable
from storage_devices.error_device import ErrorDevice
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, ErrorFilter, VerifyMethod
from test_utils.os_utils import Udev
from test_utils.size import Size, Unit
from test_tools.udev import Udev
from type_def.size import Size, Unit
start_size = Size(512, Unit.Byte)
stop_size = Size(128, Unit.KibiByte)
@@ -74,26 +77,18 @@ def test_cache_insert_error(cache_mode, block_size):
if occupancy != 0:
TestRun.fail(f"Occupancy is not zero, but {occupancy}")
# Convert cache writes from bytes to I/O count, assuming cache I/O is sent
# with cacheline granularity.
cache_writes_per_block = max(block_size.get_value() // int(cache_line_size), 1)
cache_writes = stats.block_stats.cache.writes / block_size * cache_writes_per_block
# Convert cache writes from bytes to I/O count.
# Cache errors are accounted with request granularity.
# Blocks are expressed with 4k granularity.
correction = int(math.ceil(Size(1, Unit.Blocks4096) / block_size))
cache_writes_upper = int(stats.block_stats.cache.writes / block_size)
cache_writes_lower = cache_writes_upper - correction + 1
cache_errors = stats.error_stats.cache.total
# Cache error count is accurate, however cache writes is rounded up to 4K in OCF.
# Need to take this into account and round up cache errors accordingly for the
# comparison.
cache_writes_accuracy = max(Size(4, Unit.KibiByte) / block_size, 1)
rounded_cache_errors = (
(cache_errors + cache_writes_accuracy - 1)
// cache_writes_accuracy
* cache_writes_accuracy
)
if cache_writes != rounded_cache_errors:
if not cache_writes_lower <= cache_errors <= cache_writes_upper:
TestRun.fail(
f"Cache errors ({rounded_cache_errors}) should equal to number of"
f" requests to cache ({cache_writes})"
f"Cache errors ({cache_errors}) should match the number of"
f" requests to cache (range {cache_writes_lower}-{cache_writes_upper})"
)
if cache_mode not in CacheMode.with_traits(CacheModeTrait.LazyWrites):
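The corrected check above replaces exact equality with a tolerance window, because OCF rounds block stats up to 4 KiB while errors are counted per request. The arithmetic, sketched with hypothetical numbers:

```python
import math

block_size = 512            # bytes, hypothetical request size
cache_writes_bytes = 40960  # hypothetical stat, 4 KiB-granular in OCF

# One 4 KiB stats block can span up to ceil(4096 / block_size) requests.
correction = math.ceil(4096 / block_size)                 # 8
cache_writes_upper = cache_writes_bytes // block_size     # 80
cache_writes_lower = cache_writes_upper - correction + 1  # 73

cache_errors = 75  # hypothetical per-request error count
print(cache_writes_lower <= cache_errors <= cache_writes_upper)  # True
```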
@@ -145,9 +140,12 @@ def test_error_cache_verify_core(cache_mode, block_size):
@pytest.mark.parametrizex("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@pytest.mark.parametrizex(
"block_size", [start_size, Size(1024, Unit.Byte), Size(4, Unit.KibiByte), stop_size]
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cache_write_lazy_insert_error(cache_mode):
def test_cache_write_lazy_insert_error(cache_mode, block_size):
"""
title: Cache insert test with error device for writes on lazy writes cache mode
description: |
@@ -168,7 +166,7 @@ def test_cache_write_lazy_insert_error(cache_mode):
.create_command()
.io_engine(IoEngine.libaio)
.size(core.size)
.blocksize_range([(start_size.get_value(), stop_size.get_value())])
.block_size(block_size)
.read_write(ReadWrite.randwrite)
.target(core)
.continue_on_error(ErrorFilter.io)
@@ -186,13 +184,18 @@ def test_cache_write_lazy_insert_error(cache_mode):
if occupancy != 0:
TestRun.fail(f"Occupancy is not zero, but {occupancy}")
cache_writes = stats.block_stats.cache.writes / cache_line_size.value
# Convert cache writes from bytes to I/O count.
# Cache errors are accounted with request granularity.
# Blocks are expressed with 4k granularity.
correction = int(math.ceil(Size(1, Unit.Blocks4096) / block_size))
cache_writes_upper = int(stats.block_stats.cache.writes / block_size)
cache_writes_lower = cache_writes_upper - correction + 1
cache_errors = stats.error_stats.cache.total
if cache_writes != cache_errors:
if not cache_writes_lower <= cache_errors <= cache_writes_upper:
TestRun.fail(
f"Cache errors ({cache_errors}) should equal to number of requests to"
f" cache ({cache_writes})"
f"Cache errors ({cache_errors}) should match the number of"
f" requests to cache (range {cache_writes_lower}-{cache_writes_upper})"
)
state = cache.get_status()
@@ -243,4 +246,8 @@ def prepare_configuration(cache_mode, cache_line_size):
with TestRun.step("Adding core device"):
core = cache.add_core(core_dev=core_device.partitions[0])
with TestRun.step("Purge cache and reset statistics"):
cache.purge_cache()
cache.reset_counters()
return cache, core, core_device.partitions[0]


@@ -0,0 +1,420 @@
#
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from time import sleep
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, ErrorFilter
from test_tools.device_mapper import DmTable
from storage_devices.error_device import ErrorDevice
from core.test_run import TestRun
from api.cas import casadm
from api.cas.cache_config import (
CacheMode,
CacheLineSize,
SeqCutOffPolicy,
CleaningPolicy,
CacheModeTrait,
)
from storage_devices.disk import DiskTypeSet, DiskType
from test_utils.io_stats import IoStats
from test_tools.udev import Udev
from type_def.size import Size, Unit
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.sata, DiskType.hdd4k, DiskType.hdd]))
def test_error_device_as_cache_clean_wt(cache_line_size):
"""
title: Validate Open CAS ability to handle read hit I/O error on cache device for clean data
description: |
Perform I/O on exported object in Write-Through mode while error device is cache device and
validate if errors are present in Open CAS stats.
pass_criteria:
- Write error count in fio is zero
- Read error count in fio is zero
- Write error count in cache statistics is zero
- Total error count in cache statistics is greater than zero
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(400, Unit.MebiByte)])
cache_part = cache_device.partitions[0]
error_device = ErrorDevice("error", cache_part)
with TestRun.step("Start cache in Write-Through mode"):
cache = casadm.start_cache(
error_device, cache_mode=CacheMode.WT, cache_line_size=cache_line_size, force=True
)
with TestRun.step("Disable cleaning policy and sequential cutoff"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step(f"Add core"):
core = cache.add_core(core_dev=core_device.partitions[0])
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Purge cache and reset statistics"):
cache.purge_cache()
cache.reset_counters()
with TestRun.step("Run fio against core to fill it with pattern"):
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randwrite)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.verification_with_pattern("0xabcd")
.do_verify(False)
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported no errors"):
if fio_errors != 0:
TestRun.fail("Fio reported errors!")
with TestRun.step("Stop cache"):
metadata_size = cache.get_metadata_size_on_disk() + Size(1, Unit.MiB)
cache.stop()
with TestRun.step("Enable udev"):
Udev.enable()
with TestRun.step("Enable errors on cache device (after metadata area)"):
error_device.change_table(
error_table(start_lba=metadata_size, stop_lba=cache_part.size).fill_gaps(cache_part)
)
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Load cache and reset counters"):
cache = casadm.load_cache(error_device)
cache.reset_counters()
with TestRun.step("Run io against core with pattern verification"):
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randread)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.verification_with_pattern("0xabcd")
.do_verify(False)
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported no errors"):
if fio_errors != 0:
TestRun.fail("Fio reported errors!")
with TestRun.step("Check cache error statistics"):
stats = cache.get_statistics()
write_errors_in_cache = stats.error_stats.cache.writes
if write_errors_in_cache != 0:
TestRun.fail(f"Write errors in cache stats detected ({write_errors_in_cache})!")
total_errors_in_cache = stats.error_stats.cache.total
if total_errors_in_cache == 0:
TestRun.fail(
f"Total errors in cache stats ({total_errors_in_cache}) should be greater than 0!"
)
TestRun.LOGGER.info(f"Total number of I/O errors in cache stats: {total_errors_in_cache}")
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.sata, DiskType.hdd4k, DiskType.hdd]))
def test_error_device_as_cache_clean_wa(cache_line_size):
"""
title: Validate Open CAS ability to handle read hit I/O error on cache device for clean data
description: |
Perform I/O on exported object in Write-Around mode while error device is cache device and
validate if errors are present in Open CAS stats.
pass_criteria:
- Write error count in fio is zero
- Read error count in fio is zero
- Read error count in cache statistics is zero
- Total error count in cache statistics is greater than zero
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
core_device.create_partitions([Size(400, Unit.MebiByte)])
cache_part = cache_device.partitions[0]
error_device = ErrorDevice("error", cache_part)
with TestRun.step("Start cache in Write-Around"):
cache = casadm.start_cache(
error_device, cache_mode=CacheMode.WA, cache_line_size=cache_line_size, force=True
)
with TestRun.step("Disable cleaning policy and sequential cutoff"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step(f"Add core"):
core = cache.add_core(core_dev=core_device.partitions[0])
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Purge cache and reset statistics"):
cache.purge_cache()
cache.reset_counters()
with TestRun.step("Run fio against core to fill it with pattern"):
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randread)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported no errors"):
if fio_errors != 0:
TestRun.fail("Fio reported errors!")
with TestRun.step("Stop cache"):
metadata_size = cache.get_metadata_size_on_disk() + Size(1, Unit.MiB)
cache.stop()
with TestRun.step("Enable udev"):
Udev.enable()
with TestRun.step("Enable errors on cache device (after metadata area)"):
error_device.change_table(
error_table(start_lba=metadata_size, stop_lba=cache_part.size).fill_gaps(cache_part)
)
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Load cache and reset counters"):
cache = casadm.load_cache(error_device)
cache.reset_counters()
with TestRun.step("Run io against core with pattern verification"):
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randwrite)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.verification_with_pattern("0xabcd")
.do_verify(False)
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported no errors"):
if fio_errors != 0:
TestRun.fail("Fio reported errors!")
with TestRun.step("Check cache error statistics"):
stats = cache.get_statistics()
read_errors_in_cache = stats.error_stats.cache.reads
if read_errors_in_cache != 0:
TestRun.fail(f"Read errors in cache stats detected ({read_errors_in_cache})!")
total_errors_in_cache = stats.error_stats.cache.total
if total_errors_in_cache == 0:
TestRun.fail(
f"Total errors in cache stats ({total_errors_in_cache}) should be greater than 0!"
)
TestRun.LOGGER.info(f"Total number of I/O errors in cache stats: {total_errors_in_cache}")
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.parametrizex("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.sata, DiskType.hdd4k, DiskType.hdd]))
def test_error_device_as_cache_dirty(cache_mode, cache_line_size):
"""
title: Validate Open CAS ability to handle read hit I/O error on cache device for dirty data
description: |
Perform I/O on exported object while error device is used as cache device and validate if
errors are present in Open CAS statistics and no I/O traffic is detected on cores after
enabling errors on cache device.
pass_criteria:
- Write error count in fio is zero
- Read error count in fio is greater than zero
- I/O error count in cache statistics is greater than zero
- I/O traffic on the second core is stopped after enabling errors on cache device
"""
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(1, Unit.GibiByte)])
core_device.create_partitions([Size(400, Unit.MebiByte)] * 2)
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions
error_device = ErrorDevice("error", cache_part)
with TestRun.step("Start cache"):
cache = casadm.start_cache(
error_device, cache_mode=cache_mode, cache_line_size=cache_line_size, force=True
)
with TestRun.step("Disable cleaning policy and sequential cutoff"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step(f"Add core"):
cores = [cache.add_core(core_dev=core) for core in core_parts]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Purge cache and reset statistics"):
cache.purge_cache()
cache.reset_counters()
with TestRun.step("Run io against the first core to fill it with pattern"):
fio = (
Fio()
.create_command()
.target(cores[0])
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randwrite)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.verification_with_pattern("0xabcd")
.do_verify(False)
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported no errors"):
if fio_errors != 0:
TestRun.fail("Fio reported errors!")
with TestRun.step("Stop cache"):
cache.stop(no_data_flush=True)
with TestRun.step("Enable udev"):
Udev.enable()
with TestRun.step("Enable errors on cache device (after metadata area)"):
metadata_size = cache.get_metadata_size_on_disk() + Size(1, Unit.MiB)
error_device.change_table(
error_table(start_lba=metadata_size, stop_lba=cache_part.size).fill_gaps(cache_part)
)
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Load cache and reset counters"):
cache = casadm.load_cache(error_device)
cache.reset_counters()
with TestRun.step("Run fio against the first core with pattern verification"):
fio = (
Fio()
.create_command()
.target(cores[0])
.io_engine(IoEngine.libaio)
.io_depth(1)
.num_jobs(1)
.size(cache_part.size)
.read_write(ReadWrite.randread)
.block_size(cache_line_size)
.rand_seed(int(cache_part.size.get_value()))
.direct()
.verification_with_pattern("0xabcd")
.do_verify(False)
.continue_on_error(ErrorFilter.io)
)
fio_errors = fio.run()[0].total_errors()
with TestRun.step("Check if fio reported errors"):
if fio_errors == 0:
TestRun.fail("Fio did not report any read errors!")
TestRun.LOGGER.info(f"Number of fio read errors: {fio_errors}")
with TestRun.step("Check the second core I/O traffic"):
core_2_errors_before = IoStats.get_io_stats(cores[1].get_device_id())
sleep(5)
core_2_errors_after = IoStats.get_io_stats(cores[1].get_device_id())
if (
core_2_errors_after.reads > core_2_errors_before.reads
or core_2_errors_after.writes > core_2_errors_before.writes
):
TestRun.fail(f"I/O traffic detected on the second core ({cores[1]})!")
else:
TestRun.LOGGER.info(f"I/O traffic stopped on the second core ({cores[1]})")
with TestRun.step("Check total cache error statistics"):
stats = cache.get_statistics()
total_errors_in_cache = stats.error_stats.cache.total
if total_errors_in_cache == 0:
TestRun.fail(
f"Total errors in cache stats ({total_errors_in_cache}) should be greater than 0!"
)
TestRun.LOGGER.info(f"Total number of I/O errors in cache stats: {total_errors_in_cache}")
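The traffic check above samples the second core's I/O counters twice and fails if they moved. The framework's `IoStats` wraps kernel counters; a sketch of reading such counters from `/proc/diskstats` (field positions follow the kernel's documented layout; the device name is hypothetical):

```python
def completed_ios(diskstats_text, device_name):
    # /proc/diskstats columns: major minor name reads-completed ...
    # field 4 (index 3) = reads completed, field 8 (index 7) = writes completed
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) > 7 and fields[2] == device_name:
            return int(fields[3]), int(fields[7])
    return None

sample = "   8       0 sda 120 0 960 30 45 0 360 12 0 40 42"
print(completed_ios(sample, "sda"))  # (120, 45)
```

Two such samples taken a few seconds apart can then be compared, as the test does, to decide whether traffic has stopped.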
def error_table(start_lba: Size, stop_lba: Size):
return DmTable.uniform_error_table(
start_lba=int(start_lba.get_value(Unit.Blocks512)),
stop_lba=int(stop_lba.get_value(Unit.Blocks512)),
num_error_zones=100,
error_zone_size=Size(5, Unit.Blocks512),
)
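`error_table()` above converts byte sizes into 512-byte LBAs for the device-mapper error table; the conversion itself is just integer division (the sizes below are hypothetical):

```python
SECTOR = 512  # bytes per LBA in dm tables

def to_lba(size_bytes):
    # dm-error tables address the device in 512-byte sectors
    return size_bytes // SECTOR

metadata_bytes = 64 * 1024 * 1024  # hypothetical metadata area + margin
device_bytes = 500 * 1024 * 1024   # hypothetical cache partition size
print(to_lba(metadata_bytes), to_lba(device_bytes))  # 131072 1024000
```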


@@ -1,22 +1,30 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2023-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import posixpath
import pytest
from datetime import timedelta
from api.cas import casadm, casadm_parser, cli
from api.cas.cache_config import CacheMode, CleaningPolicy, CacheModeTrait
from api.cas.casadm_parser import wait_for_flushing
from api.cas.cli import attach_cache_cmd
from connection.utils.output import CmdException
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.disk_utils import Filesystem
from storage_devices.nullblk import NullBlk
from test_tools.dd import Dd
from test_utils import os_utils
from test_utils.os_utils import Udev, DropCachesMode
from test_utils.size import Size, Unit
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_tools.fs_tools import Filesystem, create_random_test_file
from test_tools.os_tools import DropCachesMode, sync, drop_caches
from test_tools.udev import Udev
from type_def.size import Size, Unit
from tests.lazy_writes.recovery.recovery_tests_methods import compare_files
from test_tools import fs_utils
mount_point = "/mnt/cas"
test_file_path = f"{mount_point}/test_file"
@@ -64,8 +72,8 @@ def test_interrupt_core_flush(cache_mode, filesystem):
test_file_md5sum_before = test_file.md5sum()
with TestRun.step("Get number of dirty data on exported object before interruption."):
os_utils.sync()
os_utils.drop_caches(DropCachesMode.ALL)
sync()
drop_caches(DropCachesMode.ALL)
core_dirty_blocks_before = core.get_dirty_blocks()
with TestRun.step("Start flushing core device."):
@@ -148,8 +156,8 @@ def test_interrupt_cache_flush(cache_mode, filesystem):
test_file_md5sum_before = test_file.md5sum()
with TestRun.step("Get number of dirty data on exported object before interruption."):
os_utils.sync()
os_utils.drop_caches(DropCachesMode.ALL)
sync()
drop_caches(DropCachesMode.ALL)
cache_dirty_blocks_before = cache.get_dirty_blocks()
with TestRun.step("Start flushing cache."):
@@ -196,17 +204,17 @@
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_interrupt_core_remove(cache_mode, filesystem):
"""
title: Test if OpenCAS works correctly after core's removing interruption.
title: Core removal interruption.
description: |
Negative test of the ability of OpenCAS to handle core's removing interruption.
Test for proper handling of 'core remove' operation interruption.
pass_criteria:
- No system crash.
- Core would not be removed from cache after interruption.
- Flushing would be stopped after interruption.
- Md5sum are correct during all test steps.
- Checksums are correct during all test steps.
- Dirty blocks quantity after interruption is lower but non-zero.
"""
with TestRun.step("Prepare cache and core."):
with TestRun.step("Prepare cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([cache_size])
cache_part = cache_dev.partitions[0]
@@ -215,37 +223,36 @@ def test_interrupt_core_remove(cache_mode, filesystem):
core_part = core_dev.partitions[0]
for _ in TestRun.iteration(
range(iterations_per_config), f"Reload cache configuration {iterations_per_config} times."
range(iterations_per_config), f"Reload cache configuration {iterations_per_config} times"
):
with TestRun.step("Start cache."):
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_part, cache_mode, force=True)
with TestRun.step("Set cleaning policy to NOP."):
with TestRun.step("Set cleaning policy to NOP"):
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step(f"Add core device with {filesystem} filesystem and mount it."):
with TestRun.step(f"Add core device with {filesystem} filesystem and mount it"):
core_part.create_filesystem(filesystem)
core = cache.add_core(core_part)
core.mount(mount_point)
with TestRun.step(f"Create test file in mount point of exported object."):
with TestRun.step("Create test file in mount point of exported object"):
test_file = create_test_file()
with TestRun.step("Check md5 sum of test file."):
test_file_md5sum_before = test_file.md5sum()
with TestRun.step("Calculate checksum of test file"):
test_file_crc32sum_before = test_file.crc32sum()
with TestRun.step(
"Get number of dirty data on exported object before core removal interruption."
"Get number of dirty data on exported object before core removal interruption"
):
os_utils.sync()
os_utils.drop_caches(DropCachesMode.ALL)
sync()
drop_caches(DropCachesMode.ALL)
cache_dirty_blocks_before = cache.get_dirty_blocks()
with TestRun.step("Unmount core."):
with TestRun.step("Unmount core"):
core.unmount()
with TestRun.step("Start removing core device."):
with TestRun.step("Start removing core"):
flush_pid = TestRun.executor.run_in_background(
cli.remove_core_cmd(str(cache.cache_id), str(core.core_id))
)
@@ -257,42 +264,39 @@ def test_interrupt_core_remove(cache_mode, filesystem):
percentage = casadm_parser.get_flushing_progress(cache.cache_id, core.core_id)
TestRun.executor.run(f"kill -s SIGINT {flush_pid}")
with TestRun.step("Check md5 sum of test file after interruption."):
cache.set_cache_mode(CacheMode.WO)
test_file_md5sum_interrupt = test_file.md5sum()
cache.set_cache_mode(cache_mode)
with TestRun.step(
"Check number of dirty data on exported object after core removal interruption."
"Check number of dirty data on exported object after core removal interruption"
):
cache_dirty_blocks_after = cache.get_dirty_blocks()
if cache_dirty_blocks_after >= cache_dirty_blocks_before:
TestRun.LOGGER.error(
"Quantity of dirty lines after core removal interruption " "should be lower."
"Quantity of dirty lines after core removal interruption should be lower."
)
if int(cache_dirty_blocks_after) == 0:
TestRun.LOGGER.error(
"Quantity of dirty lines after core removal interruption " "should not be zero."
"Quantity of dirty lines after core removal interruption should not be zero."
)
with TestRun.step("Remove core from cache."):
core.remove_core()
with TestRun.step("Mount core and verify test file checksum after interruption"):
core.mount(mount_point)
with TestRun.step("Stop cache."):
if test_file.crc32sum() != test_file_crc32sum_before:
TestRun.LOGGER.error("Checksum after interrupting core removal is different.")
with TestRun.step("Unmount core"):
core.unmount()
with TestRun.step("Stop cache"):
cache.stop()
with TestRun.step("Mount core device."):
with TestRun.step("Mount core device"):
core_part.mount(mount_point)
with TestRun.step("Check md5 sum of test file again."):
if test_file_md5sum_before != test_file.md5sum():
TestRun.LOGGER.error("Md5 sum before interrupting core removal is different.")
with TestRun.step("Verify checksum of test file again"):
if test_file.crc32sum() != test_file_crc32sum_before:
TestRun.LOGGER.error("Checksum after core removal is different.")
is_sum_diff_after_interrupt = test_file_md5sum_interrupt != test_file.md5sum()
if is_sum_diff_after_interrupt:
TestRun.LOGGER.error("Md5 sum after interrupting core removal is different.")
with TestRun.step("Unmount core device."):
with TestRun.step("Unmount core device"):
core_part.unmount()
@@ -314,77 +318,104 @@ def test_interrupt_cache_mode_switch_parametrized(cache_mode, stop_percentage):
- Md5sum are correct during all test steps.
- Dirty blocks quantity after interruption is lower but non-zero.
"""
test_file_size = Size(1, Unit.GibiByte)
test_file_path = "/mnt/cas/test_file"
with TestRun.step("Prepare cache and core."):
cache_part, core_part = prepare()
cache_dev = TestRun.disks["cache"]
core_dev = TestRun.disks["core"]
cache_dev.create_partitions([cache_size])
core_dev.create_partitions([cache_size * 2])
cache_part = cache_dev.partitions[0]
core_part = core_dev.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
for _ in TestRun.iteration(
range(iterations_per_config), f"Reload cache configuration {iterations_per_config} times."
):
with TestRun.step("Start cache."):
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_part, cache_mode, force=True)
with TestRun.step("Set cleaning policy to NOP."):
with TestRun.step("Set cleaning policy to NOP"):
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step(f"Add core device."):
with TestRun.step("Add core device"):
core = cache.add_core(core_part)
with TestRun.step(f"Create test file in mount point of exported object."):
test_file_size = Size(1024, Unit.MebiByte)
test_file = fs_utils.create_random_test_file(test_file_path, test_file_size)
with TestRun.step("Create test file in mount point of exported object"):
test_file = create_random_test_file(test_file_path, test_file_size)
with TestRun.step("Check md5 sum of test file."):
with TestRun.step("Calculate md5sum of test file"):
test_file_md5_before = test_file.md5sum()
with TestRun.step("Export file to CAS"):
Dd().block_size(test_file_size).input(test_file.full_path).output(core.path).oflag(
"direct"
).run()
with TestRun.step("Copy test data to core"):
dd = (
Dd()
.block_size(test_file_size)
.input(test_file.full_path)
.output(core.path)
.oflag("direct")
)
dd.run()
with TestRun.step("Get number of dirty data on exported object before interruption."):
os_utils.sync()
os_utils.drop_caches(DropCachesMode.ALL)
with TestRun.step("Get number of dirty data on exported object before interruption"):
sync()
drop_caches(DropCachesMode.ALL)
cache_dirty_blocks_before = cache.get_dirty_blocks()
with TestRun.step("Start switching cache mode."):
with TestRun.step("Start switching cache mode"):
flush_pid = TestRun.executor.run_in_background(
cli.set_cache_mode_cmd(
str(CacheMode.DEFAULT.name.lower()), str(cache.cache_id), "yes"
cache_mode=str(CacheMode.DEFAULT.name.lower()),
cache_id=str(cache.cache_id),
flush_cache="yes",
)
)
with TestRun.step("Send interruption signal."):
with TestRun.step("Kill flush process during cache flush operation"):
wait_for_flushing(cache, core)
percentage = casadm_parser.get_flushing_progress(cache.cache_id, core.core_id)
while percentage < stop_percentage:
percentage = casadm_parser.get_flushing_progress(cache.cache_id, core.core_id)
TestRun.executor.run(f"kill -s SIGINT {flush_pid}")
TestRun.executor.kill_process(flush_pid)
with TestRun.step("Check number of dirty data on exported object after interruption."):
with TestRun.step("Check number of dirty data on exported object after interruption"):
cache_dirty_blocks_after = cache.get_dirty_blocks()
if cache_dirty_blocks_after >= cache_dirty_blocks_before:
TestRun.LOGGER.error(
"Quantity of dirty lines after cache mode switching "
"interruption should be lower."
)
if int(cache_dirty_blocks_after) == 0:
if cache_dirty_blocks_after == Size.zero():
TestRun.LOGGER.error(
"Quantity of dirty lines after cache mode switching "
"interruption should not be zero."
)
with TestRun.step("Check cache mode."):
with TestRun.step("Check cache mode"):
if cache.get_cache_mode() != cache_mode:
TestRun.LOGGER.error("Cache mode should remain the same.")
with TestRun.step("Unmount core and stop cache."):
with TestRun.step("Stop cache"):
cache.stop()
with TestRun.step("Check md5 sum of test file again."):
Dd().block_size(test_file_size).input(core.path).output(test_file.full_path).oflag(
"direct"
).run()
with TestRun.step("Copy test data from the exported object to a file"):
dd = (
Dd()
.block_size(test_file_size)
.input(core.path)
.output(test_file.full_path)
.oflag("direct")
)
dd.run()
with TestRun.step("Compare md5 sum of test files"):
target_file_md5 = test_file.md5sum()
compare_files(test_file_md5_before, target_file_md5)
@@ -423,11 +454,11 @@ def test_interrupt_cache_stop(cache_mode, filesystem):
core.mount(mount_point)
with TestRun.step(f"Create test file in mount point of exported object."):
test_file = create_test_file()
create_test_file()
with TestRun.step("Get number of dirty data on exported object before interruption."):
os_utils.sync()
os_utils.drop_caches(DropCachesMode.ALL)
sync()
drop_caches(DropCachesMode.ALL)
cache_dirty_blocks_before = cache.get_dirty_blocks()
with TestRun.step("Unmount core."):
@@ -464,6 +495,144 @@ def test_interrupt_cache_stop(cache_mode, filesystem):
core_part.unmount()
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
def test_interrupt_attach(cache_mode):
"""
title: Test for attach interruption.
description: Validate handling interruption of cache attach.
pass_criteria:
- No system crash during attach interruption.
- Cache attach completed successfully.
"""
with TestRun.step("Prepare cache and core devices"):
nullblk = NullBlk.create(size_gb=1500)
cache_dev = nullblk[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Start attaching cache in background"):
cache_attach_pid = TestRun.executor.run_in_background(
attach_cache_cmd(
cache_id=str(cache.cache_id),
cache_dev=cache_dev.path
)
)
with TestRun.step("Try to interrupt cache attaching"):
TestRun.executor.kill_process(cache_attach_pid)
with TestRun.step("Wait for cache attach to end"):
TestRun.executor.wait_cmd_finish(
cache_attach_pid, timeout=timedelta(minutes=10)
)
with TestRun.step("Verify if cache attach ended successfully"):
caches = casadm_parser.get_caches()
if len(caches) != 1:
TestRun.fail(f"Wrong amount of caches: {len(caches)}, expected: 1")
if caches[0].cache_device.path == cache_dev.path:
TestRun.LOGGER.info("Operation ended successfully")
else:
TestRun.fail(
"Cache attaching failed; "
"expected behaviour: attach completed successfully, "
"actual behaviour: attach interrupted"
)
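The interrupt-during-background-operation pattern these tests repeat (start a casadm command in the background, let it make progress, send SIGINT, then verify the operation's outcome) can be sketched outside the framework. The child process below is a stand-in for the real casadm binary, not the framework's API:

```python
import signal
import subprocess
import time

# Minimal stand-in for the pattern above: a background "operation" that
# ignores SIGINT during its critical phase (as attach is expected to),
# which we interrupt part-way and then wait on to verify it completed.
proc = subprocess.Popen(
    [
        "python3", "-c",
        "import signal, time; "
        "signal.signal(signal.SIGINT, signal.SIG_IGN); "
        "time.sleep(1); "
        "print('done')",
    ],
    stdout=subprocess.PIPE,
    text=True,
)

time.sleep(0.2)                   # let the operation make some progress
proc.send_signal(signal.SIGINT)   # the interruption attempt
out, _ = proc.communicate(timeout=10)

assert proc.returncode == 0       # operation survived the interrupt
assert out.strip() == "done"
```

The real tests additionally poll flushing progress before sending the signal, so the interrupt lands mid-operation rather than before it starts.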
@pytest.mark.parametrizex("filesystem", Filesystem)
@pytest.mark.parametrizex("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_detach_interrupt_cache_flush(filesystem, cache_mode):
"""
title: Test for flush interruption using cache detach operation.
description: Validate handling detach during cache flush.
pass_criteria:
- No system crash.
- Detach operation doesn't stop cache flush.
"""
with TestRun.step("Prepare cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(5, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step(f"Add core device with {filesystem} filesystem and mount it"):
core_dev.create_filesystem(filesystem)
core = cache.add_core(core_dev)
core.mount(mount_point)
with TestRun.step("Populate cache with dirty data"):
fio = (
Fio()
.create_command()
.size(Size(4, Unit.GibiByte))
.read_write(ReadWrite.randrw)
.io_engine(IoEngine.libaio)
.block_size(Size(1, Unit.Blocks4096))
.target(posixpath.join(mount_point, "test_file"))
)
fio.run()
if cache.get_dirty_blocks() <= Size.zero():
TestRun.fail("Failed to populate cache with dirty data")
if core.get_dirty_blocks() <= Size.zero():
TestRun.fail("There is no dirty data on core")
with TestRun.step("Start flushing cache"):
flush_pid = TestRun.executor.run_in_background(
cli.flush_cache_cmd(str(cache.cache_id))
)
with TestRun.step("Interrupt cache flushing by cache detach"):
wait_for_flushing(cache, core)
percentage = casadm_parser.get_flushing_progress(cache.cache_id, core.core_id)
while percentage < 50:
percentage = casadm_parser.get_flushing_progress(
cache.cache_id, core.core_id
)
with TestRun.step("Detach cache"):
try:
cache.detach()
TestRun.fail("Cache detach during flush succeeded, expected failure")
except CmdException:
TestRun.LOGGER.info(
"Cache detach during flush failed, as expected"
)
TestRun.executor.wait_cmd_finish(flush_pid)
cache.detach()
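The busy-wait on `get_flushing_progress` used above is a generic poll-until-threshold loop. A hypothetical standalone helper (names are illustrative, not framework API) might look like:

```python
import time

# Hypothetical helper mirroring the progress-polling loops above:
# repeatedly sample a progress callback until it crosses a threshold,
# or fail after a timeout instead of spinning forever.
def wait_for_progress(get_progress, threshold, timeout=60.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        progress = get_progress()
        if progress >= threshold:
            return progress
        time.sleep(interval)
    raise TimeoutError(f"progress stuck below {threshold}")

# Usage with a fake progress source counting up in steps of 10:
counter = iter(range(0, 101, 10))
p = wait_for_progress(lambda: next(counter), 50)
assert p == 50
```

Unlike the raw `while percentage < stop_percentage:` loops in the diff, this variant bounds the wait, which avoids hanging the test if flushing stalls.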
def prepare():
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([cache_size])
@@ -482,7 +651,7 @@ def create_test_file():
bs = Size(512, Unit.KibiByte)
cnt = int(cache_size.value / bs.value)
test_file = File.create_file(test_file_path)
dd = Dd().output(test_file_path).input("/dev/zero").block_size(bs).count(cnt)
dd = Dd().output(test_file_path).input("/dev/zero").block_size(bs).count(cnt).oflag("direct")
dd.run()
test_file.refresh_item()
return test_file
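`create_test_file` composes its dd invocation through the framework's chainable `Dd` builder. A stripped-down sketch of that pattern (a hypothetical class, not the real `test_tools.dd.Dd`) is:

```python
# Hypothetical minimal version of the chainable command builder used by
# create_test_file above; each setter records an option and returns self,
# so calls can be chained in any order before rendering the command line.
class DdSketch:
    def __init__(self):
        self._opts = {}
        self._oflags = []

    def input(self, path):
        self._opts["if"] = path
        return self

    def output(self, path):
        self._opts["of"] = path
        return self

    def block_size(self, bs):
        self._opts["bs"] = bs
        return self

    def count(self, n):
        self._opts["count"] = n
        return self

    def oflag(self, flag):
        self._oflags.append(flag)
        return self

    def __str__(self):
        parts = ["dd"] + [f"{k}={v}" for k, v in self._opts.items()]
        if self._oflags:
            parts.append("oflag=" + ",".join(self._oflags))
        return " ".join(parts)

cmd = str(
    DdSketch()
    .output("/mnt/cas/test_file")
    .input("/dev/zero")
    .block_size("512K")
    .count(100)
    .oflag("direct")
)
assert cmd == "dd of=/mnt/cas/test_file if=/dev/zero bs=512K count=100 oflag=direct"
```

The `oflag("direct")` added in the diff makes dd bypass the page cache, so the written data actually reaches the exported object instead of lingering dirty in memory.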


@@ -12,8 +12,8 @@ from api.cas.core import Core
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from test_tools.dd import Dd
from test_utils.os_utils import Udev, sync
from test_utils.size import Size, Unit
from test_tools.udev import Udev
from type_def.size import Size, Unit
block_size = Size(1, Unit.Blocks4096)
@@ -289,7 +289,7 @@ def test_one_core_fail_dirty():
with TestRun.step("Check if core device is really out of cache."):
output = str(casadm.list_caches().stdout.splitlines())
if core_part1.path in output:
TestRun.exception("The first core device should be unplugged!")
TestRun.LOGGER.exception("The first core device should be unplugged!")
with TestRun.step("Verify that I/O to the remaining cores does not insert to cache"):
dd_builder(cache_mode, core2, Size(100, Unit.MebiByte)).run()


@@ -1,5 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -7,14 +8,12 @@ import pytest
from api.cas import casadm, casadm_parser, cli, cli_messages
from api.cas.cache_config import CacheMode, CleaningPolicy, CacheModeTrait
from test_tools.fs_tools import create_random_test_file
from test_tools.udev import Udev
from tests.lazy_writes.recovery.recovery_tests_methods import copy_file, compare_files
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools import fs_utils
from test_tools.dd import Dd
from test_tools.disk_utils import Filesystem
from test_utils import os_utils
from test_utils.size import Size, Unit
from type_def.size import Size, Unit
mount_point = "/mnt/cas"
test_file_path = f"{mount_point}/test_file"
@@ -48,7 +47,7 @@ def test_stop_no_flush_load_cache(cache_mode):
with TestRun.step(f"Create test file in mount point of exported object and check its md5 sum."):
test_file_size = Size(48, Unit.MebiByte)
test_file = fs_utils.create_random_test_file(test_file_path, test_file_size)
test_file = create_random_test_file(test_file_path, test_file_size)
test_file_md5_before = test_file.md5sum()
copy_file(source=test_file.full_path, target=core.path, size=test_file_size,
direct="oflag")
@@ -101,5 +100,5 @@ def prepare():
core_dev = TestRun.disks['core']
core_dev.create_partitions([Size(2, Unit.GibiByte)])
core_part = core_dev.partitions[0]
os_utils.Udev.disable()
Udev.disable()
return cache_part, core_part


@@ -5,22 +5,17 @@
#
import pytest
from collections import namedtuple
import random
from api.cas import casadm
from api.cas import dmesg
from api.cas.cli import casadm_bin
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.size import Size, Unit
from api.cas.cli_messages import check_stderr_msg, missing_param, disallowed_param
from type_def.size import Size, Unit
from api.cas.cache_config import CacheLineSize, CacheMode
from api.cas.cli import standby_activate_cmd, standby_load_cmd
from api.cas.dmesg import get_md_section_size
from api.cas.ioclass_config import IoClass
from test_tools.dd import Dd
from test_utils.os_utils import sync
from test_tools.os_tools import sync
from test_utils.filesystem.file import File


@@ -6,13 +6,13 @@
import pytest
from api.cas import cli, casadm
from api.cas import casadm
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from test_utils.size import Size, Unit
from api.cas.cache_config import CacheLineSize, CacheMode, CacheStatus
from type_def.size import Size, Unit
from api.cas.cache_config import CacheLineSize, CacheStatus
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import get_core_info_by_path
from api.cas.casadm_parser import get_core_info_for_cache_by_path
from api.cas.core import CoreStatus, Core
from test_tools.dd import Dd
from api.cas.cli import standby_activate_cmd
@@ -173,7 +173,11 @@ def test_activate_incomplete_cache():
TestRun.fail(f"Expected one inactive core. Got {inactive_core_count}")
with TestRun.step("Check if core is in an appropriate state"):
core_status = CoreStatus[get_core_info_by_path(core_dev_path)["status"].lower()]
core_status = CoreStatus[
get_core_info_for_cache_by_path(
core_disk_path=core_dev_path, target_cache_id=cache.cache_id
)["status"].lower()
]
if core_status != CoreStatus.inactive:
TestRun.fail(
"The core is in an invalid state. "


@@ -1,5 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -8,15 +9,16 @@ import pytest
from api.cas import casadm, casadm_parser, cli, cli_messages
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_utils.size import Size, Unit
from test_tools.disk_tools import get_block_size, create_partitions
from test_tools.fs_tools import Filesystem, create_random_test_file, check_if_file_exists
from test_utils.filesystem.file import File
from test_utils.filesystem.symlink import Symlink
from type_def.size import Size, Unit
mount_point = "/mnt/cas"
mount_point, mount_point2 = "/mnt/cas", "/mnt/cas2"
test_file_path = f"{mount_point}/test_file"
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_load_cache_with_mounted_core():
@@ -44,7 +46,7 @@ def test_load_cache_with_mounted_core():
core.mount(mount_point)
with TestRun.step(f"Create test file in mount point of exported object and check its md5 sum."):
test_file = fs_utils.create_random_test_file(test_file_path)
test_file = create_random_test_file(test_file_path)
test_file_md5_before = test_file.md5sum()
with TestRun.step("Unmount core device."):
@@ -79,6 +81,7 @@ def test_load_cache_with_mounted_core():
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.require_disk("core2", DiskTypeLowerThan("cache"))
def test_stop_cache_with_mounted_partition():
"""
title: Fault injection test for removing core and stopping cache with mounted core.
@@ -89,6 +92,83 @@ def test_stop_cache_with_mounted_partition():
- No system crash.
- Unable to stop cache when partition is mounted.
- Unable to remove core when partition is mounted.
- casadm displays proper message.
"""
with TestRun.step("Prepare cache device."):
cache_dev = TestRun.disks['cache']
cache_dev.create_partitions([Size(1, Unit.GibiByte)])
cache_part = cache_dev.partitions[0]
with TestRun.step("Prepare 2 core devices."):
core_dev, core_dev2 = TestRun.disks['core'], TestRun.disks['core2']
with TestRun.step("Start cache."):
cache = casadm.start_cache(cache_part, force=True)
with TestRun.step("Add core devices to cache."):
core = cache.add_core(core_dev)
core2 = cache.add_core(core_dev2)
with TestRun.step("Create partitions on one exported object."):
core.block_size = Size(get_block_size(core.get_device_id()))
create_partitions(core, 2 * [Size(4, Unit.GibiByte)])
fs_part = core.partitions[0]
with TestRun.step("Create xfs filesystems on one exported object partition "
"and on the non-partitioned exported object."):
fs_part.create_filesystem(Filesystem.xfs)
core2.create_filesystem(Filesystem.xfs)
with TestRun.step("Mount created filesystems."):
fs_part.mount(mount_point)
core2.mount(mount_point2)
with TestRun.step("Ensure /etc/mtab exists."):
if not check_if_file_exists("/etc/mtab"):
Symlink.create_symlink("/proc/self/mounts", "/etc/mtab")
with TestRun.step("Try to remove the core with partitions from cache."):
output = TestRun.executor.run_expect_fail(cli.remove_core_cmd(cache_id=str(cache.cache_id),
core_id=str(core.core_id)))
messages = cli_messages.remove_mounted_core.copy()
messages.append(fs_part.path)
cli_messages.check_stderr_msg(output, messages)
with TestRun.step("Try to remove the core without partitions from cache."):
output = TestRun.executor.run_expect_fail(cli.remove_core_cmd(cache_id=str(cache.cache_id),
core_id=str(core2.core_id)))
messages = cli_messages.remove_mounted_core.copy()
messages.append(core2.path)
cli_messages.check_stderr_msg(output, messages)
with TestRun.step("Try to stop CAS."):
output = TestRun.executor.run_expect_fail(cli.stop_cmd(cache_id=str(cache.cache_id)))
messages = cli_messages.stop_cache_mounted_core.copy()
messages.append(fs_part.path)
messages.append(core2.path)
cli_messages.check_stderr_msg(output, messages)
with TestRun.step("Unmount core devices."):
fs_part.unmount()
core2.unmount()
with TestRun.step("Stop cache."):
casadm.stop_all_caches()
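The /etc/mtab fallback above (create a symlink to the kernel mount table when the file is missing, so casadm's mount check keeps working) can be sketched as a small helper. `ensure_mtab` is a hypothetical name, and the demo targets a temp directory instead of the real /etc:

```python
import os
import tempfile

# Hypothetical helper mirroring the step above: if the mtab file is
# absent, create it as a symlink to the kernel's mount table so tools
# that parse mtab can still enumerate mounted filesystems.
def ensure_mtab(mtab_path, target="/proc/self/mounts"):
    if not os.path.lexists(mtab_path):
        os.symlink(target, mtab_path)
    return mtab_path

# Demo against a temp dir rather than the real /etc:
tmp = tempfile.mkdtemp()
mtab = ensure_mtab(os.path.join(tmp, "mtab"))
assert os.path.islink(mtab)
assert os.readlink(mtab) == "/proc/self/mounts"
```

Using `os.path.lexists` (rather than `exists`) means an already-present but dangling symlink is left alone instead of triggering a second `symlink` call that would raise `FileExistsError`.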
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_stop_cache_with_mounted_partition_no_mtab():
"""
title: Test for removing core and stopping cache when casadm is unable to check mounts.
description: |
Negative test of the ability of CAS to remove core and stop cache while core
is still mounted and casadm is unable to check mounts.
pass_criteria:
- No system crash.
- Unable to stop cache when partition is mounted.
- Unable to remove core when partition is mounted.
- casadm displays proper message informing that mount check was performed by kernel module.
"""
with TestRun.step("Prepare cache and core devices. Start CAS."):
cache_dev = TestRun.disks['cache']
@@ -104,18 +184,32 @@ def test_stop_cache_with_mounted_partition():
core = cache.add_core(core_part)
core.mount(mount_point)
with TestRun.step("Move /etc/mtab"):
if check_if_file_exists("/etc/mtab"):
mtab = File("/etc/mtab")
else:
mtab = Symlink.create_symlink("/proc/self/mounts", "/etc/mtab")
mtab.move("/tmp")
with TestRun.step("Try to remove core from cache."):
output = TestRun.executor.run_expect_fail(cli.remove_core_cmd(cache_id=str(cache.cache_id),
core_id=str(core.core_id)))
cli_messages.check_stderr_msg(output, cli_messages.remove_mounted_core)
cli_messages.check_stderr_msg(output, cli_messages.remove_mounted_core_kernel)
with TestRun.step("Try to stop CAS."):
output = TestRun.executor.run_expect_fail(cli.stop_cmd(cache_id=str(cache.cache_id)))
cli_messages.check_stderr_msg(output, cli_messages.stop_cache_mounted_core)
cli_messages.check_stderr_msg(output, cli_messages.stop_cache_mounted_core_kernel)
with TestRun.step("Unmount core device."):
core.unmount()
with TestRun.step("Stop cache."):
casadm.stop_all_caches()
with TestRun.step("Remove core."):
core.remove_core()
with TestRun.step("Re-add core."):
cache.add_core(core_part)
with TestRun.step("Stop cache."):
cache.stop()
mtab.move("/etc")


@@ -1,17 +1,17 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import time
import pytest
from api.cas import casadm, cli, cli_messages
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from connection.utils.output import CmdException
from type_def.size import Size, Unit
log_path = "/var/log/opencas.log"
wait_long_time = 180
