Compare commits


3 Commits

Author SHA1 Message Date
02a209aff8 [test] Add unit tests for ACP and ALRU cleaning policy parameters
- Add tests for wake-up and flush-max-buffers parameter validation
- Test parameter boundary values and error conditions
- Cover parameter parsing from configuration strings
- Verify set_param_cleaning_policy command construction
- Test proper handling of different cleaning policies in configure_cache

These tests ensure the proper validation and handling of cleaning policy
parameters introduced in the previous commits.
2025-04-17 14:21:47 +08:00
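The boundary values exercised by these tests imply a simple range check per policy. The sketch below is hypothetical — the bounds are inferred only from the parametrized case names further down this page (e.g. `acp` with `wake_up=10001` → `ValueError`, `alru` with `wake_up=3601` → `ValueError`, `flush_max_buffers=0` → `ValueError`), not taken from the actual `opencas.py` implementation:

```python
# Hypothetical validation sketch; bounds inferred from the test
# parametrizations on this page, not from the real implementation.
CLEANING_POLICY_BOUNDS = {
    "acp":  {"wake_up": (0, 10000), "flush_max_buffers": (1, 10000)},
    "alru": {"wake_up": (0, 3600),  "flush_max_buffers": (1, 10000)},
}

def validate_cleaning_params(policy, wake_up, flush_max_buffers):
    bounds = CLEANING_POLICY_BOUNDS.get(policy)
    if bounds is None:
        # e.g. "nop": parameters are ignored, any value passes
        return
    for name, value in (("wake_up", wake_up),
                        ("flush_max_buffers", flush_max_buffers)):
        low, high = bounds[name]
        try:
            value = int(value)
        except (TypeError, ValueError):
            raise ValueError(f"{name} must be an integer") from None
        if not low <= value <= high:
            raise ValueError(f"{name} must be in [{low}, {high}]")
```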
09764e0e15 [fix] Enhance cache cleaning policy configuration
- Add set_param_cleaning_policy method for ACP and ALRU policies
- Add validation for wake-up and flush-max-buffers parameters
- Improve cache configuration to handle different cleaning policies
- Fix casctl stop with flush option for proper shutdown
2025-04-16 11:05:32 +08:00
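A helper like `set_param_cleaning_policy` presumably assembles a `casadm --set-param` invocation. The following is a hedged sketch only: the flag names mirror casadm's documented `--set-param --name cleaning-acp/cleaning-alru` interface, but the function name, signature, and exact argument order here are assumptions, not verified against this commit:

```python
# Hypothetical command builder; flag names assumed from casadm's
# documented --set-param interface, not read from this commit.
def set_param_cleaning_policy_cmd(cache_id, policy,
                                  wake_up=None, flush_max_buffers=None):
    if policy not in ("acp", "alru"):
        raise ValueError("only acp and alru take cleaning parameters")
    cmd = ["casadm", "--set-param", "--name", f"cleaning-{policy}",
           "--cache-id", str(cache_id)]
    if wake_up is not None:
        cmd += ["--wake-up", str(wake_up)]
    if flush_max_buffers is not None:
        cmd += ["--flush-max-buffers", str(flush_max_buffers)]
    return cmd
```

Such a builder keeps the argv as a list (no shell interpolation), which is the usual pattern when the command is later passed to `subprocess.run`.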
cbcb8bab74 [feat] Tuning opencas for PiiOS
- disable modules reload during installation
- flush at shutdown
- TODO: tune flush parameters
2025-04-15 13:30:33 +02:00
230 changed files with 2834 additions and 5933 deletions

View File

@@ -1,28 +0,0 @@
--max-line-length=80
--no-tree
--ignore AVOID_BUG
--ignore COMMIT_MESSAGE
--ignore FILE_PATH_CHANGES
--ignore PREFER_PR_LEVEL
--ignore SPDX_LICENSE_TAG
--ignore SPLIT_STRING
--ignore MEMORY_BARRIER
--exclude .github
--exclude casadm
--exclude configure.d
--exclude doc
--exclude ocf
--exclude test
--exclude tools
--exclude utils
--exclude .gitignore
--exclude .gitmodules
--exclude .pep8speaks.yml
--exclude LICENSE
--exclude Makefile
--exclude README.md
--exclude configure
--exclude requirements.txt
--exclude version

1
.gitattributes vendored
View File

@@ -1 +0,0 @@
test/** -linguist-detectable

View File

@@ -1,15 +0,0 @@
name: checkpatch review
on: [pull_request]
jobs:
my_review:
name: checkpatch review
runs-on: ubuntu-latest
steps:
- name: 'Calculate PR commits + 1'
run: echo "PR_FETCH_DEPTH=$(( ${{ github.event.pull_request.commits }} + 1 ))" >> $GITHUB_ENV
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: ${{ env.PR_FETCH_DEPTH }}
- name: Run checkpatch review
uses: webispy/checkpatch-action@v9

7
.gitignore vendored
View File

@@ -11,15 +11,8 @@
tags
Module.symvers
Module.markers
*.mod
*.mod.c
*.out
modules.order
__pycache__/
*.py[cod]
*$py.class
*.gz
casadm/casadm
modules/include/ocf
modules/generated_defines.h

4
.gitmodules vendored
View File

@@ -1,6 +1,6 @@
[submodule "ocf"]
path = ocf
url = https://github.com/Open-CAS/ocf.git
url = https://git.piicloud.cn/github/ocf.git
[submodule "test/functional/test-framework"]
path = test/functional/test-framework
url = https://github.com/Open-CAS/test-framework.git
url = https://git.piicloud.cn/github/opencas-test-framework.git

77
README-TD.md Normal file
View File

@@ -0,0 +1,77 @@
# Test Cas_Config
```
python3 -m venv test_env
source test_env/bin/activate
pip3 install pytest
pytest test/utils_tests/opencas-py-tests/test_cas_config_01.py -vv
```
```shell
pytest test/utils_tests/opencas-py-tests/test_cas_config_01.py -vv
================================================================================== test session starts ==================================================================================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0 -- /root/workspace/git/open-cas-linux/open-cas_env/bin/python3
cachedir: .pytest_cache
rootdir: /root/workspace/git/open-cas-linux
collected 56 items
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_exception PASSED [ 1%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_no_vertag PASSED [ 3%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_comments_only PASSED [ 5%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config0-cores_config0-ConflictingConfigException] PASSED [ 7%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config1-cores_config1-ConflictingConfigException] PASSED [ 8%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config2-cores_config2-ConflictingConfigException] PASSED [ 10%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config3-cores_config3-AlreadyConfiguredException] PASSED [ 12%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config4-cores_config4-AlreadyConfiguredException] PASSED [ 14%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config5-cores_config5-KeyError] PASSED [ 16%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_inconsistent_configs[caches_config6-cores_config6-ConflictingConfigException] PASSED [ 17%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_is_empty_non_empty PASSED [ 19%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_double_add_cache PASSED [ 21%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_double_add_core PASSED [ 23%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_insert_core_no_cache PASSED [ 25%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_add_same_cache_symlinked_01 PASSED [ 26%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_add_same_cache_symlinked_02 PASSED [ 28%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_add_same_core_symlinked_01 PASSED [ 30%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_add_same_core_symlinked_02 PASSED [ 32%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_get_by_id_path_not_found PASSED [ 33%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_to_file[caches_config0-cores_config0] PASSED [ 35%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_to_file[caches_config1-cores_config1] PASSED [ 37%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_to_file[caches_config2-cores_config2] PASSED [ 39%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_to_file[caches_config3-cores_config3] PASSED [ 41%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_to_file[caches_config4-cores_config4] PASSED [ 42%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_insert_cache_insert_core_to_file[caches_config0-cores_config0] PASSED [ 44%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_insert_cache_insert_core_to_file[caches_config1-cores_config1] PASSED [ 46%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_insert_cache_insert_core_to_file[caches_config2-cores_config2] PASSED [ 48%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cas_config_from_file_insert_cache_insert_core_to_file[caches_config3-cores_config3] PASSED [ 50%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-100-500-None] PASSED [ 51%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-0-1-None] PASSED [ 53%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-10000-9999-None] PASSED [ 55%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp--1-500-ValueError] PASSED [ 57%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-10001-500-ValueError] PASSED [ 58%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-abc-500-ValueError] PASSED [ 60%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-100-0-ValueError] PASSED [ 62%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-100-10001-ValueError] PASSED [ 64%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[acp-100-abc-ValueError] PASSED [ 66%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-100-500-None] PASSED [ 67%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-0-1-None] PASSED [ 69%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-3599-10000-None] PASSED [ 71%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru--1-500-ValueError] PASSED [ 73%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-3601-500-ValueError] PASSED [ 75%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-abc-500-ValueError] PASSED [ 76%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-100-0-ValueError] PASSED [ 78%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-100-10001-ValueError] PASSED [ 80%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[alru-100-abc-ValueError] PASSED [ 82%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_cleaning_policy_parameters[nop-100-500-None] PASSED [ 83%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=acp,wake_up=100,flush_max_buffers=500-None] PASSED [ 85%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=alru,wake_up=60,flush_max_buffers=100-None] PASSED [ 87%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=acp,wake_up=10001,flush_max_buffers=500-ValueError] PASSED [ 89%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=acp,wake_up=100,flush_max_buffers=0-ValueError] PASSED [ 91%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=alru,wake_up=3601,flush_max_buffers=100-ValueError] PASSED [ 92%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=alru,wake_up=100,flush_max_buffers=10001-ValueError] PASSED [ 94%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_cache_config_from_line_with_cleaning_parameters[1 /dev/dummy WT cleaning_policy=nop,wake_up=100,flush_max_buffers=500-None] PASSED [ 96%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_set_param_cleaning_policy PASSED [ 98%]
test/utils_tests/opencas-py-tests/test_cas_config_01.py::test_configure_cache_cleaning_policy PASSED [100%]
================================================================================== 56 passed in 0.14s ===================================================================================
```

View File

@@ -25,14 +25,14 @@ Open CAS uses Safe string library (safeclib) that is MIT licensed.
We recommend using the latest version, which contains all the important fixes
and performance improvements. Bugfix releases are guaranteed only for the
latest major release line (currently 24.9.x).
latest major release line (currently 22.6.x).
To download the latest Open CAS Linux release run following commands:
```
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v24.9/open-cas-linux-24.09.0.0900.release.tar.gz
tar -xf open-cas-linux-24.09.0.0900.release.tar.gz
cd open-cas-linux-24.09.0.0900.release/
wget https://github.com/Open-CAS/open-cas-linux/releases/download/v22.6.3/open-cas-linux-22.06.3.0725.release.tar.gz
tar -xf open-cas-linux-22.06.3.0725.release.tar.gz
cd open-cas-linux-22.06.3.0725.release/
```
Alternatively, if you want recent development (unstable) version, you can clone GitHub repository:

View File

@@ -1,6 +1,5 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -122,7 +121,7 @@ LDFLAGS = -z noexecstack -z relro -z now -pie -pthread -lm
# Targets
#
all: sync manpage
all: sync
$(MAKE) build
build: $(TARGETS)
@@ -157,14 +156,10 @@ endif
-include $(addprefix $(OBJDIR),$(OBJS:.o=.d))
manpage:
gzip -k -f $(TARGET).8
clean:
@echo " CLEAN "
@rm -f *.a $(TARGETS)
@rm -f $(shell find -name \*.d) $(shell find -name \*.o)
@rm -f $(TARGET).8.gz
distclean: clean
@@ -173,12 +168,11 @@ install: install_files
install_files:
@echo "Installing casadm"
@install -m 755 -D $(TARGET) $(DESTDIR)$(BINARY_PATH)/$(TARGET)
@install -m 644 -D $(TARGET).8.gz $(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz
@mandb -q
@install -m 644 -D $(TARGET).8 $(DESTDIR)/usr/share/man/man8/$(TARGET).8
uninstall:
@echo "Uninstalling casadm"
$(call remove-file,$(DESTDIR)$(BINARY_PATH)/$(TARGET))
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8.gz)
$(call remove-file,$(DESTDIR)/usr/share/man/man8/$(TARGET).8)
.PHONY: clean distclean all sync build install uninstall

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -45,8 +45,8 @@
#define CORE_ADD_MAX_TIMEOUT 30
bool device_mounts_detected(const char *pattern, int cmplen);
void print_mounted_devices(const char *pattern, int cmplen);
int is_cache_mounted(int cache_id);
int is_core_mounted(int cache_id, int core_id);
/* KCAS_IOCTL_CACHE_CHECK_DEVICE wrapper */
int _check_cache_device(const char *device_path,
@@ -70,7 +70,7 @@ static const char *core_states_name[] = {
#define STANDBY_DETACHED_STATE "Standby detached"
#define CACHE_STATE_LENGTH 20
#define CACHE_STATE_LENGHT 20
#define CAS_LOG_FILE "/var/log/opencas.log"
#define CAS_LOG_LEVEL LOG_INFO
@@ -1025,22 +1025,6 @@ static int _start_cache(uint16_t cache_id, unsigned int cache_init,
cache_device);
} else {
print_err(cmd.ext_err_code);
if (OCF_ERR_METADATA_FOUND == cmd.ext_err_code) {
/* print instructions specific for start/attach */
if (start) {
cas_printf(LOG_ERR,
"Please load cache metadata using --load"
" option or use --force to\n discard on-disk"
" metadata and start fresh cache instance.\n"
);
} else {
cas_printf(LOG_ERR,
"Please attach another device or use --force"
" to discard on-disk metadata\n"
" and attach this device to cache instance.\n"
);
}
}
}
return FAILURE;
}
@@ -1135,16 +1119,8 @@ int stop_cache(uint16_t cache_id, int flush)
int status;
/* Don't stop instance with mounted filesystem */
int cmplen = 0;
char pattern[80];
/* verify if any core (or core partition) for this cache is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-", cache_id) - 1;
if (device_mounts_detected(pattern, cmplen)) {
cas_printf(LOG_ERR, "Can't stop cache instance %d due to mounted devices:\n", cache_id);
print_mounted_devices(pattern, cmplen);
if (is_cache_mounted(cache_id) == FAILURE)
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
@@ -1827,52 +1803,58 @@ int add_core(unsigned int cache_id, unsigned int core_id, const char *core_devic
return SUCCESS;
}
bool device_mounts_detected(const char *pattern, int cmplen)
int _check_if_mounted(int cache_id, int core_id)
{
FILE *mtab;
struct mntent *mstruct;
int no_match = 0, error = 0;
char dev_buf[80];
int difference = 0, error = 0;
if (core_id >= 0) {
/* verify if specific core is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-%d", cache_id, core_id);
} else {
/* verify if any core from given cache is mounted */
snprintf(dev_buf, sizeof(dev_buf), "/dev/cas%d-", cache_id);
}
mtab = setmntent("/etc/mtab", "r");
if (!mtab) {
/* if /etc/mtab not found then the kernel will check for mounts */
return false;
if (!mtab)
{
cas_printf(LOG_ERR, "Error while accessing /etc/mtab\n");
return FAILURE;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
error = strcmp_s(mstruct->mnt_fsname, PATH_MAX, dev_buf, &difference);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK)
return false;
if (no_match)
continue;
return true;
return FAILURE;
if (!difference) {
if (core_id<0) {
cas_printf(LOG_ERR,
"Can't stop cache instance %d. Device %s is mounted!\n",
cache_id, mstruct->mnt_fsname);
} else {
cas_printf(LOG_ERR,
"Can't remove core %d from cache %d."
" Device %s is mounted!\n",
core_id, cache_id, mstruct->mnt_fsname);
}
return FAILURE;
}
}
return SUCCESS;
return false;
}
void print_mounted_devices(const char *pattern, int cmplen)
int is_cache_mounted(int cache_id)
{
FILE *mtab;
struct mntent *mstruct;
int no_match = 0, error = 0;
return _check_if_mounted(cache_id, -1);
}
mtab = setmntent("/etc/mtab", "r");
if (!mtab) {
/* should exist, but if /etc/mtab not found we cannot print mounted devices */
return;
}
while ((mstruct = getmntent(mtab)) != NULL) {
error = strcmp_s(mstruct->mnt_fsname, cmplen, pattern, &no_match);
/* mstruct->mnt_fsname is /dev/... block device path, not a mountpoint */
if (error != EOK || no_match)
continue;
cas_printf(LOG_ERR, "%s\n", mstruct->mnt_fsname);
}
int is_core_mounted(int cache_id, int core_id)
{
return _check_if_mounted(cache_id, core_id);
}
int remove_core(unsigned int cache_id, unsigned int core_id,
@@ -1882,23 +1864,7 @@ int remove_core(unsigned int cache_id, unsigned int core_id,
struct kcas_remove_core cmd;
/* don't even attempt ioctl if filesystem is mounted */
bool mounts_detected = false;
int cmplen = 0;
char pattern[80];
/* verify if specific core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%d", cache_id, core_id);
mounts_detected = device_mounts_detected(pattern, cmplen);
if (!mounts_detected) {
/* verify if any partition of the core is mounted */
cmplen = snprintf(pattern, sizeof(pattern), "/dev/cas%d-%dp", cache_id, core_id) - 1;
mounts_detected = device_mounts_detected(pattern, cmplen);
}
if (mounts_detected) {
cas_printf(LOG_ERR, "Can't remove core %d from "
"cache %d due to mounted devices:\n",
core_id, cache_id);
print_mounted_devices(pattern, cmplen);
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
return FAILURE;
}
@@ -1963,6 +1929,11 @@ int remove_inactive_core(unsigned int cache_id, unsigned int core_id,
int fd = 0;
struct kcas_remove_inactive cmd;
/* don't even attempt ioctl if filesystem is mounted */
if (SUCCESS != is_core_mounted(cache_id, core_id)) {
return FAILURE;
}
fd = open_ctrl_device();
if (fd == -1)
return FAILURE;
@@ -2218,7 +2189,7 @@ int partition_list(unsigned int cache_id, unsigned int output_format)
fclose(intermediate_file[1]);
if (!result && stat_format_output(intermediate_file[0], stdout,
use_csv?RAW_CSV:TEXT)) {
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
result = FAILURE;
}
fclose(intermediate_file[0]);
@@ -2343,10 +2314,6 @@ static inline int partition_get_line(CSVFILE *csv,
}
strncpy_s(cnfg->info[part_id].name, sizeof(cnfg->info[part_id].name),
name, strnlen_s(name, sizeof(cnfg->info[part_id].name)));
if (0 == part_id && strcmp(name, "unclassified")) {
cas_printf(LOG_ERR, "IO class 0 must have the default name 'unclassified'\n");
return FAILURE;
}
/* Validate Priority*/
*error_col = part_csv_coll_prio;
@@ -2434,7 +2401,7 @@ int partition_get_config(CSVFILE *csv, struct kcas_io_classes *cnfg,
return FAILURE;
} else {
cas_printf(LOG_ERR,
"I/O error occurred while reading"
"I/O error occured while reading"
" IO Classes configuration file"
" supplied.\n");
return FAILURE;
@@ -2681,7 +2648,7 @@ void *list_printout(void *ctx)
struct list_printout_ctx *spc = ctx;
if (stat_format_output(spc->intermediate,
spc->out, spc->type)) {
cas_printf(LOG_ERR, "An error occurred during statistics formatting.\n");
cas_printf(LOG_ERR, "An error occured during statistics formatting.\n");
spc->result = FAILURE;
} else {
spc->result = SUCCESS;
@@ -2820,24 +2787,20 @@ int list_caches(unsigned int list_format, bool by_id_path)
for (i = 0; i < caches_count; ++i) {
curr_cache = caches[i];
char status_buf[CACHE_STATE_LENGTH];
char status_buf[CACHE_STATE_LENGHT];
const char *tmp_status;
char mode_string[12];
char exp_obj[32];
char cache_ctrl_dev[MAX_STR_LEN] = "-";
float cache_flush_prog;
float core_flush_prog;
bool cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)));
bool cache_device_detached;
if (!by_id_path && !cache_device_detached) {
if (!by_id_path && !curr_cache->standby_detached) {
if (get_dev_path(curr_cache->device, curr_cache->device,
sizeof(curr_cache->device))) {
cas_printf(LOG_WARNING,
"WARNING: Cannot resolve path to "
"cache %d. By-id path will be shown "
"for that cache.\n", curr_cache->id);
cas_printf(LOG_WARNING, "WARNING: Cannot resolve path "
"to cache. By-id path will be shown for that cache.\n");
}
}
@@ -2863,6 +2826,11 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
}
cache_device_detached =
((curr_cache->state & (1 << ocf_cache_state_standby)) |
(curr_cache->state & (1 << ocf_cache_state_detached)))
;
fprintf(intermediate_file[1], TAG(TREE_BRANCH)
"%s,%u,%s,%s,%s,%s\n",
"cache", /* type */
@@ -2886,7 +2854,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
}
if (core_flush_prog || cache_flush_prog) {
snprintf(status_buf, CACHE_STATE_LENGTH,
snprintf(status_buf, CACHE_STATE_LENGHT,
"%s (%3.1f %%)", "Flushing", core_flush_prog);
tmp_status = status_buf;
} else {
@@ -2914,7 +2882,7 @@ int list_caches(unsigned int list_format, bool by_id_path)
pthread_join(thread, 0);
if (printout_ctx.result) {
result = 1;
cas_printf(LOG_ERR, "An error occurred during list formatting.\n");
cas_printf(LOG_ERR, "An error occured during list formatting.\n");
}
fclose(intermediate_file[0]);
@@ -3048,7 +3016,7 @@ int zero_md(const char *cache_device, bool force)
}
close(fd);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped successfully from device '%s'.\n", cache_device);
cas_printf(LOG_INFO, "OpenCAS's metadata wiped succesfully from device '%s'.\n", cache_device);
return SUCCESS;
}

View File

@@ -2237,7 +2237,7 @@ static cli_command cas_commands[] = {
.options = attach_cache_options,
.command_handle_opts = start_cache_command_handle_option,
.handle = handle_cache_attach,
.flags = CLI_SU_REQUIRED,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.help = NULL,
},
{
@@ -2247,7 +2247,7 @@ static cli_command cas_commands[] = {
.options = detach_options,
.command_handle_opts = command_handle_option,
.handle = handle_cache_detach,
.flags = CLI_SU_REQUIRED,
.flags = (CLI_SU_REQUIRED | CLI_COMMAND_BLOCKED),
.help = NULL,
},
{

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -151,7 +151,9 @@ struct {
},
{
OCF_ERR_METADATA_FOUND,
"Old metadata found on device"
"Old metadata found on device.\nPlease load cache metadata using --load"
" option or use --force to\n discard on-disk metadata and"
" start fresh cache instance.\n"
},
{
OCF_ERR_SUPERBLOCK_MISMATCH,

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -10,19 +10,16 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct block_device bd; bdev_partno;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
if compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/genhd.h" ||
compile_module $cur_name "struct gendisk *disk = NULL; struct xarray xa; xa = disk->part_tbl;" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct block_device bd; bd = *disk_part_iter_next(NULL);" "linux/blk_types.h" "linux/genhd.h"
then
echo $cur_name "3" >> $config_file_path
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct hd_struct hd; hd = *disk_part_iter_next(NULL);" "linux/genhd.h"
then
echo $cur_name "4" >> $config_file_path
echo $cur_name "3" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@@ -39,23 +36,6 @@ apply() {
struct block_device *part;
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = bdev_partno(part))) {
break;
}
}
return part_no;
}" ;;
"2")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
int part_no = 0;
struct gendisk *disk = bd->bd_disk;
struct block_device *part;
unsigned long idx;
xa_for_each(&disk->part_tbl, idx, part) {
if ((part_no = part->bd_partno)) {
break;
@@ -64,7 +44,7 @@ apply() {
return part_no;
}" ;;
"3")
"2")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{
@@ -86,7 +66,7 @@ apply() {
return part_no;
}" ;;
"4")
"3")
add_function "
static inline int cas_bd_get_next_part(struct block_device *bd)
{

View File

@@ -1,7 +1,6 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -23,7 +22,7 @@ apply() {
case "$1" in
"1")
add_function "
static inline void _cas_cleanup_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
blk_cleanup_disk(gd);
}"
@@ -32,7 +31,7 @@ apply() {
"2")
add_function "
static inline void _cas_cleanup_disk(struct gendisk *gd)
static inline void cas_cleanup_disk(struct gendisk *gd)
{
put_disk(gd);
}"

View File

@@ -1,7 +1,6 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -34,8 +33,7 @@ apply() {
add_function "
static inline void cas_cleanup_queue(struct request_queue *q)
{
if (queue_is_mq(q))
blk_mq_destroy_queue(q);
blk_mq_destroy_queue(q);
}"
;;

View File

@@ -1,7 +1,6 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -23,11 +22,6 @@ check() {
output=$((output+2))
fi
if compile_module $cur_name "BLK_MQ_F_SHOULD_MERGE ;" "linux/blk-mq.h"
then
output=$((output+4))
fi
echo $cur_name $output >> $config_file_path
}
@@ -48,14 +42,6 @@ apply() {
else
add_define "CAS_BLK_MQ_F_BLOCKING 0"
fi
if ((arg & 4))
then
add_define "CAS_BLK_MQ_F_SHOULD_MERGE \\
BLK_MQ_F_SHOULD_MERGE"
else
add_define "CAS_BLK_MQ_F_SHOULD_MERGE 0"
fi
}
conf_run $@

View File

@@ -1,45 +0,0 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "page_folio((struct page *)NULL);" "linux/page-flags.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
struct folio *folio = page_folio(page);
return folio->mapping;
}" ;;
"2")
add_function "
static inline struct address_space *cas_page_mapping(struct page *page)
{
if (PageCompound(page))
return NULL;
return page->mapping;
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,52 +0,0 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "preempt_model_voluntary();" "linux/preempt.h" &&
compile_module $cur_name "preempt_model_none();" "linux/preempt.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return preempt_model_voluntary();
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return preempt_model_none();
}" ;;
"2")
add_function "
static inline int cas_preempt_model_voluntary(void)
{
return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
}"
add_function "
static inline int cas_preempt_model_none(void)
{
return IS_ENABLED(CONFIG_PREEMPT_NONE);
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,48 +0,0 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_queue_max_discard_sectors(NULL, 0);" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
blk_queue_max_discard_sectors(q, max_discard_sectors);
}" ;;
"2")
add_function "
static inline void cas_queue_max_discard_sectors(
struct request_queue *q,
unsigned int max_discard_sectors)
{
struct queue_limits *lim = &q->limits;
lim->max_hw_discard_sectors = max_discard_sectors;
lim->max_discard_sectors =
min(max_discard_sectors, lim->max_user_discard_sectors);
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -12,18 +12,18 @@ check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
if compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.max_write_zeroes_sectors;" "linux/blkdev.h"
then
if compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
else
echo $cur_name "3" >> $config_file_path
fi
elif compile_module $cur_name "struct queue_limits q; q.max_write_same_sectors;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
elif compile_module $cur_name "struct queue_limits q; q.limits_aux;" "linux/blkdev.h"
then
echo $cur_name "4" >> $config_file_path
else
@@ -37,55 +37,6 @@ apply() {
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
struct queue_limits_aux *l_aux = exp_q->limits.limits_aux;
exp_q->limits = *cache_q_limits;
@@ -112,6 +63,55 @@ apply() {
if (queue_virt_boundary(cache_q))
queue_flag_set(QUEUE_FLAG_NOMERGES, cache_q);
}" ;;
"2")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"3")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_zeroes_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
"4")
add_function "
static inline void cas_copy_queue_limits(struct request_queue *exp_q,
struct queue_limits *cache_q_limits, struct request_queue *core_q)
{
exp_q->limits = *cache_q_limits;
exp_q->limits.max_sectors = core_q->limits.max_sectors;
exp_q->limits.max_hw_sectors = core_q->limits.max_hw_sectors;
exp_q->limits.max_segments = core_q->limits.max_segments;
exp_q->limits.max_write_same_sectors = 0;
}"
add_function "
static inline void cas_cache_set_no_merges_flag(struct request_queue *cache_q)
{
}" ;;
*)

View File

@@ -1,42 +0,0 @@
#!/bin/bash
#
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "struct queue_limits q; q.misaligned;" "linux/blkdev.h"
then
echo $cur_name 1 >> $config_file_path
else
echo $cur_name 2 >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->misaligned;
}" ;;
"2")
add_function "
static inline bool cas_queue_limits_is_misaligned(
struct queue_limits *lim)
{
return lim->features & BLK_FLAG_MISALIGNED;
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,39 +0,0 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
. $(dirname $3)/conf_framework.sh
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "(int)QUEUE_FLAG_NONROT;" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
else
echo $cur_name "2" >> $config_file_path
fi
}
apply() {
case "$1" in
"1")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
}" ;;
"2")
add_function "
static inline void cas_queue_set_nonrot(struct request_queue *q)
{
}" ;;
*)
exit 1
esac
}
conf_run $@

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -11,18 +11,15 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "blk_alloc_disk(NULL, 0);" "linux/blkdev.h"
if compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 1 >> $config_file_path
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 2 >> $config_file_path
elif compile_module $cur_name "blk_mq_alloc_disk(NULL, NULL);" "linux/blk-mq.h"
then
echo $cur_name 3 >> $config_file_path
echo $cur_name 2 >> $config_file_path
elif compile_module $cur_name "alloc_disk(0);" "linux/genhd.h"
then
echo $cur_name 4 >> $config_file_path
echo $cur_name 3 >> $config_file_path
else
echo $cur_name X >> $config_file_path
fi
@@ -31,73 +28,50 @@ check() {
apply() {
case "$1" in
"1")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
{
*gd = blk_alloc_disk(lim, NUMA_NO_NODE);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*gd = blk_mq_alloc_disk(tag_set, NULL, NULL);
if (!(*gd))
return -ENOMEM;
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
{
_cas_cleanup_disk(gd);
cas_cleanup_disk(gd);
}"
;;
"2")
add_typedef "struct queue_limits cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
{
*gd = blk_mq_alloc_disk(tag_set, lim, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (!(*gd))
return -ENOMEM;
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
{
_cas_cleanup_disk(gd);
cas_cleanup_disk(gd);
}"
;;
"3")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
{
*gd = blk_mq_alloc_disk(tag_set, NULL);
if (IS_ERR(*gd))
return PTR_ERR(*gd);
*queue = (*gd)->queue;
return 0;
}"
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
{
_cas_cleanup_disk(gd);
}"
;;
"4")
add_typedef "void* cas_queue_limits_t;"
add_function "
static inline int cas_alloc_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set, cas_queue_limits_t *lim)
static inline int cas_alloc_mq_disk(struct gendisk **gd, struct request_queue **queue,
struct blk_mq_tag_set *tag_set)
{
*gd = alloc_disk(1);
if (!(*gd))
@@ -114,7 +88,7 @@ apply() {
}"
add_function "
static inline void cas_cleanup_disk(struct gendisk *gd)
static inline void cas_cleanup_mq_disk(struct gendisk *gd)
{
blk_cleanup_queue(gd->queue);
gd->queue = NULL;

View File

@@ -1,7 +1,6 @@
#!/bin/bash
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -10,15 +9,12 @@
check() {
cur_name=$(basename $2)
config_file_path=$1
if compile_module $cur_name "BLK_FEAT_WRITE_CACHE;" "linux/blk-mq.h"
if compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
then
echo $cur_name "1" >> $config_file_path
elif compile_module $cur_name "blk_queue_write_cache(NULL, 0, 0);" "linux/blkdev.h"
then
echo $cur_name "2" >> $config_file_path
elif compile_module $cur_name "struct request_queue rq; rq.flush_flags;" "linux/blkdev.h"
then
echo $cur_name "3" >> $config_file_path
echo $cur_name "2" >> $config_file_path
else
echo $cur_name "X" >> $config_file_path
fi
@@ -27,39 +23,21 @@ check() {
apply() {
case "$1" in
"1")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
(q->limits.features & BLK_FEAT_WRITE_CACHE)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
(q->limits.features & BLK_FEAT_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE BLK_FEAT_WRITE_CACHE"
add_define "CAS_BLK_FEAT_FUA BLK_FEAT_FUA"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag) \\
({ lim->features |= flag; })"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua) {}" ;;
"2")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
test_bit(QUEUE_FLAG_WC, &(q)->queue_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "
static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{
blk_queue_write_cache(q, flush, fua);
}" ;;
"3")
"2")
add_define "CAS_CHECK_QUEUE_FLUSH(q) \\
CAS_IS_SET_FLUSH((q)->flush_flags)"
add_define "CAS_CHECK_QUEUE_FUA(q) \\
((q)->flush_flags & REQ_FUA)"
add_define "CAS_BLK_FEAT_WRITE_CACHE 0"
add_define "CAS_BLK_FEAT_FUA 0"
add_define "CAS_SET_QUEUE_LIMIT(lim, flag)"
add_function "static inline void cas_set_queue_flush_fua(struct request_queue *q,
bool flush, bool fua)
{

View File

@@ -1,6 +1,5 @@
#
# Copyright(c) 2012-2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
# If $(M) is defined, we've been invoked from the
@@ -53,11 +52,6 @@ distclean: clean distsync
install: install_files
@$(DEPMOD)
@$(MODPROBE) $(CACHE_MODULE) || ( \
echo "See dmesg for more information" >&2 && \
rm -f $(DESTDIR)$(MODULES_DIR)/$(CACHE_MODULE).ko && exit 1 \
)
install_files:
@echo "Installing Open-CAS modules"

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -29,8 +29,8 @@
trace_printk(format, ##__VA_ARGS__)
#else
#define CAS_CLS_DEBUG_MSG(format, ...) ({})
#define CAS_CLS_DEBUG_TRACE(format, ...) ({})
#define CAS_CLS_DEBUG_MSG(format, ...)
#define CAS_CLS_DEBUG_TRACE(format, ...)
#endif
/* Done condition test - always accepts and stops evaluation */
@@ -53,7 +53,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
if (PageAnon(io->page))
return cas_cls_eval_no;
if (PageSlab(io->page)) {
if (PageSlab(io->page) || PageCompound(io->page)) {
/* A filesystem issues IO on pages that does not belongs
* to the file page cache. It means that it is a
* part of metadata
@@ -61,7 +61,7 @@ static cas_cls_eval_t _cas_cls_metadata_test(struct cas_classifier *cls,
return cas_cls_eval_yes;
}
if (!cas_page_mapping(io->page)) {
if (!io->page->mapping) {
/* XFS case, page are allocated internally and do not
* have references into inode
*/
@@ -221,42 +221,6 @@ static int _cas_cls_string_ctr(struct cas_classifier *cls,
return 0;
}
/* IO direction condition constructor. @data is expected to contain string
* translated to IO direction.
*/
static int _cas_cls_direction_ctr(struct cas_classifier *cls,
struct cas_cls_condition *c, char *data)
{
uint64_t direction;
struct cas_cls_numeric *ctx;
if (!data) {
CAS_CLS_MSG(KERN_ERR, "Missing IO direction specifier\n");
return -EINVAL;
}
if (strncmp("read", data, 5) == 0) {
direction = READ;
} else if (strncmp("write", data, 6) == 0) {
direction = WRITE;
} else {
CAS_CLS_MSG(KERN_ERR, "Invalid IO direction specifier '%s'\n"
" allowed specifiers: 'read', 'write'\n", data);
return -EINVAL;
}
ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->operator = cas_cls_numeric_eq;
ctx->v_u64 = direction;
c->context = ctx;
return 0;
}
/* Unsigned int numeric test function */
static cas_cls_eval_t _cas_cls_numeric_test_u(
struct cas_cls_condition *c, uint64_t val)
@@ -700,14 +664,6 @@ static cas_cls_eval_t _cas_cls_request_size_test(
return _cas_cls_numeric_test_u(c, CAS_BIO_BISIZE(io->bio));
}
/* Request IO direction test function */
static cas_cls_eval_t _cas_cls_request_direction_test(
struct cas_classifier *cls, struct cas_cls_condition *c,
struct cas_cls_io *io, ocf_part_id_t part_id)
{
return _cas_cls_numeric_test_u(c, bio_data_dir(io->bio));
}
/* Array of condition handlers */
static struct cas_cls_condition_handler _handlers[] = {
{ "done", _cas_cls_done_test, _cas_cls_generic_ctr },
@@ -733,8 +689,6 @@ static struct cas_cls_condition_handler _handlers[] = {
_cas_cls_generic_dtr },
{ "request_size", _cas_cls_request_size_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr },
{ "io_direction", _cas_cls_request_direction_test,
_cas_cls_direction_ctr, _cas_cls_generic_dtr },
#ifdef CAS_WLTH_SUPPORT
{ "wlth", _cas_cls_wlth_test, _cas_cls_numeric_ctr,
_cas_cls_generic_dtr},
@@ -803,7 +757,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
return c;
}
/* Read single condition from text input and return cas_cls_condition
/* Read single codnition from text input and return cas_cls_condition
* representation. *rule pointer is advanced to point to next condition.
* Input @rule string is modified to speed up parsing (selected bytes are
* overwritten with 0).
@@ -811,7 +765,7 @@ static struct cas_cls_condition * _cas_cls_create_condition(
* *l_op contains logical operator from previous condition and gets overwritten
* with operator read from currently parsed condition.
*
* Returns pointer to condition if successful.
* Returns pointer to condition if successfull.
* Returns NULL if no more conditions in string.
* Returns error pointer in case of syntax or runtime error.
*/
@@ -1096,11 +1050,9 @@ int cas_cls_rule_create(ocf_cache_t cache,
return -ENOMEM;
r = _cas_cls_rule_create(cls, part_id, _rule);
if (IS_ERR(r)) {
CAS_CLS_DEBUG_MSG(
"Cannot create rule: %s => %d\n", rule, part_id);
if (IS_ERR(r))
ret = _cas_cls_rule_err_to_cass_err(PTR_ERR(r));
} else {
else {
CAS_CLS_DEBUG_MSG("Created rule: %s => %d\n", rule, part_id);
*cls_rule = r;
ret = 0;
@@ -1229,7 +1181,6 @@ static void _cas_cls_get_bio_context(struct bio *bio,
struct cas_cls_io *ctx)
{
struct page *page = NULL;
struct address_space *mapping;
if (!bio)
return;
@@ -1247,14 +1198,13 @@ static void _cas_cls_get_bio_context(struct bio *bio,
if (PageAnon(page))
return;
if (PageSlab(page))
if (PageSlab(page) || PageCompound(page))
return;
mapping = cas_page_mapping(page);
if (!mapping)
if (!page->mapping)
return;
ctx->inode = mapping->host;
ctx->inode = page->mapping->host;
return;
}
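A side note on the removed `_cas_cls_direction_ctr` above: it compares with `strncmp("read", data, 5)` and `strncmp("write", data, 6)`, where the length covers the terminating NUL byte, so the compare behaves as an exact match rather than a prefix match ("readahead" is rejected). A minimal userspace sketch of that parsing idiom (the `DIR_*` values and function name are illustrative, not part of the driver):

```c
#include <string.h>

/* Hypothetical stand-ins for the kernel READ/WRITE direction values. */
#define DIR_READ  0
#define DIR_WRITE 1

/* Parse an IO direction specifier the way the removed
 * _cas_cls_direction_ctr did: strncmp() with a length that includes
 * the terminating NUL, so only the exact words "read" and "write"
 * match. Returns -1 on missing or invalid input. */
static int parse_io_direction(const char *data)
{
	if (!data)
		return -1;
	if (strncmp("read", data, 5) == 0)
		return DIR_READ;
	if (strncmp("write", data, 6) == 0)
		return DIR_WRITE;
	return -1;
}
```

Because byte 5 of `"read"` is the NUL, any longer input fails the compare at that position, which is why no separate `strlen()` check is needed.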

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <linux/module.h>
@@ -351,8 +351,7 @@ static int _cas_init_tag_set(struct cas_disk *dsk, struct blk_mq_tag_set *set)
set->queue_depth = CAS_BLKDEV_DEFAULT_RQ;
set->cmd_size = 0;
set->flags = CAS_BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING |
CAS_BLK_MQ_F_BLOCKING;
set->flags = BLK_MQ_F_SHOULD_MERGE | CAS_BLK_MQ_F_STACKING | CAS_BLK_MQ_F_BLOCKING;
set->driver_data = dsk;
@@ -389,36 +388,12 @@ static int _cas_exp_obj_check_path(const char *dev_name)
return result;
}
static ssize_t device_attr_serial_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct gendisk *gd = dev_to_disk(dev);
struct cas_disk *dsk = gd->private_data;
struct cas_exp_obj *exp_obj = dsk->exp_obj;
return sysfs_emit(buf, "opencas-%s", exp_obj->dev_name);
}
static struct device_attribute device_attr_serial =
__ATTR(serial, 0444, device_attr_serial_show, NULL);
static struct attribute *device_attrs[] = {
&device_attr_serial.attr,
NULL,
};
static const struct attribute_group device_attr_group = {
.attrs = device_attrs,
.name = "device",
};
int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
struct module *owner, struct cas_exp_obj_ops *ops, void *priv)
{
struct cas_exp_obj *exp_obj;
struct request_queue *queue;
struct gendisk *gd;
cas_queue_limits_t queue_limits;
int result = 0;
BUG_ON(!owner);
@@ -467,15 +442,7 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_init_tag_set;
}
if (exp_obj->ops->set_queue_limits) {
result = exp_obj->ops->set_queue_limits(dsk, priv,
&queue_limits);
if (result)
goto error_set_queue_limits;
}
result = cas_alloc_disk(&gd, &queue, &exp_obj->tag_set,
&queue_limits);
result = cas_alloc_mq_disk(&gd, &queue, &exp_obj->tag_set);
if (result) {
goto error_alloc_mq_disk;
}
@@ -506,14 +473,9 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
goto error_set_geometry;
}
result = cas_add_disk(gd);
if (result)
if (cas_add_disk(gd))
goto error_add_disk;
result = sysfs_create_group(&disk_to_dev(gd)->kobj, &device_attr_group);
if (result)
goto error_sysfs;
result = bd_claim_by_disk(cas_disk_get_blkdev(dsk), dsk, gd);
if (result)
goto error_bd_claim;
@@ -521,18 +483,15 @@ int cas_exp_obj_create(struct cas_disk *dsk, const char *dev_name,
return 0;
error_bd_claim:
sysfs_remove_group(&disk_to_dev(gd)->kobj, &device_attr_group);
error_sysfs:
del_gendisk(dsk->exp_obj->gd);
error_add_disk:
error_set_geometry:
exp_obj->private = NULL;
_cas_exp_obj_clear_dev_t(dsk);
error_exp_obj_set_dev_t:
cas_cleanup_disk(gd);
cas_cleanup_mq_disk(gd);
exp_obj->gd = NULL;
error_alloc_mq_disk:
error_set_queue_limits:
blk_mq_free_tag_set(&exp_obj->tag_set);
error_init_tag_set:
module_put(owner);

View File

@@ -1,12 +1,11 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __CASDISK_EXP_OBJ_H__
#define __CASDISK_EXP_OBJ_H__
#include "linux_kernel_version.h"
#include <linux/fs.h>
struct cas_disk;
@@ -18,12 +17,6 @@ struct cas_exp_obj_ops {
*/
int (*set_geometry)(struct cas_disk *dsk, void *private);
/**
* @brief Set queue limits of exported object (top) block device.
*/
int (*set_queue_limits)(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim);
/**
* @brief submit_bio of exported object (top) block device.
*

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -2405,8 +2405,7 @@ static int cache_mngt_check_bdev(struct ocf_mngt_cache_device_config *cfg,
printk(KERN_WARNING "New cache device block properties "
"differ from the previous one.\n");
}
if (cas_queue_limits_is_misaligned(&tmp_limits)) {
if (tmp_limits.misaligned) {
reattach_properties_diff = true;
printk(KERN_WARNING "New cache device block interval "
"doesn't line up with the previous one.\n");

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -40,6 +40,7 @@
#include <linux/mm.h>
#include <linux/blk-mq.h>
#include <linux/ktime.h>
#include "exp_obj.h"
#include "generated_defines.h"

View File

@@ -1,6 +1,5 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2025 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -43,56 +42,10 @@ MODULE_PARM_DESC(seq_cut_off_mb,
ocf_ctx_t cas_ctx;
struct cas_module cas_module;
static inline uint32_t involuntary_preemption_enabled(void)
{
bool config_dynamic = IS_ENABLED(CONFIG_PREEMPT_DYNAMIC);
bool config_rt = IS_ENABLED(CONFIG_PREEMPT_RT);
bool config_preempt = IS_ENABLED(CONFIG_PREEMPT);
bool config_lazy = IS_ENABLED(CONFIG_PREEMPT_LAZY);
bool config_none = IS_ENABLED(CONFIG_PREEMPT_NONE);
if (!config_dynamic && !config_rt && !config_preempt && !config_lazy)
return false;
if (config_none)
return false;
if (config_rt || config_preempt || config_lazy) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been built with involuntary preemption "
"enabled.\nFailed to load Open CAS kernel module.\n");
return true;
}
#ifdef CONFIG_PREEMPT_DYNAMIC
printk(KERN_WARNING OCF_PREFIX_SHORT
"The kernel has been compiled with preemption configurable\n"
"at boot time (PREEMPT_DYNAMIC=y). Open CAS doesn't support\n"
"kernels with involuntary preemption so make sure to set\n"
"\"preempt=\" to \"none\" or \"voluntary\" in the kernel"
" command line\n");
if (!cas_preempt_model_none() && !cas_preempt_model_voluntary()) {
printk(KERN_ERR OCF_PREFIX_SHORT
"The kernel has been booted with involuntary "
"preemption enabled.\nFailed to load Open CAS kernel "
"module.\n");
return true;
} else {
return false;
}
#endif
return false;
}
static int __init cas_init_module(void)
{
int result = 0;
if (involuntary_preemption_enabled())
return -ENOTSUP;
if (!writeback_queue_unblock_size || !max_writeback_queue_size) {
printk(KERN_ERR OCF_PREFIX_SHORT
"Invalid module parameter.\n");

View File

@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -407,11 +407,6 @@ static inline u64 env_atomic64_inc_return(env_atomic64 *a)
return atomic64_inc_return(a);
}
static inline u64 env_atomic64_dec_return(env_atomic64 *a)
{
return atomic64_dec_return(a);
}
static inline u64 env_atomic64_cmpxchg(atomic64_t *a, u64 old, u64 new)
{
return atomic64_cmpxchg(a, old, new);

View File

@@ -1,345 +0,0 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "ocf_env_refcnt.h"
#include "ocf/ocf_err.h"
#include "ocf_env.h"
#define ENV_REFCNT_CB_ARMING 1
#define ENV_REFCNT_CB_ARMED 2
static void _env_refcnt_do_on_cpus_cb(struct work_struct *work)
{
struct notify_cpu_work *ctx =
container_of(work, struct notify_cpu_work, work);
ctx->cb(ctx->priv);
env_atomic_dec(&ctx->rc->notify.to_notify);
wake_up(&ctx->rc->notify.notify_wait_queue);
}
static void _env_refcnt_do_on_cpus(struct env_refcnt *rc,
env_refcnt_do_on_cpu_cb_t cb, void *priv)
{
int cpu_no;
struct notify_cpu_work *work;
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
work = rc->notify.notify_work_items[cpu_no];
env_atomic_inc(&rc->notify.to_notify);
work->cb = cb;
work->rc = rc;
work->priv = priv;
INIT_WORK(&work->work, _env_refcnt_do_on_cpus_cb);
queue_work_on(cpu_no, rc->notify.notify_work_queue,
&work->work);
}
wait_event(rc->notify.notify_wait_queue,
!env_atomic_read(&rc->notify.to_notify));
}
static void _env_refcnt_init_pcpu(void *ctx)
{
struct env_refcnt *rc = ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(rc->pcpu);
pcpu->freeze = false;
env_atomic64_set(&pcpu->counter, 0);
}
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len)
{
int cpu_no, result;
env_memset(rc, sizeof(*rc), 0);
env_strncpy(rc->name, sizeof(rc->name), name, name_len);
rc->pcpu = alloc_percpu(struct env_refcnt_pcpu);
if (!rc->pcpu)
return -OCF_ERR_NO_MEM;
init_waitqueue_head(&rc->notify.notify_wait_queue);
rc->notify.notify_work_queue = alloc_workqueue("refcnt_%s", 0,
0, rc->name);
if (!rc->notify.notify_work_queue) {
result = -OCF_ERR_NO_MEM;
goto cleanup_pcpu;
}
rc->notify.notify_work_items = env_vzalloc(
sizeof(*rc->notify.notify_work_items) * num_online_cpus());
if (!rc->notify.notify_work_items) {
result = -OCF_ERR_NO_MEM;
goto cleanup_wq;
}
for_each_online_cpu(cpu_no) {
rc->notify.notify_work_items[cpu_no] = env_vmalloc(
sizeof(*rc->notify.notify_work_items[cpu_no]));
if (!rc->notify.notify_work_items[cpu_no]) {
result = -OCF_ERR_NO_MEM;
goto cleanup_work;
}
}
result = env_spinlock_init(&rc->freeze.lock);
if (result)
goto cleanup_work;
_env_refcnt_do_on_cpus(rc, _env_refcnt_init_pcpu, rc);
rc->callback.pfn = NULL;
rc->callback.priv = NULL;
return 0;
cleanup_work:
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
cleanup_wq:
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
cleanup_pcpu:
free_percpu(rc->pcpu);
rc->pcpu = NULL;
return result;
}
void env_refcnt_deinit(struct env_refcnt *rc)
{
int cpu_no;
env_spinlock_destroy(&rc->freeze.lock);
ENV_BUG_ON(env_atomic_read(&rc->notify.to_notify));
for_each_online_cpu(cpu_no) {
if (rc->notify.notify_work_items[cpu_no]) {
env_vfree(rc->notify.notify_work_items[cpu_no]);
rc->notify.notify_work_items[cpu_no] = NULL;
}
}
env_vfree(rc->notify.notify_work_items);
rc->notify.notify_work_items = NULL;
destroy_workqueue(rc->notify.notify_work_queue);
rc->notify.notify_work_queue = NULL;
free_percpu(rc->pcpu);
rc->pcpu = NULL;
}
static inline void _env_refcnt_call_freeze_cb(struct env_refcnt *rc)
{
bool fire;
fire = (env_atomic_cmpxchg(&rc->callback.armed, ENV_REFCNT_CB_ARMED, 0)
== ENV_REFCNT_CB_ARMED);
smp_mb();
if (fire)
rc->callback.pfn(rc->callback.priv);
}
void env_refcnt_dec(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
int64_t countdown = 0;
bool callback;
unsigned long flags;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_dec(&pcpu->counter);
put_cpu_ptr(pcpu);
if (freeze) {
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
countdown = env_atomic64_dec_return(&rc->freeze.countdown);
callback = !rc->freeze.initializing && countdown == 0;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
}
bool env_refcnt_inc(struct env_refcnt *rc)
{
struct env_refcnt_pcpu *pcpu;
bool freeze;
pcpu = get_cpu_ptr(rc->pcpu);
freeze = pcpu->freeze;
if (!freeze)
env_atomic64_inc(&pcpu->counter);
put_cpu_ptr(pcpu);
return !freeze;
}
struct env_refcnt_freeze_ctx {
struct env_refcnt *rc;
env_atomic64 sum;
};
static void _env_refcnt_freeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
pcpu->freeze = true;
env_atomic64_add(env_atomic64_read(&pcpu->counter), &ctx->sum);
}
void env_refcnt_freeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
bool callback;
unsigned long flags;
ctx.rc = rc;
env_atomic64_set(&ctx.sum, 0);
/* initiate freeze */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = ++(rc->freeze.counter);
if (freeze_cnt > 1) {
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return;
}
rc->freeze.initializing = true;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* notify CPUs about freeze */
_env_refcnt_do_on_cpus(rc, _env_refcnt_freeze_pcpu, &ctx);
/* update countdown */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
env_atomic64_add(env_atomic64_read(&ctx.sum), &rc->freeze.countdown);
rc->freeze.initializing = false;
callback = (env_atomic64_read(&rc->freeze.countdown) == 0);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
/* if countdown finished trigger callback */
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv)
{
bool callback;
bool concurrent_arming;
unsigned long flags;
concurrent_arming = (env_atomic_inc_return(&rc->callback.armed)
> ENV_REFCNT_CB_ARMING);
ENV_BUG_ON(concurrent_arming);
/* arm callback */
rc->callback.pfn = cb;
rc->callback.priv = priv;
smp_wmb();
env_atomic_set(&rc->callback.armed, ENV_REFCNT_CB_ARMED);
/* fire callback in case countdown finished */
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
callback = (
env_atomic64_read(&rc->freeze.countdown) == 0 &&
!rc->freeze.initializing
);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
if (callback)
_env_refcnt_call_freeze_cb(rc);
}
static void _env_refcnt_unfreeze_pcpu(void *_ctx)
{
struct env_refcnt_freeze_ctx *ctx = _ctx;
struct env_refcnt_pcpu *pcpu = this_cpu_ptr(ctx->rc->pcpu);
ENV_BUG_ON(!pcpu->freeze);
env_atomic64_set(&pcpu->counter, 0);
pcpu->freeze = false;
}
void env_refcnt_unfreeze(struct env_refcnt *rc)
{
struct env_refcnt_freeze_ctx ctx;
int freeze_cnt;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
freeze_cnt = --(rc->freeze.counter);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
ENV_BUG_ON(freeze_cnt < 0);
if (freeze_cnt > 0)
return;
ENV_BUG_ON(env_atomic64_read(&rc->freeze.countdown));
/* disarm callback */
env_atomic_set(&rc->callback.armed, 0);
smp_wmb();
/* notify CPUs about unfreeze */
ctx.rc = rc;
_env_refcnt_do_on_cpus(rc, _env_refcnt_unfreeze_pcpu, &ctx);
}
bool env_refcnt_frozen(struct env_refcnt *rc)
{
bool frozen;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen;
}
bool env_refcnt_zeroed(struct env_refcnt *rc)
{
bool frozen;
bool initializing;
int64_t countdown;
unsigned long flags;
env_spinlock_lock_irqsave(&rc->freeze.lock, flags);
frozen = !!rc->freeze.counter;
initializing = rc->freeze.initializing;
countdown = env_atomic64_read(&rc->freeze.countdown);
env_spinlock_unlock_irqrestore(&rc->freeze.lock, flags);
return frozen && !initializing && countdown == 0;
}
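The core idea of the deleted `env_refcnt` implementation above can be reduced to a small invariant: before a freeze, get/put touch only a per-CPU counter; `freeze()` sums those counters into a single global countdown and flips every CPU to frozen, after which each put decrements the countdown, and a countdown of zero means no references remain. A single-threaded sketch of that protocol, ignoring locking, cross-CPU notification, and the zero callback (names and sizes here are illustrative):

```c
#include <stdbool.h>

#define NCPU 4

/* Simplified model of struct env_refcnt: per-"CPU" counters plus a
 * global countdown that takes over once the counter is frozen. */
struct refcnt {
	long pcpu[NCPU];
	bool frozen;
	long countdown;
};

/* Take a reference; refused once the counter is frozen. */
static bool refcnt_inc(struct refcnt *rc, int cpu)
{
	if (rc->frozen)
		return false;
	rc->pcpu[cpu]++;
	return true;
}

/* Drop a reference: per-CPU before freeze, global countdown after. */
static void refcnt_dec(struct refcnt *rc, int cpu)
{
	if (rc->frozen)
		rc->countdown--;
	else
		rc->pcpu[cpu]--;
}

/* Freeze: collapse the per-CPU counters into the countdown. When the
 * countdown reaches zero, all outstanding references are gone. */
static void refcnt_freeze(struct refcnt *rc)
{
	int i;

	rc->frozen = true;
	for (i = 0; i < NCPU; i++) {
		rc->countdown += rc->pcpu[i];
		rc->pcpu[i] = 0;
	}
}
```

The real code additionally has to handle decrements racing with the freeze itself, which is what the `initializing` flag and the `freeze.lock` spinlock in the deleted file are for.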

View File

@@ -1,104 +0,0 @@
/*
* Copyright(c) 2019-2021 Intel Corporation
* Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __OCF_ENV_REFCNT_H__
#define __OCF_ENV_REFCNT_H__
#include "ocf_env.h"
typedef void (*env_refcnt_cb_t)(void *priv);
struct env_refcnt_pcpu {
env_atomic64 counter;
bool freeze;
};
typedef void (*env_refcnt_do_on_cpu_cb_t)(void *priv);
struct notify_cpu_work {
struct work_struct work;
/* function to call on each cpu */
env_refcnt_do_on_cpu_cb_t cb;
/* priv passed to cb */
void *priv;
/* refcnt instance */
struct env_refcnt *rc;
};
struct env_refcnt {
struct env_refcnt_pcpu __percpu *pcpu __aligned(64);
struct {
/* freeze counter */
int counter;
/* global counter used instead of per-CPU ones after
* freeze
*/
env_atomic64 countdown;
/* freeze initializing - freeze was requested but not all
* CPUs were notified.
*/
bool initializing;
env_spinlock lock;
} freeze;
struct {
struct notify_cpu_work **notify_work_items;
env_atomic to_notify;
wait_queue_head_t notify_wait_queue;
struct workqueue_struct *notify_work_queue;
} notify;
struct {
env_atomic armed;
env_refcnt_cb_t pfn;
void *priv;
} callback;
char name[32];
};
/* Initialize reference counter */
int env_refcnt_init(struct env_refcnt *rc, const char *name, size_t name_len);
void env_refcnt_deinit(struct env_refcnt *rc);
/* Try to increment counter. Returns counter value (> 0) if successful, 0
* if counter is frozen
*/
bool env_refcnt_inc(struct env_refcnt *rc);
/* Decrement reference counter */
void env_refcnt_dec(struct env_refcnt *rc);
/* Disallow incrementing of underlying counter - attempts to increment counter
* will be failing until env_refcnt_unfreeze is called.
* It's ok to call freeze multiple times, in which case counter is frozen
* until all freeze calls are offset by a corresponding unfreeze.
*/
void env_refcnt_freeze(struct env_refcnt *rc);
/* Cancel the effect of single env_refcnt_freeze call */
void env_refcnt_unfreeze(struct env_refcnt *rc);
bool env_refcnt_frozen(struct env_refcnt *rc);
bool env_refcnt_zeroed(struct env_refcnt *rc);
/* Register callback to be called when reference counter drops to 0.
* Must be called after counter is frozen.
 * Cannot be called until a previously registered callback has fired.
*/
void env_refcnt_register_zero_cb(struct env_refcnt *rc, env_refcnt_cb_t cb,
void *priv);
#endif // __OCF_ENV_REFCNT_H__
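The freeze/unfreeze and zero-callback semantics documented in the header above can be sketched with a small single-threaded model. This is only an illustration of the documented contract (the `ToyRefcnt` name and its fields are invented here); the real implementation is per-CPU C code with spinlocks, atomics and workqueues:

```python
class ToyRefcnt:
    """Single-threaded model of the env_refcnt freeze protocol."""

    def __init__(self):
        self.count = 0          # outstanding references
        self.freeze_depth = 0   # nested freeze calls
        self.zero_cb = None     # callback fired when count drops to 0

    def inc(self):
        # Mirrors env_refcnt_inc: increments are rejected while frozen.
        if self.freeze_depth > 0:
            return False
        self.count += 1
        return True

    def dec(self):
        # Mirrors env_refcnt_dec: the zero callback fires once the last
        # outstanding reference is released.
        self.count -= 1
        if self.count == 0 and self.zero_cb is not None:
            cb, self.zero_cb = self.zero_cb, None
            cb()

    def freeze(self):
        self.freeze_depth += 1

    def unfreeze(self):
        self.freeze_depth -= 1

    def register_zero_cb(self, cb):
        # Per the header comment, the counter must already be frozen.
        assert self.freeze_depth > 0
        if self.count == 0:
            cb()  # already zeroed: fire immediately
        else:
            self.zero_cb = cb
```

Typical use matches the header's contract: freeze the counter so no new references can be taken, register the zero callback, and let the remaining `dec()` calls drive the count to zero before tearing the object down.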


@@ -86,6 +86,10 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache attach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);
retval = cache_mngt_attach_cache_cfg(cache_name, OCF_CACHE_NAME_SIZE,
@@ -104,6 +108,9 @@ long cas_service_ioctl_ctrl(struct file *filp, unsigned int cmd,
char cache_name[OCF_CACHE_NAME_SIZE];
GET_CMD_INFO(cmd_info, arg);
printk(KERN_ERR "Cache detach is not supported!\n");
retval = -ENOTSUP;
RETURN_CMD_RESULT(cmd_info, arg, retval);
cache_name_from_id(cache_name, cmd_info->cache_id);


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -73,7 +73,6 @@ static int _cas_cleaner_thread(void *data)
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
struct cas_thread_info *info;
uint32_t ms;
ocf_queue_t queue;
BUG_ON(!c);
@@ -95,10 +94,7 @@ static int _cas_cleaner_thread(void *data)
atomic_set(&info->kicked, 0);
init_completion(&info->sync_compl);
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
ocf_cleaner_run(c, queue);
ocf_cleaner_run(c, cache_priv->io_queues[smp_processor_id()]);
wait_for_completion(&info->sync_compl);
/*


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies
* Copyright(c) 2024 Huawei Technologies
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -208,13 +208,9 @@ void *cas_rpool_try_get(struct cas_reserve_pool *rpool_master, int *cpu)
CAS_DEBUG_TRACE();
get_cpu();
*cpu = smp_processor_id();
current_rpool = &rpool_master->rpools[*cpu];
put_cpu();
spin_lock_irqsave(&current_rpool->lock, flags);
if (!list_empty(&current_rpool->list)) {


@@ -1,6 +1,6 @@
/*
* Copyright(c) 2012-2022 Intel Corporation
* Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
* Copyright(c) 2024 Huawei Technologies Co., Ltd.
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -63,14 +63,13 @@ static void blkdev_set_discard_properties(ocf_cache_t cache,
CAS_SET_DISCARD_ZEROES_DATA(exp_q->limits, 0);
if (core_q && cas_has_discard_support(core_bd)) {
cas_queue_max_discard_sectors(exp_q,
core_q->limits.max_discard_sectors);
blk_queue_max_discard_sectors(exp_q, core_q->limits.max_discard_sectors);
exp_q->limits.discard_alignment =
bdev_discard_alignment(core_bd);
exp_q->limits.discard_granularity =
core_q->limits.discard_granularity;
} else {
cas_queue_max_discard_sectors(exp_q,
blk_queue_max_discard_sectors(exp_q,
min((uint64_t)core_sectors, (uint64_t)UINT_MAX));
exp_q->limits.discard_granularity = ocf_cache_get_line_size(cache);
exp_q->limits.discard_alignment = 0;
@@ -130,37 +129,7 @@ static int blkdev_core_set_geometry(struct cas_disk *dsk, void *private)
blkdev_set_discard_properties(cache, exp_q, core_bd, sectors);
cas_queue_set_nonrot(exp_q);
return 0;
}
static int blkdev_core_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_core_t core = private;
ocf_cache_t cache = ocf_core_get_cache(core);
ocf_volume_t core_vol = ocf_core_get_volume(core);
struct bd_object *bd_core_vol;
struct request_queue *core_q;
bool flush, fua;
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
bd_core_vol = bd_object(core_vol);
core_q = cas_disk_get_queue(bd_core_vol->dsk);
flush = (CAS_CHECK_QUEUE_FLUSH(core_q) ||
cache_priv->device_properties.flush);
fua = (CAS_CHECK_QUEUE_FUA(core_q) ||
cache_priv->device_properties.fua);
memset(lim, 0, sizeof(cas_queue_limits_t));
if (flush)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (fua)
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
exp_q->queue_flags |= (1 << QUEUE_FLAG_NONROT);
return 0;
}
@@ -248,16 +217,12 @@ static int blkdev_handle_data_single(struct bd_object *bvol, struct bio *bio,
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue;
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_io_t io;
struct blk_data *data;
uint64_t flags = CAS_BIO_OP_FLAGS(bio);
int ret;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
data = cas_alloc_blk_data(bio_segments(bio), GFP_NOIO);
if (!data) {
CAS_PRINT_RL(KERN_CRIT "BIO data vector allocation error\n");
@@ -367,13 +332,9 @@ static void blkdev_handle_discard(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue;
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue,
CAS_BIO_BISECTOR(bio) << SECTOR_SHIFT,
CAS_BIO_BISIZE(bio), OCF_WRITE, 0, 0);
@@ -419,13 +380,9 @@ static void blkdev_handle_flush(struct bd_object *bvol, struct bio *bio)
{
ocf_cache_t cache = ocf_volume_get_cache(bvol->front_volume);
struct cache_priv *cache_priv = ocf_cache_get_priv(cache);
ocf_queue_t queue;
ocf_queue_t queue = cache_priv->io_queues[smp_processor_id()];
ocf_io_t io;
get_cpu();
queue = cache_priv->io_queues[smp_processor_id()];
put_cpu();
io = ocf_volume_new_io(bvol->front_volume, queue, 0, 0, OCF_WRITE, 0,
CAS_SET_FLUSH(0));
if (!io) {
@@ -471,7 +428,6 @@ static void blkdev_core_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_core_exp_obj_ops = {
.set_geometry = blkdev_core_set_geometry,
.set_queue_limits = blkdev_core_set_queue_limits,
.submit_bio = blkdev_core_submit_bio,
};
@@ -514,37 +470,6 @@ static int blkdev_cache_set_geometry(struct cas_disk *dsk, void *private)
return 0;
}
static int blkdev_cache_set_queue_limits(struct cas_disk *dsk, void *private,
cas_queue_limits_t *lim)
{
ocf_cache_t cache;
ocf_volume_t volume;
struct bd_object *bvol;
struct request_queue *cache_q;
struct block_device *bd;
BUG_ON(!private);
cache = private;
volume = ocf_cache_get_volume(cache);
bvol = bd_object(volume);
bd = cas_disk_get_blkdev(bvol->dsk);
BUG_ON(!bd);
cache_q = bd->bd_disk->queue;
memset(lim, 0, sizeof(cas_queue_limits_t));
if (CAS_CHECK_QUEUE_FLUSH(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_WRITE_CACHE);
if (CAS_CHECK_QUEUE_FUA(cache_q))
CAS_SET_QUEUE_LIMIT(lim, CAS_BLK_FEAT_FUA);
return 0;
}
static void blkdev_cache_submit_bio(struct cas_disk *dsk,
struct bio *bio, void *private)
{
@@ -560,7 +485,6 @@ static void blkdev_cache_submit_bio(struct cas_disk *dsk,
static struct cas_exp_obj_ops kcas_cache_exp_obj_ops = {
.set_geometry = blkdev_cache_set_geometry,
.set_queue_limits = blkdev_cache_set_queue_limits,
.submit_bio = blkdev_cache_submit_bio,
};


Submodule ocf updated: a63479c7cd...6ad1007e6f


@@ -1,59 +1,36 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from datetime import timedelta
from typing import List
from api.cas import casadm
from api.cas.cache_config import (
CacheLineSize,
CleaningPolicy,
CacheStatus,
CacheMode,
FlushParametersAlru,
FlushParametersAcp,
SeqCutOffParameters,
SeqCutOffPolicy,
PromotionPolicy,
PromotionParametersNhit,
CacheConfig,
)
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import (get_cas_devices_dict, get_cores, get_flush_parameters_alru,
get_flush_parameters_acp, get_io_class_list)
from api.cas.casadm_parser import *
from api.cas.core import Core
from api.cas.dmesg import get_metadata_size_on_device
from api.cas.statistics import CacheStats, CacheIoClassStats
from connection.utils.output import Output
from storage_devices.device import Device
from test_tools.os_tools import sync
from type_def.size import Size
from test_utils.os_utils import *
from test_utils.output import Output
class Cache:
def __init__(
self, cache_id: int, device: Device = None, cache_line_size: CacheLineSize = None
) -> None:
self.cache_id = cache_id
self.cache_device = device if device else self.__get_cache_device()
self.__cache_line_size = cache_line_size
def __init__(self, device: Device, cache_id: int = None) -> None:
self.cache_device = device
self.cache_id = cache_id if cache_id else self.__get_cache_id()
self.__cache_line_size = None
def __get_cache_id(self) -> int:
device_path = self.__get_cache_device_path()
def __get_cache_device(self) -> Device | None:
caches_dict = get_cas_devices_dict()["caches"]
cache = next(
iter([cache for cache in caches_dict.values() if cache["id"] == self.cache_id])
)
if not cache:
return None
for cache in caches_dict.values():
if cache["device_path"] == device_path:
return int(cache["id"])
if cache["device_path"] == "-":
return None
raise Exception(f"There is no cache started on {device_path}")
return Device(path=cache["device_path"])
def __get_cache_device_path(self) -> str:
return self.cache_device.path if self.cache_device is not None else "-"
def get_core_devices(self) -> list:
return get_cores(self.cache_id)
@@ -217,8 +194,8 @@ class Cache:
def set_params_nhit(self, promotion_params_nhit: PromotionParametersNhit) -> Output:
return casadm.set_param_promotion_nhit(
self.cache_id,
threshold=promotion_params_nhit.threshold,
trigger=promotion_params_nhit.trigger,
threshold=promotion_params_nhit.threshold.get_value(),
trigger=promotion_params_nhit.trigger
)
def get_cache_config(self) -> CacheConfig:
@@ -231,18 +208,10 @@ class Cache:
def standby_detach(self, shortcut: bool = False) -> Output:
return casadm.standby_detach_cache(cache_id=self.cache_id, shortcut=shortcut)
def standby_activate(self, device: Device, shortcut: bool = False) -> Output:
def standby_activate(self, device, shortcut: bool = False) -> Output:
return casadm.standby_activate_cache(
cache_id=self.cache_id, cache_dev=device, shortcut=shortcut
)
def attach(self, device: Device, force: bool = False) -> Output:
cmd_output = casadm.attach_cache(cache_id=self.cache_id, device=device, force=force)
return cmd_output
def detach(self) -> Output:
cmd_output = casadm.detach_cache(cache_id=self.cache_id)
return cmd_output
def has_volatile_metadata(self) -> bool:
return self.get_metadata_size_on_disk() == Size.zero()


@@ -1,14 +1,14 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum, IntFlag
from test_tools.os_tools import get_kernel_module_parameter
from type_def.size import Size, Unit
from type_def.time import Time
from test_utils.os_utils import get_kernel_module_parameter
from test_utils.size import Size, Unit
from test_utils.time import Time
class CacheLineSize(Enum):
@@ -72,9 +72,9 @@ class CacheMode(Enum):
class SeqCutOffPolicy(Enum):
full = "full"
always = "always"
never = "never"
full = 0
always = 1
never = 2
DEFAULT = full
@classmethod
@@ -85,9 +85,6 @@ class SeqCutOffPolicy(Enum):
raise ValueError(f"{name} is not a valid sequential cut off name")
def __str__(self):
return self.value
class MetadataMode(Enum):
normal = "normal"
@@ -125,7 +122,6 @@ class CacheStatus(Enum):
incomplete = "incomplete"
standby = "standby"
standby_detached = "standby detached"
detached = "detached"
def __str__(self):
return self.value
@@ -244,7 +240,7 @@ class SeqCutOffParameters:
class PromotionParametersNhit:
def __init__(self, threshold: int = None, trigger: int = None):
def __init__(self, threshold: Size = None, trigger: int = None):
self.threshold = threshold
self.trigger = trigger


@@ -6,7 +6,8 @@
from enum import Enum
from core.test_run import TestRun
from test_tools.os_tools import unload_kernel_module, load_kernel_module
from test_utils import os_utils
from test_utils.os_utils import ModuleRemoveMethod
class CasModule(Enum):
@@ -14,12 +15,12 @@ class CasModule(Enum):
def reload_all_cas_modules():
unload_kernel_module(CasModule.cache.value)
load_kernel_module(CasModule.cache.value)
os_utils.unload_kernel_module(CasModule.cache.value, ModuleRemoveMethod.modprobe)
os_utils.load_kernel_module(CasModule.cache.value)
def unload_all_cas_modules():
unload_kernel_module(CasModule.cache.value)
os_utils.unload_kernel_module(CasModule.cache.value, os_utils.ModuleRemoveMethod.rmmod)
def is_cas_management_dev_present():


@@ -9,7 +9,7 @@ import os
import re
from core.test_run import TestRun
from test_tools.fs_tools import check_if_directory_exists, find_all_files
from test_tools.fs_utils import check_if_directory_exists, find_all_files
from test_tools.linux_packaging import DebSet, RpmSet


@@ -9,13 +9,13 @@ from datetime import timedelta
from string import Template
from textwrap import dedent
from test_tools.fs_tools import (
from test_tools.fs_utils import (
check_if_directory_exists,
create_directory,
write_file,
remove,
)
from test_tools.systemctl import reload_daemon
from test_utils.systemd import reload_daemon
opencas_drop_in_directory = Path("/etc/systemd/system/open-cas.service.d/")
test_drop_in_file = Path("10-modified-timeout.conf")

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -20,9 +20,9 @@ from api.cas.cli import *
from api.cas.core import Core
from core.test_run import TestRun
from storage_devices.device import Device
from test_tools.os_tools import reload_kernel_module
from connection.utils.output import CmdException, Output
from type_def.size import Size, Unit
from test_utils.os_utils import reload_kernel_module
from test_utils.output import CmdException, Output
from test_utils.size import Size, Unit
# casadm commands
@@ -48,7 +48,6 @@ def start_cache(
)
_cache_id = str(cache_id) if cache_id is not None else None
_cache_mode = cache_mode.name.lower() if cache_mode else None
output = TestRun.executor.run(
start_cmd(
cache_dev=cache_dev.path,
@@ -60,71 +59,33 @@ def start_cache(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to start cache.", output)
if not _cache_id:
from api.cas.casadm_parser import get_caches
cache_list = get_caches()
attached_cache_list = [cache for cache in cache_list if cache.cache_device is not None]
# Compare paths of old and new caches, returning the only one created now.
# This is needed when cache_id is not present in the CLI command.
new_cache = next(
cache for cache in attached_cache_list if cache.cache_device.path == cache_dev.path
)
_cache_id = new_cache.cache_id
cache = Cache(cache_id=int(_cache_id), device=cache_dev, cache_line_size=_cache_line_size)
TestRun.dut.cache_list.append(cache)
return cache
return Cache(cache_dev)
def load_cache(device: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(load_cmd(cache_dev=device.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load cache.", output)
caches_after_load = get_caches()
new_cache = next(cache for cache in caches_after_load if cache.cache_id not in
[cache.cache_id for cache in caches_before_load])
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
return Cache(device)
def attach_cache(
cache_id: int, device: Device, force: bool = False, shortcut: bool = False
) -> Output:
def attach_cache(cache_id: int, device: Device, force: bool, shortcut: bool = False) -> Output:
output = TestRun.executor.run(
attach_cache_cmd(
cache_dev=device.path, cache_id=str(cache_id), force=force, shortcut=shortcut
)
)
if output.exit_code != 0:
raise CmdException("Failed to attach cache.", output)
attached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
attached_cache.cache_device = device
return output
def detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(detach_cache_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@@ -132,16 +93,8 @@ def stop_cache(cache_id: int, no_data_flush: bool = False, shortcut: bool = Fals
output = TestRun.executor.run(
stop_cmd(cache_id=str(cache_id), no_data_flush=no_data_flush, shortcut=shortcut)
)
if output.exit_code != 0:
raise CmdException("Failed to stop cache.", output)
TestRun.dut.cache_list = [
cache for cache in TestRun.dut.cache_list if cache.cache_id != cache_id
]
TestRun.dut.core_list = [core for core in TestRun.dut.core_list if core.cache_id != cache_id]
return output
@@ -239,7 +192,7 @@ def set_param_promotion(cache_id: int, policy: PromotionPolicy, shortcut: bool =
def set_param_promotion_nhit(
cache_id: int, threshold: int = None, trigger: int = None, shortcut: bool = False
cache_id: int, threshold: int = None, trigger: int = None, shortcut: bool = False
) -> Output:
_threshold = str(threshold) if threshold is not None else None
_trigger = str(trigger) if trigger is not None else None
@@ -314,7 +267,7 @@ def get_param_cleaning_acp(
def get_param_promotion(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@@ -328,7 +281,7 @@ def get_param_promotion(
def get_param_promotion_nhit(
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
cache_id: int, output_format: OutputFormat = None, shortcut: bool = False
) -> Output:
_output_format = output_format.name if output_format else None
output = TestRun.executor.run(
@@ -372,11 +325,7 @@ def add_core(cache: Cache, core_dev: Device, core_id: int = None, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to add core.", output)
core = Core(core_dev.path, cache.cache_id)
TestRun.dut.core_list.append(core)
return core
return Core(core_dev.path, cache.cache_id)
def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool = False) -> Output:
@@ -387,12 +336,6 @@ def remove_core(cache_id: int, core_id: int, force: bool = False, shortcut: bool
)
if output.exit_code != 0:
raise CmdException("Failed to remove core.", output)
TestRun.dut.core_list = [
core
for core in TestRun.dut.core_list
if core.cache_id != cache_id or core.core_id != core_id
]
return output
@@ -542,41 +485,22 @@ def standby_init(
shortcut=shortcut,
)
)
if output.exit_code != 0:
raise CmdException("Failed to init standby cache.", output)
return Cache(cache_id=cache_id, device=cache_dev)
return Cache(cache_dev)
def standby_load(cache_dev: Device, shortcut: bool = False) -> Cache:
from api.cas.casadm_parser import get_caches
caches_before_load = get_caches()
output = TestRun.executor.run(standby_load_cmd(cache_dev=cache_dev.path, shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to load cache.", output)
caches_after_load = get_caches()
# compare ids of old and new caches, returning the only one created now
new_cache = next(
cache
for cache in caches_after_load
if cache.cache_id not in [cache.cache_id for cache in caches_before_load]
)
cache = Cache(cache_id=new_cache.cache_id, device=new_cache.cache_device)
TestRun.dut.cache_list.append(cache)
return cache
raise CmdException("Failed to load standby cache.", output)
return Cache(cache_dev)
def standby_detach_cache(cache_id: int, shortcut: bool = False) -> Output:
output = TestRun.executor.run(standby_detach_cmd(cache_id=str(cache_id), shortcut=shortcut))
if output.exit_code != 0:
raise CmdException("Failed to detach standby cache.", output)
detached_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
detached_cache.cache_device = None
return output
@@ -586,10 +510,6 @@ def standby_activate_cache(cache_dev: Device, cache_id: int, shortcut: bool = Fa
)
if output.exit_code != 0:
raise CmdException("Failed to activate standby cache.", output)
activated_cache = next(cache for cache in TestRun.dut.cache_list if cache.cache_id == cache_id)
activated_cache.cache_device = cache_dev
return output


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -26,7 +26,7 @@ class OutputFormat(Enum):
class StatsFilter(Enum):
all = "all"
conf = "config"
conf = "configuration"
usage = "usage"
req = "request"
blk = "block"


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,12 +14,11 @@ from typing import List
from api.cas import casadm
from api.cas.cache_config import *
from api.cas.casadm_params import *
from api.cas.core_config import CoreStatus
from api.cas.ioclass_config import IoClass
from api.cas.version import CasVersion
from core.test_run_utils import TestRun
from storage_devices.device import Device
from connection.utils.output import CmdException
from test_utils.output import CmdException
class Stats(dict):
@@ -55,12 +54,12 @@ def get_caches() -> list:
def get_cores(cache_id: int) -> list:
from api.cas.core import Core
from api.cas.core import Core, CoreStatus
cores_dict = get_cas_devices_dict()["cores"].values()
def is_active(core):
return core["status"] == CoreStatus.active
return CoreStatus[core["status"].lower()] == CoreStatus.active
return [
Core(core["device_path"], core["cache_id"])
@@ -69,36 +68,6 @@ def get_cores(cache_id: int) -> list:
]
def get_inactive_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_inactive(core):
return core["status"] == CoreStatus.inactive
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_inactive(core) and core["cache_id"] == cache_id
]
def get_detached_cores(cache_id: int) -> list:
from api.cas.core import Core
cores_dict = get_cas_devices_dict()["cores"].values()
def is_detached(core):
return core["status"] == CoreStatus.detached
return [
Core(core["device_path"], core["cache_id"])
for core in cores_dict
if is_detached(core) and core["cache_id"] == cache_id
]
def get_cas_devices_dict() -> dict:
device_list = list(csv.DictReader(casadm.list_caches(OutputFormat.csv).stdout.split("\n")))
devices = {"caches": {}, "cores": {}, "core_pool": {}}
@@ -111,21 +80,21 @@ def get_cas_devices_dict() -> dict:
params = [
("id", cache_id),
("device_path", device["disk"]),
("status", CacheStatus(device["status"].lower())),
("status", device["status"]),
]
devices["caches"][cache_id] = dict([(key, value) for key, value in params])
elif device["type"] == "core":
params = [
("cache_id", cache_id),
("core_id", (int(device["id"]) if device["id"] != "-" else device["id"])),
("device_path", device["disk"]),
("status", CoreStatus(device["status"].lower())),
("exp_obj", device["device"]),
("status", device["status"]),
]
if core_pool:
params.append(("core_pool", device))
devices["core_pool"][device["disk"]] = dict([(key, value) for key, value in params])
devices["core_pool"][device["disk"]] = dict(
[(key, value) for key, value in params]
)
else:
devices["cores"][(cache_id, int(device["id"]))] = dict(
[(key, value) for key, value in params]
@@ -236,14 +205,11 @@ def get_io_class_list(cache_id: int) -> list:
return ret
def get_core_info_for_cache_by_path(core_disk_path: str, target_cache_id: int) -> dict | None:
def get_core_info_by_path(core_disk_path) -> dict | None:
output = casadm.list_caches(OutputFormat.csv, by_id_path=True)
reader = csv.DictReader(io.StringIO(output.stdout))
cache_id = -1
for row in reader:
if row["type"] == "cache":
cache_id = int(row["id"])
if row["type"] == "core" and row["disk"] == core_disk_path and target_cache_id == cache_id:
if row["type"] == "core" and row["disk"] == core_disk_path:
return {
"core_id": row["id"],
"core_device": row["disk"],


@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -192,7 +192,7 @@ remove_core_help = [
remove_inactive_help = [
r"Usage: casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"casadm --remove-inactive --cache-id \<ID\> --core-id \<ID\> \[option\.\.\.\]",
r"Remove inactive core device from cache instance",
r"Options that are valid with --remove-inactive are:",
r"-i --cache-id \<ID\> Identifier of cache instance \<1-16384\>",
@@ -285,7 +285,7 @@ standby_help = [
]
zero_metadata_help = [
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]",
r"Usage: casadm --zero-metadata --device \<DEVICE\> \[option\.\.\.\]]",
r"Clear metadata from caching device",
r"Options that are valid with --zero-metadata are:",
r"-d --device \<DEVICE\> Path to device on which metadata would be cleared",


@@ -1,27 +1,13 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import re
from connection.utils.output import Output
from core.test_run import TestRun
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
attach_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\."
]
from test_utils.output import Output
load_inactive_core_missing = [
r"WARNING: Can not resolve path to core \d+ from cache \d+\. By-id path will be shown for that "
@@ -31,18 +17,11 @@ load_inactive_core_missing = [
start_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Old metadata found on device\.",
r"Please load cache metadata using --load option or use --force to",
r" discard on-disk metadata and start fresh cache instance\.",
]
attach_cache_with_existing_metadata = [
r"Error inserting cache \d+",
r"Old metadata found on device",
r"Please attach another device or use --force to discard on-disk metadata",
r" and attach this device to cache instance\.",
]
start_cache_on_already_used_dev = [
r"Error inserting cache \d+",
r"Cache device \'\/dev\/\S+\' is already used as cache\.",
@@ -105,20 +84,11 @@ already_cached_core = [
]
remove_mounted_core = [
r"Can\'t remove core \d+ from cache \d+ due to mounted devices:"
]
remove_mounted_core_kernel = [
r"Error while removing core device \d+ from cache instance \d+",
r"Device opens or mount are pending to this cache",
r"Can\'t remove core \d+ from cache \d+\. Device /dev/cas\d+-\d+ is mounted\!"
]
stop_cache_mounted_core = [
r"Can\'t stop cache instance \d+ due to mounted devices:"
]
stop_cache_mounted_core_kernel = [
r"Error while stopping cache \d+",
r"Error while removing cache \d+",
r"Device opens or mount are pending to this cache",
]
@@ -254,12 +224,6 @@ malformed_io_class_header = [
unexpected_cls_option = [r"Option '--cache-line-size \(-x\)' is not allowed"]
attach_not_enough_memory = [
r"Not enough free RAM\.\nYou need at least \d+.\d+GB to attach a device to cache "
r"with cache line size equal \d+kB.\n"
r"Try with greater cache line size\."
]
def check_stderr_msg(output: Output, expected_messages, negate=False):
return __check_string_msg(output.stderr, expected_messages, negate)
@@ -278,7 +242,7 @@ def __check_string_msg(text: str, expected_messages, negate=False):
msg_ok = False
elif matches and negate:
TestRun.LOGGER.error(
f"Message is incorrect, expected to not find: {msg}\n actual: {text}."
f"Message is incorrect, expected to not find: {msg}\n " f"actual: {text}."
)
msg_ok = False
return msg_ok


@@ -1,24 +1,30 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from datetime import timedelta
from typing import List
from enum import Enum
from api.cas import casadm
from api.cas.cache_config import SeqCutOffParameters, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_cas_devices_dict
from api.cas.core_config import CoreStatus
from api.cas.casadm_parser import get_seq_cut_off_parameters, get_core_info_by_path
from api.cas.statistics import CoreStats, CoreIoClassStats
from core.test_run_utils import TestRun
from storage_devices.device import Device
from test_tools.fs_tools import Filesystem, ls_item
from test_tools.os_tools import sync
from test_tools.common.wait import wait
from type_def.size import Unit, Size
from test_tools import fs_utils, disk_utils
from test_utils.os_utils import wait, sync
from test_utils.size import Unit, Size
class CoreStatus(Enum):
empty = 0
active = 1
inactive = 2
detached = 3
SEQ_CUTOFF_THRESHOLD_MAX = Size(4194181, Unit.KibiByte)
@@ -29,35 +35,20 @@ class Core(Device):
def __init__(self, core_device: str, cache_id: int):
self.core_device = Device(core_device)
self.path = None
self.cache_id = cache_id
core_info = self.__get_core_info()
# "-" is special case for cores in core pool
if core_info["core_id"] != "-":
self.core_id = int(core_info["core_id"])
if core_info["exp_obj"] != "-":
Device.__init__(self, core_info["exp_obj"])
self.cache_id = cache_id
self.partitions = []
self.block_size = None
def __get_core_info(self) -> dict | None:
core_dicts = get_cas_devices_dict()["cores"].values()
# for core
core_device = [
core
for core in core_dicts
if core["cache_id"] == self.cache_id and core["device_path"] == self.core_device.path
]
if core_device:
return core_device[0]
def __get_core_info(self):
return get_core_info_by_path(self.core_device.path)
# for core pool
core_pool_dicts = get_cas_devices_dict()["core_pool"].values()
core_pool_device = [
core for core in core_pool_dicts if core["device_path"] == self.core_device.path
]
return core_pool_device[0]
def create_filesystem(self, fs_type: Filesystem, force=True, blocksize=None):
def create_filesystem(self, fs_type: disk_utils.Filesystem, force=True, blocksize=None):
super().create_filesystem(fs_type, force, blocksize)
self.core_device.filesystem = self.filesystem
@@ -85,8 +76,8 @@ class Core(Device):
percentage_val=percentage_val,
)
def get_status(self) -> CoreStatus:
return self.__get_core_info()["status"]
def get_status(self):
return CoreStatus[self.__get_core_info()["status"].lower()]
def get_seq_cut_off_parameters(self):
return get_seq_cut_off_parameters(self.cache_id, self.core_id)
@@ -146,7 +137,7 @@ class Core(Device):
def check_if_is_present_in_os(self, should_be_visible=True):
device_in_system_message = "CAS device exists in OS."
device_not_in_system_message = "CAS device does not exist in OS."
item = ls_item(self.path)
item = fs_utils.ls_item(f"{self.path}")
if item is not None:
if should_be_visible:
TestRun.LOGGER.info(device_in_system_message)


@@ -1,16 +0,0 @@
#
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from enum import Enum
class CoreStatus(Enum):
empty = "empty"
active = "active"
inactive = "inactive"
detached = "detached"
def __str__(self):
return self.value


@@ -6,8 +6,8 @@
import re
from test_tools.dmesg import get_dmesg
from type_def.size import Size, Unit
from test_utils.dmesg import get_dmesg
from test_utils.size import Size, Unit
def get_metadata_size_on_device(cache_id: int) -> Size:


@@ -7,7 +7,8 @@
from api.cas import casadm_parser
from api.cas.cache_config import CacheMode
from storage_devices.device import Device
from test_tools.fs_tools import remove, write_file
from test_tools import fs_utils
opencas_conf_path = "/etc/opencas/opencas.conf"
@@ -33,7 +34,7 @@ class InitConfig:
@staticmethod
def remove_config_file():
remove(opencas_conf_path, force=False)
fs_utils.remove(opencas_conf_path, force=False)
def save_config_file(self):
config_lines = []
@@ -46,7 +47,7 @@ class InitConfig:
config_lines.append(CoreConfigLine.header)
for c in self.core_config_lines:
config_lines.append(str(c))
write_file(opencas_conf_path, "\n".join(config_lines), False)
fs_utils.write_file(opencas_conf_path, "\n".join(config_lines), False)
@classmethod
def create_init_config_from_running_configuration(
@@ -68,7 +69,7 @@ class InitConfig:
@classmethod
def create_default_init_config(cls):
cas_version = casadm_parser.get_casadm_version()
write_file(opencas_conf_path, f"version={cas_version.base}")
fs_utils.write_file(opencas_conf_path, f"version={cas_version.base}")
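The save_config_file() flow above joins header and config lines with newlines before writing them out in one call; a minimal sketch with plain strings standing in for the CacheConfigLine/CoreConfigLine objects (the header text and device paths are illustrative, not the exact opencas.conf format):

```python
# Plain strings stand in for CacheConfigLine / CoreConfigLine objects;
# header text and paths are illustrative only.
cache_header = "[caches]"
core_header = "[cores]"
cache_config_lines = ["1\t/dev/nvme0n1p1\tWT"]
core_config_lines = ["1\t1\t/dev/sdb1"]

# Same assembly pattern as save_config_file(): collect lines, then
# write "\n".join(config_lines) to the config path in one call.
config_lines = [cache_header]
config_lines.extend(str(c) for c in cache_config_lines)
config_lines.append(core_header)
config_lines.extend(str(c) for c in core_config_lines)
content = "\n".join(config_lines)
```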
class CacheConfigLine:


@@ -9,9 +9,8 @@ import os
from core.test_run import TestRun
from api.cas import cas_module
from api.cas.version import get_installed_cas_version
from test_tools import git
from connection.utils.output import CmdException
from test_tools.os_tools import is_kernel_module_loaded
from test_utils import os_utils, git
from test_utils.output import CmdException
def rsync_opencas_sources():
@@ -99,7 +98,7 @@ def reinstall_opencas(version: str = ""):
def check_if_installed(version: str = ""):
TestRun.LOGGER.info("Check if Open CAS Linux is installed")
output = TestRun.executor.run("which casadm")
modules_loaded = is_kernel_module_loaded(cas_module.CasModule.cache.value)
modules_loaded = os_utils.is_kernel_module_loaded(cas_module.CasModule.cache.value)
if output.exit_code != 0 or not modules_loaded:
TestRun.LOGGER.info("CAS is not installed")


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -14,10 +14,11 @@ from datetime import timedelta
from packaging import version
from core.test_run import TestRun
from test_tools.fs_tools import write_file
from test_tools.os_tools import get_kernel_version
from test_tools import fs_utils
from test_utils import os_utils
from test_utils.generator import random_string
default_config_file_path = TestRun.TEST_RUN_DATA_PATH + "/opencas_ioclass.conf"
default_config_file_path = "/tmp/opencas_ioclass.conf"
MAX_IO_CLASS_ID = 32
MAX_IO_CLASS_PRIORITY = 255
@@ -108,7 +109,7 @@ class IoClass:
ioclass_config_path: str = default_config_file_path,
):
TestRun.LOGGER.info(f"Creating config file {ioclass_config_path}")
write_file(
fs_utils.write_file(
ioclass_config_path, IoClass.list_to_csv(ioclass_list, add_default_rule)
)
@@ -166,7 +167,7 @@ class IoClass:
"file_offset",
"request_size",
]
if get_kernel_version() >= version.Version("4.13"):
if os_utils.get_kernel_version() >= version.Version("4.13"):
rules.append("wlth")
rule = random.choice(rules)
@@ -177,17 +178,13 @@ class IoClass:
def add_random_params(rule: str):
if rule == "directory":
allowed_chars = string.ascii_letters + string.digits + "/"
rule += f":/{''.join(random.choices(allowed_chars, k=random.randint(1, 40)))}"
rule += f":/{random_string(random.randint(1, 40), allowed_chars)}"
elif rule in ["file_size", "lba", "pid", "file_offset", "request_size", "wlth"]:
rule += f":{Operator(random.randrange(len(Operator))).name}:{random.randrange(1000000)}"
elif rule == "io_class":
rule += f":{random.randrange(MAX_IO_CLASS_PRIORITY + 1)}"
elif rule in ["extension", "process_name", "file_name_prefix"]:
allowed_chars = string.ascii_letters + string.digits
rule += f":{''.join(random.choices(allowed_chars, k=random.randint(1, 10)))}"
elif rule == "io_direction":
direction = random.choice(["read", "write"])
rule += f":{direction}"
rule += f":{random_string(random.randint(1, 10))}"
if random.randrange(2):
rule += "&done"
return rule
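The suffix generation in add_random_params() above can be sketched with the stdlib alone; random.choices over an allowed-character set replaces the removed random_string() helper:

```python
import random
import string

# Same pattern as the "extension"/"process_name"/"file_name_prefix"
# branch above: a random alphanumeric suffix of length 1..10.
allowed_chars = string.ascii_letters + string.digits
suffix = "".join(random.choices(allowed_chars, k=random.randint(1, 10)))
rule = f"extension:{suffix}"
```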


@@ -10,7 +10,7 @@ from datetime import timedelta
import paramiko
from core.test_run import TestRun
from test_tools.common.wait import wait
from test_utils.os_utils import wait
def check_progress_bar(command: str, progress_bar_expected: bool = True):


@@ -1,18 +1,17 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import csv
from datetime import timedelta
from enum import Enum
from typing import List
from api.cas import casadm
from api.cas.casadm_params import StatsFilter
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_utils.size import Size, Unit
class UnitType(Enum):
@@ -23,7 +22,6 @@ class UnitType(Enum):
kibibyte = "[KiB]"
gibibyte = "[GiB]"
seconds = "[s]"
byte = "[B]"
def __str__(self):
return self.value
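The footprint handling in CacheConfigStats further down scans stats_dict for the key starting with "Metadata Memory Footprint " and derives the unit from the bracketed token in the key itself; a sketch of that lookup (trimmed UnitType copy, made-up stats value):

```python
from enum import Enum


# Trimmed copy of the UnitType enum above.
class UnitType(Enum):
    mebibyte = "[MiB]"
    kibibyte = "[KiB]"


stats_dict = {"Metadata Memory Footprint [MiB]": "45.3"}
prefix = "Metadata Memory Footprint "

# Find the footprint key regardless of which unit casadm printed,
# then map its bracketed suffix onto a UnitType by value.
key = next(k for k in stats_dict if k.startswith(prefix))
unit = UnitType(key[len(prefix):])
print(unit)  # UnitType.mebibyte
```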
@@ -59,9 +57,6 @@ class CacheStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing output cmd\n{stats_dict}")
def __str__(self):
# stats_list contains all Class.__str__ methods initialized in CacheStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@@ -73,9 +68,6 @@ class CacheStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreStats:
def __init__(
@@ -100,9 +92,6 @@ class CoreStats:
case StatsFilter.err:
self.error_stats = ErrorStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing output cmd\n{stats_dict}")
def __str__(self):
# stats_list contains all Class.__str__ methods initialized in CacheStats
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
@@ -114,9 +103,6 @@ class CoreStats:
getattr(other, stats_item) for stats_item in other.__dict__
]
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreIoClassStats:
def __init__(
@@ -142,9 +128,6 @@ class CoreIoClassStats:
case StatsFilter.blk:
self.block_stats = BlockStats(stats_dict, percentage_val)
if stats_dict:
raise CmdException(f"Unknown stat(s) left after parsing output cmd\n{stats_dict}")
def __eq__(self, other):
# check if all initialized variable in self(CacheStats) match other(CacheStats)
return [getattr(self, stats_item) for stats_item in self.__dict__] == [
@@ -156,9 +139,6 @@ class CoreIoClassStats:
stats_list = [str(getattr(self, stats_item)) for stats_item in self.__dict__]
return "\n".join(stats_list)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CacheIoClassStats(CoreIoClassStats):
def __init__(
@@ -193,31 +173,12 @@ class CacheConfigStats:
self.cache_line_size = parse_value(
value=stats_dict["Cache line size [KiB]"], unit_type=UnitType.kibibyte
)
footprint_prefix = "Metadata Memory Footprint "
footprint_key = next(k for k in stats_dict if k.startswith(footprint_prefix))
self.metadata_memory_footprint = parse_value(
value=stats_dict[footprint_key],
unit_type=UnitType(footprint_key[len(footprint_prefix) :]),
value=stats_dict["Metadata Memory Footprint [MiB]"], unit_type=UnitType.mebibyte
)
self.dirty_for = parse_value(value=stats_dict["Dirty for [s]"], unit_type=UnitType.seconds)
self.status = stats_dict["Status"]
del stats_dict["Cache Id"]
del stats_dict["Cache Size [4KiB Blocks]"]
del stats_dict["Cache Size [GiB]"]
del stats_dict["Cache Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Devices"]
del stats_dict["Inactive Core Devices"]
del stats_dict["Write Policy"]
del stats_dict["Cleaning Policy"]
del stats_dict["Promotion Policy"]
del stats_dict["Cache line size [KiB]"]
del stats_dict[footprint_key]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
def __str__(self):
return (
f"Config stats:\n"
@@ -255,13 +216,10 @@ class CacheConfigStats:
and self.status == other.status
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class CoreConfigStats:
def __init__(self, stats_dict):
self.core_id = int(stats_dict["Core Id"])
self.core_id = stats_dict["Core Id"]
self.core_dev = stats_dict["Core Device"]
self.exp_obj = stats_dict["Exported Object"]
self.core_size = parse_value(
@@ -274,17 +232,6 @@ class CoreConfigStats:
)
self.seq_cutoff_policy = stats_dict["Seq cutoff policy"]
del stats_dict["Core Id"]
del stats_dict["Core Device"]
del stats_dict["Exported Object"]
del stats_dict["Core Size [4KiB Blocks]"]
del stats_dict["Core Size [GiB]"]
del stats_dict["Dirty for [s]"]
del stats_dict["Dirty for"]
del stats_dict["Status"]
del stats_dict["Seq cutoff threshold [KiB]"]
del stats_dict["Seq cutoff policy"]
def __str__(self):
return (
f"Config stats:\n"
@@ -312,9 +259,6 @@ class CoreConfigStats:
and self.seq_cutoff_policy == other.seq_cutoff_policy
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class IoClassConfigStats:
def __init__(self, stats_dict):
@@ -323,11 +267,6 @@ class IoClassConfigStats:
self.eviction_priority = stats_dict["Eviction priority"]
self.max_size = stats_dict["Max size"]
del stats_dict["IO class ID"]
del stats_dict["IO class name"]
del stats_dict["Eviction priority"]
del stats_dict["Max size"]
def __str__(self):
return (
f"Config stats:\n"
@@ -347,9 +286,6 @@ class IoClassConfigStats:
and self.max_size == other.max_size
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class UsageStats:
def __init__(self, stats_dict, percentage_val):
@@ -371,18 +307,6 @@ class UsageStats:
value=stats_dict[f"Inactive Dirty {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Free {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
if f"Inactive Dirty {unit}" in stats_dict:
del stats_dict[f"Inactive Occupancy {unit}"]
if f"Inactive Clean {unit}" in stats_dict:
del stats_dict[f"Inactive Clean {unit}"]
if f"Inactive Dirty {unit}" in stats_dict:
del stats_dict[f"Inactive Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@@ -408,9 +332,6 @@ class UsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class IoClassUsageStats:
def __init__(self, stats_dict, percentage_val):
@@ -419,11 +340,6 @@ class IoClassUsageStats:
self.clean = parse_value(value=stats_dict[f"Clean {unit}"], unit_type=unit)
self.dirty = parse_value(value=stats_dict[f"Dirty {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.block_4k]:
del stats_dict[f"Occupancy {unit}"]
del stats_dict[f"Clean {unit}"]
del stats_dict[f"Dirty {unit}"]
def __str__(self):
return (
f"Usage stats:\n"
@@ -447,22 +363,15 @@ class IoClassUsageStats:
def __ne__(self, other):
return not self == other
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStats:
def __init__(self, stats_dict, percentage_val):
unit = UnitType.percentage if percentage_val else UnitType.requests
self.read = RequestStatsChunk(
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.read,
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.read
)
self.write = RequestStatsChunk(
stats_dict=stats_dict,
percentage_val=percentage_val,
operation=OperationType.write,
stats_dict=stats_dict, percentage_val=percentage_val, operation=OperationType.write
)
self.pass_through_reads = parse_value(
value=stats_dict[f"Pass-Through reads {unit}"], unit_type=unit
@@ -477,17 +386,6 @@ class RequestStats:
value=stats_dict[f"Total requests {unit}"], unit_type=unit
)
for unit in [UnitType.percentage, UnitType.requests]:
for operation in [OperationType.read, OperationType.write]:
del stats_dict[f"{operation} hits {unit}"]
del stats_dict[f"{operation} partial misses {unit}"]
del stats_dict[f"{operation} full misses {unit}"]
del stats_dict[f"{operation} total {unit}"]
del stats_dict[f"Pass-Through reads {unit}"]
del stats_dict[f"Pass-Through writes {unit}"]
del stats_dict[f"Serviced requests {unit}"]
del stats_dict[f"Total requests {unit}"]
def __str__(self):
return (
f"Request stats:\n"
@@ -511,9 +409,6 @@ class RequestStats:
and self.requests_total == other.requests_total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class RequestStatsChunk:
def __init__(self, stats_dict, percentage_val: bool, operation: OperationType):
@@ -545,9 +440,6 @@ class RequestStatsChunk:
and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BlockStats:
def __init__(self, stats_dict, percentage_val):
@@ -563,12 +455,6 @@ class BlockStats:
device="exported object",
)
for unit in [UnitType.percentage, UnitType.block_4k]:
for device in ["core", "cache", "exported object"]:
del stats_dict[f"Reads from {device} {unit}"]
del stats_dict[f"Writes to {device} {unit}"]
del stats_dict[f"Total to/from {device} {unit}"]
def __str__(self):
return (
f"Block stats:\n"
@@ -584,9 +470,6 @@ class BlockStats:
self.core == other.core and self.cache == other.cache and self.exp_obj == other.exp_obj
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class ErrorStats:
def __init__(self, stats_dict, percentage_val):
@@ -599,13 +482,6 @@ class ErrorStats:
)
self.total_errors = parse_value(value=stats_dict[f"Total errors {unit}"], unit_type=unit)
for unit in [UnitType.percentage, UnitType.requests]:
for device in ["Core", "Cache"]:
del stats_dict[f"{device} read errors {unit}"]
del stats_dict[f"{device} write errors {unit}"]
del stats_dict[f"{device} total errors {unit}"]
del stats_dict[f"Total errors {unit}"]
def __str__(self):
return (
f"Error stats:\n"
@@ -623,9 +499,6 @@ class ErrorStats:
and self.total_errors == other.total_errors
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunk:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@@ -644,9 +517,6 @@ class BasicStatsChunk:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
class BasicStatsChunkError:
def __init__(self, stats_dict: dict, percentage_val: bool, device: str):
@@ -665,9 +535,6 @@ class BasicStatsChunkError:
self.reads == other.reads and self.writes == other.writes and self.total == other.total
)
def __iter__(self):
return iter([getattr(self, stats_item) for stats_item in self.__dict__])
def get_stat_value(stat_dict: dict, key: str):
idx = key.index("[")
@@ -713,10 +580,10 @@ def _get_section_filters(filter: List[StatsFilter], io_class_stats: bool = False
def get_stats_dict(
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None,
filter: List[StatsFilter],
cache_id: int,
core_id: int = None,
io_class_id: int = None
):
csv_stats = casadm.print_statistics(
cache_id=cache_id,


@@ -6,9 +6,9 @@
import re
from test_tools import git
from test_utils import git
from core.test_run import TestRun
from connection.utils.output import CmdException
from test_utils.output import CmdException
class CasVersion:
@@ -43,7 +43,7 @@ class CasVersion:
def get_available_cas_versions():
release_tags = git.get_tags()
release_tags = git.get_release_tags()
versions = [CasVersion.from_git_tag(tag) for tag in release_tags]
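get_available_cas_versions() above builds CasVersion objects from git release tags; a hypothetical sketch of such tag parsing (the "vMAJOR.MINOR[.PATCH]" tag format and the helper name version_from_tag are assumptions, not the framework's actual API):

```python
import re


def version_from_tag(tag: str) -> str:
    """Parse a release tag like "v22.06" or "v22.06.1" into a
    dotted version string; patch defaults to 0 when absent."""
    match = re.fullmatch(r"v(\d+)\.(\d+)(?:\.(\d+))?", tag)
    if match is None:
        raise ValueError(f"not a release tag: {tag}")
    major, minor, patch = match.group(1), match.group(2), match.group(3) or "0"
    return f"{major}.{minor}.{patch}"
```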


@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -12,8 +12,9 @@ from core.test_run import TestRun
from api.cas import casadm
from storage_devices.disk import DiskType, DiskTypeSet
from api.cas.cache_config import CacheMode
from test_tools.fs_tools import Filesystem, remove, create_directory
from type_def.size import Size, Unit
from test_tools import fs_utils
from test_tools.disk_utils import Filesystem
from test_utils.size import Size, Unit
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@@ -28,10 +29,11 @@ block_sizes = [1, 2, 4, 5, 8, 16, 32, 64, 128]
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.nand]))
def test_support_different_io_size(cache_mode):
"""
title: Support for different I/O sizes
description: Verify support for I/O of size in range from 512B to 128KiB
title: OpenCAS supports different IO sizes
description: |
OpenCAS supports IO of size in rage from 512b to 128K
pass_criteria:
- No I/O errors
- No IO errors
"""
with TestRun.step("Prepare cache and core devices"):
@@ -46,12 +48,12 @@ def test_support_different_io_size(cache_mode):
)
core = cache.add_core(core_disk.partitions[0])
with TestRun.step("Load the default io class config file"):
with TestRun.step("Load the default ioclass config file"):
cache.load_io_class(opencas_ioclass_conf_path)
with TestRun.step("Create a filesystem on the core device and mount it"):
remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
create_directory(path=mountpoint)
fs_utils.remove(path=mountpoint, force=True, recursive=True, ignore_errors=True)
fs_utils.create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)


@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -17,11 +17,12 @@ from api.cas.cli_messages import (
)
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools import fs_utils
from test_tools.dd import Dd
from test_tools.fs_tools import Filesystem, read_file
from test_tools.disk_utils import Filesystem
from test_utils.filesystem.file import File
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_utils.output import CmdException
from test_utils.size import Size, Unit
version_file_path = r"/var/lib/opencas/cas_version"
mountpoint = "/mnt"
@@ -30,45 +31,46 @@ mountpoint = "/mnt"
@pytest.mark.CI
def test_cas_version():
"""
title: Test for version number
title: Test for CAS version
description:
Check if version printed by cmd returns value consistent with version file
Check if CAS print version cmd returns consistent version with version file
pass criteria:
- Version command succeeds
- Versions from cmd and file in /var/lib/opencas/cas_version are consistent
- casadm version command succeeds
- versions from cmd and file in /var/lib/opencas/cas_version are consistent
"""
with TestRun.step("Read version using casadm cmd"):
with TestRun.step("Read cas version using casadm cmd"):
output = casadm.print_version(output_format=OutputFormat.csv)
cmd_version = output.stdout
cmd_cas_versions = [version.split(",")[1] for version in cmd_version.split("\n")[1:]]
with TestRun.step(f"Read version from {version_file_path} location"):
file_read = read_file(version_file_path).split("\n")
with TestRun.step(f"Read cas version from {version_file_path} location"):
file_read = fs_utils.read_file(version_file_path).split("\n")
file_cas_version = next(
(line.split("=")[1] for line in file_read if "CAS_VERSION=" in line)
)
with TestRun.step("Compare cmd and file versions"):
if not all(file_cas_version == cmd_cas_version for cmd_cas_version in cmd_cas_versions):
TestRun.LOGGER.error(f"Cmd and file versions doesn't match")
TestRun.LOGGER.error(f"Cmd and file versions doesn`t match")
@pytest.mark.CI
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
def test_negative_start_cache():
"""
title: Negative test for starting cache
title: Test start cache negative on cache device
description:
Check starting cache using the same device or cache ID twice
Check for negative cache start scenarios
pass criteria:
- Cache start succeeds
- Starting cache on the same device with another ID fails
- Starting cache on another partition with the same ID fails
- Fails to start cache on the same device with another id
- Fails to start cache on another partition with the same id
"""
with TestRun.step("Prepare cache device"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
cache_dev_1 = cache_dev.partitions[0]


@@ -9,7 +9,7 @@ import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from type_def.size import Size, Unit
from test_utils.size import Size, Unit
@pytest.mark.CI


@@ -1,262 +0,0 @@
#
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import posixpath
import random
import time
import pytest
from api.cas import casadm_parser, casadm
from api.cas.cache_config import CacheLineSize, CacheMode
from api.cas.cli import attach_cache_cmd
from api.cas.cli_messages import check_stderr_msg, attach_with_existing_metadata
from connection.utils.output import CmdException
from core.test_run import TestRun
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from storage_devices.nullblk import NullBlk
from test_tools.dmesg import clear_dmesg
from test_tools.fs_tools import Filesystem, create_directory, create_random_test_file, \
check_if_directory_exists, remove
from type_def.size import Size, Unit
mountpoint = "/mnt/cas"
test_file_path = f"{mountpoint}/test_file"
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.require_disk("core2", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
def test_attach_device_with_existing_metadata(cache_mode, cache_line_size):
"""
title: Test attaching cache with valid and relevant metadata.
description: |
Attach disk with valid and relevant metadata and verify whether the running configuration
wasn't affected by the values from the old metadata.
pass_criteria:
- no cache crash during attach and detach.
- old metadata doesn't affect running cache.
- no kernel panic
"""
with TestRun.step("Prepare random cache line size and cache mode (different than tested)"):
random_cache_mode = _get_random_uniq_cache_mode(cache_mode)
cache_mode1, cache_mode2 = cache_mode, random_cache_mode
random_cache_line_size = _get_random_uniq_cache_line_size(cache_line_size)
cache_line_size1, cache_line_size2 = cache_line_size, random_cache_line_size
with TestRun.step("Clear dmesg log"):
clear_dmesg()
with TestRun.step("Prepare devices for caches and cores"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(2, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev1 = TestRun.disks["core"]
core_dev2 = TestRun.disks["core2"]
core_dev1.create_partitions([Size(2, Unit.GibiByte)] * 2)
core_dev2.create_partitions([Size(2, Unit.GibiByte)] * 2)
with TestRun.step("Start 2 caches with different parameters and add core to each"):
cache1 = casadm.start_cache(
cache_dev, force=True, cache_line_size=cache_line_size1
)
if cache1.has_volatile_metadata():
pytest.skip("Non-volatile metadata needed to run this test")
for core in core_dev1.partitions:
cache1.add_core(core)
cache2 = casadm.start_cache(
cache_dev2, force=True, cache_line_size=cache_line_size2
)
for core in core_dev2.partitions:
cache2.add_core(core)
cores_in_cache1_before = {
core.core_device.path for core in casadm_parser.get_cores(cache_id=cache1.cache_id)
}
with TestRun.step(f"Set cache modes for caches to {cache_mode1} and {cache_mode2}"):
cache1.set_cache_mode(cache_mode1)
cache2.set_cache_mode(cache_mode2)
with TestRun.step("Stop second cache"):
cache2.stop()
with TestRun.step("Detach first cache device"):
cache1.detach()
with TestRun.step("Try to attach the other cache device to first cache without force flag"):
try:
cache1.attach(device=cache_dev2)
TestRun.fail("Cache attached successfully"
"Expected: cache fail to attach")
except CmdException as exc:
check_stderr_msg(exc.output, attach_with_existing_metadata)
TestRun.LOGGER.info("Cache attach failed as expected")
with TestRun.step("Attach the other cache device to first cache with force flag"):
cache1.attach(device=cache_dev2, force=True)
cores_after_attach = casadm_parser.get_cores(cache_id=cache1.cache_id)
with TestRun.step("Verify if old configuration doesn`t affect new cache"):
cores_in_cache1 = {core.core_device.path for core in cores_after_attach}
if cores_in_cache1 != cores_in_cache1_before:
TestRun.fail(
f"After attaching cache device, core list has changed:"
f"\nUsed {cores_in_cache1}"
f"\nShould use {cores_in_cache1_before}."
)
if cache1.get_cache_line_size() == cache_line_size2:
TestRun.fail(
f"After attaching cache device, cache line size changed:"
f"\nUsed {cache_line_size2}"
f"\nShould use {cache_line_size1}."
)
if cache1.get_cache_mode() != cache_mode1:
TestRun.fail(
f"After attaching cache device, cache mode changed:"
f"\nUsed {cache1.get_cache_mode()}"
f"\nShould use {cache_mode1}."
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", [CacheMode.WB, CacheMode.WT])
def test_attach_detach_md5sum(cache_mode):
"""
title: Test for md5sum of file after attach/detach operation.
description: |
Test data integrity after detach/attach operations
pass_criteria:
- CAS doesn't crash during attach and detach.
- md5sums before and after operations match each other
"""
with TestRun.step("Prepare cache and core devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
cache_dev2.create_partitions([Size(3, Unit.GibiByte)])
cache_dev2 = cache_dev2.partitions[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(6, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
core = cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Create a filesystem on the core device and mount it"):
if check_if_directory_exists(mountpoint):
remove(mountpoint, force=True, recursive=True)
create_directory(path=mountpoint)
core.create_filesystem(Filesystem.xfs)
core.mount(mountpoint)
with TestRun.step("Write data to the exported object"):
test_file_main = create_random_test_file(
target_file_path=posixpath.join(mountpoint, "test_file"),
file_size=Size(5, Unit.GibiByte),
)
with TestRun.step("Calculate test file md5sums before detach"):
test_file_md5sum_before = test_file_main.md5sum()
with TestRun.step("Detach cache device"):
cache.detach()
with TestRun.step("Attach different cache device"):
cache.attach(device=cache_dev2, force=True)
with TestRun.step("Calculate cache test file md5sums after cache attach"):
test_file_md5sum_after = test_file_main.md5sum()
with TestRun.step("Compare test file md5sums"):
if test_file_md5sum_before != test_file_md5sum_after:
TestRun.fail(
f"MD5 sums of core before and after do not match."
f"Expected: {test_file_md5sum_before}"
f"Actual: {test_file_md5sum_after}"
)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrizex("cache_mode", CacheMode)
def test_stop_cache_during_attach(cache_mode):
"""
title: Test cache stop during attach.
description: Test for handling concurrent cache attach and stop.
pass_criteria:
- No system crash.
- Stop operation completed successfully.
"""
with TestRun.step("Create null_blk device for cache"):
nullblk = NullBlk.create(size_gb=1500)
with TestRun.step("Prepare cache and core devices"):
cache_dev = nullblk[0]
core_dev = TestRun.disks["core"]
core_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev = core_dev.partitions[0]
with TestRun.step(f"Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True, cache_mode=cache_mode)
cache.add_core(core_dev)
with TestRun.step(f"Change cache mode to {cache_mode}"):
cache.set_cache_mode(cache_mode)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Start cache re-attach in background"):
TestRun.executor.run_in_background(
attach_cache_cmd(str(cache.cache_id), cache_dev.path)
)
time.sleep(1)
with TestRun.step("Stop cache"):
cache.stop()
with TestRun.step("Verify if cache stopped"):
caches = casadm_parser.get_caches()
if caches:
TestRun.fail(
"Cache is still running despite stop operation"
"expected behaviour: Cache stopped"
"actual behaviour: Cache running"
)
def _get_random_uniq_cache_line_size(cache_line_size) -> CacheLineSize:
return random.choice([c for c in list(CacheLineSize) if c is not cache_line_size])
def _get_random_uniq_cache_mode(cache_mode) -> CacheMode:
return random.choice([c for c in list(CacheMode) if c is not cache_mode])


@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -17,8 +17,8 @@ from api.cas.cache_config import (
)
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from core.test_run import TestRun
from type_def.size import Size, Unit
from test_tools.udev import Udev
from test_utils.size import Size, Unit
from test_utils.os_utils import Udev
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
@@ -65,10 +65,10 @@ def test_cleaning_policies_in_write_back(cleaning_policy: CleaningPolicy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running cleaner process"):
with TestRun.step("Check for running CAS cleaner"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("Cleaner process is not running!")
TestRun.fail("CAS cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@@ -133,10 +133,10 @@ def test_cleaning_policies_in_write_through(cleaning_policy):
cache.set_cleaning_policy(cleaning_policy=cleaning_policy)
set_cleaning_policy_params(cache, cleaning_policy)
with TestRun.step("Check for running cleaner process"):
with TestRun.step("Check for running CAS cleaner"):
output = TestRun.executor.run(f"pgrep {cas_cleaner_process_name}")
if output.exit_code != 0:
TestRun.fail("Cleaner process is not running!")
TestRun.fail("CAS cleaner process is not running!")
with TestRun.step(f"Add {cores_count} cores to the cache"):
cores = [cache.add_core(partition) for partition in core_dev.partitions]
@@ -193,12 +193,12 @@ def set_cleaning_policy_params(cache, cleaning_policy):
if current_acp_params.wake_up_time != acp_params.wake_up_time:
failed_params += (
f"Wake up time is {current_acp_params.wake_up_time}, "
f"Wake Up time is {current_acp_params.wake_up_time}, "
f"should be {acp_params.wake_up_time}\n"
)
if current_acp_params.flush_max_buffers != acp_params.flush_max_buffers:
failed_params += (
f"Flush max buffers is {current_acp_params.flush_max_buffers}, "
f"Flush Max Buffers is {current_acp_params.flush_max_buffers}, "
f"should be {acp_params.flush_max_buffers}\n"
)
TestRun.LOGGER.error(f"ACP parameters did not switch properly:\n{failed_params}")
@@ -215,22 +215,22 @@ def set_cleaning_policy_params(cache, cleaning_policy):
failed_params = ""
if current_alru_params.wake_up_time != alru_params.wake_up_time:
failed_params += (
f"Wake up time is {current_alru_params.wake_up_time}, "
f"Wake Up time is {current_alru_params.wake_up_time}, "
f"should be {alru_params.wake_up_time}\n"
)
if current_alru_params.staleness_time != alru_params.staleness_time:
failed_params += (
f"Staleness time is {current_alru_params.staleness_time}, "
f"Staleness Time is {current_alru_params.staleness_time}, "
f"should be {alru_params.staleness_time}\n"
)
if current_alru_params.flush_max_buffers != alru_params.flush_max_buffers:
failed_params += (
f"Flush max buffers is {current_alru_params.flush_max_buffers}, "
f"Flush Max Buffers is {current_alru_params.flush_max_buffers}, "
f"should be {alru_params.flush_max_buffers}\n"
)
if current_alru_params.activity_threshold != alru_params.activity_threshold:
failed_params += (
f"Activity threshold is {current_alru_params.activity_threshold}, "
f"Activity Threshold is {current_alru_params.activity_threshold}, "
f"should be {alru_params.activity_threshold}\n"
)
TestRun.LOGGER.error(f"ALRU parameters did not switch properly:\n{failed_params}")
@@ -245,9 +245,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.alru:
if core_writes_before_wait_for_cleaning != Size.zero():
TestRun.LOGGER.error(
"Cleaner process started to clean dirty data right after I/O! "
"CAS cleaner started to clean dirty data right after IO! "
"According to ALRU parameters set in this test cleaner should "
"wait 10 seconds after I/O before cleaning dirty data"
"wait 10 seconds after IO before cleaning dirty data"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(
@@ -266,9 +266,9 @@ def check_cleaning_policy_operation(
case CleaningPolicy.acp:
if core_writes_before_wait_for_cleaning == Size.zero():
TestRun.LOGGER.error(
"Cleaner process did not start cleaning dirty data right after I/O! "
"CAS cleaner did not start cleaning dirty data right after IO! "
"According to ACP policy cleaner should start "
"cleaning dirty data right after I/O"
"cleaning dirty data right after IO"
)
if core_writes_after_wait_for_cleaning <= core_writes_before_wait_for_cleaning:
TestRun.LOGGER.error(
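The policy-specific expectations asserted above (ALRU must stay idle for its 10-second window after I/O, ACP must start cleaning immediately) can be condensed into one pure function. This is a hypothetical sketch for illustration, not part of the test framework; the function name and plain integer byte counts are assumptions:

```python
def cleaning_behavior_ok(policy: str, writes_right_after_io: int,
                         writes_after_wait: int) -> bool:
    """Check cleaner activity against the configured cleaning policy.

    ALRU (with the wake-up/staleness times used in the test) must not touch
    dirty data immediately after I/O, but must have cleaned some once the
    wait elapses. ACP must start cleaning right after I/O and keep going.
    """
    if policy == "alru":
        return writes_right_after_io == 0 and writes_after_wait > 0
    if policy == "acp":
        return (writes_right_after_io > 0
                and writes_after_wait > writes_right_after_io)
    raise ValueError(f"unknown cleaning policy: {policy}")
```

This mirrors the two error branches in `check_cleaning_policy_operation` for each policy.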

View File

@@ -1,22 +1,21 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
from time import sleep
import pytest
from time import sleep
from api.cas import casadm, casadm_parser, cli
from api.cas.cache_config import CacheMode, CleaningPolicy, CacheModeTrait, SeqCutOffPolicy
from api.cas.casadm_params import StatsFilter
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_utils.output import CmdException
from test_utils.size import Size, Unit
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.LazyWrites))
@@ -24,10 +23,10 @@ from type_def.size import Size, Unit
@pytest.mark.require_disk("core", DiskTypeSet([DiskType.hdd, DiskType.hdd4k]))
def test_concurrent_cores_flush(cache_mode: CacheMode):
"""
title: Flush two cores simultaneously - negative.
title: Fail to flush two cores simultaneously.
description: |
Validate that the attempt to flush another core when there is already one flush in
progress on the same cache will fail.
CAS should return an error on attempt to flush second core if there is already
one flush in progress.
pass_criteria:
- No system crash.
- First core flushing should finish successfully.
@@ -40,7 +39,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
core_dev = TestRun.disks["core"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
core_dev.create_partitions([Size(2, Unit.GibiByte)] * 2)
core_dev.create_partitions([Size(5, Unit.GibiByte)] * 2)
cache_part = cache_dev.partitions[0]
core_part1 = core_dev.partitions[0]
@@ -49,7 +48,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
with TestRun.step("Start cache"):
cache = casadm.start_cache(cache_part, cache_mode, force=True)
with TestRun.step("Add both core devices to cache"):
with TestRun.step(f"Add both core devices to cache"):
core1 = cache.add_core(core_part1)
core2 = cache.add_core(core_part2)
@@ -57,34 +56,37 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Run fio on both cores"):
data_per_core = cache.size / 2
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.size(data_per_core)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
with TestRun.step("Run concurrent fio on both cores"):
fio_pids = []
for core in [core1, core2]:
fio.add_job().target(core.path)
fio.run()
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.target(core.path)
.size(core.size)
.block_size(Size(4, Unit.MebiByte))
.read_write(ReadWrite.write)
.direct(1)
)
fio_pid = fio.run_in_background()
fio_pids.append(fio_pid)
for fio_pid in fio_pids:
if not TestRun.executor.check_if_process_exists(fio_pid):
TestRun.fail("Fio failed to start")
with TestRun.step("Wait for fio to finish"):
for fio_pid in fio_pids:
while TestRun.executor.check_if_process_exists(fio_pid):
sleep(1)
with TestRun.step("Check if both cores contain dirty blocks"):
required_dirty_data = (
(data_per_core * 0.9).align_down(Unit.Blocks4096.value).set_unit(Unit.Blocks4096)
)
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data < required_dirty_data:
TestRun.fail(f"Core {core1.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, actual {core1_dirty_data}.")
core2_dirty_data_before = core2.get_dirty_blocks()
if core2_dirty_data_before < required_dirty_data:
TestRun.fail(f"Core {core2.core_id} does not contain enough dirty data.\n"
f"Expected at least {required_dirty_data}, actual "
f" {core2_dirty_data_before}.")
if core1.get_dirty_blocks() == Size.zero():
TestRun.fail("The first core does not contain dirty blocks")
if core2.get_dirty_blocks() == Size.zero():
TestRun.fail("The second core does not contain dirty blocks")
core2_dirty_blocks_before = core2.get_dirty_blocks()
with TestRun.step("Start flushing the first core in background"):
output_pid = TestRun.executor.run_in_background(
@@ -102,7 +104,7 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
pass
with TestRun.step(
"Wait until first core reaches 40% flush and start flush operation on the second core"
"Wait until first core reach 40% flush and start flush operation on the second core"
):
percentage = 0
while percentage < 40:
@@ -129,20 +131,18 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
except CmdException:
TestRun.LOGGER.info("The first core is not flushing dirty data anymore")
with TestRun.step("Check the size of dirty data on both cores"):
core1_dirty_data = core1.get_dirty_blocks()
if core1_dirty_data > Size.zero():
with TestRun.step("Check number of dirty data on both cores"):
if core1.get_dirty_blocks() > Size.zero():
TestRun.LOGGER.error(
"There should not be any dirty data on the first core after completed flush.\n"
f"Dirty data: {core1_dirty_data}."
"The quantity of dirty cache lines on the first core "
"after completed flush should be zero"
)
core2_dirty_data_after = core2.get_dirty_blocks()
if core2_dirty_data_after != core2_dirty_data_before:
core2_dirty_blocks_after = core2.get_dirty_blocks()
if core2_dirty_blocks_before != core2_dirty_blocks_after:
TestRun.LOGGER.error(
"Dirty data on the second core after failed flush should not change."
f"Dirty data before flush: {core2_dirty_data_before}, "
f"after: {core2_dirty_data_after}"
"The quantity of dirty cache lines on the second core "
"after failed flush should not change"
)
@@ -151,9 +151,9 @@ def test_concurrent_cores_flush(cache_mode: CacheMode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_concurrent_caches_flush(cache_mode: CacheMode):
"""
title: Flush multiple caches simultaneously.
title: Success to flush two caches simultaneously.
description: |
Check for flushing multiple caches if there is already other flush in progress.
CAS should successfully flush multiple caches if there is already other flush in progress.
pass_criteria:
- No system crash.
- Flush for each cache should finish successfully.
@@ -178,29 +178,28 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Add cores to caches"):
with TestRun.step(f"Add core devices to caches"):
cores = [cache.add_core(core_dev=core_dev.partitions[i]) for i, cache in enumerate(caches)]
with TestRun.step("Run fio on all cores"):
fio = (
Fio()
.create_command()
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(cache.size)
.read_write(ReadWrite.write)
.direct(1)
)
fio_pids = []
for core in cores:
fio.add_job().target(core)
fio.run()
fio = (
Fio()
.create_command()
.target(core)
.io_engine(IoEngine.libaio)
.block_size(Size(4, Unit.MebiByte))
.size(core.size)
.read_write(ReadWrite.write)
.direct(1)
)
fio_pids.append(fio.run_in_background())
with TestRun.step("Check if each cache is full of dirty blocks"):
for cache in caches:
cache_stats = cache.get_statistics(stat_filter=[StatsFilter.usage], percentage_val=True)
if cache_stats.usage_stats.dirty < 90:
TestRun.fail(f"Cache {cache.cache_id} should contain at least 90% of dirty data, "
f"actual dirty data: {cache_stats.usage_stats.dirty}%")
if not cache.get_dirty_blocks() != core.size:
TestRun.fail(f"The cache {cache.cache_id} does not contain dirty blocks")
with TestRun.step("Start flush operation on all caches simultaneously"):
flush_pids = [
@@ -215,9 +214,8 @@ def test_concurrent_caches_flush(cache_mode: CacheMode):
with TestRun.step("Check number of dirty data on each cache"):
for cache in caches:
dirty_blocks = cache.get_dirty_blocks()
if dirty_blocks > Size.zero():
if cache.get_dirty_blocks() > Size.zero():
TestRun.LOGGER.error(
f"The quantity of dirty data on cache {cache.cache_id} after complete "
f"flush should be zero, is: {dirty_blocks.set_unit(Unit.Blocks4096)}"
f"The quantity of dirty cache lines on the cache "
f"{str(cache.cache_id)} after complete flush should be zero"
)
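The 90% dirty-data precondition used in the concurrent core-flush test reduces to a small computation. A sketch under the assumption that sizes are plain byte counts (the framework's `Size`/`align_down` API is paraphrased, not reproduced):

```python
def required_dirty_bytes(data_per_core: int, fraction: float = 0.9,
                         block: int = 4096) -> int:
    """Minimum dirty data each core must hold before the flush race starts:
    90% of the bytes written per core, aligned down to a 4 KiB boundary."""
    target = int(data_per_core * fraction)
    return target - target % block  # align_down(Unit.Blocks4096)
```

Aligning down keeps the comparison against dirty-block statistics, which are reported in whole 4 KiB blocks, exact.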

View File

@@ -5,14 +5,15 @@
#
import random
import pytest
from api.cas import casadm
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeLowerThan, DiskTypeSet
from test_tools.fs_tools import Filesystem
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_tools.disk_utils import Filesystem
from test_utils.output import CmdException
from test_utils.size import Size, Unit
mount_point = "/mnt/cas"
cores_amount = 3

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -15,9 +15,8 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, VerifyMethod
from test_tools.os_tools import sync
from test_tools.udev import Udev
from type_def.size import Size, Unit
from test_utils.os_utils import Udev, sync
from test_utils.size import Size, Unit
io_size = Size(10000, Unit.Blocks4096)
@@ -46,7 +45,7 @@ def test_cache_stop_and_load(cache_mode):
"""
title: Test for stopping and loading cache back with dynamic cache mode switching.
description: |
Validate the ability to switch cache modes at runtime and
Validate the ability of the CAS to switch cache modes at runtime and
check if all of them are working properly after switching and
after stopping and reloading cache back.
Check also other parameters consistency after reload.
@@ -138,8 +137,10 @@ def test_cache_stop_and_load(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mode):
"""
title: Test for dynamic cache mode switching during I/O.
description: Validate the ability to switch cache modes during I/O on exported object.
title: Test for dynamic cache mode switching during IO.
description: |
Validate the ability of CAS to switch cache modes
during working IO on CAS device.
pass_criteria:
- Cache mode is switched without errors.
"""
@@ -180,7 +181,7 @@ def test_cache_mode_switching_during_io(cache_mode_1, cache_mode_2, flush, io_mo
):
cache.set_cache_mode(cache_mode=cache_mode_2, flush=flush)
with TestRun.step("Check if cache mode has switched properly during I/O"):
with TestRun.step(f"Check if cache mode has switched properly during IO"):
cache_mode_after_switch = cache.get_cache_mode()
if cache_mode_after_switch != cache_mode_2:
TestRun.fail(
@@ -227,7 +228,7 @@ def run_io_and_verify(cache, core, io_mode):
):
TestRun.fail(
"Write-Back cache mode is not working properly! "
"There should be some writes to exported object and none to the core"
"There should be some writes to CAS device and none to the core"
)
case CacheMode.PT:
if (

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -9,7 +9,7 @@ import pytest
from api.cas import casadm, cli, cli_messages
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Size, Unit
from test_utils.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@@ -18,11 +18,11 @@ def test_remove_multilevel_core():
"""
title: Test of the ability to remove a core used in a multilevel cache.
description: |
Negative test for removing a core when the related exported object
Negative test if OpenCAS does not allow to remove a core when the related exported object
is used as a core device for another cache instance.
pass_criteria:
- No system crash.
- Removing a core used in a multilevel cache instance is forbidden.
- OpenCAS does not allow removing a core used in a multilevel cache instance.
"""
with TestRun.step("Prepare cache and core devices"):

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -21,12 +21,12 @@ from api.cas.casadm_params import StatsFilter
from core.test_run_utils import TestRun
from storage_devices.disk import DiskTypeSet, DiskTypeLowerThan, DiskType
from test_tools.dd import Dd
from test_tools.fs_tools import Filesystem
from test_tools.disk_utils import Filesystem
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import IoEngine, ReadWrite
from test_tools.udev import Udev
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_utils.os_utils import Udev
from test_utils.output import CmdException
from test_utils.size import Size, Unit
random_thresholds = random.sample(range(1028, 1024**2, 4), 3)
random_stream_numbers = random.sample(range(2, 128), 3)
@@ -57,7 +57,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache in Write-Back cache mode"):
with TestRun.step(f"Start cache in Write-Back"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
cache = casadm.start_cache(cache_disk, CacheMode.WB, force=True)
@@ -105,7 +105,7 @@ def test_multistream_seq_cutoff_functional(streams_number, threshold):
with TestRun.step(
"Write random number of 4k block requests to each stream and check if all "
"writes were sent in pass-through"
"writes were sent in pass-through mode"
):
core_statistics_before = core.get_statistics([StatsFilter.req, StatsFilter.blk])
random.shuffle(offsets)
@@ -170,7 +170,7 @@ def test_multistream_seq_cutoff_stress_raw(streams_seq_rand):
with TestRun.step("Reset core statistics counters"):
core.reset_counters()
with TestRun.step("Run fio on core device"):
with TestRun.step("Run FIO on core device"):
stream_size = min(core_disk.size / 256, Size(256, Unit.MebiByte))
sequential_streams = streams_seq_rand[0]
random_streams = streams_seq_rand[1]
@@ -216,14 +216,12 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
- No system crash
"""
with TestRun.step("Disable udev"):
with TestRun.step(f"Disable udev"):
Udev.disable()
with TestRun.step("Prepare cache and core devices"):
with TestRun.step("Create filesystem on core device"):
cache_disk = TestRun.disks["cache"]
core_disk = TestRun.disks["core"]
with TestRun.step("Create filesystem on core device"):
core_disk.create_filesystem(filesystem)
with TestRun.step("Start cache and add core"):
@@ -233,7 +231,7 @@ def test_multistream_seq_cutoff_stress_fs(streams_seq_rand, filesystem, cache_mo
with TestRun.step("Mount core"):
core.mount(mount_point=mount_point)
with TestRun.step("Set sequential cutoff policy to always and threshold to 20MiB"):
with TestRun.step(f"Set seq-cutoff policy to always and threshold to 20MiB"):
core.set_seq_cutoff_policy(policy=SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold=Size(20, Unit.MebiByte))
@@ -281,7 +279,7 @@ def run_dd(target_path, count, seek):
TestRun.LOGGER.info(f"dd command:\n{dd}")
output = dd.run()
if output.exit_code != 0:
raise CmdException("Error during I/O", output)
raise CmdException("Error during IO", output)
def check_statistics(stats_before, stats_after, expected_pt_writes, expected_writes_to_cache):
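A minimal model of the behavior the multistream test exercises with the `always` policy: a per-stream byte counter that sends requests pass-through once accumulated sequential I/O exceeds the threshold. This is a hedged sketch of the concept, not OpenCAS internals:

```python
class SeqCutoffStream:
    """Tracks one I/O stream under seq-cutoff policy 'always'."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.seq_bytes = 0
        self.next_offset = None

    def write(self, offset: int, size: int) -> str:
        # A request extends the stream only if it starts where the
        # previous one ended; otherwise the sequence counter restarts.
        if self.next_offset is not None and offset != self.next_offset:
            self.seq_bytes = 0
        self.next_offset = offset + size
        self.seq_bytes += size
        return "pass-through" if self.seq_bytes > self.threshold else "cache"
```

This is why the test first writes threshold-sized sequential data to each stream, then expects every continuing 4k write to be counted as pass-through, while a write breaking the sequence is cached again.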

View File

@@ -1,263 +0,0 @@
#
# Copyright(c) 2024-2025 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
import math
import random
import pytest
from api.cas import casadm
from api.cas.cache_config import SeqCutOffPolicy, CleaningPolicy, PromotionPolicy, \
PromotionParametersNhit, CacheMode
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from test_tools.dd import Dd
from test_tools.udev import Udev
from type_def.size import Size, Unit
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_threshold():
"""
title: Functional test for promotion policy nhit - threshold
description: |
Test checking if data is cached only after number of hits to given cache line
accordingly to specified promotion nhit threshold.
pass_criteria:
- Promotion policy and hit parameters are set properly
- Data is cached only after number of hits to given cache line specified by threshold param
- Data is written in pass-through before number of hits to given cache line specified by
threshold param
- After meeting specified number of hits to given cache line, writes to other cache lines
are handled in pass-through
"""
random_thresholds = random.sample(range(2, 1000), 10)
additional_writes_count = 10
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=5, unit=Unit.GibiByte)])
core_device.create_partitions([Size(value=10, unit=Unit.GibiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
for iteration, threshold in enumerate(
TestRun.iteration(
random_thresholds,
"Set and validate nhit promotion policy threshold"
)
):
with TestRun.step(f"Set threshold to {threshold} and trigger to 0%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=0
)
)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step(
"Run dd and check if number of writes to cache and writes to core increase "
"accordingly to nhit parameters"
):
# dd_seek is counted as below to use different part of the cache in each iteration
dd_seek = int(
cache.size.get_value(Unit.Blocks4096) // len(random_thresholds) * iteration
)
for count in range(1, threshold + additional_writes_count):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(dd_seek) \
.run()
if count < threshold:
expected_writes_to_cache = Size.zero()
expected_writes_to_core = Size(count, Unit.Blocks4096)
else:
expected_writes_to_cache = Size(count - threshold + 1, Unit.Blocks4096)
expected_writes_to_core = Size(threshold - 1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
with TestRun.step("Write to other cache line and check if it was handled in pass-through"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(1) \
.seek(int(dd_seek + Unit.Blocks4096.value)) \
.run()
expected_writes_to_core = expected_writes_to_core + Size(1, Unit.Blocks4096)
check_statistics(cache, expected_writes_to_cache, expected_writes_to_core)
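The per-iteration expectation in the loop above reduces to simple arithmetic. A sketch expressing it in bytes (4 KiB blocks assumed; the function name is illustrative):

```python
def expected_nhit_writes(count: int, threshold: int, block: int = 4096):
    """Expected (writes_to_cache, writes_to_core) after `count` single-block
    writes to the same cache line under nhit promotion: the first
    threshold - 1 hits are served in pass-through, and every write from the
    threshold-th onward is promoted to the cache."""
    if count < threshold:
        return 0, count * block
    return (count - threshold + 1) * block, (threshold - 1) * block
```

The final step of the test (a write to a different cache line) simply adds one more pass-through block to the core-writes side, since that line's own hit counter starts from zero.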
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_promotion_policy_nhit_trigger():
"""
title: Functional test for promotion policy nhit - trigger
description: |
Test checking if data is cached accordingly to nhit threshold parameter only after reaching
cache occupancy specified by nhit trigger value
pass_criteria:
- Promotion policy and hit parameters are set properly
- Data is cached accordingly to nhit threshold parameter only after reaching
cache occupancy specified by nhit trigger value
- Data is cached without nhit policy before reaching the trigger
"""
random_triggers = random.sample(range(0, 100), 10)
threshold = 2
with TestRun.step("Prepare cache and core devices"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
cache_device.create_partitions([Size(value=50, unit=Unit.MebiByte)])
core_device.create_partitions([Size(value=100, unit=Unit.MebiByte)])
cache_part = cache_device.partitions[0]
core_parts = core_device.partitions[0]
with TestRun.step("Disable udev"):
Udev.disable()
for trigger in TestRun.iteration(
random_triggers,
"Validate nhit promotion policy trigger"
):
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_part, cache_mode=CacheMode.WB, force=True)
core = cache.add_core(core_parts)
with TestRun.step("Disable sequential cut-off and cleaning"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
cache.set_cleaning_policy(CleaningPolicy.nop)
with TestRun.step("Purge cache"):
cache.purge_cache()
with TestRun.step("Reset counters"):
cache.reset_counters()
with TestRun.step("Check if statistics of writes to cache and writes to core are zeros"):
check_statistics(
cache,
expected_writes_to_cache=Size.zero(),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Set nhit promotion policy"):
cache.set_promotion_policy(PromotionPolicy.nhit)
with TestRun.step(f"Set threshold to {threshold} and trigger to {trigger}%"):
cache.set_params_nhit(
PromotionParametersNhit(
threshold=threshold,
trigger=trigger
)
)
with TestRun.step(f"Run dd to fill {trigger}% of cache size with data"):
blocks_count = math.ceil(cache.size.get_value(Unit.Blocks4096) * trigger / 100)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(blocks_count) \
.seek(0) \
.run()
with TestRun.step("Check if all written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size.zero()
)
with TestRun.step("Write to free cached volume sectors"):
free_seek = (blocks_count + 1)
pt_blocks_count = int(cache.size.get_value(Unit.Blocks4096) - blocks_count)
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was written in pass-through"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Write to recently written sectors one more time"):
Dd().input("/dev/random") \
.output(core.path) \
.oflag("direct") \
.block_size(Size(1, Unit.Blocks4096)) \
.count(pt_blocks_count) \
.seek(free_seek) \
.run()
with TestRun.step("Check if recently written data was cached"):
check_statistics(
cache,
expected_writes_to_cache=Size(blocks_count + pt_blocks_count, Unit.Blocks4096),
expected_writes_to_core=Size(pt_blocks_count, Unit.Blocks4096)
)
with TestRun.step("Stop cache"):
cache.stop(no_data_flush=True)
def check_statistics(cache, expected_writes_to_cache, expected_writes_to_core):
cache_stats = cache.get_statistics()
writes_to_cache = cache_stats.block_stats.cache.writes
writes_to_core = cache_stats.block_stats.core.writes
if writes_to_cache != expected_writes_to_cache:
TestRun.LOGGER.error(
f"Number of writes to cache should be "
f"{expected_writes_to_cache.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_cache.get_value(Unit.Blocks4096)}")
if writes_to_core != expected_writes_to_core:
TestRun.LOGGER.error(
f"Number of writes to core should be: "
f"{expected_writes_to_core.get_value(Unit.Blocks4096)} "
f"but it is {writes_to_core.get_value(Unit.Blocks4096)}")

View File

@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -15,9 +15,8 @@ from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine, CpusAllowedPolicy
from test_tools.os_tools import sync, get_dut_cpu_physical_cores
from test_tools.udev import Udev
from type_def.size import Size, Unit
from test_utils.os_utils import Udev, sync, get_dut_cpu_physical_cores
from test_utils.size import Size, Unit
class VerifyType(Enum):
@@ -40,14 +39,15 @@ class VerifyType(Enum):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Functional sequential cutoff test with multiple cores
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
description: |
Test checking if data is cached properly with sequential cutoff "always" policy
when sequential and random I/O is running to multiple cores.
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO on 3 out of 4
cores and random IO against the last core, is correct.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cutoff threshold for three first cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
"""
with TestRun.step("Prepare cache and core devices"):
@@ -75,7 +75,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step("Set sequential cutoff parameters for all cores"):
with TestRun.step("Set sequential cut-off parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@@ -95,7 +95,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential I/O against first three cores"):
with TestRun.step("Prepare sequential IO against first three cores"):
block_size = Size(4, Unit.KibiByte)
fio = Fio().create_command().io_engine(IoEngine.libaio).block_size(block_size).direct(True)
@@ -106,7 +106,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
fio_job.target(core.path)
writes_before_list.append(core.get_statistics().block_stats.cache.writes)
with TestRun.step("Prepare random I/O against the last core"):
with TestRun.step("Prepare random IO against the last core"):
fio_job = fio.add_job(f"core_{core_list[-1].core_id}")
fio_job.size(io_sizes_list[-1])
fio_job.read_write(io_type_last)
@@ -116,7 +116,7 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
with TestRun.step("Run fio against all cores"):
fio.run()
with TestRun.step("Verify writes to cache count after I/O"):
with TestRun.step("Verify writes to cache count after IO"):
margins = [
min(block_size * (core.get_seq_cut_off_parameters().promotion_count - 1), threshold)
for core, threshold in zip(core_list[:-1], thresholds_list[:-1])
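The tolerated overshoot computed in the margin list above can be written as a one-liner; a sketch with plain byte arguments:

```python
def seq_cutoff_write_margin(block_size: int, promotion_count: int,
                            threshold: int) -> int:
    """Bytes by which cache writes may legitimately exceed the threshold:
    up to promotion_count - 1 requests can be inserted before a stream is
    recognized as sequential, but the margin never exceeds the threshold
    itself (mirrors the min() computed in the test)."""
    return min(block_size * (promotion_count - 1), threshold)
```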
@@ -158,16 +158,17 @@ def test_seq_cutoff_multi_core(cache_mode, io_type, io_type_last, cache_line_siz
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.optane, DiskType.nand]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cache_line_size):
def test_seq_cutoff_multi_core_io_pinned(cache_mode, io_type, io_type_last, cache_line_size):
"""
title: Functional sequential cutoff test with multiple cores and cpu pinned I/O
title: Sequential cut-off tests during sequential and random IO 'always' policy with 4 cores
description: |
Test checking if data is cached properly with sequential cutoff "always" policy
when sequential and random cpu pinned I/O is running to multiple cores.
Testing if amount of data written to cache after sequential writes for different
sequential cut-off thresholds on each core, while running sequential IO, pinned,
on 3 out of 4 cores and random IO against the last core, is correct.
pass_criteria:
- Amount of written blocks to cache is less or equal than amount set
with sequential cutoff threshold for three first cores.
- Amount of written blocks to cache is equal to I/O size run against last core.
with sequential cut-off threshold for three first cores.
- Amount of written blocks to cache is equal to io size run against last core.
"""
with TestRun.step("Partition cache and core devices"):
@@ -196,7 +197,7 @@ def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cac
)
core_list = [cache.add_core(core_dev=core_part) for core_part in core_parts]
with TestRun.step("Set sequential cutoff parameters for all cores"):
with TestRun.step(f"Set sequential cut-off parameters for all cores"):
writes_before_list = []
fio_additional_size = Size(10, Unit.Blocks4096)
thresholds_list = [
@@ -216,9 +217,7 @@ def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cac
core.set_seq_cutoff_policy(SeqCutOffPolicy.always)
core.set_seq_cutoff_threshold(threshold)
with TestRun.step(
"Prepare sequential I/O against first three cores and random I/O against the last one"
):
with TestRun.step("Prepare sequential IO against first three cores"):
fio = (
Fio()
.create_command()
@@ -244,10 +243,10 @@ def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cac
fio_job.target(core_list[-1].path)
writes_before_list.append(core_list[-1].get_statistics().block_stats.cache.writes)
with TestRun.step("Running I/O against all cores"):
with TestRun.step("Running IO against all cores"):
fio.run()
with TestRun.step("Verifying writes to cache count after I/O"):
with TestRun.step("Verifying writes to cache count after IO"):
for core, writes, threshold, io_size in zip(
core_list[:-1], writes_before_list[:-1], thresholds_list[:-1], io_sizes_list[:-1]
):
@@ -282,14 +281,16 @@ def test_seq_cutoff_multi_core_cpu_pinned(cache_mode, io_type, io_type_last, cac
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
"""
title: Functional test for sequential cutoff threshold parameter
title: Sequential cut-off tests for writes and reads for 'never', 'always' and 'full' policies
description: |
Check if data is cached properly according to sequential cutoff policy and
threshold parameter
Testing if amount of data written to cache after sequential writes and reads for different
sequential cut-off policies with cache configured with different cache line size
is valid for sequential cut-off threshold parameter, assuming that cache occupancy
doesn't reach 100% during test.
pass_criteria:
- Amount of blocks written to cache is less than or equal to amount set
with sequential cutoff parameter in case of 'always' policy.
- Amount of blocks written to cache is at least equal to io size in case of 'never' and 'full'
- Amount of written blocks to cache is less or equal than amount set
with sequential cut-off parameter in case of 'always' policy.
- Amount of written blocks to cache is at least equal io size in case of 'never' and 'full'
policy.
"""
@@ -324,13 +325,13 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cutoff policy mode to {policy}"):
with TestRun.step(f"Setting cache sequential cut off policy mode to {policy}"):
cache.set_seq_cutoff_policy(policy)
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step("Prepare sequential I/O against core"):
with TestRun.step("Prepare sequential IO against core"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (
@@ -362,15 +363,16 @@ def test_seq_cutoff_thresh(cache_line_size, io_dir, policy, verify_type):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
"""
title: Functional test for sequential cutoff threshold parameter and 'full' policy
title: Sequential cut-off tests during writes and reads on full cache for 'full' policy
description: |
Check if data is cached properly according to sequential cutoff 'full' policy and given
threshold parameter
Testing if amount of data written to cache after sequential io against fully occupied
cache for 'full' sequential cut-off policy with cache configured with different cache
line sizes is valid for sequential cut-off threshold parameter.
pass_criteria:
- Amount of written blocks to cache is big enough to fill cache when 'never' sequential
cutoff policy is set
cut-off policy is set
- Amount of written blocks to cache is less or equal than amount set
with sequential cutoff parameter in case of 'full' policy.
with sequential cut-off parameter in case of 'full' policy.
"""
with TestRun.step("Partition cache and core devices"):
@@ -404,10 +406,10 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
)
io_size = (threshold + fio_additional_size).align_down(0x1000)
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.never}"):
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.never}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.never)
with TestRun.step("Prepare sequential I/O against core"):
with TestRun.step("Prepare sequential IO against core"):
sync()
fio = (
Fio()
@@ -429,13 +431,13 @@ def test_seq_cutoff_thresh_fill(cache_line_size, io_dir):
f"Cache occupancy is too small: {occupancy_percentage}, expected at least 95%"
)
with TestRun.step(f"Setting cache sequential cutoff policy mode to {SeqCutOffPolicy.full}"):
with TestRun.step(f"Setting cache sequential cut off policy mode to {SeqCutOffPolicy.full}"):
cache.set_seq_cutoff_policy(SeqCutOffPolicy.full)
with TestRun.step(f"Setting cache sequential cutoff policy threshold to {threshold}"):
with TestRun.step(f"Setting cache sequential cut off policy threshold to {threshold}"):
cache.set_seq_cutoff_threshold(threshold)
with TestRun.step(f"Running sequential I/O ({io_dir})"):
with TestRun.step(f"Running sequential IO ({io_dir})"):
sync()
writes_before = core.get_statistics().block_stats.cache.writes
fio = (
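The write-count verification used throughout these sequential cut-off tests reduces to a bound check: under the 'always' policy, cache writes per core may exceed the threshold only by a promotion-count margin. A minimal standalone sketch of that arithmetic (the 4 KiB block size and function names are assumptions, not part of the test framework):

```python
BLOCK_SIZE = 4096  # bytes; assumes 4 KiB blocks, as in Unit.Blocks4096


def allowed_cache_writes(threshold: int, promotion_count: int) -> int:
    # Margin mirrors the test: min(block_size * (promotion_count - 1), threshold).
    margin = min(BLOCK_SIZE * (promotion_count - 1), threshold)
    return threshold + margin


def core_within_limit(cache_writes: int, threshold: int,
                      promotion_count: int) -> bool:
    # 'always' policy: writes to cache must not exceed threshold plus margin.
    return cache_writes <= allowed_cache_writes(threshold, promotion_count)
```

For the last core, which receives random I/O, the expectation is different: cache writes should equal the full I/O size run against it.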
@@ -1,17 +1,16 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cache_config import CacheMode, CacheModeTrait
from api.cas.cache_config import CacheMode
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.udev import Udev
from type_def.size import Unit, Size
from test_utils.os_utils import Udev
from test_utils.size import Unit, Size
from test_tools.dd import Dd
from test_tools.iostat import IOstatBasic
@@ -20,17 +19,19 @@ dd_count = 100
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.parametrize("cache_mode", CacheMode.with_traits(CacheModeTrait.InsertRead))
@pytest.mark.parametrize("cache_mode", [CacheMode.WT, CacheMode.WA, CacheMode.WB])
@pytest.mark.CI()
def test_ci_read(cache_mode):
"""
title: Verification test for caching reads in various cache modes
description: Check if reads are properly cached in various cache modes
title: Verification test for write mode: write around
description: Verify if write mode: write around, works as expected and cache only reads
and does not cache write
pass criteria:
- Reads are cached
- writes are not cached
- reads are cached
"""
with TestRun.step("Prepare cache and core devices"):
with TestRun.step("Prepare partitions"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -43,7 +44,7 @@ def test_ci_read(cache_mode):
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step(f"Start cache in {cache_mode} cache mode"):
with TestRun.step(f"Start cache with cache_mode={cache_mode}"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=cache_mode)
casadm.add_core(cache, core_device)
@@ -61,7 +62,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
read_cache_1 = iostat[0].total_reads
with TestRun.step("Generate cache hits using reads"):
@@ -76,7 +77,7 @@ def test_ci_read(cache_mode):
dd.run()
with TestRun.step("Collect iostat"):
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat = IOstatBasic.get_iostat_list([cache_device.parent_device])
read_cache_2 = iostat[0].total_reads
with TestRun.step("Stop cache"):
@@ -97,14 +98,7 @@ def test_ci_read(cache_mode):
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_around_write():
"""
title: Verification test for writes in Write-Around cache mode
description: Validate I/O statistics after writing to exported object in Write-Around cache mode
pass criteria:
- Writes are not cached
- After inserting writes to core, data is read from core and not from cache
"""
with TestRun.step("Prepare cache and core devices"):
with TestRun.step("Prepare partitions"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -117,16 +111,16 @@ def test_ci_write_around_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache in Write-Around mode"):
with TestRun.step("Start CAS Linux in Write Around mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WA)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Submit writes to exported object"):
@@ -142,11 +136,11 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -162,10 +156,10 @@ def test_ci_write_around_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):
@@ -188,14 +182,14 @@ def test_ci_write_around_write():
else:
TestRun.LOGGER.error(f"Writes to cache: {write_cache_delta_1} != 0")
with TestRun.step("Verify that data was read from core"):
with TestRun.step("Verify that reads propagated to core"):
read_core_delta_2 = read_core_2 - read_core_1
if read_core_delta_2 == data_write:
TestRun.LOGGER.info(f"Reads from core: {read_core_delta_2} == {data_write}")
else:
TestRun.LOGGER.error(f"Reads from core: {read_core_delta_2} != {data_write}")
with TestRun.step("Verify that data was not read from cache"):
with TestRun.step("Verify that reads did not occur on cache"):
read_cache_delta_2 = read_cache_2 - read_cache_1
if read_cache_delta_2.value == 0:
TestRun.LOGGER.info(f"Reads from cache: {read_cache_delta_2} == 0")
@@ -208,15 +202,7 @@ def test_ci_write_around_write():
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
@pytest.mark.CI()
def test_ci_write_through_write():
"""
title: Verification test for Write-Through cache mode
description: |
Validate if reads and writes are cached properly for cache in Write-Through mode
pass criteria:
- Writes are inserted to cache and core
- Reads are not cached
"""
with TestRun.step("Prepare cache and core devices"):
with TestRun.step("Prepare partitions"):
cache_device = TestRun.disks["cache"]
core_device = TestRun.disks["core"]
@@ -229,16 +215,16 @@ def test_ci_write_through_write():
with TestRun.step("Disable udev"):
Udev.disable()
with TestRun.step("Start cache in Write-Through mode"):
with TestRun.step("Start CAS Linux in Write Through mode"):
cache = casadm.start_cache(cache_dev=cache_device, cache_id=1, force=True,
cache_mode=CacheMode.WT)
casadm.add_core(cache, core_device)
with TestRun.step("Collect iostat before I/O"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
write_core_0 = iostat_core[0].total_writes
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
write_cache_0 = iostat_cache[0].total_writes
with TestRun.step("Insert data into the cache using writes"):
@@ -255,11 +241,11 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
write_core_1 = iostat_core[0].total_writes
read_core_1 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
write_cache_1 = iostat_cache[0].total_writes
read_cache_1 = iostat_cache[0].total_reads
@@ -276,10 +262,10 @@ def test_ci_write_through_write():
dd.run()
with TestRun.step("Collect iostat"):
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device.get_device_id()])
iostat_core = IOstatBasic.get_iostat_list([core_device.parent_device])
read_core_2 = iostat_core[0].total_reads
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device.get_device_id()])
iostat_cache = IOstatBasic.get_iostat_list([cache_device.parent_device])
read_cache_2 = iostat_cache[0].total_reads
with TestRun.step("Stop cache"):
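Both CI tests above boil down to comparing iostat counter deltas taken before and after the dd runs. A hedged sketch of the pass criteria (assuming counters and the written size share one unit; the Write-Through cache-write condition is an assumption, since its verification step falls outside this hunk):

```python
def write_around_ok(core_write_delta: int, cache_write_delta: int,
                    written: int) -> bool:
    # Write-Around: writes reach the core and bypass the cache entirely.
    return core_write_delta == written and cache_write_delta == 0


def write_through_ok(core_write_delta: int, cache_write_delta: int,
                     written: int) -> bool:
    # Write-Through: writes are duplicated to core and cache (assumption).
    return core_write_delta == written and cache_write_delta >= written
```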
@@ -1,121 +1,69 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2023-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas.cas_module import CasModule
from api.cas.cli_messages import check_stderr_msg, attach_not_enough_memory
from connection.utils.output import CmdException
from core.test_run import TestRun
from storage_devices.disk import DiskTypeSet, DiskType, DiskTypeLowerThan
from type_def.size import Unit, Size
from test_tools.os_tools import (drop_caches,
from test_utils.size import Unit
from test_utils.os_utils import (allocate_memory,
disable_memory_affecting_functions,
drop_caches,
get_mem_free,
is_kernel_module_loaded,
load_kernel_module,
unload_kernel_module,
)
from test_tools.memory import disable_memory_affecting_functions, get_mem_free, allocate_memory, \
get_mem_available, unmount_ramfs
@pytest.mark.os_dependent
def test_insufficient_memory_for_cas_module():
"""
title: Load CAS kernel module with insufficient memory
title: Negative test for the ability of CAS to load the kernel module with insufficient memory.
description: |
Negative test for the ability to load the CAS kernel module with insufficient memory.
Check that the CAS kernel module won’t be loaded if enough memory is not available
pass_criteria:
- CAS kernel module cannot be loaded with not enough memory.
- Loading CAS kernel module with not enough memory returns error.
- CAS module cannot be loaded with not enough memory.
- Loading CAS with not enough memory returns error.
"""
with TestRun.step("Disable caching and memory over-committing"):
disable_memory_affecting_functions()
drop_caches()
with TestRun.step("Measure memory usage without CAS kernel module"):
with TestRun.step("Measure memory usage without OpenCAS module"):
if is_kernel_module_loaded(CasModule.cache.value):
unload_kernel_module(CasModule.cache.value)
available_mem_before_cas = get_mem_free()
with TestRun.step("Load CAS kernel module"):
with TestRun.step("Load CAS module"):
load_kernel_module(CasModule.cache.value)
with TestRun.step("Measure memory usage with CAS kernel module"):
with TestRun.step("Measure memory usage with CAS module"):
available_mem_with_cas = get_mem_free()
memory_used_by_cas = available_mem_before_cas - available_mem_with_cas
TestRun.LOGGER.info(
f"CAS kernel module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
f"OpenCAS module uses {memory_used_by_cas.get_value(Unit.MiB):.2f} MiB of DRAM."
)
with TestRun.step("Unload CAS kernel module"):
with TestRun.step("Unload CAS module"):
unload_kernel_module(CasModule.cache.value)
with TestRun.step("Allocate memory, leaving not enough memory for CAS module"):
memory_to_leave = get_mem_free() - (memory_used_by_cas * (3 / 4))
allocate_memory(memory_to_leave)
TestRun.LOGGER.info(
f"Memory left for CAS kernel module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
f"Memory left for OpenCAS module: {get_mem_free().get_value(Unit.MiB):0.2f} MiB."
)
with TestRun.step(
"Try to load CAS kernel module and check if correct error message is printed on failure"
"Try to load OpenCAS module and check if correct error message is printed on failure"
):
output = load_kernel_module(CasModule.cache.value)
if output.stderr and output.exit_code != 0:
TestRun.LOGGER.info(f"Cannot load CAS kernel module as expected.\n{output.stderr}")
TestRun.LOGGER.info(f"Cannot load OpenCAS module as expected.\n{output.stderr}")
else:
TestRun.LOGGER.error("Loading CAS kernel module successfully finished, but should fail.")
@pytest.mark.require_disk("cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("cache2", DiskTypeSet([DiskType.nand, DiskType.optane]))
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_attach_cache_min_ram():
"""
title: Test attach cache with insufficient memory.
description: |
Check for valid message when attaching cache with insufficient memory.
pass_criteria:
- CAS attach operation fail due to insufficient RAM.
- No system crash.
"""
with TestRun.step("Prepare devices"):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(2, Unit.GibiByte)])
cache_dev = cache_dev.partitions[0]
cache_dev2 = TestRun.disks["cache2"]
core_dev = TestRun.disks["core"]
with TestRun.step("Start cache and add core"):
cache = casadm.start_cache(cache_dev, force=True)
cache.add_core(core_dev)
with TestRun.step("Detach cache"):
cache.detach()
with TestRun.step("Set RAM workload"):
disable_memory_affecting_functions()
allocate_memory(get_mem_available() - Size(100, Unit.MegaByte))
with TestRun.step("Try to attach cache"):
try:
TestRun.LOGGER.info(
f"There is {get_mem_available().unit.MebiByte.value} available memory left"
)
cache.attach(device=cache_dev2, force=True)
TestRun.LOGGER.error(
f"Cache attached not as expected."
f"{get_mem_available()} is enough memory to complete operation")
except CmdException as exc:
check_stderr_msg(exc.output, attach_not_enough_memory)
with TestRun.step("Unlock RAM memory"):
unmount_ramfs()
TestRun.LOGGER.error("Loading OpenCAS module successfully finished, but should fail.")
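The allocation step leaves only three quarters of the module's measured footprint free, which should make the subsequent load fail. The arithmetic, as a standalone sketch (integer byte counts and function names are assumptions):

```python
def allocation_size(mem_free: int, module_footprint: int) -> int:
    """Bytes to allocate so only 3/4 of the module's footprint stays free."""
    return mem_free - (module_footprint * 3) // 4


def load_should_fail(mem_free: int, module_footprint: int) -> bool:
    # After allocating, the remaining free memory is below the footprint.
    left = mem_free - allocation_size(mem_free, module_footprint)
    return left < module_footprint
```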
@@ -1,6 +1,6 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -8,14 +8,14 @@ import pytest
import time
from core.test_run_utils import TestRun
from type_def.size import Size, Unit
from test_utils.size import Size, Unit
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.fio.fio import Fio
from test_tools.fio.fio_param import ReadWrite, IoEngine
from api.cas import casadm
from api.cas.cache_config import CacheMode, CleaningPolicy
from test_tools.udev import Udev
from test_utils.os_utils import Udev
@pytest.mark.CI
@@ -23,14 +23,14 @@ from test_tools.udev import Udev
@pytest.mark.require_disk("core", DiskTypeLowerThan("cache"))
def test_cleaning_policy():
"""
Title: Basic test for cleaning policy
Title: test_cleaning_policy
description: |
Verify cleaning behaviour after changing cleaning policy from NOP
to one that expects a flush.
The test is to see if dirty data will be removed from the Cache after changing the
cleaning policy from NOP to one that expects a flush.
pass_criteria:
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
- Cache is successfully populated with dirty data
- Cleaning policy is changed successfully
- There is no dirty data after the policy change
"""
wait_time = 60
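With `wait_time = 60`, the dirty-data check after the policy switch is presumably a poll-until-clean loop; a self-contained sketch under that assumption (the counter getter and poll interval are illustrative, not the test's actual API):

```python
import time


def wait_until_clean(get_dirty_blocks, timeout_s: int = 60,
                     interval_s: float = 1.0) -> bool:
    """Poll a dirty-block counter until it reaches zero or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_dirty_blocks() == 0:
            return True
        time.sleep(interval_s)
    return get_dirty_blocks() == 0
```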
@@ -1,61 +0,0 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""
title: Test for 'help' command.
description: |
Verifies that running command with 'help' param displays correct message for each
available command.
pass_criteria:
- Proper help message is displayed for every command.
- Proper help message is displayed after running command with wrong param.
"""
check_list_cmd = [
(" -S", " --start-cache", start_cache_help),
(None, " --attach-cache", attach_cache_help),
(None, " --detach-cache", detach_cache_help),
(" -T", " --stop-cache", stop_cache_help),
(" -X", " --set-param", set_params_help),
(" -G", " --get-param", get_params_help),
(" -Q", " --set-cache-mode", set_cache_mode_help),
(" -A", " --add-core", add_core_help),
(" -R", " --remove-core", remove_core_help),
(None, " --remove-inactive", remove_inactive_help),
(None, " --remove-detached", remove_detached_help),
(" -L", " --list-caches", list_caches_help),
(" -P", " --stats", stats_help),
(" -Z", " --reset-counters", reset_counters_help),
(" -F", " --flush-cache", flush_cache_help),
(" -C", " --io-class", ioclass_help),
(" -V", " --version", version_help),
# (None, " --standby", standby_help),
(" -H", " --help", help_help),
(None, " --zero-metadata", zero_metadata_help),
]
help = " -H" if shortcut else " --help"
with TestRun.step("Run 'help' for every 'casadm' command and check output"):
for cmds in check_list_cmd:
cmd = cmds[0] if shortcut else cmds[1]
if cmd:
output = TestRun.executor.run("casadm" + cmd + help)
check_stdout_msg(output, cmds[-1])
with TestRun.step("Run 'help' for command that doesn`t exist and check output"):
cmd = " -Y" if shortcut else " --yell"
output = TestRun.executor.run("casadm" + cmd + help)
check_stderr_msg(output, unrecognized_stderr)
check_stdout_msg(output, unrecognized_stdout)
@@ -0,0 +1,127 @@
#
# Copyright(c) 2020-2022 Intel Corporation
# Copyright(c) 2024 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import re
import pytest
from api.cas import casadm
from api.cas.casadm_params import OutputFormat
from api.cas.cli_help_messages import *
from api.cas.cli_messages import check_stderr_msg, check_stdout_msg
from core.test_run import TestRun
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_help(shortcut):
"""
title: Test for 'help' command.
description: Test if help for commands displays.
pass_criteria:
- Proper help displays for every command.
"""
TestRun.LOGGER.info("Run 'help' for every 'casadm' command.")
output = casadm.help(shortcut)
check_stdout_msg(output, casadm_help)
output = TestRun.executor.run("casadm" + (" -S" if shortcut else " --start-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, start_cache_help)
output = TestRun.executor.run("casadm" + (" -T" if shortcut else " --stop-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, stop_cache_help)
output = TestRun.executor.run("casadm" + (" -X" if shortcut else " --set-param")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, set_params_help)
output = TestRun.executor.run("casadm" + (" -G" if shortcut else " --get-param")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, get_params_help)
output = TestRun.executor.run("casadm" + (" -Q" if shortcut else " --set-cache-mode")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, set_cache_mode_help)
output = TestRun.executor.run("casadm" + (" -A" if shortcut else " --add-core")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, add_core_help)
output = TestRun.executor.run("casadm" + (" -R" if shortcut else " --remove-core")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, remove_core_help)
output = TestRun.executor.run("casadm" + " --remove-detached"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, remove_detached_help)
output = TestRun.executor.run("casadm" + (" -L" if shortcut else " --list-caches")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, list_caches_help)
output = TestRun.executor.run("casadm" + (" -P" if shortcut else " --stats")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, stats_help)
output = TestRun.executor.run("casadm" + (" -Z" if shortcut else " --reset-counters")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, reset_counters_help)
output = TestRun.executor.run("casadm" + (" -F" if shortcut else " --flush-cache")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, flush_cache_help)
output = TestRun.executor.run("casadm" + (" -C" if shortcut else " --io-class")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, ioclass_help)
output = TestRun.executor.run("casadm" + (" -V" if shortcut else " --version")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, version_help)
output = TestRun.executor.run("casadm" + (" -H" if shortcut else " --help")
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, help_help)
output = TestRun.executor.run("casadm" + " --standby"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, standby_help)
output = TestRun.executor.run("casadm" + " --zero-metadata"
+ (" -H" if shortcut else " --help"))
check_stdout_msg(output, zero_metadata_help)
output = TestRun.executor.run("casadm" + (" -Y" if shortcut else " --yell")
+ (" -H" if shortcut else " --help"))
check_stderr_msg(output, unrecognized_stderr)
check_stdout_msg(output, unrecognized_stdout)
@pytest.mark.parametrize("output_format", OutputFormat)
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_version(shortcut, output_format):
"""
title: Test for 'version' command.
description: Test if version displays.
pass_criteria:
- Proper OCL's components names displays in table with its versions.
"""
TestRun.LOGGER.info("Check OCL's version.")
output = casadm.print_version(output_format, shortcut).stdout
TestRun.LOGGER.info(output)
if not names_in_output(output) or not versions_in_output(output):
TestRun.fail("'Version' command failed.")
def names_in_output(output):
return ("CAS Cache Kernel Module" in output
and "CAS CLI Utility" in output)
def versions_in_output(output):
version_pattern = re.compile(r"(\d){2}\.(\d){2}\.(\d)\.(\d){4}.(\S)")
return len(version_pattern.findall(output)) == 2
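The regex in `versions_in_output` expects exactly two `XX.YY.Z.BBBB.suffix` version strings in the output. A standalone check of that format (the sample strings are assumptions; the final dot is escaped here, one character stricter than the pattern in the diff):

```python
import re

# One hit expected per component line in `casadm --version` output.
version_pattern = re.compile(r"\d{2}\.\d{2}\.\d\.\d{4}\.\S")

sample = (
    "CAS Cache Kernel Module    22.06.0.0800.master\n"
    "CAS CLI Utility            22.06.0.0800.master"
)
matches = version_pattern.findall(sample)
```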
@@ -1,6 +1,5 @@
#
# Copyright(c) 2022 Intel Corporation
# Copyright(c) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -15,7 +14,7 @@ def test_cli_help_spelling():
title: Spelling test for 'help' command
description: Validates spelling of 'help' in CLI
pass criteria:
- No spelling mistakes are found
- no spelling mistakes are found
"""
cas_dictionary = os.path.join(TestRun.usr.repo_dir, "test", "functional", "resources")
@@ -1,17 +1,16 @@
#
# Copyright(c) 2020-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
import pytest
from api.cas import casadm
from api.cas import casadm, casadm_parser
from core.test_run import TestRun
from test_tools.os_tools import sync
from test_utils.os_utils import sync
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Unit, Size
from test_utils.size import Unit, Size
from test_tools.dd import Dd
@@ -20,11 +19,12 @@ from test_tools.dd import Dd
@pytest.mark.parametrize("purge_target", ["cache", "core"])
def test_purge(purge_target):
"""
title: Basic test for purge command
description: Check purge command behaviour with and without '--script' flag
title: Call purge without and with `--script` switch
description: |
Check if purge is called only when `--script` switch is used.
pass_criteria:
- Error returned when '--script' is missing
- Cache is wiped when purge command is used properly
- casadm returns an error when `--script` is missing
- cache is wiped when purge command is used properly
"""
with TestRun.step("Prepare devices"):
cache_device = TestRun.disks["cache"]
@@ -40,7 +40,7 @@ def test_purge(purge_target):
cache = casadm.start_cache(cache_device, force=True)
core = casadm.add_core(cache, core_device)
with TestRun.step("Trigger I/O to prepared cache instance"):
with TestRun.step("Trigger IO to prepared cache instance"):
dd = (
Dd()
.input("/dev/zero")
@@ -78,3 +78,8 @@ def test_purge(purge_target):
if cache.get_statistics().usage_stats.occupancy.get_value() != 0:
TestRun.fail(f"{cache.get_statistics().usage_stats.occupancy.get_value()}")
TestRun.fail(f"Purge {purge_target} should invalidate all cache lines!")
with TestRun.step("Stop cache"):
casadm.stop_all_caches()
@@ -1,6 +1,6 @@
#
# Copyright(c) 2019-2022 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies
# Copyright(c) 2024 Huawei Technologies
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -13,11 +13,11 @@ from core.test_run import TestRun
from storage_devices.device import Device
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from test_tools.dd import Dd
from test_tools.fs_tools import Filesystem
from test_tools.disk_utils import Filesystem
from test_utils.filesystem.file import File
from test_tools.os_tools import sync
from connection.utils.output import CmdException
from type_def.size import Size, Unit
from test_utils.os_utils import sync
from test_utils.output import CmdException
from test_utils.size import Size, Unit
from api.cas.cli_messages import (
check_stderr_msg,
missing_param,
@@ -44,8 +44,8 @@ def test_standby_neg_cli_params():
"""
title: Verifying parameters for starting a standby cache instance
description: |
Try executing the standby init command with required arguments missing or
disallowed arguments present.
Try executing the standby init command with required arguments missing or
disallowed arguments present.
pass_criteria:
- The execution is unsuccessful for all improper argument combinations
- A proper error message is displayed for unsuccessful executions
@@ -120,12 +120,11 @@ def test_activate_neg_cli_params():
-The execution is unsuccessful for all improper argument combinations
-A proper error message is displayed for unsuccessful executions
"""
cache_id = 1
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cache_id = 1
with TestRun.step("Init standby cache"):
cache_dev = Device(cache_device.path)
@@ -202,8 +201,6 @@ def test_standby_neg_cli_management():
- The execution is successful for allowed management commands
- A proper error message is displayed for unsuccessful executions
"""
cache_id = 1
with TestRun.step("Prepare the device for the cache."):
device = TestRun.disks["cache"]
device.create_partitions([Size(500, Unit.MebiByte), Size(500, Unit.MebiByte)])
@@ -211,6 +208,7 @@ def test_standby_neg_cli_management():
core_device = device.partitions[1]
with TestRun.step("Prepare the standby instance"):
cache_id = 1
cache = casadm.standby_init(
cache_dev=cache_device, cache_id=cache_id,
cache_line_size=CacheLineSize.LINE_32KiB, force=True
@@ -274,19 +272,19 @@ def test_start_neg_cli_flags():
"""
title: Blocking standby start command with mutually exclusive flags
description: |
Try executing the standby start command with different combinations of mutually
exclusive flags.
Try executing the standby start command with different combinations of mutually
exclusive flags.
pass_criteria:
- The command execution is unsuccessful for commands with mutually exclusive flags
- A proper error message is displayed
"""
cache_id = 1
cache_line_size = 32
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(500, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cache_id = 1
cache_line_size = 32
with TestRun.step("Try to start standby cache with mutually exclusive parameters"):
init_required_params = f' --cache-device {cache_device.path}' \
@@ -329,19 +327,19 @@ def test_activate_without_detach():
"""
title: Activate cache without detach command.
description: |
Try to activate passive cache without detach command before activation.
Try activate passive cache without detach command before activation.
pass_criteria:
- The activation is not possible
- The cache remains in Standby state after unsuccessful activation
- The cache exported object is present after an unsuccessful activation
"""
cache_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Prepare the device for the cache."):
cache_dev = TestRun.disks["cache"]
cache_dev.create_partitions([Size(500, Unit.MebiByte)])
cache_dev = cache_dev.partitions[0]
cache_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Start cache instance."):
cache = casadm.start_cache(cache_dev=cache_dev, cache_id=cache_id)
@@ -392,18 +390,15 @@ def test_activate_without_detach():
@pytest.mark.require_disk("standby_cache", DiskTypeSet([DiskType.nand, DiskType.optane]))
def test_activate_neg_cache_line_size():
"""
title: Blocking cache with mismatching cache line size activation.
description: |
Try restoring cache operations from a replicated cache that was initialized
with different cache line size than the original cache.
pass_criteria:
- The activation is cancelled
- The cache remains in Standby detached state after an unsuccessful activation
- A proper error message is displayed
title: Blocking cache with mismatching cache line size activation.
description: |
Try restoring cache operations from a replicated cache that was initialized
with different cache line size than the original cache.
pass_criteria:
- The activation is cancelled
- The cache remains in Standby detached state after an unsuccessful activation
- A proper error message is displayed
"""
cache_id = 1
active_cls, standby_cls = CacheLineSize.LINE_4KiB, CacheLineSize.LINE_16KiB
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Prepare cache devices"):
active_cache_dev = TestRun.disks["active_cache"]
@@ -412,69 +407,73 @@ def test_activate_neg_cache_line_size():
standby_cache_dev = TestRun.disks["standby_cache"]
standby_cache_dev.create_partitions([Size(500, Unit.MebiByte)])
standby_cache_dev = standby_cache_dev.partitions[0]
cache_id = 1
active_cls, standby_cls = CacheLineSize.LINE_4KiB, CacheLineSize.LINE_16KiB
cache_exp_obj_name = f"cas-cache-{cache_id}"
with TestRun.step("Start active cache instance."):
active_cache = casadm.start_cache(cache_dev=active_cache_dev, cache_id=cache_id,
cache_line_size=active_cls)
with TestRun.step("Start active cache instance."):
active_cache = casadm.start_cache(cache_dev=active_cache_dev, cache_id=cache_id,
cache_line_size=active_cls)
with TestRun.step("Get metadata size"):
dmesg_out = TestRun.executor.run_expect_success("dmesg").stdout
md_size = dmesg.get_metadata_size_on_device(dmesg_out)
with TestRun.step("Create dump file with cache metadata"):
with TestRun.step("Get metadata size"):
dmesg_out = TestRun.executor.run_expect_success("dmesg").stdout
md_size = dmesg.get_metadata_size_on_device(dmesg_out)
with TestRun.step("Dump the metadata of the cache"):
dump_file_path = "/tmp/test_activate_corrupted.dump"
md_dump = File(dump_file_path)
md_dump.remove(force=True, ignore_errors=True)
dd_count = int(md_size / Size(1, Unit.MebiByte)) + 1
(
Dd().input(active_cache_dev.path)
.output(md_dump.full_path)
.block_size(Size(1, Unit.MebiByte))
.count(dd_count)
.run()
)
md_dump.refresh_item()
with TestRun.step("Stop cache instance."):
active_cache.stop()
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=standby_cls,
force=True)
with TestRun.step("Verify if the cache exported object appeared in the system"):
output = TestRun.executor.run_expect_success(
f"ls -la /dev/ | grep {cache_exp_obj_name}"
)
if output.stdout[0] != "b":
TestRun.fail("The cache exported object is not a block device")
with TestRun.step("Detach standby cache instance"):
standby_cache.standby_detach()
with TestRun.step(f"Copy changed metadata to the standby instance"):
Dd().input(md_dump.full_path).output(standby_cache_dev.path).run()
sync()
with TestRun.step("Try to activate cache instance"):
with pytest.raises(CmdException) as cmdExc:
output = standby_cache.standby_activate(standby_cache_dev)
if not check_stderr_msg(output, cache_line_size_mismatch):
TestRun.LOGGER.error(
f'Expected error message in format '
f'"{cache_line_size_mismatch[0]}"'
f'Got "{output.stderr}" instead.'
with TestRun.step("Dump the metadata of the cache"):
dump_file_path = "/tmp/test_activate_corrupted.dump"
md_dump = File(dump_file_path)
md_dump.remove(force=True, ignore_errors=True)
dd_count = int(md_size / Size(1, Unit.MebiByte)) + 1
(
Dd().input(active_cache_dev.path)
.output(md_dump.full_path)
.block_size(Size(1, Unit.MebiByte))
.count(dd_count)
.run()
)
assert "Failed to activate standby cache." in str(cmdExc.value)
md_dump.refresh_item()
with TestRun.step("Verify if cache is in standby detached state after failed activation"):
cache_status = standby_cache.get_status()
if cache_status != CacheStatus.standby_detached:
TestRun.LOGGER.error(
f'Expected Cache state: "{CacheStatus.standby.value}" '
f'Got "{cache_status.value}" instead.'
with TestRun.step("Stop cache instance."):
active_cache.stop()
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=standby_cls,
force=True)
with TestRun.step("Verify if the cache exported object appeared in the system"):
output = TestRun.executor.run_expect_success(
f"ls -la /dev/ | grep {cache_exp_obj_name}"
)
if output.stdout[0] != "b":
TestRun.fail("The cache exported object is not a block device")
with TestRun.step("Detach standby cache instance"):
standby_cache.standby_detach()
with TestRun.step(f"Copy changed metadata to the standby instance"):
Dd().input(md_dump.full_path).output(standby_cache_dev.path).run()
sync()
with TestRun.step("Try to activate cache instance"):
with pytest.raises(CmdException) as cmdExc:
output = standby_cache.standby_activate(standby_cache_dev)
if not check_stderr_msg(output, cache_line_size_mismatch):
TestRun.LOGGER.error(
f'Expected error message in format '
f'"{cache_line_size_mismatch[0]}"'
f'Got "{output.stderr}" instead.'
)
assert "Failed to activate standby cache." in str(cmdExc.value)
with TestRun.step("Verify if cache is in standby detached state after failed activation"):
cache_status = standby_cache.get_status()
if cache_status != CacheStatus.standby_detached:
TestRun.LOGGER.error(
f'Expected Cache state: "{CacheStatus.standby.value}" '
f'Got "{cache_status.value}" instead.'
)
@pytest.mark.CI
@@ -490,18 +489,17 @@ def test_standby_init_with_preexisting_metadata():
- initialize cache without force flag fails and informative error message is printed
- initialize cache with force flag succeeds and passive instance is present in system
"""
cache_line_size = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Prepare device for cache"):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(200, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cls = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Start standby cache instance"):
cache = casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_line_size=cls,
cache_id=cache_id,
force=True,
)
@@ -514,7 +512,7 @@ def test_standby_init_with_preexisting_metadata():
standby_init_cmd(
cache_dev=cache_device.path,
cache_id=str(cache_id),
cache_line_size=str(int(cache_line_size.value.value / Unit.KibiByte.value)),
cache_line_size=str(int(cls.value.value / Unit.KibiByte.value)),
)
)
if not check_stderr_msg(output, start_cache_with_existing_metadata):
@@ -526,7 +524,7 @@ def test_standby_init_with_preexisting_metadata():
with TestRun.step("Try initialize cache with force flag"):
casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_line_size=cls,
cache_id=cache_id,
force=True,
)
@@ -551,13 +549,12 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
- initialize cache without force flag fails and informative error message is printed
- initialize cache with force flag succeeds and passive instance is present in system
"""
cache_line_size = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Prepare device for cache"):
cache_device = TestRun.disks["cache"]
cache_device.create_partitions([Size(200, Unit.MebiByte)])
cache_device = cache_device.partitions[0]
cls = CacheLineSize.LINE_32KiB
cache_id = 1
with TestRun.step("Create filesystem on cache device partition"):
cache_device.create_filesystem(filesystem)
@@ -567,7 +564,7 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
standby_init_cmd(
cache_dev=cache_device.path,
cache_id=str(cache_id),
cache_line_size=str(int(cache_line_size.value.value / Unit.KibiByte.value)),
cache_line_size=str(int(cls.value.value / Unit.KibiByte.value)),
)
)
if not check_stderr_msg(output, standby_init_with_existing_filesystem):
@@ -579,7 +576,7 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
with TestRun.step("Try initialize cache with force flag"):
casadm.standby_init(
cache_dev=cache_device,
cache_line_size=cache_line_size,
cache_line_size=cls,
cache_id=cache_id,
force=True,
)
@@ -596,18 +593,13 @@ def test_standby_init_with_preexisting_filesystem(filesystem):
@pytest.mark.require_disk("core", DiskTypeLowerThan("caches"))
def test_standby_activate_with_corepool():
"""
title: Activate standby cache instance with core pool
title: Activate standby cache instance with corepool
description: |
Activation of standby cache with core taken from core pool
pass_criteria:
- During activate metadata on the device match with metadata in DRAM
- Core is in active state after activate
- During activate metadata on the device match with metadata in DRAM
- Core is in active state after activate
"""
cache_id = 1
core_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
cache_line_size = CacheLineSize.LINE_16KiB
with TestRun.step("Prepare cache and core devices"):
caches_dev = TestRun.disks["caches"]
caches_dev.create_partitions([Size(500, Unit.MebiByte), Size(500, Unit.MebiByte)])
@@ -617,8 +609,13 @@ def test_standby_activate_with_corepool():
core_dev.create_partitions([Size(200, Unit.MebiByte)])
core_dev = core_dev.partitions[0]
cache_id = 1
core_id = 1
cache_exp_obj_name = f"cas-cache-{cache_id}"
cls = CacheLineSize.LINE_16KiB
with TestRun.step("Start regular cache instance"):
cache = casadm.start_cache(cache_dev=active_cache_dev, cache_line_size=cache_line_size,
cache = casadm.start_cache(cache_dev=active_cache_dev, cache_line_size=cls,
cache_id=cache_id)
with TestRun.step("Add core to regular cache instance"):
@@ -632,7 +629,7 @@ def test_standby_activate_with_corepool():
with TestRun.step("Start standby cache instance."):
standby_cache = casadm.standby_init(cache_dev=standby_cache_dev, cache_id=cache_id,
cache_line_size=cache_line_size,
cache_line_size=cls,
force=True)
with TestRun.step(f"Copy changed metadata to the standby instance"):
@@ -655,12 +652,12 @@ def test_standby_activate_with_corepool():
@pytest.mark.parametrizex("cache_line_size", CacheLineSize)
def test_standby_start_stop(cache_line_size):
"""
title: Start and stop a standby cache instance.
description: Test if cache can be started in standby state and stopped without activation.
pass_criteria:
- A cache exported object appears after starting a cache in standby state
- The data written to the cache exported object committed on the underlying cache device
- The cache exported object disappears after stopping the standby cache instance
title: Start and stop a standby cache instance.
description: Test if cache can be started in standby state and stopped without activation.
pass_criteria:
- A cache exported object appears after starting a cache in standby state
- The data written to the cache exported object committed on the underlying cache device
- The cache exported object disappears after stopping the standby cache instance
"""
with TestRun.step("Prepare a cache device"):
cache_size = Size(500, Unit.MebiByte)

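The metadata-dump hunks above size the dd copy as `int(md_size / Size(1, Unit.MebiByte)) + 1` blocks of 1 MiB. In plain integers that rounding rule looks like the sketch below (`MIB` and `dd_block_count` are illustrative names for this note, not part of the test framework):

```python
# Reproduce the dd sizing rule used when dumping cache metadata:
# truncating division by the block size, plus one extra block.
MIB = 1024 * 1024


def dd_block_count(md_size_bytes: int, block_size: int = MIB) -> int:
    # Floor division plus one always covers the whole metadata area;
    # when the size is already block-aligned it over-allocates one
    # full extra block, which the test tolerates as harmless slack.
    return md_size_bytes // block_size + 1


print(dd_block_count(1))        # a 1-byte area still needs one block
print(dd_block_count(MIB))      # an aligned size gets one block of slack
print(dd_block_count(MIB + 1))  # just past a boundary also rounds to two
```

This matches the behavior of the `Dd().count(dd_count)` call in the hunks, where truncating `md_size / Size(1, Unit.MebiByte)` to `int` and adding one guarantees the dump is never shorter than the on-disk metadata.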

@@ -1,6 +1,5 @@
#
# Copyright(c) 2019-2021 Intel Corporation
# Copyright(c) 2024-2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: BSD-3-Clause
#
@@ -11,7 +10,7 @@ from api.cas import casadm, casadm_parser, cli_messages
from api.cas.cli import start_cmd
from core.test_run import TestRun
from storage_devices.disk import DiskType, DiskTypeSet, DiskTypeLowerThan
from type_def.size import Unit, Size
from test_utils.size import Unit, Size
CACHE_ID_RANGE = (1, 16384)
CORE_ID_RANGE = (0, 4095)
@@ -21,12 +20,12 @@ CORE_ID_RANGE = (0, 4095)
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_start_stop_default_id(shortcut):
"""
title: Test for starting a cache with a default ID - short and long command
description: |
Start a new cache with a default ID and then stop this cache.
pass_criteria:
- The cache has successfully started with default ID
- The cache has successfully stopped
title: Test for starting a cache with a default ID - short and long command
description: |
Start a new cache with a default ID and then stop this cache.
pass_criteria:
- The cache has successfully started with default ID
- The cache has successfully stopped
"""
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks['cache']
@@ -62,12 +61,12 @@ def test_cli_start_stop_default_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_start_stop_custom_id(shortcut):
"""
title: Test for starting a cache with a custom ID - short and long command
description: |
Start a new cache with a random ID (from allowed pool) and then stop this cache.
pass_criteria:
- The cache has successfully started with a custom ID
- The cache has successfully stopped
title: Test for starting a cache with a custom ID - short and long command
description: |
Start a new cache with a random ID (from allowed pool) and then stop this cache.
pass_criteria:
- The cache has successfully started with a custom ID
- The cache has successfully stopped
"""
with TestRun.step("Prepare the device for the cache."):
cache_device = TestRun.disks['cache']
@@ -106,13 +105,13 @@ def test_cli_start_stop_custom_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_add_remove_default_id(shortcut):
"""
title: Test for adding and removing a core with a default ID - short and long command
description: |
Start a new cache and add a core to it without passing a core ID as an argument
and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a default ID
- The core is successfully removed from the cache
title: Test for adding and removing a core with a default ID - short and long command
description: |
Start a new cache and add a core to it without passing a core ID as an argument
and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a default ID
- The core is successfully removed from the cache
"""
with TestRun.step("Prepare the devices."):
cache_disk = TestRun.disks['cache']
@@ -157,13 +156,13 @@ def test_cli_add_remove_default_id(shortcut):
@pytest.mark.parametrize("shortcut", [True, False])
def test_cli_add_remove_custom_id(shortcut):
"""
title: Test for adding and removing a core with a custom ID - short and long command
description: |
Start a new cache and add a core to it with passing a random core ID
(from allowed pool) as an argument and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a default ID
- The core is successfully removed from the cache
title: Test for adding and removing a core with a custom ID - short and long command
description: |
Start a new cache and add a core to it with passing a random core ID
(from allowed pool) as an argument and then remove this core from the cache.
pass_criteria:
- The core is added to the cache with a default ID
- The core is successfully removed from the cache
"""
with TestRun.step("Prepare the devices."):
cache_disk = TestRun.disks['cache']