The integration tests use podman's RESTful API to isolate BlueChi and its agents on multiple containerized nodes. Therefore, a working installation of podman is required. Please refer to the podman installation instructions.
NOTE: The integration tests run on Python 3.9 and newer, but we want to keep Python 3.9 compatibility to support CentOS Stream 9, so please don't use features from newer Python versions.
When setting up CentOS Stream, please enable the CodeReady Builder (CRB) and EPEL repositories:
sudo dnf install -y dnf-plugin-config-manager
sudo dnf config-manager -y --set-enabled crb
sudo dnf install -y epel-release
Then install the required packages:
sudo dnf install \
black \
createrepo_c \
podman \
python3-isort \
python3-flake8 \
python3-paramiko \
python3-podman \
python3-pytest \
python3-pyyaml \
tmt \
tmt+report-junit \
-y
Please install a Python 3.9 environment and pip using the standard methods for your operating system.
All additional required dependencies are listed in requirements.txt and can be installed using pip:
pip install -U -r requirements.txt
Instead of installing the required packages directly, it is recommended to create a virtual environment. For example, the following snippet uses the built-in venv:
python -m venv ~/bluechi-env
source ~/bluechi-env/bin/activate
pip install -U -r requirements.txt
# ...
# exit the virtual env
deactivate
On Fedora 40 and newer, podman 5 is installed, which uses a new networking provider called pasta. Unfortunately, pasta has an issue that prevents network connections between containers (more info in Connectivity problem with Podman containers). To bypass this issue, it is required to switch to the previous networking provider, slirp4netns, using the following steps:
- Install the slirp4netns provider:
dnf install -y slirp4netns
- Configure podman to use the slirp4netns provider when executed under your username by creating ~/.config/containers/containers.conf with the following content:
[Network]
default_rootless_network_cmd = "slirp4netns"
The testing infrastructure uses socket access to podman, so the podman socket needs to be enabled:
systemctl --user enable podman.socket
systemctl --user start podman.socket
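One way to verify that the socket is reachable is to query podman's REST API through it. This is a quick check assuming a rootless setup and that curl is installed; the _ping endpoint should answer with OK:
curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://d/_ping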
Integration tests are executed with the tmt framework.
To run the integration tests, please execute the following command in the tests directory:
tmt --feeling-safe run -v plan --name container
This will use the latest BlueChi packages from the bluechi-snapshot repository.
Note: The integration tests can be run in two modes - container and multi-host. For local execution it is advised to select container mode (hence plan --name container).
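To see which plans are available for selection, tmt can list them from the tests directory:
cd ~/bluechi/tests
tmt plan ls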
To run integration tests with valgrind, set the WITH_VALGRIND environment variable as follows:
tmt --feeling-safe run -v -eWITH_VALGRIND=1 plan --name container
If valgrind detects a memory leak in a test, the test will fail, and the logs will be found in the test data directory.
In order to run integration tests for your local BlueChi build, you need to have BlueChi RPM packages built from your source code. The details about BlueChi development can be found in README.developer.md; the most important part for running integration tests is the Packaging section.
In the following steps, the BlueChi source code is located in the ~/bluechi directory.
The integration tests expect the local BlueChi RPMs to be located in the tests/bluechi-rpms top-level subdirectory.
In addition, since the tests run in CentOS-Stream9-based containers, the RPMs must also be built for CentOS-Stream9.
To this end, a containerized build infrastructure is available.
The containerized build infrastructure depends on skipper, installed via the requirements.txt file:
cd ~/bluechi
skipper make rpm
When done, it's required to create a DNF repository from those RPMs:
createrepo_c ~/bluechi/tests/bluechi-rpms
After that step, the integration tests can be executed using the following command:
cd ~/bluechi/tests
tmt --feeling-safe run -v -eCONTAINER_USED=integration-test-local plan --name container
To be able to produce a code coverage report from the integration test execution, you need to build the BlueChi RPMs with code coverage support:
cd ~/bluechi
skipper make rpm WITH_COVERAGE=1
createrepo_c ~/bluechi/tests/bluechi-rpms
When done, run the integration tests with the code coverage report enabled:
tmt --feeling-safe run -v -eCONTAINER_USED=integration-test-local -eWITH_COVERAGE=1 plan --name container
After the integration tests finish, the HTML code coverage result can be found in the res subdirectory inside the tmt execution result directory, for example:
/var/tmp/tmt/run-001/plans/tier0/report/default-0/report
In some cases it might be necessary to adjust the default timeouts that are used in different steps of an integration test execution cycle. The currently available environment variables as well as their default values are:
# in seconds
TIMEOUT_TEST_SETUP=20
TIMEOUT_TEST_RUN=45
TIMEOUT_COLLECT_TEST_RESULTS=20
These can be set either in the environment section of the tmt plan or using the -e option when running tmt, e.g. -eTIMEOUT_TEST_SETUP=40.
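As a complete command, raising two of these timeouts for a local container run would look like this (the values are illustrative):
tmt --feeling-safe run -v -eTIMEOUT_TEST_SETUP=40 -eTIMEOUT_TEST_RUN=90 plan --name container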
In addition to the timeouts mentioned above, there is a mechanism for setting values in specific tests via environment variables, which can be used to adjust test-specific timeouts. The expected environment variable name is assembled from the prefix TEST_, the test name, and a provided suffix. These test-specific timeouts can be set in the tmt plan as described here.
Given a test named bluechi-generic-test (i.e. the test script is located in the directory bluechi-generic-test) that uses a timeout WAIT_TIMEOUT, defined as follows:
WAIT_TIMEOUT = get_test_env_value_int("WAIT_TIMEOUT", 1000)
the variable would be assigned WAIT_TIMEOUT = 1000, unless the environment variable TEST_BLUECHI_GENERIC_TEST_WAIT_TIMEOUT is set to an integer value, in which case that value would be passed to WAIT_TIMEOUT.
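For example, to override this timeout from the command line for the test above (the value is illustrative):
tmt --feeling-safe run -v -eTEST_BLUECHI_GENERIC_TEST_WAIT_TIMEOUT=5000 plan --name container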
Several tools are used in the project to validate code style:
- flake8 is used to enforce a unified code style.
- isort is used to enforce import ordering.
- black is used to enforce code formatting.
The formatting of all source files can be checked/fixed using the following commands, executed from the top-level directory of the project:
flake8 tests
isort tests
black tests
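Note that flake8 only reports issues, while isort and black rewrite files in place. To verify formatting without modifying any files, both tools provide check flags:
isort --check-only --diff tests
black --check --diff tests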
By default, the BlueChi integration tests use the INFO log level to display important information about the test run. More detailed information can be displayed by setting the log level to DEBUG:
cd ~/bluechi/tests
tmt --feeling-safe run -v -eLOG_LEVEL=DEBUG plan --name container
The python bindings can be used in the integration tests to simplify writing them. However, it is not possible to use the bindings directly in the tests, since they leverage the D-Bus API of BlueChi provided on the system D-Bus. A separate script has to be written, injected, and executed within the container running the BlueChi controller. In order to keep the usage simple, the BluechiControllerContainer class provides a function to abstract these details:
# run the file ./python/monitor.py located in the current test directory
# and get the exit code as well as the output (e.g. all print())
exit_code, output = ctrl.run_python("python/monitor.py")
A full example of how to use the python bindings can be found in the monitor open-close test.
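To illustrate the contract of run_python, below is a minimal, hypothetical injected script; the file name and printed message are made up for this sketch, and a real test would use the bluechi python bindings at the marked spot (see the linked monitor open-close test for an actual example):
# python/monitor.py (hypothetical sketch), executed inside the controller
# container via ctrl.run_python(). Everything written to stdout becomes
# output; the process exit status becomes exit_code.
import sys

# A real test would import and use the bluechi python bindings here,
# e.g. to create a monitor on the system D-Bus and watch unit changes.
print("monitor script ran inside the controller container")

# A non-zero exit status is reported back as a failing exit_code.
sys.exit(0)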
Every test should be identified by a unique ID. Therefore, when adding a new test, please execute the following command to assign an ID to it:
$ cd ~/bluechi/tests
$ tmt test id .
New id 'UUID' added to test '/tests/path_to_your_new_test'.
...
In addition to having a unique ID, the summaries of tests should be descriptive and unique as well. The CI will perform appropriate linting. This can also be invoked locally:
$ cd ~/bluechi/tests
# requires tmt >= 1.35
$ tmt lint tests
Lint checks on all
fail G001 duplicate id "96aa0e17-5e23-4cc3-bc34-88368b8cc07b" in "/tests/tier0/bluechi-agent-connect-via-controller-address"
fail G001 duplicate id "96aa0e17-5e23-4cc3-bc34-88368b8cc07b" in "/tests/tier0/bluechi-agent-get-logtarget"
The integration tests rely on containers as separate compute entities. These containers are used to simulate BlueChi's functional behavior on a single runner.
Both integration-test-local and integration-test-snapshot are based on the integration-base image, which contains core dependencies such as systemd and devel packages. The base image is published to https://quay.io/repository/bluechi/integration-test-base.
The base images can be built and pushed either locally or via a GitHub workflow to the bluechi organization on quay.io and its repositories. If any updates are required, please reach out to the code owners.
The base images build-base and integration-test-base can be built and pushed to quay by using the Container Image Workflow. It can be found and triggered here in the Actions tab of the BlueChi repo.
The base images build-base and integration-test-base are built for multiple architectures (arm64 and amd64) using the build-push-containers.sh script. It builds the images for the supported architectures as well as a manifest, which can then be pushed to the registry.
To build for multiple architectures, the following packages are required:
sudo dnf install -y podman buildah qemu-user-static
From the root directory of the project run the following commands:
# In order to build and directly push, login first
buildah login -u="someuser" -p="topsecret" quay.io
PUSH_MANIFEST=yes ./build-scripts/build-push-containers.sh build-base
# Only build locally
./build-scripts/build-push-containers.sh build-base
If you need to build only a specific architecture for your local usage, you can specify it as the second parameter:
./build-scripts/build-push-containers.sh build-base amd64