Using and understanding the basis of Ragger testing capabilities
Ragger is mostly used as a testing framework for Ledger applications. To ease integration in such scenarios, it includes pre-defined pytest fixtures and pre-made configuration. This section shows how to use them.
Note

On top of the fixtures, options and other helpers brought by Ragger, tests written following this page’s guidance inherit every pytest base mechanism, CLI arguments and so on. Refer to the pytest documentation for further information.
Fast testing: using the embedded ragger.conftest module
Before creating any file, let’s first create a directory dedicated to the tests.
```shell
$ mkdir -p tests
```
The Ragger package contains a ragger.conftest module, which includes:

- a base_conftest.py module, defining pytest fixtures and CLI arguments,
- a configuration.py module, which allows parametrizing the previous fixtures.
As implied, Ragger relies on pytest to implement its testing capabilities, so using these Ragger features requires a pytest setup. In particular, pytest fetches its configuration from a conftest.py file, so one needs to create such a file and import the Ragger capabilities in it. A minimal conftest.py file would be:
```python
# This line includes the default `Ragger` configuration.
# It can be modified to suit local needs.
from ragger.conftest import configuration

# This line will be interpreted by `pytest`, which will load the code from the
# given modules, in this case `ragger.conftest.base_conftest`.
# This module defines several fixtures, parametrized with the fields of the
# `configuration.OPTIONAL` variable.
pytest_plugins = ("ragger.conftest.base_conftest", )
```
Now let’s write a very basic test which does not do much, but will at least allow us to use our newly generated fixtures:
```python
# define the CLA of your application here
CLA = 0xB0
# define an instruction of your application here
INS = 0x01

def test_communicate(backend):
    print(backend.exchange(cla=CLA, ins=INS))
```
This test uses the backend fixture to run the application in Speculos. For that, we need access to the application ELF file, which is expected to be stored at a specific path. This path is in fact exactly where the default Ledger compilation environment generates the ELF files: <git root directory>/build/<device name>/bin/app.elf.
Let’s say we are going to run the test on nanos only. The file system should look at least like this:
```shell
$ tree .
.
├── build
│   └── nanos
│       └── bin
│           └── app.elf
└── tests
    ├── conftest.py
    └── test_first.py
```
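The path convention above can be sketched with a small helper, assuming the default layout described in this section (the `default_elf_path` name is illustrative, not part of Ragger):

```python
from pathlib import Path

# Illustrative helper (not a Ragger function): computes the ELF location the
# default fixtures expect, i.e. <git root>/build/<device name>/bin/app.elf
def default_elf_path(git_root: Path, device: str) -> Path:
    return git_root / "build" / device / "bin" / "app.elf"

print(default_elf_path(Path("."), "nanos"))  # build/nanos/bin/app.elf (POSIX)
```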
And now to run the tests:

```shell
$ pytest --device nanos tests/ -v
========================================= test session starts ===========================================
collected 1 item

tests/test_first.py::test_communicate[nanos 2.1] PASSED                                           [100%]

=========================================== 1 passed in 0.80s ===========================================
```
What happened?
This very simple setup actually triggered some interesting events:
- pytest automatically loaded the ragger.conftest.base_conftest module, and generated several fixtures to be used in the following tests.
- One of these fixtures, backend, is configured with several parameters. We did not specify it on the command line, but its type here is SpeculosBackend (the default type).
- This backend exchanges with an application running in the Speculos emulator. For the fixture to automatically start this emulator, it needs to know which device it should emulate. That is where the --device nanos parameter comes in.
- The fixture also needs access to the application ELF. That is why we stored it in build/nanos/bin/app.elf.
- So when the backend fixture is created, it knows it needs to start a NanoS emulator in which the app.elf application file will be loaded.
- pytest finally discovers and runs the test_communicate test.
- The test receives the backend fixture and uses it to exchange with the application running in the emulator. By default, the backend is configured to raise an exception if the application replies with an error. In our case the test passed, so the emulated application responded with a success status.
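The backend.exchange(cla=..., ins=...) call sends an APDU to the application. As a rough sketch of what such a message looks like on the wire, assuming the standard ISO 7816-4 short APDU layout used by Ledger applications (`build_apdu` is an illustrative helper, not a Ragger function):

```python
# Illustrative helper (not part of Ragger): builds a short ISO 7816-4 APDU
# from the same fields the `backend.exchange` call takes.
def build_apdu(cla: int, ins: int, p1: int = 0, p2: int = 0, data: bytes = b"") -> bytes:
    # header: class byte, instruction byte, two parameter bytes,
    # then the data length and the data itself
    return bytes([cla, ins, p1, p2, len(data)]) + data

apdu = build_apdu(cla=0xB0, ins=0x01)
print(apdu.hex())  # b001000000
```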
Out-of-the-box pytest Ragger tools
The previous tutorial explained some of the features Ragger brings for application testing. But there is more!
CLI parameters
Ragger defines several parameters usable from the pytest CLI:
Controlling the backend (--backend)
It is possible to change the backend on which the tests run with the --backend CLI argument. Available backends are:

- --backend speculos, using the SpeculosBackend (the default behavior),
- --backend ledgercomm, using the LedgerCommBackend,
- --backend ledgerwallet, using the LedgerWalletBackend.
The two latter options are physical backends, meaning they will try to connect to the application through USB. So the application should be installed on a physical device connected to the test computer through USB, and started on the device; otherwise the tests will not run.
Controlling the devices (--device)
Running the tests on a specific device is automatically integrated with the --device argument. Available devices are:

- --device nanos,
- --device nanox,
- --device nanosp,
- --device stax,
- --device all.
This last option only works with the SpeculosBackend (as other backends rely on a physical device, they can only run on the connected one), but it is very convenient in a CI to perform a test campaign on all the devices.
Showing the device interface during test execution (--display)
Warning

Capability limited to the SpeculosBackend.

With the SpeculosBackend, it is possible to display the Qt graphical interface of the device, and thus to follow the actions and displayed screens while the test is executed. This can be enabled with the --display CLI argument.
Controlling the seed (--seed)
Warning

Capability limited to the SpeculosBackend.

Warning

Remember not to share your production seed. This option should be used only with disposable testing seeds.

By default, the SpeculosBackend has a fixed seed. It is possible to change its value with the --seed CLI argument.
Recording the screens (--golden_run)
Some tests using high-level Navigator methods that compare snapshots can also turn these methods into a “record mode” with the --golden_run CLI argument: instead of comparing snapshots, they will store the captured snapshots. This is convenient to automatically generate a stock of golden snapshots.
Fixtures and decorators
Ragger defines several fixtures and decorators to customize how the tests run or to access runtime information:
Running a test only on a specific backend
Some tests should only run on a specific backend. Ragger defines a pytest marker allowing a test to be executed only on the specified backend:
```python
import pytest

CLA = 0xB0
INS = 0x01

# this will prevent this test from running,
# except with the ``--backend ledgercomm`` argument
@pytest.mark.use_on_backend("ledgercomm")
def test_communicate(backend):
    print(backend.exchange(cla=CLA, ins=INS))
```
```shell
$ pytest --device nanos --backend speculos tests/ -v
============================================ test session starts =============================================
collected 1 item

tests/test_first.py::test_communicate[nanos 2.1] SKIPPED (skipped on this backend: "ledgercomm")      [100%]

============================================= 1 skipped in 0.81s =============================================
```
Getting the value of any CLI argument
Most arguments defined by Ragger in pytest can be reached through a fixture and used in any test:

- --backend is reachable with the backend_name fixture,
- --display is reachable with the display fixture,
- --golden_run is reachable with the golden_run fixture,
- --log_apdu_file is reachable with the log_apdu_file fixture,
- --seed is reachable with the backend_cli_user_seed fixture.
--device is not immediately reachable through a fixture, but it can be found with the backend fixture: backend.firmware.device.