QTools  7.3.4
Collection of Host-Based Tools
QUTest Concepts


Run-to-Completion Processing

The central concept applied in QUTest is Run-to-Completion (RTC) processing, both in the test fixture (Target) and in the test script (Host). RTC processing means that the code progresses in discrete, uninterruptible steps and that new inputs (commands) are recognized only after the current RTC step completes.

Attention
RTC Processing is the key to understanding how much output to expect from any given input as well as when a given input will be processed.

Of course, it is not a coincidence that the RTC processing of QUTest exactly matches the RTC processing in event-driven systems of state machines. And the good news here is that for all interactions with state machines, the RTC output generated by a test fixture will correspond exactly to the RTC step in the state machine.
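This behavior can be illustrated with a toy Python model (not qutest code): inputs that arrive during an RTC step are only buffered, and get processed after the current step completes.

```python
from collections import deque

class RtcProcessor:
    """Toy model of run-to-completion (RTC) command processing."""
    def __init__(self):
        self.pending = deque()   # commands received but not yet processed
        self.log = []            # record of the processing order

    def receive(self, cmd):
        # New input is only queued; it is NOT handled mid-step.
        self.pending.append(cmd)

    def run_step(self, name):
        # One uninterruptible RTC step; inputs arriving now stay queued.
        self.log.append("step:" + name)

    def event_loop(self):
        # Queued commands are processed only after the current step completes.
        while self.pending:
            self.log.append("cmd:" + self.pending.popleft())

p = RtcProcessor()
p.receive("TEST_SETUP")   # arrives during the initialization step
p.run_step("init")        # long initialization RTC step
p.event_loop()            # the deferred command is processed only here
print(p.log)              # ['step:init', 'cmd:TEST_SETUP']
```

The same ordering governs how much output to expect from a given command and when the command takes effect.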

However, somewhat trickier are the system reset, test initialization, and the general processing of commands issued by test scripts. The following sections explain these parts by means of annotated sequence diagrams.

Remarks
For simplicity, the sequence diagrams in this section omit the QSPY intermediary from the communication between a test fixture (Target) and a test script. It is understood that every command from the test script goes to QSPY first and then is forwarded to the Target, and that every output from the Target goes through QSPY to reach the test script.

Target Reset

Most individual tests in a test script start with a clean target reset. The following sequence diagram shows the details of this process. The explanation section following the diagram clarifies the interesting points (labeled with [xx]):

Target reset

[0] A test script executes the test() command.

[1] By default, each test starts with calling an internal function reset() to reset the Target. This reset() function sends the ::QS_RX_RESET request to the test fixture. After this, the test script enters a wait state in which it waits for QS_TARGET_INFO reply from the Target.
The Target reset can be suppressed by the NORESET option given in the test() command, which is illustrated in the NORESET Tests sequence diagram. Please note, however, that the first test in a test script (test group) and any test immediately following an "assertion-test" must cleanly reset the Target (therefore it cannot use the NORESET option).

[2] The test fixture processes the ::QS_RX_RESET request immediately by calling the QS_onReset() callback inside the Target.
Embedded Targets reboot automatically after resetting. In case of a host executable, however, QUTest™ (qutest.py) launches it again.

[3] The Target starts executing the test fixture code from the beginning. After QS gets initialized (QS_INIT()), the test fixture sends the QS_TARGET_INFO reply to the test script.

[4] Upon reception of the awaited QS_TARGET_INFO reply, the test script attempts to execute the on_reset() procedure. If on_reset() is defined in the script, it runs at this time. (This scenario assumes that on_reset() is defined and runs until step [8]).

[5] A test fixture continues the initialization RTC step and typically produces some QS dictionaries.
The QS dictionaries are consumed by QSPY and are not forwarded to the test script.

[6] The test fixture might also produce some output that is forwarded to the test script.

[7] Any such output needs to be explicitly expected by the test script. The on_reset() procedure is the ideal place to handle such output.
The main purpose of the on_reset() procedure is to consume any output generated during the reset RTC step, as well as to perform any setup that should follow the Target reset. In principle, instead of coding on_reset(), you could place all this code directly in every test, but that would be repetitive. Defining on_reset() lets you avoid such repetition.

[8] The on_reset() procedure ends and the test script sends ::QS_RX_TEST_SETUP to the Target.

[9] ::QS_RX_TEST_SETUP typically arrives while the test fixture still runs the initialization RTC. Therefore, ::QS_RX_TEST_SETUP is not processed immediately and its processing is delayed until the end of the current RTC step.

[10] A test fixture continues the initialization RTC step and might still produce some QS dictionaries.

[11] Finally, the test fixture completes the initialization RTC by calling QF_run(). QF_run() runs an event loop, in which it processes commands that have accumulated from the test script.

[12] The first such command is ::QS_RX_TEST_SETUP, which has been waiting in the input buffer.

[13] The acknowledgment for the ::QS_RX_TEST_SETUP is sent back to the test script.

[14] Upon reception of Trg-Ack QS_RX_TEST_SETUP, the test script attempts to execute the on_setup() procedure. If on_setup() is defined in the script, it runs at this time.
The main purpose of the on_setup() procedure is to consume any output generated from the QS_onTestSetup() callback in the test fixture, invoked in the next step [15]. Note also that QS_onTestSetup() runs in all tests, including NORESET tests.

[15] The test fixture calls the QS_onTestSetup() callback function in the Target.

[16] The test script proceeds with commands defined after the test() command. Processing of these commands is explained in sections Simple Commands and Complex Commands.

Pausing the Reset

As explained in the previous section, the initialization RTC step in the test fixture extends throughout main(), from the beginning till the final call to QF_run(). The test fixture is unable to process any commands from the test script until the end of this long RTC step, which can limit the flexibility of the test fixture.

For example, consider the test fixture in the DPP example for QUTest (directory qpc/examples/qutest/dpp/test). This test fixture reuses the main() function from the actual DPP application, which starts multiple active objects. To enable unit testing of a specific single active object, it would be very convenient if the test script could set up the QS Local Filter for the chosen active object component. Such a local filter would then select only the output from that AO, starting with its initialization. The problem is that such a local filter requires the QS object dictionary to have already been transmitted to QSPY. On the other hand, the local filter needs to take effect before the AOs are started. In other words, the initialization RTC step needs to be split into shorter pieces: after sending the dictionaries, but before starting the active objects.

For such situations, QUTest provides the QS_TEST_PAUSE() macro, which pauses the execution of an RTC step and enters an event loop within the test fixture. This, in turn, allows the test fixture to process any commands from the test script, before the RTC continues to completion (or to another QS_TEST_PAUSE(), if needed).

The following test fixture code illustrates the use of the QS_TEST_PAUSE() macro:

int main(int argc, char *argv[]) {
    static QEvt const *tableQueueSto[N_PHILO];
    static QEvt const *philoQueueSto[N_PHILO][N_PHILO];
    uint8_t n;
    ~ ~ ~
    QF_init();            /* initialize the framework and the underlying RT kernel */
    BSP_init(argc, argv); /* NOTE: calls QS_INIT() */

    /* object dictionaries... */
    QS_OBJ_DICTIONARY(AO_Table);
    QS_OBJ_DICTIONARY(AO_Philo[0]);
    QS_OBJ_DICTIONARY(AO_Philo[1]);
    QS_OBJ_DICTIONARY(AO_Philo[2]);
    ~ ~ ~
    /* pause execution of the test and wait for the test script to continue */
[1] QS_TEST_PAUSE();

    /* initialize publish-subscribe... */
    QF_psInit(subscrSto, Q_DIM(subscrSto));

    /* initialize event pools... */
    QF_poolInit(smlPoolSto, sizeof(smlPoolSto), sizeof(smlPoolSto[0]));

    /* start the active objects... */
    Philo_ctor(); /* instantiate all Philosopher active objects */
    for (n = 0U; n < N_PHILO; ++n) {
        QACTIVE_START(AO_Philo[n],             /* AO to start */
                      (n + 1),                 /* QP priority of the AO */
                      philoQueueSto[n],        /* event queue storage */
                      Q_DIM(philoQueueSto[n]), /* queue length [events] */
                      (void *)0,               /* stack storage (not used) */
                      0U,                      /* size of the stack [bytes] */
                      (QEvt *)0);              /* initialization event */
    }
    ~ ~ ~
[2] return QF_run(); /* run the QF application */
}

[1] The QS_TEST_PAUSE() macro pauses the initialization RTC after producing QS dictionaries, but before starting active objects.

[2] The QF_run() function completes the initialization RTC.

The following sequence diagram shows the details of pausing a test. The explanation section following the diagram clarifies the interesting points (labeled with [xx]):

Pausing a test

[1] The target reset proceeds as before and produces the QS_TARGET_INFO trace record.

[2] At some point, however, the test fixture executes QS_TEST_PAUSE(), which sends the QS_TEST_PAUSED record to the test script. At this point the test fixture enters an event loop, so this part of the initialization RTC step completes and the test fixture becomes responsive to commands.

[3] At this point, the test script must be explicitly expecting QS_TEST_PAUSED by means of the expect_pause() command.
The best place to put expect_pause() is the on_reset() callback function, which should be defined in test scripts corresponding to test fixtures that call QS_TEST_PAUSE().

[4] The on_reset() callback can now execute commands that are processed immediately in the test fixture.

[5] Eventually the on_reset() callback releases the test fixture from the pause by executing the continue_test() command. This command sends ::QS_RX_TEST_CONTINUE to the test fixture.

[6] Upon reception of ::QS_RX_TEST_CONTINUE, the test fixture continues the initialization in another RTC step.

[7] The on_reset() callback ends and the test script sends ::QS_RX_TEST_SETUP to the Target.

[8] The test proceeds as before.

The following test script code illustrates the use of the expect_pause() and continue_test() commands:

def on_reset():
[1] expect_pause()
[2] glb_filter(GRP_SM)
    loc_filter(OBJ_SM_AO, "AO_Philo<2>")
[3] continue_test()
[4] expect("===RTC===> St-Init Obj=AO_Philo<2>,State=QHsm_top->Philo_thinking")
    expect("===RTC===> St-Entry Obj=AO_Philo<2>,State=Philo_thinking")
    expect("@timestamp Init===> Obj=AO_Philo<2>,State=Philo_thinking")
    glb_filter(GRP_SM_AO, GRP_UA)
    current_obj(OBJ_SM_AO, "AO_Philo<2>")

NORESET Tests

In some tests, you specifically don't want to reset the Target, but rather you want to pick up exactly where the previous test left off. For example, you wish to test a specific state of your state machine, which you reached by dispatching or posting a specific sequence of events to it in the previous tests.

For such tests, you can suppress the target reset by adding the NORESET option to the test() command. Such tests are called NORESET Tests.

Note
A NORESET test is not allowed as the first test of a test group and also not immediately after an Assertion Test.

The following sequence diagram shows the details of this process. The explanation section following the diagram clarifies the interesting points (labeled with [xx]):

NORESET Test

[0] The test fixture is done processing commands from any previous test(s) and is running an event loop.

[1] The test script executes the test(..., NORESET) command.

[2] The test(..., NORESET) command sends the ::QS_RX_TEST_SETUP command to the test fixture.

[3] The test fixture processes ::QS_RX_TEST_SETUP immediately, because it is running the event loop.

[4] The test fixture responds with Trg-Ack ::QS_RX_TEST_SETUP.

[5] Upon reception of Trg-Ack ::QS_RX_TEST_SETUP, the test script attempts to execute the on_setup() callback. If on_setup() is defined in the script, it runs at this time.
The main purpose of the on_setup() callback is to consume any output generated from the QS_onTestSetup() callback in the test fixture invoked in the next step [6].

[6] The test fixture calls the QS_onTestSetup() callback function in the Target.

[7] The test script proceeds with commands defined after the test() command. Processing of these commands is explained in sections Simple Commands and Complex Commands.

Assertion Tests

The use of assertions in embedded code (and especially in safety-critical code) is considered one of the best practices and the QP frameworks provide assertion facilities specifically designed for deeply embedded systems.

Assuming that you are using QP assertions in your code, an assertion failure can happen during a unit test. When it happens, the test fixture will produce the non-maskable QS_ASSERT_FAIL trace record. When this record arrives during a regular test, it will not be expected, so the test will fail. This is exactly what you want, because a failing assertion represents an error which needs to be fixed.

Note
The QP assertion handler Q_onAssert() is defined in the QUTest Stub. This assertion handler is instrumented to produce the QS_ASSERT_FAIL trace record.

However, sometimes you specifically want to test the assertion code itself, so you intentionally force an assertion in your test. In that case an assertion failure is expected and the test passes when the assertion fails. Such tests are called "Assertion Tests" and QUTest™ has been specifically designed to support such tests.

Here is an example of an "Assertion Test":

test("TIMEOUT->Philo_thinking (ASSERT)")
probe("QActive_post_", 1)
dispatch("TIMEOUT_SIG")
expect("@timestamp Disp===> Obj=AO_Philo<2>,Sig=TIMEOUT_SIG,State=Philo_thinking")
expect("===RTC===> St-Exit Obj=AO_Philo<2>,State=Philo_thinking")
expect("@timestamp TstProbe Fun=QActive_post_,Data=1")
expect("@timestamp =ASSERT= Mod=qf_actq,Loc=110")

As you can see, the test ends with an explicit expectation of an assertion failure: expect('@timestamp =ASSERT= Mod=qf_actq,Loc=...'). This is very easy and natural in QUTest.

Note
The only special treatment required here is that a test immediately following such an "Assertion Test" must necessarily reset the Target (it cannot be a NORESET-Test).

Categories of QSPY Output

To write effective test scripts you need to understand the main categories of QSPY output, which are illustrated in the picture below:

Categories of QSPY output

[0] Information output generated internally by QSPY. This output is not sent to test scripts.

[1] Dictionary trace records generated by the Target. This output is not forwarded to test scripts.

[2] Acknowledgement trace records generated by the Target. This output is forwarded to test scripts, but is checked automatically and implicitly by the test commands.

[3] Trace records generated by the Target. This output is forwarded to test scripts and must be checked explicitly by test expectations.

Simple Commands

Simple test script commands do not produce any output from the Target, except for the Trg-Ack (acknowledgement). Examples of <SIMPLE-COMMAND> include glb_filter(), loc_filter() and current_obj().

Simple command processing

[1] A test script sends a <SIMPLE-COMMAND> to the test fixture.

[2] The test fixture receives the command and immediately starts processing it.

[3] Processing of a command triggers an RTC step and produces only the Trg-Ack <SIMPLE-COMMAND> (acknowledgement of the specific <SIMPLE-COMMAND>).

[4] Immediately after sending the <SIMPLE-COMMAND>, the test script enters an implicit expect state, in which it waits for the Trg-Ack <SIMPLE-COMMAND> output from the Target. The processing of the <SIMPLE-COMMAND> ends when the next output received from the Target exactly matches the expected output.
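This implicit-expect behavior can be modeled in a few lines of plain Python (a toy illustration, not qutest internals):

```python
def simple_command(name, target_outputs):
    """Toy model: a simple command implicitly waits for exactly its Trg-Ack."""
    expected = "Trg-Ack " + name
    actual = target_outputs.pop(0)   # next output received from the Target
    if actual != expected:
        raise AssertionError("unexpected output: " + actual)
    return actual

# The Target acknowledges each simple command and produces nothing else:
outputs = ["Trg-Ack QS_RX_GLB_FILTER", "Trg-Ack QS_RX_LOC_FILTER"]
print(simple_command("QS_RX_GLB_FILTER", outputs))  # Trg-Ack QS_RX_GLB_FILTER
print(simple_command("QS_RX_LOC_FILTER", outputs))  # Trg-Ack QS_RX_LOC_FILTER
```

Any output other than the expected acknowledgement would fail the test, which is why simple commands need no explicit expect() calls in the script.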

Complex Commands

Complex test script commands might produce some output from the Target, not just the Trg-Ack (acknowledgement). Examples of <COMPLEX-COMMAND> include command(), dispatch(), post() and tick().

Complex command processing

[1] A test script sends a <COMPLEX-COMMAND> to the test fixture.

[2] The test fixture receives the command and immediately starts processing it.

[3] Processing of a command triggers an RTC step; the first output produced is the Trg-Ack <COMPLEX-COMMAND> (acknowledgement of the specific <COMPLEX-COMMAND>).

[4] The <COMPLEX-COMMAND> must be followed in the test script by the explicit expect() commands that consume any output produced by the command.

[5-6] The test fixture produces some output.

[7] Each such output is consumed by the matching expect() command.

[8] The test fixture sends the additional QS record Trg-Done <COMPLEX-COMMAND>, which explicitly delimits the output from this particular command.

[9] The test script must consume the Trg-Done <COMPLEX-COMMAND> record by an explicit expect() command.
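The script side of this pattern can be modeled in plain Python (a toy illustration, not qutest internals): the Trg-Ack is checked implicitly by the command itself, while every remaining output, up to and including Trg-Done, must be matched by an explicit expectation.

```python
def run_expectations(target_outputs, expectations):
    """Toy model: every Target output, including Trg-Done, needs one expect()."""
    for expected in expectations:
        actual = target_outputs.pop(0)
        if actual != expected:
            raise AssertionError("mismatch: {!r} != {!r}".format(actual, expected))
    if target_outputs:
        raise AssertionError("unconsumed Target output remains")

# Hypothetical output of a complex command such as dispatch(),
# after the implicitly checked Trg-Ack:
outputs = [
    "@timestamp Disp===> Sig=MY_SIG",  # must be consumed by an explicit expect()
    "Trg-Done QS_RX_EVENT",            # ...and so must the Trg-Done delimiter
]
run_expectations(outputs, [
    "@timestamp Disp===> Sig=MY_SIG",
    "Trg-Done QS_RX_EVENT",
])
print("all output consumed")
```

A missing or mismatched expectation, or leftover output, fails the run, mirroring how qutest fails a test with an unexpected or unconsumed trace record.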

Attention
Any events posted inside the Target in the course of processing a command (Simple or Complex) are also handled in the same RTC step. This extends the RTC step until all event queues of all Active Objects inside the Target are empty.
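This extension of the RTC step can be sketched as a toy model (plain Python, not Target code): events posted while handling the command, or while handling those events, are all drained before the step ends.

```python
from collections import deque

def rtc_step(initial_events, handler):
    """Toy model: one RTC step drains ALL queued events, including any
    events that handling an event posts back into the queue."""
    queue = deque(initial_events)
    processed = []
    while queue:                     # the step extends until the queue is empty
        evt = queue.popleft()
        processed.append(evt)
        queue.extend(handler(evt))   # the handler may post further events
    return processed

# Handling "CMD" posts "E1"; handling "E1" posts "E2"; "E2" posts nothing:
posts = {"CMD": ["E1"], "E1": ["E2"]}
order = rtc_step(["CMD"], lambda e: posts.get(e, []))
print(order)   # ['CMD', 'E1', 'E2'] -- all handled in the same RTC step
```

Consequently, all output from such chained event handling must be expected before the Trg-Done record of the originating command.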
