The Interaction Design Viewpoint provides an analysis of interactions among QP Framework elements. This design viewpoint frames the following concerns:
QP Application startup sequence
[0]
The QP Application startup sequence occurs in the main() function.
[1]
The Active Object framework layer is initialized, which also initializes the underlying real-time kernel.
[2]
The Board Support Package (BSP) initialization sets up the hardware, software tracing (if used), etc.
[3]
If this QP Application uses publish-subscribe, it is initialized with a call to QP::QActive::psInit(). This step requires storage for the subscriber lists (subscrSto), which is allocated statically in this case.
[4]
The QP Application initializes all event pools that it uses by repeated calls to QP::QF::poolInit(). This step requires storage for each pool (medPoolSto), which is allocated statically in this case.
[5]
The QP Application instantiates (by calling the constructor) and starts all Active Objects by calling QP::QActive::start(). Each Active Object is assigned a unique priority and provided with an event-queue buffer and a stack (if required). This step also executes the top-most initial transition in the Active Object's state machine (Figure SDS-START [5B]), which might involve some interaction with the BSP to initialize the hardware controlled by this Active Object (Figure SDS-START [5C]). This step requires the event-queue buffer (aoA_QueueSto) and the stack (if required by the underlying kernel) for each started Active Object.
[6]
The QP Application transfers control to the QF Active Object Framework by calling QP::QF::run(). QP::QF::run() begins by calling the QP::QF::onStartup() callback to configure and enable interrupts (Figure SDS-START [6A]), as the system is only now ready to receive them. After this, the QF Framework starts execution of Active Objects (Figure SDS-START [6B]), which typically involves transferring control to the underlying real-time kernel.
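To make the startup sequence concrete, the following is a minimal sketch of a main() function, assuming a hypothetical QP/C++ Application with a single Active Object accessed through the opaque pointer AO_Blinky, a hypothetical event class BlinkyEvt, a hypothetical BSP::init() function, and the statically allocated storage named above (subscrSto, medPoolSto, aoA_QueueSto). The exact signatures of psInit(), poolInit(), and start() vary slightly between QP/C++ versions.

    #include "qpcpp.hpp"   // QP/C++ framework API
    #include "bsp.hpp"     // hypothetical Board Support Package interface
    #include "blinky.hpp"  // hypothetical application header (AO_Blinky, BlinkyEvt, MAX_PUB_SIG)

    int main() {
        QP::QF::init();    // [1] initialize the framework and the underlying real-time kernel
        BSP::init();       // [2] initialize the BSP (hardware, software tracing, etc.)

        // [3] initialize publish-subscribe with statically allocated subscriber lists
        static QP::QSubscrList subscrSto[MAX_PUB_SIG];
        QP::QActive::psInit(subscrSto, Q_DIM(subscrSto));

        // [4] initialize the event pool(s) with statically allocated pool storage
        static QF_MPOOL_EL(BlinkyEvt) medPoolSto[20];
        QP::QF::poolInit(medPoolSto, sizeof(medPoolSto), sizeof(medPoolSto[0]));

        // [5] instantiate and start the Active Object(s): unique priority,
        //     event-queue buffer, and per-thread stack (if required by the kernel)
        static QP::QEvt const *aoA_QueueSto[10];
        AO_Blinky->start(1U,                                // unique QF priority
                         aoA_QueueSto, Q_DIM(aoA_QueueSto), // event-queue buffer
                         nullptr, 0U);                      // stack (none for the built-in kernel)

        return QP::QF::run();  // [6] transfer control to QF; QF::onStartup() enables interrupts
    }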
QP event posting sequence
[1]
Event posting begins with incrementing the reference counter of the event (for mutable events dynamically allocated from event pools), which happens within a critical section.
[2]
The event is posted to the event queue of the recipient Active Object. The default behavior of the queue is to assert internally that the queue does not overflow and can accept the event (this is part of the event delivery guarantee).
[2A]
Assuming that the recipient Active Object didn't have events before, adding an event to its queue makes the Active Object ready to run. However, the Active Object is not assigned the CPU just yet because the ISR has a higher priority and continues to completion. (NOTE: the same behavior would occur if the event was posted from an Active Object of a higher priority than the recipient.)
[3]
Only after the ISR completes is the received event dispatched to the internal state machine of the recipient Active Object. The virtual dispatch() function runs to completion (RTC) in the thread context of the recipient Active Object.
[4]
After the RTC step, the event is garbage-collected, which decrements the reference counter (for a mutable dynamic event). The event is recycled back to the original event pool only when the reference count drops to zero.
[1]
As before in Figure SDS-POST1, event posting begins with incrementing the reference counter of the event (for mutable events dynamically allocated from event pools), which happens within a critical section.
[2]
As before, the event is posted to the event queue of the recipient Active Object.
[2A]
However, assuming that the system executes under a preemptive, priority-based scheduler, the recipient Active Object immediately preempts the sender Active Object. This happens because the recipient has a higher priority than the sender and a preemptive scheduler must always give control to the highest-priority Active Object ready to run.
[3]
The recipient Active Object dispatches the event to its internal state machine.
[4]
The recipient Active Object calls the garbage collector, which decrements the reference count and recycles the event back to the original event pool.
[5]
The CPU is assigned back to the preempted sender Active Object. The sender completes its RTC step.
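The application-level side of event posting can be sketched as follows, assuming a hypothetical mutable event class TemperatureEvt, a hypothetical signal TEMPERATURE_SIG, and a hypothetical recipient accessed through the opaque pointer AO_Thermostat. The Q_NEW() and QACTIVE_POST() macros are the standard QP/C++ allocation and posting interface, although event-class conventions differ somewhat between QP/C++ versions. The dispatch to the recipient's state machine and the subsequent garbage collection are performed by the framework in the recipient's thread, so they require no application code.

    #include "qpcpp.hpp"  // QP/C++ framework API

    enum : QP::QSignal { TEMPERATURE_SIG = QP::Q_USER_SIG };  // hypothetical signal

    // hypothetical mutable event, allocated from an event pool at run time
    struct TemperatureEvt : public QP::QEvt {
        float celsius;
    };

    extern QP::QActive * const AO_Thermostat;  // hypothetical recipient Active Object

    // producer side, e.g., called from an ISR (steps [1]-[2] of Figure SDS-POST1)
    void postTemperature(float const celsius) {
        // allocate the event from an event pool; QF manages its reference counter
        TemperatureEvt *te = Q_NEW(TemperatureEvt, TEMPERATURE_SIG);
        te->celsius = celsius;

        // post to the recipient's queue (FIFO); the queue asserts internally on overflow
        QACTIVE_POST(AO_Thermostat, te, nullptr);  // last argument identifies the sender (tracing only)
        // after posting, the producer must no longer access 'te'
    }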
QP event publishing sequence
[1]
Event publishing begins with incrementing the reference counter of the event (for mutable events dynamically allocated from event pools), which happens within a critical section.
[2]
Next, the publish operation determines the highest-priority subscriber and selectively locks the scheduler up to that priority. (NOTE: selective scheduler locking is supported by modern real-time kernels. Older kernels support only indiscriminate, global scheduler locking).
[3]
The event is posted to the highest-priority subscriber Active Object, which increments the reference counter of the event (per SDS_QP_POST).
[3A]
Even if the priority of that subscriber is higher than the sender's, preemption does not happen because of the scheduler lock.
[4]
The event is posted to the medium-priority subscriber Active Object, which increments the reference counter of the event (per SDS_QP_POST).
[5]
The event is posted to the low-priority subscriber Active Object, which increments the reference counter of the event (per SDS_QP_POST).
[4A-5A]
The event is only posted, but the recipient Active Objects don't run yet.
[6]
The publish operation unlocks the scheduler back to the level in effect before the publishing.
[7]
The highest-priority recipient Active Object immediately preempts the lower-priority publisher and dispatches the published event to its state machine.
[8]
During the processing, the highest-priority Active Object posts another event e1 to ActiveObjB.
[8A]
The posted event e1 is only enqueued, but is not processed.
[9]
The highest-priority Active Object continues and garbage-collects the published event. This only decrements its reference counter, but does not recycle the published event.
[10]
The event publisher garbage-collects the original event. This decrements the reference counter incremented in step [1]. Step [10] prevents an event leak in case there are no subscribers to that event.
[11]
ActiveObjB dispatches the published event to its state machine.
[12]
ActiveObjB garbage-collects the published event. This decrements the reference counter, but does not recycle the event yet.
[13]
ActiveObjB dispatches the posted event e1 to its state machine.
ActiveObjB processes events in the expected order (based on cause and effect): published event e followed by posted event e1 (caused by the published event e).
[14]
ActiveObjB garbage-collects the posted event e1, which decrements its reference counter and recycles the event.
[15]
The lowest-priority ActiveObjA dispatches the published event to its state machine.
[16]
The lowest-priority ActiveObjA garbage-collects the original event, which decrements its reference counter. This time, the counter drops to zero, so the event is finally recycled to its original event pool.
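The application-level side of publishing can be sketched as follows, assuming a hypothetical published event class AlarmEvt and a hypothetical signal ALARM_SIG. Subscribers call QP::QActive::subscribe() (typically in their top-most initial transitions), while the publisher uses the QACTIVE_PUBLISH() macro (named QP::QF::PUBLISH() in older QP/C++ releases); the framework then performs the multicasting and reference counting described in the steps above.

    #include "qpcpp.hpp"  // QP/C++ framework API
    #include <cstdint>

    enum : QP::QSignal { ALARM_SIG = QP::Q_USER_SIG };  // hypothetical published signal

    // hypothetical mutable event delivered to every subscriber of ALARM_SIG
    struct AlarmEvt : public QP::QEvt {
        std::uint8_t level;
    };

    // subscriber side: each subscriber Active Object registers its interest once,
    // e.g., in its top-most initial transition:
    //     subscribe(ALARM_SIG);   // QP::QActive::subscribe()

    // publisher side (step [1] above): the publisher only allocates, fills, and
    // publishes; the framework posts the event to each subscriber and manages
    // the reference counter, so the publisher never needs to know the subscribers
    void publishAlarm(QP::QActive * const publisher, std::uint8_t const level) {
        AlarmEvt *ae = Q_NEW(AlarmEvt, ALARM_SIG);
        ae->level = level;
        QACTIVE_PUBLISH(ae, publisher);  // QP::QF::PUBLISH(ae, publisher) in older QP/C++
        // after publishing, the publisher must no longer access 'ae'
    }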
[1]
As before, event publishing begins with incrementing the reference counter of the event (for mutable events dynamically allocated from event pools), which happens within a critical section.
[2]
The publish operation determines the highest-priority subscriber and posts the event to that Active Object. This increments the reference counter of the event (per SDS_QP_POST).
[3]
However, this time the scheduler is NOT locked, so the highest-priority Active Object immediately preempts the lower-priority publisher.
[4]
During the processing, the highest-priority Active Object posts another event e1 to ActiveObjB.
[4A]
The posted event e1 is only enqueued, but is not processed.
[5]
The highest-priority Active Object continues and garbage-collects the published event. This only decrements its reference counter, but does not recycle the published event because its reference counter has been incremented in step [1].
[6]
The preemptive scheduler resumes the preempted publisher, which posts the original event to ActiveObjB.
ActiveObjB has two events, which are enqueued in the following order: e1 followed by e. This is an unexpected order because event e1 is caused by event e, yet it precedes event e. This unexpected re-ordering of events is the result of NOT locking the scheduler.
[6A]
The posted event is only enqueued, but is not processed.
[7]
The publisher posts the original event to the lowest-priority subscriber ActiveObjA.
[7A]
The posted event is only enqueued, but is not processed.
[8]
The event publisher garbage-collects the original event. This decrements the reference counter incremented in step [1]. Step [8] prevents an event leak in case there are no subscribers to that event.
[9]
The medium-priority ActiveObjB dispatches the event e1 to its internal state machine.
[10]
The medium-priority ActiveObjB garbage-collects event e1, which causes recycling of that event.
[11]
The medium-priority ActiveObjB dispatches the event e to its internal state machine.
[12]
The medium-priority ActiveObjB garbage-collects the published event e, which decrements its reference counter but does not recycle the event yet.
[13]
The lowest-priority ActiveObjA dispatches the published event e to its internal state machine.
[14]
The lowest-priority ActiveObjA garbage-collects the published event e, which is no longer referenced, so it is recycled.
As a result of the re-ordering described in step [6], the posted event e1 is processed before the published event e by ActiveObjB in this scenario.
The zero-copy event management is designed to be intuitive and transparent to the application-level code. However, for the zero-copy event management abstraction to behave exactly like true event copying, the QP/C++ Application needs to obey specific event ownership rules, similar to the rules of working with objects allocated with the C++ operator new and summarized in the life cycle diagram of a mutable event (see Figure SDS-EVT-LIFE). In exchange, the QP/C++ Framework can safely and deterministically deliver mutable events with hard real-time performance, which does not complicate the RMS/RMA method and is superior to the approach of copying entire events.
Mutable event ownership rules
Figure SDS-EVT-LIFE illustrates the mutable event life cycle and possible transfers of ownership rights to the event:
[0]
All mutable events are initially owned by QP/C++ Framework.
[1]
An event producer can gain ownership of a new event only by allocating it. At this point, the producer gains ownership rights with permission to write to the event. Indeed, the purpose of this stage in the mutable event's life cycle is to initialize the event and fill it with data. The event producer may keep the event as long as it needs. For example, the producer (e.g., an ISR) might fill the event with data over many invocations. Eventually, however, the producer must transfer the ownership back to the framework.
[2a,2b,2c]
Typically, the producer posts [2a] or publishes [2b] the event. As a special case, the producer might decide that the event should not be used after all, in which case the producer must explicitly recycle [2c] the event. After any of these three operations, the producer immediately loses ownership of the event and can no longer access it. In particular, it is illegal to post, publish, or recycle the event again.
[3]
The recipient Active Object gains ownership of the current event upon the start of the RTC step. This time, the Active Object gains only read-only permission to the current event.
[4a,4b]
During the RTC step, the recipient Active Object is allowed to re-post [4a] or re-publish [4b] the current event any number of times without losing ownership of the event.
[5a]
As a special case, the recipient Active Object may defer the current event. Event deferral extends the read-only ownership rights beyond the current RTC step.
[5b]
Eventually, however, the deferred event must be recalled, which self-posts the event into the Active Object's event queue (using the LIFO policy). Recalling ends the ownership of the original deferred event.
[6]
The end of the RTC step terminates the ownership of the current event. The Active Object cannot use the event in any way past the RTC step. In particular, if any data from that event is needed in the future, the QP/C++ Application must save that data (typically in attributes inside the Active Object).
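The deferral and recall steps [5a] and [5b] can be illustrated with the following sketch of a hypothetical Server Active Object that defers REQUEST_SIG events while it is busy. The defer() and recall() operations and the QP::QEQueue used for deferral are the standard QP/C++ facilities; the state handlers are shown in the classic static-handler style, whose details (macros and signatures) differ between QP/C++ versions.

    #include "qpcpp.hpp"  // QP/C++ framework API

    enum : QP::QSignal {      // hypothetical signals
        REQUEST_SIG = QP::Q_USER_SIG,
        DONE_SIG
    };

    // hypothetical Active Object that defers REQUEST_SIG while busy
    class Server : public QP::QActive {
    private:
        QP::QEQueue m_deferredQueue;       // native QP queue used only for deferral
        QP::QEvt const *m_deferredSto[5];  // statically sized deferral buffer

    public:
        Server() : QActive(Q_STATE_CAST(&Server::initial)) {
            m_deferredQueue.init(m_deferredSto, Q_DIM(m_deferredSto));
        }

    protected:
        static QP::QState initial(Server * const me, QP::QEvt const * const e) {
            (void)e;
            return Q_TRAN(&Server::idle);
        }
        static QP::QState idle(Server * const me, QP::QEvt const * const e) {
            switch (e->sig) {
                case QP::Q_ENTRY_SIG:
                    // [5b] recall one deferred event (if any): it is self-posted
                    // LIFO into this AO's queue; ownership of the original ends
                    (void)me->recall(&me->m_deferredQueue);
                    return Q_HANDLED();
                case REQUEST_SIG:
                    // ... start processing the request ...
                    return Q_TRAN(&Server::busy);
            }
            return Q_SUPER(&QP::QHsm::top);
        }
        static QP::QState busy(Server * const me, QP::QEvt const * const e) {
            switch (e->sig) {
                case REQUEST_SIG:
                    // [5a] defer the current event: read-only ownership is
                    // extended beyond the current RTC step
                    (void)me->defer(&me->m_deferredQueue, e);
                    return Q_HANDLED();
                case DONE_SIG:
                    return Q_TRAN(&Server::idle);
            }
            return Q_SUPER(&QP::QHsm::top);
        }
    };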