book wavedrom
`book.toml` (excerpt):

```toml
title = "Real-Time Interrupt-driven Concurrency"

[preprocessor.mermaid]
command = "mdbook-mermaid"

[preprocessor.wavedrom]
command = "mdbook-wavedrom"

[output.html]
git-repository-url = "https://github.com/rtic-rs/cortex-m-rtic"
git-repository-icon = "fa-github"
additional-js = ["mermaid.min.js", "mermaid-init.js", "wavedrom.min.js", "wavedrome-default.js"]
```

The RTIC framework takes its outset from real-time systems research at Luleå University of Technology.
[Stack Resource Policy (SRP)][SRP] based concurrency and resource management is at the heart of the RTIC framework. The SRP model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single-core scheduling. To name a few:

- preemptive, deadlock and race-free scheduling
- resource efficiency
  - tasks execute on a single shared stack
  - tasks run-to-completion with wait-free access to shared resources
### Example
Assume two tasks `t1` (with priority `p(t1) = 2`) and `t2` (with priority `p(t2) = 4`) both accessing the shared resource `R`. The static ceiling of `R` is 4 (computed from `𝝅(R) = max(p(t1) = 2, p(t2) = 4) = 4`).
A graph representation of the example:

```mermaid
graph LR
    t1["p(t1) = 2"] --> R
    t2["p(t2) = 4"] --> R
    R["𝝅(R) = 4"]
```
## Example
Assume the task model above. Starting from an idle system, 𝜫 is 0 (no task is holding any resource). Assume that `t1` is requested for execution; it will immediately be scheduled. Assume that `t1` claims (locks) the resource `R`. During the claim (lock of `R`) any request for `t2` will be blocked from starting (by 𝜫 = `max(𝝅(R) = 4) = 4`, `p(t2) = 4`, thus SRP scheduling condition 1 is not met).
## Mapping
The mapping of static priority SRP based scheduling to the Cortex-M hardware is straightforward (a conceptual sketch follows the list):

- each task `t` is mapped to an interrupt vector index `i` with a corresponding function `v[i].fn = t` and given the static priority `v[i].priority = p(t)`.
- the current system ceiling is mapped to the `BASEPRI` register or implemented through masking the interrupt enable bits accordingly.
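To make the bookkeeping concrete, here is a conceptual Rust view of one vector-table entry (illustration only, not RTIC or `cortex-m` API); the `Enabled`/`Pended` columns of the NVIC snapshot below correspond to the last two fields:

```rust
/// Conceptual model of one interrupt vector entry (illustration only).
struct VectorEntry {
    handler: fn(), // v[i].fn = t: the task body installed at index i
    priority: u8,  // v[i].priority = p(t): fixed at compile time
    enabled: bool, // NVIC enable bit for this vector
    pended: bool,  // NVIC pending bit; pending the interrupt requests the task
}
```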
## Example
For the running example, a snapshot of the ARM Cortex-M [Nested Vectored Interrupt Controller (NVIC)][NVIC] may look like this:

| Index | Fn  | Priority | Enabled | Pended |
| ----- | --- | -------- | ------- | ------ |
| 0     | t1  | 2        | true    | true   |
| 1     | t2  | 4        | true    | false  |

[NVIC]: https://developer.arm.com/documentation/ddi0337/h/nested-vectored-interrupt-controller/about-the-nvic
(As discussed later, the assignment of interrupt and exception vectors is up to the user.)
A resource claim (lock) will change the current system ceiling (𝜫) and can be implemented as a *named* critical section (see the sketch after the steps):
- old_ceiling = 𝜫, 𝜫 = 𝝅(r)
- execute code within critical section
- 𝜫 = old_ceiling
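As a minimal sketch, assuming the `cortex-m` crate's `basepri` register API and eliding the logical-to-hardware priority translation that RTIC's generated code performs:

```rust
use cortex_m::register::basepri;

/// Sketch of a named critical section over `BASEPRI` (not RTIC's actual
/// generated code; the inverted hardware priority encoding is elided).
fn lock<T>(ceiling: u8, f: impl FnOnce() -> T) -> T {
    let old = basepri::read();          // old_ceiling = 𝜫
    unsafe { basepri::write(ceiling) }; // 𝜫 = 𝝅(r)
    let r = f();                        // execute code within critical section
    unsafe { basepri::write(old) };     // 𝜫 = old_ceiling
    r
}
```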
This amounts to a resource protection mechanism requiring only two machine instructions on entry and one on exit of the critical section for managing the `BASEPRI` register. For architectures lacking `BASEPRI`, we can implement the system ceiling through a set of machine instructions for disabling/enabling interrupts on entry/exit of the named critical section. The number of machine instructions varies with the number of mask registers that need to be updated (a single machine operation can operate on up to 32 interrupts, so for the M0/M0+ architecture a single instruction suffices). RTIC will determine the ceiling values and masking constants at compile time, thus all operations are, in Rust terms, zero-cost.
In this way RTIC fuses SRP based preemptive scheduling with a zero-cost hardware accelerated implementation, resulting in "best in class" guarantees and performance.
Asynchronous programming in various forms is gaining increased popularity and language support.
The Rust standard library provides collections for dynamically allocated data-structures (useful to manage execution contexts at run-time). However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is king!
RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on `#![feature(type_alias_impl_trait)]` (TAIT), which is undergoing stabilization (thus RTIC 2.0.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context, which in turn allows these contexts to be statically allocated.
From a modelling perspective `async/await` lifts the run-to-completion requirement of SRP. Each section of code between two yield points (`await`s) can be seen as an individual task. The compiler will reject any attempt to `await` while holding a resource (not doing so would break the strict LIFO requirement on resource usage under SRP).
So with the technical stuff out of the way, what does `async/await` bring to the RTIC table?
The answer is improved ergonomics! Consider the case where a task performs a sequence of requests and must await each result in order to progress. Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding to progress manually. Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`.
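For instance (a sketch; `dma_read` and `flash_write` are hypothetical async helpers), a single task can express the whole request sequence linearly:

```rust
// Hypothetical async software task: each `await` is a yield point/state,
// and the compiler generates the state machine (a `Future`) for us.
#[task(priority = 1)]
async fn pipeline(_cx: pipeline::Context) {
    let block = dma_read().await; // state 1: waiting for the DMA transfer
    flash_write(block).await;     // state 2: waiting for the flash driver
    // run-to-completion between the yield points above
}
```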
Rust `async`/`await` support is still incomplete and/or under active development (e.g., there is no stable way to express `async` closures, precluding use in iterator patterns). Nevertheless, Rust `async`/`await` is production ready and covers many common use cases.
An important property is that futures are composable, thus you can await either, all, or any combination of possible futures. This allows, e.g., timeouts and/or asynchronous errors to be promptly handled. For more details and examples see Section [todo].
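As a sketch of this composability, assuming the `rtic-monotonics` `Systick` API (`read_sensor` is a hypothetical future):

```rust
use rtic_monotonics::systick::*; // `Systick` and the fugit duration extensions

// Hypothetical task: await a sensor future, but give up after 100 ms.
#[task(priority = 1)]
async fn reader(_cx: reader::Context) {
    match Systick::timeout_after(100.millis(), read_sensor()).await {
        Ok(sample) => {
            // got a sample within the deadline
            let _ = sample;
        }
        Err(_timeout) => {
            // the deadline passed: handle the timeout promptly
        }
    }
}
```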
## RTIC the model
An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short, the `init` task runs before any other task and returns a set of initialized resources (`local` and `shared`). Tasks run preemptively based on their associated static priority; `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority).
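A minimal sketch of such a model (the device path, dispatcher, and interrupt names are placeholders):

```rust
#[rtic::app(device = some_pac, dispatchers = [EXTI1])] // `some_pac` is a placeholder
mod app {
    #[shared]
    struct Shared {
        counter: u32,
    }

    #[local]
    struct Local {}

    #[init] // runs before any other task, returns the initialized resources
    fn init(_cx: init::Context) -> (Shared, Local) {
        sw_task::spawn().ok(); // software tasks are spawned, not pended
        (Shared { counter: 0 }, Local {})
    }

    #[idle] // lowest priority: background work and/or sleep
    fn idle(_cx: idle::Context) -> ! {
        loop { /* e.g., wait-for-interrupt */ }
    }

    // hardware task, bound to the `EXTI0` interrupt
    #[task(binds = EXTI0, priority = 2, shared = [counter])]
    fn hw_task(mut cx: hw_task::Context) {
        cx.shared.counter.lock(|c| *c += 1);
    }

    // software task, run by the generated executor dispatched on `EXTI1`
    #[task(priority = 1, shared = [counter])]
    async fn sw_task(mut cx: sw_task::Context) {
        cx.shared.counter.lock(|c| *c += 1);
    }
}
```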
At compile time the task/resource model is analyzed under SRP and executable code is generated with the following outstanding properties:
- hardware task scheduling is performed directly by the hardware, and
- software task scheduling is performed by auto-generated async executors tailored to the application.
The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code incurs no additional overhead in comparison to a well-crafted hand-written implementation; thus, in Rust terms, RTIC offers a zero-cost abstraction to concurrency.
## RTIC Interrupt Driven Concurrency
We showcase the RTIC model on a set of increasingly complex examples.
### Example, Simple preemption
Assume a system with two *hardware* tasks, `t1` and `t2`, with priorities `p(t1) = 2` and `p(t2) = 4`. `t1` and `t2` are bound to the interrupts `INT1` and `INT2` respectively.
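Declared in RTIC this could look as follows (a sketch; the device crate is a placeholder, and `rtic::pend` is used here only to make `INT2` arrive while `t1` runs):

```rust
#[rtic::app(device = some_pac)] // placeholder device crate with INT1/INT2 vectors
mod app {
    use some_pac::Interrupt;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local) {
        (Shared {}, Local {})
    }

    #[task(binds = INT1, priority = 2)]
    fn t1(_cx: t1::Context) {
        // ... (A) t1 starts executing ...
        rtic::pend(Interrupt::INT2); // INT2 arrives: t2 preempts t1 (B)
        // ... (C) t1 resumes here once t2 has returned ...
    }

    #[task(binds = INT2, priority = 4)]
    fn t2(_cx: t2::Context) {
        // runs to completion at priority 4
    }
}
```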
A trace of the system might look like this:
The system is initially *idle* (no tasks running, thus the `Running Priority` is 0 in the figure). At time 1 `INT1` is pended (arrives) and `t1` is dispatched for execution by the hardware (A in the figure). During the execution of `t1`, `INT2` is pended at time 3. Since `INT2` has higher priority than the currently running interrupt handler (`INT1`, `Running Priority` 2 in the figure), `t2` is dispatched for execution, preempting the currently running task (B in the figure). At time 5, `t2` finishes and returns, allowing `t1` to resume execution (C in the figure).
The scheduling mechanisms are performed by the hardware without any overhead induced by the RTIC framework.
``` wavedrom
{
  head: {
    text: 'Two Hardware Tasks',
    tick: 0,
    every: 1
  },
  signal: [
    ['Pend',
      {
        name: 'INT2',
        wave: '0..H.l..',
        node: '...B....'
      },
      {
        name: 'INT1',
        wave: '0H.....l',
        node: '.A......'
      },
    ],
    {
      name: 'Running Priority',
      wave: '66.6.6.6',
      data: [0, 2, 4, 2, 0]
    },
    ['Vec',
      {
        name: 'INT2 (p = 4)',
        wave: '0..3.0..',
        node: '...D.E..',
        data: ['t2']
      },
      {
        name: 'INT1 (p = 2)',
        wave: '03.2.3.0',
        node: '.C...F..',
        data: ['t1', '---', 't1']
      },
    ],
  ],
  edge: [
    'A~>C A',
    'B->D B',
    'E->F C',
  ]
}
```
### Example, Resource locking
Now we assume that tasks `t1` and `t2` share a resource `R`. According to SRP we compute the ceiling value `𝝅(R) = max(p(t1) = 2, p(t2) = 4) = 4`.
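In RTIC terms (a sketch, in the style of the app skeleton above) `R` becomes a `shared` resource and each access goes through `lock`:

```rust
#[shared]
struct Shared {
    r: u32, // the shared resource R
}

// p(t1) = 2: locking `r` raises the system ceiling to 𝝅(R) = 4,
// so `t2` cannot be dispatched while the closure runs.
#[task(binds = INT1, priority = 2, shared = [r])]
fn t1(mut cx: t1::Context) {
    cx.shared.r.lock(|r| {
        *r += 1; // critical section L(R)
    });
}

// p(t2) = 4 = 𝝅(R): at the highest accessor priority the lock
// can compile down to direct access (no ceiling change needed).
#[task(binds = INT2, priority = 4, shared = [r])]
fn t2(mut cx: t2::Context) {
    cx.shared.r.lock(|r| *r += 1);
}
```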
A trace of the system might look like this:
Initially the system is idle and both the `System Ceiling` and the current `Running Priority` are set to 0. At time 1, task `t1` is dispatched as it has the highest priority among pended tasks and a priority higher than both the `System Ceiling` and the current `Running Priority`. At time 2 `t1` locks the resource `R` and the `System Ceiling` is raised to 4 (the maximum ceiling value of all locked resources). At time 3, `INT2` is pended. Task `t2` now has the highest priority among pended tasks and a priority higher than the current `Running Priority` (which is now 2). However, `t2` does NOT have higher priority than the `System Ceiling` (which is now 4), thus `t2` is blocked from dispatch (the grey area in the figure). `t1` continues to execute and releases the lock on `R` at time 4. The `System Ceiling` drops back to 0 (no locked resources at this point), which allows the pending task `t2` to be dispatched.
This way SRP guarantees that no two running tasks have concurrent access to the same shared resource. SRP also guarantees that once a task has started to execute, all locks will be granted without waiting for other tasks to progress.
For this specific trace `t2` is blocked 1 time unit. The response time for `t2` is 3 time units (the time from its arrival at time 3 until it finishes at time 6).
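In classical SRP response-time terms, this trace decomposes as `R(t2) = B(t2) + C(t2) = 1 + 2 = 3` time units, where `B(t2)` is the blocking incurred from `t1`'s critical section and `C(t2)` is `t2`'s own execution time.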
``` wavedrom
{
  head: {
    text: 'Two Hardware Tasks with a Shared Resource',
    tick: 0,
    every: 1
  },
  signal: [
    ['Pend',
      {
        name: 'INT2',
        wave: '0..H..l.',
        node: '...A...'
      },
      {
        name: 'INT1',
        wave: '0H.....l',
      },
    ],
    {
      name: 'System Ceiling',
      wave: '5.5.555.',
      data: [0, 4, 0, 4, 0]
    },
    {
      name: 'Running Priority',
      wave: '66..6.66',
      data: [0, 2, 4, 2, 0]
    },
    ['Vec',
      {
        name: 'INT2 (p = 4)',
        wave: '0..x340.',
        node: '....B...',
        data: ['t2', 'L(R)']
      },
      {
        name: 'INT1 (p = 2)',
        wave: '034.2.30',
        node: '.C...F.',
        data: ['t1', 'L(R)', '---', 't1']
      },
    ],
  ],
}
```
RTIC implements the SRP system ceiling by means of the BASEPRI register or by interrupt masking. In both cases the scheduling is performed directly by the hardware (the only overhead induced is due to the constant-time operations updating the BASEPRI/interrupt mask(s) on task dispatch and resource lock/unlock).
Furthermore, [SRP] comes with worst-case guarantees, allowing the worst-case response times for all possible traces of the system to be computed at compile time. Thus, unlike any mainstream RTOS, RTIC provides the means to implement systems with *hard* real-time requirements.
For documentation of older versions, see:
* v1.0.x: go [here](/1.0).