This commit is contained in:
Jorge Aparicio 2019-08-21 10:17:27 +02:00
parent 0e146f8d11
commit 07b2b4d830
43 changed files with 628 additions and 437 deletions

View file

@ -31,9 +31,7 @@ A concurrency framework for building real time systems.
- **Highly efficient memory usage**: All the tasks share a single call stack and
there's no hard dependency on a dynamic memory allocator.
- **All Cortex-M devices are supported**. The core features of RTFM are
supported on all Cortex-M devices. The timer queue is currently only supported
on ARMv7-M devices.
- **All Cortex-M devices are fully supported**.
- This task model is amenable to known WCET (Worst Case Execution Time) analysis
and scheduling analysis techniques. (Though we haven't yet developed Rust


@ -4,7 +4,7 @@
- [RTFM by example](./by-example.md)
- [The `app` attribute](./by-example/app.md)
- [Resources](./by-example/resources.md)
- [Tasks](./by-example/tasks.md)
- [Software tasks](./by-example/tasks.md)
- [Timer queue](./by-example/timer-queue.md)
- [Types, Send and Sync](./by-example/types-send-sync.md)
- [Starting a new project](./by-example/new.md)
@ -18,3 +18,5 @@
- [Ceiling analysis](./internals/ceilings.md)
- [Software tasks](./internals/tasks.md)
- [Timer queue](./internals/timer-queue.md)
- [Homogeneous multi-core support](./homogeneous.md)
- [Heterogeneous multi-core support](./heterogeneous.md)


@ -28,22 +28,23 @@ not required to use the [`cortex_m_rt::entry`] attribute.
Within the pseudo-module the `app` attribute expects to find an initialization
function marked with the `init` attribute. This function must have signature
`fn(init::Context) [-> init::LateResources]`.
`fn(init::Context) [-> init::LateResources]` (the return type is not always
required).
This initialization function will be the first part of the application to run.
The `init` function will run *with interrupts disabled* and has exclusive access
to Cortex-M and device specific peripherals through the `core` and `device`
fields of `init::Context`. Not all Cortex-M peripherals are available
in `core` because the RTFM runtime takes ownership of some of them -- for more
details see the [`rtfm::Peripherals`] struct.
to Cortex-M and, optionally, device specific peripherals through the `core` and
`device` fields of `init::Context`.
`static mut` variables declared at the beginning of `init` will be transformed
into `&'static mut` references that are safe to access.
[`rtfm::Peripherals`]: ../../api/rtfm/struct.Peripherals.html
The example below shows the types of the `core` and `device` variables and
showcases safe access to a `static mut` variable.
The example below shows the types of the `core` and `device` fields and
showcases safe access to a `static mut` variable. The `device` field is only
available when the `peripherals` argument is set to `true` (it defaults to
`false`).
``` rust
{{#include ../../../../examples/init.rs}}
@ -64,7 +65,7 @@ signature `fn(idle::Context) -> !`.
When present, the runtime will execute the `idle` task after `init`. Unlike
`init`, `idle` will run *with interrupts enabled* and it's not allowed to return
so it runs forever.
so it must run forever.
When no `idle` function is declared, the runtime sets the [SLEEPONEXIT] bit and
then sends the microcontroller to sleep after running `init`.
@ -84,21 +85,67 @@ The example below shows that `idle` runs after `init`.
$ cargo run --example idle
{{#include ../../../../ci/expected/idle.run}}```
## `interrupt` / `exception`
## Hardware tasks
Just like you would do with the `cortex-m-rt` crate you can use the `interrupt`
and `exception` attributes within the `app` pseudo-module to declare interrupt
and exception handlers. In RTFM, we refer to interrupt and exception handlers as
*hardware* tasks.
To declare interrupt handlers the framework provides a `#[task]` attribute that
can be attached to functions. This attribute takes a `binds` argument whose
value is the name of the interrupt to which the handler will be bound; the
function adorned with this attribute becomes the interrupt handler. Within the
framework these types of tasks are referred to as *hardware* tasks, because they
start executing in reaction to a hardware event.
The example below demonstrates the use of the `#[task]` attribute to declare an
interrupt handler. As with `#[init]` and `#[idle]`, local `static
mut` variables are safe to use within a hardware task.
``` rust
{{#include ../../../../examples/interrupt.rs}}
{{#include ../../../../examples/hardware.rs}}
```
``` console
$ cargo run --example interrupt
{{#include ../../../../ci/expected/interrupt.run}}```
{{#include ../../../../ci/expected/hardware.run}}```
So far all the RTFM applications we have seen look no different than the
applications one can write using only the `cortex-m-rt` crate. In the next
section we start introducing features unique to RTFM.
applications one can write using only the `cortex-m-rt` crate. From this point
we start introducing features unique to RTFM.
## Priorities
The static priority of each handler can be declared in the `task` attribute
using the `priority` argument. Tasks can have priorities in the range `1..=(1 <<
NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device`
crate. When the `priority` argument is omitted the priority is assumed to be
`1`. The `idle` task has a non-configurable static priority of `0`, the lowest
priority.
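As a quick sanity check, the valid priority range can be computed from `NVIC_PRIO_BITS`. The value `3` used here is a hypothetical figure (common on Cortex-M3/M4 parts); in a real application the constant comes from the `device` crate:

``` rust
// Hypothetical value for illustration; the real constant is defined
// by the `device` (peripheral access) crate.
const NVIC_PRIO_BITS: u32 = 3;

fn main() {
    // Tasks may use priorities 1..=(1 << NVIC_PRIO_BITS);
    // `idle` is fixed at priority 0, below all tasks.
    let max_priority = 1u32 << NVIC_PRIO_BITS;
    assert_eq!(max_priority, 8); // 3 priority bits -> priorities 1..=8
    println!("valid task priorities: 1..={}", max_priority);
}
```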
When several tasks are ready to be executed the one with *highest* static
priority will be executed first. Task prioritization can be observed in the
following scenario: an interrupt signal arrives during the execution of a low
priority task; the signal puts the higher priority task in the pending state.
The difference in priority results in the higher priority task preempting the
lower priority one: the execution of the lower priority task is suspended and
the higher priority task is executed to completion. Once the higher priority
task has terminated the lower priority task is resumed.
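The preemption rule described above boils down to a strict priority comparison. The following is a tiny host-side model, purely illustrative and not RTFM code:

``` rust
/// Toy model of fixed-priority preemptive scheduling (not RTFM code):
/// a newly pended task preempts the running one only if its static
/// priority is strictly higher.
fn preempts(pended: u8, running: u8) -> bool {
    pended > running
}

fn main() {
    // An interrupt pends a priority-2 task while a priority-1 task runs:
    // the priority-2 task runs to completion, then priority 1 resumes.
    assert!(preempts(2, 1));

    // Equal priorities never preempt each other; the later task waits.
    assert!(!preempts(2, 2));

    println!("preemption model OK");
}
```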
The following example showcases the priority based scheduling of tasks.
``` rust
{{#include ../../../../examples/preempt.rs}}
```
``` console
$ cargo run --example preempt
{{#include ../../../../ci/expected/preempt.run}}```
Note that the task `uart1` does *not* preempt task `uart2` because its priority
is the *same* as `uart2`'s. However, once `uart2` terminates, the execution of
task `uart1` is prioritized over `uart0`'s due to its higher priority. `uart0`
is resumed only after `uart1` terminates.
One more note about priorities: choosing a priority higher than what the device
supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to
limitations in the language the error message is currently far from helpful: it
will say something along the lines of "evaluation of constant value failed" and
the span of the error will *not* point to the problematic interrupt value --
we are sorry about this!


@ -36,8 +36,7 @@ $ cargo add lm3s6965 --vers 0.1.3
$ rm memory.x build.rs
```
3. Add the `cortex-m-rtfm` crate as a dependency and, if you need it, enable the
`timer-queue` feature.
3. Add the `cortex-m-rtfm` crate as a dependency.
``` console
$ cargo add cortex-m-rtfm --allow-prerelease


@ -1,22 +1,27 @@
## Resources
One of the limitations of the attributes provided by the `cortex-m-rt` crate is
that sharing data (or peripherals) between interrupts, or between an interrupt
and the `entry` function, requires a `cortex_m::interrupt::Mutex`, which
*always* requires disabling *all* interrupts to access the data. Disabling all
the interrupts is not always required for memory safety but the compiler doesn't
have enough information to optimize the access to the shared data.
The framework provides an abstraction to share data between any of the contexts
we saw in the previous section (task handlers, `init` and `idle`): resources.
The `app` attribute has a full view of the application thus it can optimize
access to `static` variables. In RTFM we refer to the `static` variables
declared inside the `app` pseudo-module as *resources*. To access a resource
from a context (`init`, `idle`, `interrupt` or `exception`) one must first
declare the resource in the `resources` argument of its attribute.
Resources are data visible only to functions declared within the `#[app]`
pseudo-module. The framework gives the user complete control over which context
can access which resource.
In the example below two interrupt handlers access the same resource. No `Mutex`
is required in this case because the two handlers run at the same priority and
no preemption is possible. The `SHARED` resource can only be accessed by these
two handlers.
All resources are declared as a single `struct` within the `#[app]`
pseudo-module. Each field in the structure corresponds to a different resource.
Resources can optionally be given an initial value using the `#[init]`
attribute. Resources that are not given an initial value are referred to as
*late* resources and are covered in more detail in a follow up section in this
page.
Each context (task handler, `init` or `idle`) must declare the resources it
intends to access in its corresponding metadata attribute using the `resources`
argument. This argument takes a list of resource names as its value. The listed
resources are made available to the context under the `resources` field of the
`Context` structure.
The example application shown below contains two interrupt handlers that share
access to a resource named `shared`.
``` rust
{{#include ../../../../examples/resource.rs}}
@ -26,40 +31,39 @@ two handlers.
$ cargo run --example resource
{{#include ../../../../ci/expected/resource.run}}```
## Priorities
Note that the `shared` resource cannot be accessed from `idle`. Attempting to do
so results in a compile error.
The priority of each handler can be declared in the `interrupt` and `exception`
attributes. It's not possible to set the priority in any other way because the
runtime takes ownership of the `NVIC` peripheral thus it's also not possible to
change the priority of a handler / task at runtime. Thanks to this restriction
the framework has knowledge about the *static* priorities of all interrupt and
exception handlers.
## `lock`
Interrupts and exceptions can have priorities in the range `1..=(1 <<
NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device`
crate. The `idle` task has a priority of `0`, the lowest priority.
In the presence of preemption critical sections are required to mutate shared
data in a data race free manner. As the framework has complete knowledge over
the priorities of tasks and which tasks can access which resources it enforces
that critical sections are used where required for memory safety.
Resources that are shared between handlers that run at different priorities
require critical sections for memory safety. The framework ensures that critical
sections are used but *only where required*: for example, no critical section is
required by the highest priority handler that has access to the resource.
The critical section API provided by the RTFM framework (see [`Mutex`]) is
based on dynamic priorities rather than on disabling interrupts. The consequence
is that these critical sections will prevent *some* handlers, including all the
ones that contend for the resource, from *starting* but will let higher priority
handlers, that don't contend for the resource, run.
Where a critical section is required the framework hands out a resource proxy
instead of a reference. This resource proxy is a structure that implements the
[`Mutex`] trait. The only method on this trait, [`lock`], runs its closure
argument in a critical section.
[`Mutex`]: ../../api/rtfm/trait.Mutex.html
[`lock`]: ../../api/rtfm/trait.Mutex.html#method.lock
The critical section created by the `lock` API is based on dynamic priorities:
it temporarily raises the dynamic priority of the context to a *ceiling*
priority that prevents other tasks from preempting the critical section. This
synchronization protocol is known as the [Immediate Ceiling Priority Protocol
(ICPP)][icpp].
[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
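A minimal host-side model of ICPP (illustrative only; the real implementation manipulates hardware priority registers): the ceiling of a resource is the highest static priority among the tasks that use it, and locking raises the dynamic priority of the running task to that ceiling.

``` rust
// Toy model of the Immediate Ceiling Priority Protocol (not RTFM code).

/// A resource's ceiling: the highest static priority among all tasks
/// that contend for it.
fn ceiling(contenders: &[u8]) -> u8 {
    *contenders.iter().max().expect("at least one contender")
}

fn main() {
    // Tasks with priorities 1 and 2 share a resource; a priority-3 task
    // does not touch it.
    let c = ceiling(&[1, 2]);
    assert_eq!(c, 2);

    // While the priority-1 task holds the lock its dynamic priority is
    // raised to the ceiling (2):
    let dynamic_priority = c;

    // the priority-2 contender cannot start (2 is not > 2)...
    assert!(!(2 > dynamic_priority));
    // ...but the priority-3 non-contender can still preempt.
    assert!(3 > dynamic_priority);

    println!("ICPP model OK");
}
```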
In the example below we have three interrupt handlers with priorities ranging
from one to three. The two handlers with the lower priorities contend for the
`SHARED` resource. The lowest priority handler needs to [`lock`] the
`SHARED` resource to access its data, whereas the mid priority handler can
directly access its data. The highest priority handler is free to preempt
the critical section created by the lowest priority handler.
[`lock`]: ../../api/rtfm/trait.Mutex.html#method.lock
`shared` resource. The lowest priority handler needs to `lock` the
`shared` resource to access its data, whereas the mid priority handler can
directly access its data. The highest priority handler, which cannot access
the `shared` resource, is free to preempt the critical section created by the
lowest priority handler.
``` rust
{{#include ../../../../examples/lock.rs}}
@ -69,27 +73,17 @@ the critical section created by the lowest priority handler.
$ cargo run --example lock
{{#include ../../../../ci/expected/lock.run}}```
One more note about priorities: choosing a priority higher than what the device
supports (that is `1 << NVIC_PRIO_BITS`) will result in a compile error. Due to
limitations in the language the error message is currently far from helpful: it
will say something along the lines of "evaluation of constant value failed" and
the span of the error will *not* point to the problematic interrupt value --
we are sorry about this!
## Late resources
Unlike normal `static` variables, which need to be assigned an initial value
when declared, resources can be initialized at runtime. We refer to these
runtime initialized resources as *late resources*. Late resources are useful for
*moving* (as in transferring ownership) peripherals initialized in `init` into
interrupt and exception handlers.
Late resources are resources that are not given an initial value at compile time
using the `#[init]` attribute but instead are initialized at runtime using the
`init::LateResources` value returned by the `init` function.
Late resources are declared like normal resources but that are given an initial
value of `()` (the unit value). `init` must return the initial values of all
late resources packed in a `struct` of type `init::LateResources`.
Late resources are useful for *moving* (as in transferring the ownership of)
peripherals initialized in `init` into interrupt handlers.
The example below uses late resources to establish a lockless, one-way channel
between the `UART0` interrupt handler and the `idle` function. A single producer
between the `UART0` interrupt handler and the `idle` task. A single producer
single consumer [`Queue`] is used as the channel. The queue is split into
consumer and producer end points in `init` and then each end point is stored
in a different resource; `UART0` owns the producer resource and `idle` owns
@ -105,22 +99,32 @@ the consumer resource.
$ cargo run --example late
{{#include ../../../../ci/expected/late.run}}```
## `static` resources
## Only shared access
`static` variables can also be used as resources. Tasks can only get `&`
(shared) references to these resources but locks are never required to access
their data. You can think of `static` resources as plain `static` variables that
can be initialized at runtime and have better scoping rules: you can control
which tasks can access the variable, instead of the variable being visible to
all the functions in the scope it was declared in.
By default the framework assumes that all tasks require exclusive access
(`&mut-`) to resources but it is possible to specify that a task only requires
shared access (`&-`) to a resource using the `&resource_name` syntax in the
`resources` list.
In the example below a key is loaded (or created) at runtime and then used from
two tasks that run at different priorities.
The advantage of specifying shared access (`&-`) to a resource is that no locks
are required to access the resource even if the resource is contended by several
tasks running at different priorities. The downside is that the task only gets a
shared reference (`&-`) to the resource, limiting the operations it can perform
on it, but where a shared reference is enough this approach reduces the number
of required locks.
Note that in this release of RTFM it is not possible to request both exclusive
access (`&mut-`) and shared access (`&-`) to the *same* resource from different
tasks. Attempting to do so will result in a compile error.
In the example below a key (e.g. a cryptographic key) is loaded (or created) at
runtime and then used from two tasks that run at different priorities without
any kind of lock.
``` rust
{{#include ../../../../examples/static.rs}}
{{#include ../../../../examples/only-shared-access.rs}}
```
``` console
$ cargo run --example static
{{#include ../../../../ci/expected/static.run}}```
$ cargo run --example only-shared-access
{{#include ../../../../ci/expected/only-shared-access.run}}```


@ -1,22 +1,23 @@
# Software tasks
RTFM treats interrupt and exception handlers as *hardware* tasks. Hardware tasks
are invoked by the hardware in response to events, like pressing a button. RTFM
also supports *software* tasks which can be spawned by the software from any
execution context.
In addition to hardware tasks, which are invoked by the hardware in response to
hardware events, RTFM also supports *software* tasks which can be spawned by the
application from any execution context.
Software tasks can also be assigned priorities and are dispatched from interrupt
handlers. RTFM requires that free interrupts are declared in an `extern` block
when using software tasks; these free interrupts will be used to dispatch the
software tasks. An advantage of software tasks over hardware tasks is that many
tasks can be mapped to a single interrupt handler.
Software tasks can also be assigned priorities and, under the hood, are
dispatched from interrupt handlers. RTFM requires that free interrupts are
declared in an `extern` block when using software tasks; some of these free
interrupts will be used to dispatch the software tasks. An advantage of software
tasks over hardware tasks is that many tasks can be mapped to a single interrupt
handler.
Software tasks are declared by applying the `task` attribute to functions. To be
able to spawn a software task the name of the task must appear in the `spawn`
argument of the context attribute (`init`, `idle`, `interrupt`, etc.).
Software tasks are also declared using the `task` attribute but the `binds`
argument must be omitted. To be able to spawn a software task from a context
the name of the task must appear in the `spawn` argument of the context
attribute (`init`, `idle`, `task`, etc.).
The example below showcases three software tasks that run at 2 different
priorities. The three tasks map to 2 interrupt handlers.
priorities. The three software tasks are mapped to 2 interrupt handlers.
``` rust
{{#include ../../../../examples/task.rs}}
@ -44,15 +45,17 @@ $ cargo run --example message
## Capacity
Task dispatchers do *not* use any dynamic memory allocation. The memory required
to store messages is statically reserved. The framework will reserve enough
space for every context to be able to spawn each task at most once. This is a
sensible default but the "inbox" capacity of each task can be controlled using
the `capacity` argument of the `task` attribute.
RTFM does *not* perform any form of heap-based memory allocation. The memory
required to store messages is statically reserved. By default the framework
minimizes the memory footprint of the application so each task has a message
"capacity" of 1, meaning that at most one message can be posted to the task
before it gets a chance to run. This default can be overridden for each task
using the `capacity` argument. This argument takes a positive integer that
indicates how many messages the task message buffer can hold.
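The behavior of the `capacity` argument can be pictured with a plain bounded buffer. This is a host-side sketch: the names `Inbox` and `post` are made up for illustration; the real dispatcher uses statically allocated queues.

``` rust
use std::collections::VecDeque;

// Toy model of a task's message "inbox" with a fixed capacity.
struct Inbox<T> {
    capacity: usize,
    queue: VecDeque<T>,
}

impl<T> Inbox<T> {
    fn new(capacity: usize) -> Self {
        Self { capacity, queue: VecDeque::with_capacity(capacity) }
    }

    // Like `spawn`: returns `Err` with the message when the buffer is full.
    fn post(&mut self, msg: T) -> Result<(), T> {
        if self.queue.len() == self.capacity {
            Err(msg)
        } else {
            self.queue.push_back(msg);
            Ok(())
        }
    }
}

fn main() {
    // capacity = 1 (the default): posting a second message before the
    // task gets a chance to run fails
    let mut inbox = Inbox::new(1);
    assert!(inbox.post("first").is_ok());
    assert!(inbox.post("second").is_err());

    // capacity = 4: up to four messages can be buffered
    let mut bigger = Inbox::new(4);
    for i in 0..4 {
        assert!(bigger.post(i).is_ok());
    }
    assert!(bigger.post(4).is_err());
}
```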
The example below sets the capacity of the software task `foo` to 4. If the
capacity is not specified then the second `spawn.foo` call in `UART0` would
fail.
fail (panic).
``` rust
{{#include ../../../../examples/capacity.rs}}
@ -61,3 +64,54 @@ fail.
``` console
$ cargo run --example capacity
{{#include ../../../../ci/expected/capacity.run}}```
## Error handling
The `spawn` API returns the `Err` variant when there's no space to send the
message. In most scenarios spawning errors are handled in one of two ways:
- Panicking, using `unwrap`, `expect`, etc. This approach is used to catch the
programmer error (i.e. bug) of selecting a capacity that was too small. When
this panic is encountered during testing choosing a bigger capacity and
recompiling the program may fix the issue but sometimes it's necessary to dig
deeper and perform a timing analysis of the application to check if the
platform can deal with peak payload or if the processor needs to be replaced
with a faster one.
- Ignoring the result. In soft real time and non real time applications it may
be OK to occasionally lose data or fail to respond to some events during event
bursts. In those scenarios silently letting a `spawn` call fail may be
acceptable.
It should be noted that retrying a `spawn` call is usually the wrong approach as
this operation will likely never succeed in practice. Because context switches
only occur towards *higher* priority tasks, retrying the `spawn` call of a
lower priority task will never let the scheduler dispatch said task, meaning
that its message buffer will never be emptied. This situation is depicted in
the following snippet:
``` rust
#[rtfm::app(..)]
const APP: () = {
    #[init(spawn = [foo, bar])]
    fn init(cx: init::Context) {
        cx.spawn.foo().unwrap();
        cx.spawn.bar().unwrap();
    }

    #[task(priority = 2, spawn = [bar])]
    fn foo(cx: foo::Context) {
        // ..

        // the program will get stuck here
        while cx.spawn.bar(payload).is_err() {
            // retry the spawn call if it failed
        }
    }

    #[task(priority = 1)]
    fn bar(cx: bar::Context, payload: i32) {
        // ..
    }
};
```


@ -1,37 +1,43 @@
# Timer queue
When the `timer-queue` feature is enabled the RTFM framework includes a *global
timer queue* that applications can use to *schedule* software tasks to run at
some time in the future.
In contrast with the `spawn` API, which immediately spawns a software task onto
the scheduler, the `schedule` API can be used to schedule a task to run some
time in the future.
> **NOTE**: The timer-queue feature can't be enabled when the target is
> `thumbv6m-none-eabi` because there's no timer queue support for ARMv6-M. This
> may change in the future.
To use the `schedule` API a monotonic timer must be first defined using the
`monotonic` argument of the `#[app]` attribute. This argument takes a path to a
type that implements the [`Monotonic`] trait. The associated type, `Instant`, of
this trait represents a timestamp in arbitrary units and it's used extensively
in the `schedule` API -- it is suggested to model this type after [the one in
the standard library][std-instant].
> **NOTE**: When the `timer-queue` feature is enabled you will *not* be able to
> use the `SysTick` exception as a hardware task because the runtime uses it to
> implement the global timer queue.
Although not shown in the trait definition (due to limitations in the trait /
type system) the subtraction of two `Instant`s should return some `Duration`
type (see [`core::time::Duration`]) and this `Duration` type must implement the
`TryInto<u32>` trait. The implementation of this trait must convert the
`Duration` value, which uses some arbitrary unit of time, into the "system timer
(SYST) clock cycles" time unit. The result of the conversion must be a 32-bit
integer. If the result of the conversion doesn't fit in a 32-bit number then the
operation must return an error, any error type.
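As a concrete illustration, suppose the user-defined `Duration` counts microseconds and the system timer is clocked at 8 MHz; the required conversion into 32-bit SYST clock cycles could then look like the sketch below. Both the microsecond unit and the 8 MHz figure are assumptions made up for this example:

``` rust
use std::convert::TryInto;

// Hypothetical duration type: a count of microseconds.
struct MicrosDuration(u64);

const SYSCLK_HZ: u64 = 8_000_000; // assumed 8 MHz system timer clock

impl MicrosDuration {
    // Convert to SYST clock cycles; error if the result doesn't fit in
    // 32 bits, mirroring the required `TryInto<u32>` behavior.
    fn try_into_cycles(&self) -> Result<u32, &'static str> {
        let cycles = self.0.checked_mul(SYSCLK_HZ).ok_or("overflow")? / 1_000_000;
        cycles.try_into().map_err(|_| "doesn't fit in 32 bits")
    }
}

fn main() {
    // 1 ms at 8 MHz is 8,000 cycles
    assert_eq!(MicrosDuration(1_000).try_into_cycles(), Ok(8_000));
    // 600 s worth of microseconds overflows a 32-bit cycle count at 8 MHz
    assert!(MicrosDuration(600_000_000).try_into_cycles().is_err());
}
```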
To be able to schedule a software task the name of the task must appear in the
`schedule` argument of the context attribute. When scheduling a task the
[`Instant`] at which the task should be executed must be passed as the first
argument of the `schedule` invocation.
[`Monotonic`]: ../../api/rtfm/trait.Monotonic.html
[std-instant]: https://doc.rust-lang.org/std/time/struct.Instant.html
[`core::time::Duration`]: https://doc.rust-lang.org/core/time/struct.Duration.html
[`Instant`]: ../../api/rtfm/struct.Instant.html
For ARMv7+ targets the `rtfm` crate provides a `Monotonic` implementation based
on the built-in CYCle CouNTer (CYCCNT). Note that this is a 32-bit timer clocked
at the frequency of the CPU and as such it is not suitable for tracking time
spans in the order of seconds.
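A quick back-of-the-envelope calculation shows why; the 80 MHz clock used here is an assumed figure for illustration:

``` rust
fn main() {
    // The 32-bit CYCCNT wraps after 2^32 CPU cycles; at an assumed
    // 80 MHz clock that is only about 53.7 seconds.
    let cpu_hz = 80_000_000u64;
    let wrap_seconds = (1u64 << 32) as f64 / cpu_hz as f64;
    println!("CYCCNT wraps every {:.1} s", wrap_seconds);
    assert!(wrap_seconds > 53.0 && wrap_seconds < 54.0);
}
```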
The RTFM runtime includes a monotonic, non-decreasing, 32-bit timer which can be
queried using the `Instant::now` constructor. A [`Duration`] can be added to
`Instant::now()` to obtain an `Instant` into the future. The monotonic timer is
disabled while `init` runs so `Instant::now()` always returns the value
`Instant(0 /* clock cycles */)`; the timer is enabled right before the
interrupts are re-enabled and `idle` is executed.
[`Duration`]: ../../api/rtfm/struct.Duration.html
To be able to schedule a software task from a context the name of the task must
first appear in the `schedule` argument of the context attribute. When
scheduling a task the (user-defined) `Instant` at which the task should be
executed must be passed as the first argument of the `schedule` invocation.
The example below schedules two tasks from `init`: `foo` and `bar`. `foo` is
scheduled to run 8 million clock cycles in the future. Next, `bar` is scheduled
to run 4 million clock cycles in the future. `bar` runs before `foo` since it
was scheduled to run first.
to run 4 million clock cycles in the future. Thus `bar` runs before `foo` since
it was scheduled to run first.
> **IMPORTANT**: The examples that use the `schedule` API or the `Instant`
> abstraction will **not** properly work on QEMU because the Cortex-M cycle
@ -41,12 +47,19 @@ was scheduled to run first.
{{#include ../../../../examples/schedule.rs}}
```
Running the program on real hardware produces the following output in the console:
Running the program on real hardware produces the following output in the
console:
``` text
{{#include ../../../../ci/expected/schedule.run}}
```
When the `schedule` API is being used the runtime internally uses the `SysTick`
interrupt handler and the system timer peripheral (`SYST`) so neither can be
used by the application. This is accomplished by changing the type of
`init::Context.core` from `cortex_m::Peripherals` to `rtfm::Peripherals`. The
latter structure contains all the fields of the former minus the `SYST` one.
## Periodic tasks
Software tasks have access to the `Instant` at which they were scheduled to run
@ -80,9 +93,10 @@ the task. Depending on the priority of the task and the load of the system the
What do you think will be the value of `scheduled` for software tasks that are
*spawned* instead of scheduled? The answer is that spawned tasks inherit the
*baseline* time of the context that spawned them. The baseline of hardware tasks
is `start`, the baseline of software tasks is `scheduled` and the baseline of
`init` is `start = Instant(0)`. `idle` doesn't really have a baseline but tasks
spawned from it will use `Instant::now()` as their baseline time.
is their `start` time, the baseline of software tasks is their `scheduled` time
and the baseline of `init` is the system start time or time zero
(`Instant::zero()`). `idle` doesn't really have a baseline but tasks spawned
from it will use `Instant::now()` as their baseline time.
The example below showcases the different meanings of the *baseline*.


@ -2,10 +2,21 @@
## Generics
Resources shared between two or more tasks implement the `Mutex` trait in *all*
contexts, even on those where a critical section is not required to access the
data. This lets you easily write generic code that operates on resources and can
be called from different tasks. Here's one such example:
Resources may appear in contexts as resource proxies or as unique references
(`&mut-`) depending on the priority of the task. Because the same resource may
appear as *different* types in different contexts one cannot refactor a common
operation that uses resources into a plain function; however, such a
refactoring is possible using *generics*.
All resource proxies implement the `rtfm::Mutex` trait. On the other hand,
unique references (`&mut-`) do *not* implement this trait (due to limitations in
the trait system) but one can wrap these references in the [`rtfm::Exclusive`]
newtype which does implement the `Mutex` trait. With the help of this newtype
one can write a generic function that operates on generic resources and call it
from different tasks to perform some operation on the same set of resources.
Here's one such example:
[`rtfm::Exclusive`]: ../../api/rtfm/struct.Exclusive.html
``` rust
{{#include ../../../../examples/generics.rs}}
@ -15,17 +26,15 @@ be called from different tasks. Here's one such example:
$ cargo run --example generics
{{#include ../../../../ci/expected/generics.run}}```
This also lets you change the static priorities of tasks without having to
rewrite code. If you consistently use `lock`s to access the data behind shared
resources then your code will continue to compile when you change the priority
of tasks.
Using generics also lets you change the static priorities of tasks during
development without having to rewrite a bunch of code every time.
## Conditional compilation
You can use conditional compilation (`#[cfg]`) on resources (`static [mut]`
items) and tasks (`fn` items). The effect of using `#[cfg]` attributes is that
the resource / task will *not* be available through the corresponding `Context`
`struct` if the condition doesn't hold.
You can use conditional compilation (`#[cfg]`) on resources (the fields of
`struct Resources`) and tasks (the `fn` items). The effect of using `#[cfg]`
attributes is that the resource / task will *not* be available through the
corresponding `Context` `struct` if the condition doesn't hold.
The example below logs a message whenever the `foo` task is spawned, but only if
the program has been compiled using the `dev` profile.
@ -34,6 +43,12 @@ the program has been compiled using the `dev` profile.
{{#include ../../../../examples/cfg.rs}}
```
``` console
$ cargo run --example cfg --release
$ cargo run --example cfg
{{#include ../../../../ci/expected/cfg.run}}```
## Running tasks from RAM
The main goal of moving the specification of RTFM applications to attributes in
@ -70,25 +85,13 @@ One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM
``` console
$ cargo nm --example ramfunc --release | grep ' foo::'
{{#include ../../../../ci/expected/ramfunc.grep.foo}}```
{{#include ../../../../ci/expected/ramfunc.grep.foo}}
```
``` console
$ cargo nm --example ramfunc --release | grep ' bar::'
{{#include ../../../../ci/expected/ramfunc.grep.bar}}```
## `binds`
You can give hardware tasks more task-like names using the `binds` argument: you
name the function as you wish and specify the name of the interrupt / exception
in the `binds` argument. Types like `Spawn` will be placed in a module named
after the function, not the interrupt / exception. Example below:
``` rust
{{#include ../../../../examples/binds.rs}}
{{#include ../../../../ci/expected/ramfunc.grep.bar}}
```
``` console
$ cargo run --example binds
{{#include ../../../../ci/expected/binds.run}}```
## Indirection for faster message passing
@ -100,10 +103,10 @@ instead of sending the buffer by value, one can send an owning pointer into the
buffer.
One can use a global allocator to achieve indirection (`alloc::Box`,
`alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.34.0,
`alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0,
or one can use a statically allocated memory pool like [`heapless::Pool`].
[`heapless::Pool`]: https://docs.rs/heapless/0.4.3/heapless/pool/index.html
[`heapless::Pool`]: https://docs.rs/heapless/0.5.0/heapless/pool/index.html
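The difference between the two approaches can be sketched outside of RTFM using `std`'s `Box` as a stand-in for a pool allocation (a real application on the lm3s6965 has no `std`; the helper names below are illustrative, not an RTFM API):

``` rust
// Sketch only: `Box` stands in for a pool allocation; `send_by_value` and
// `send_by_pointer` are hypothetical helpers, not an RTFM API
fn send_by_value(buf: [u8; 128]) -> u8 {
    // the whole 128-byte buffer is copied when the message is enqueued
    buf[0]
}

fn send_by_pointer(buf: Box<[u8; 128]>) -> u8 {
    // only a pointer-sized value is copied when the message is enqueued
    buf[0]
}

fn main() {
    let buf = [42u8; 128];
    assert_eq!(send_by_value(buf), 42);
    assert_eq!(send_by_pointer(Box::new(buf)), 42);
}
```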
Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
@ -111,7 +114,7 @@ Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
{{#include ../../../../examples/pool.rs}}
```
``` console
$ cargo run --example binds
$ cargo run --example pool
{{#include ../../../../ci/expected/pool.run}}
```
## Inspecting the expanded code
@ -131,33 +134,18 @@ $ cargo build --example foo
$ rustfmt target/rtfm-expansion.rs
$ tail -n30 target/rtfm-expansion.rs
$ tail target/rtfm-expansion.rs
```
``` rust
#[doc = r" Implementation details"]
const APP: () = {
#[doc = r" Always include the device crate which contains the vector table"]
use lm3s6965 as _;
#[no_mangle]
unsafe fn main() -> ! {
unsafe extern "C" fn main() -> ! {
rtfm::export::interrupt::disable();
let mut core = rtfm::export::Peripherals::steal();
let late = init(
init::Locals::new(),
init::Context::new(rtfm::Peripherals {
CBP: core.CBP,
CPUID: core.CPUID,
DCB: core.DCB,
DWT: core.DWT,
FPB: core.FPB,
FPU: core.FPU,
ITM: core.ITM,
MPU: core.MPU,
SCB: &mut core.SCB,
SYST: core.SYST,
TPIU: core.TPIU,
}),
);
let mut core: rtfm::export::Peripherals = core::mem::transmute(());
core.SCB.scr.modify(|r| r | 1 << 1);
rtfm::export::interrupt::enable();
loop {
@ -175,5 +163,5 @@ crate and print the output to the console.
``` console
$ # produces the same output as before
$ cargo expand --example smallest | tail -n30
$ cargo expand --example smallest | tail
```

View file

@ -1,8 +1,8 @@
# Types, Send and Sync
The `app` attribute injects a context, a collection of variables, into every
function. All these variables have predictable, non-anonymous types so you can
write plain functions that take them as arguments.
Every function within the `APP` pseudo-module has a `Context` structure as its
first parameter. All the fields of these structures have predictable,
non-anonymous types so you can write plain functions that take them as arguments.
The API reference specifies how these types are generated from the input. You
can also generate documentation for your binary crate (`cargo doc --bin <name>`);
@ -20,8 +20,8 @@ The example below shows the different types generated by the `app` attribute.
[`Send`] is a marker trait for "types that can be transferred across thread
boundaries", according to its definition in `core`. In the context of RTFM the
`Send` trait is only required where it's possible to transfer a value between
tasks that run at *different* priorities. This occurs in a few places: in message
passing, in shared `static mut` resources and in the initialization of late
tasks that run at *different* priorities. This occurs in a few places: in
message passing, in shared resources and in the initialization of late
resources.
[`Send`]: https://doc.rust-lang.org/core/marker/trait.Send.html
@ -30,7 +30,7 @@ The `app` attribute will enforce that `Send` is implemented where required so
you don't need to worry much about it. It's more important to know where you do
*not* need the `Send` trait: on types that are transferred between tasks that
run at the *same* priority. This occurs in two places: in message passing and in
shared `static mut` resources.
shared resources.
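The distinction can be sketched with plain functions (hypothetical helpers, not part of RTFM):

``` rust
use std::rc::Rc;

// transfer between tasks at *different* priorities requires `Send`
fn transfer_across_priorities<T: Send>(x: T) -> T {
    x
}

// transfer between tasks at the *same* priority needs no `Send` bound
fn transfer_same_priority<T>(x: T) -> T {
    x
}

fn main() {
    // `Rc` does not implement `Send` but can still move between
    // same-priority tasks
    let not_send = Rc::new(42u32);
    assert_eq!(*transfer_same_priority(not_send), 42);

    // transfer_across_priorities(Rc::new(0)); //~ error: `Rc` is not `Send`
    assert_eq!(transfer_across_priorities(42u32), 42);
}
```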
The example below shows where a type that doesn't implement `Send` can be used.
@ -39,9 +39,11 @@ The example below shows where a type that doesn't implement `Send` can be used.
```
It's important to note that late initialization of resources is effectively a
send operation where the initial value is sent from `idle`, which has the lowest
priority of `0`, to a task with will run with a priority greater than or equal
to `1`. Thus all late resources need to implement the `Send` trait.
send operation where the initial value is sent from the background context,
which has the lowest priority of `0`, to a task, which will run at a priority
greater than or equal to `1`. Thus all late resources need to implement the
`Send` trait, except for those exclusively accessed by `idle`, which runs at a
priority of `0`.
Sharing a resource with `init` can be used to implement late initialization, see
example below. For that reason, resources shared with `init` must also implement
@ -56,14 +58,14 @@ the `Send` trait.
Similarly, [`Sync`] is a marker trait for "types for which it is safe to share
references between threads", according to its definition in `core`. In the
context of RTFM the `Sync` trait is only required where it's possible for two,
or more, tasks that run at different priority to hold a shared reference to a
resource. This only occurs with shared `static` resources.
or more, tasks that run at different priorities to get a shared reference
(`&-`) to a resource. This only occurs with shared access (`&-`) resources.
[`Sync`]: https://doc.rust-lang.org/core/marker/trait.Sync.html
The `app` attribute will enforce that `Sync` is implemented where required but
it's important to know where the `Sync` bound is not required: in `static`
resources shared between tasks that run at the *same* priority.
it's important to know where the `Sync` bound is not required: shared access
(`&-`) resources contended by tasks that run at the *same* priority.
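As a sketch, with OS threads standing in for tasks that run at different priorities (the `share` helper is hypothetical, not an RTFM API):

``` rust
use std::thread;

// a shared access (`&-`) resource reachable from contexts at different
// priorities must be `Sync`
fn share<T: Sync>(x: &'static T) -> &'static T {
    x
}

static KEY: u32 = 0xdead_beef; // `u32` implements `Sync`

fn main() {
    let key = share(&KEY);
    let child = thread::spawn(move || *key);
    assert_eq!(child.join().unwrap(), 0xdead_beef);
    // a `static` of type `Cell<u32>` would be rejected by the compiler:
    // `Cell` is not `Sync` so `&Cell<u32>` cannot be shared across contexts
}
```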
The example below shows where a type that doesn't implement `Sync` can be used.

View file

@ -0,0 +1,6 @@
# Heterogeneous multi-core support
This section covers the *experimental* heterogeneous multi-core support provided
by RTFM behind the `heterogeneous` Cargo feature.
**Content coming soon**

View file

@ -0,0 +1,6 @@
# Homogeneous multi-core support
This section covers the *experimental* homogeneous multi-core support provided
by RTFM behind the `homogeneous` Cargo feature.
**Content coming soon**

View file

@ -21,7 +21,7 @@ This makes it impossible for the user code to refer to these static variables.
Access to the resources is then given to each task using a `Resources` struct
whose fields correspond to the resources the task has access to. There's one
such struct per task and the `Resources` struct is initialized with either a
mutable reference (`&mut`) to the static variables or with a resource proxy (see
unique reference (`&mut-`) to the static variables or with a resource proxy (see
section on [critical sections](critical-sections.html)).
The code below is an example of the kind of source level transformation that

View file

@ -16,61 +16,65 @@ that has a logical priority of `0` whereas `init` is completely omitted from the
analysis -- the reason for that is that `init` never uses (or needs) critical
sections to access static variables.
In the previous section we showed that a shared resource may appear as a mutable
reference or behind a proxy depending on the task that has access to it. Which
version is presented to the task depends on the task priority and the resource
ceiling. If the task priority is the same as the resource ceiling then the task
gets a mutable reference to the resource memory, otherwise the task gets a
proxy -- this also applies to `idle`. `init` is special: it always gets a
mutable reference to resources.
In the previous section we showed that a shared resource may appear as a unique
reference (`&mut-`) or behind a proxy depending on the task that has access to
it. Which version is presented to the task depends on the task priority and the
resource ceiling. If the task priority is the same as the resource ceiling then
the task gets a unique reference (`&mut-`) to the resource memory, otherwise the
task gets a proxy -- this also applies to `idle`. `init` is special: it always
gets a unique reference (`&mut-`) to resources.
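The two rules can be stated as a small sketch (illustrative helpers, not part of the framework):

``` rust
// the ceiling of a resource is the highest priority among the tasks that
// access it
fn ceiling(accessor_priorities: &[u8]) -> u8 {
    accessor_priorities.iter().copied().max().unwrap_or(0)
}

// a task gets `&mut-` when its priority equals the ceiling; otherwise it
// gets a proxy
fn gets_unique_reference(task_priority: u8, ceiling: u8) -> bool {
    task_priority == ceiling
}

fn main() {
    // `x` accessed by `foo` (prio = 1) and `bar` (prio = 2) -> ceiling = 2
    let x_ceiling = ceiling(&[1, 2]);
    assert_eq!(x_ceiling, 2);
    assert!(!gets_unique_reference(1, x_ceiling)); // `foo` gets a proxy
    assert!(gets_unique_reference(2, x_ceiling)); // `bar` gets `&mut-`

    // `y` accessed only by `idle` (prio = 0) -> ceiling = 0, so `&mut-`
    assert!(gets_unique_reference(0, ceiling(&[0])));
}
```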
An example to illustrate the ceiling analysis:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
// accessed by `foo` (prio = 1) and `bar` (prio = 2)
// CEILING = 2
static mut X: u64 = 0;
struct Resources {
// accessed by `foo` (prio = 1) and `bar` (prio = 2)
// -> CEILING = 2
#[init(0)]
x: u64,
// accessed by `idle` (prio = 0)
// CEILING = 0
static mut Y: u64 = 0;
// accessed by `idle` (prio = 0)
// -> CEILING = 0
#[init(0)]
y: u64,
}
#[init(resources = [X])]
#[init(resources = [x])]
fn init(c: init::Context) {
// mutable reference because this is `init`
let x: &mut u64 = c.resources.X;
// unique reference because this is `init`
let x: &mut u64 = c.resources.x;
// mutable reference because this is `init`
let y: &mut u64 = c.resources.Y;
// unique reference because this is `init`
let y: &mut u64 = c.resources.y;
// ..
}
// PRIORITY = 0
#[idle(resources = [Y])]
#[idle(resources = [y])]
fn idle(c: idle::Context) -> ! {
// mutable reference because priority (0) == resource ceiling (0)
let y: &'static mut u64 = c.resources.Y;
// unique reference because priority (0) == resource ceiling (0)
let y: &'static mut u64 = c.resources.y;
loop {
// ..
}
}
#[interrupt(binds = UART0, priority = 1, resources = [X])]
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy because task priority (1) < resource ceiling (2)
let x: resources::X = c.resources.X;
let x: resources::x = c.resources.x;
// ..
}
#[interrupt(binds = UART1, priority = 2, resources = [X])]
#[interrupt(binds = UART1, priority = 2, resources = [x])]
fn bar(c: foo::Context) {
// mutable reference because task priority (2) == resource ceiling (2)
let x: &mut u64 = c.resources.X;
// unique reference because task priority (2) == resource ceiling (2)
let x: &mut u64 = c.resources.x;
// ..
}

View file

@ -1,12 +1,12 @@
# Critical sections
When a resource (static variable) is shared between two, or more, tasks that run
at different priorities some form of mutual exclusion is required to access the
at different priorities some form of mutual exclusion is required to mutate the
memory in a data race free manner. In RTFM we use priority-based critical
sections to guarantee mutual exclusion (see the [Immediate Priority Ceiling
Protocol][ipcp]).
sections to guarantee mutual exclusion (see the [Immediate Ceiling Priority
Protocol][icpp]).
[ipcp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
The critical section consists of temporarily raising the *dynamic* priority of
the task. While a task is within this critical section all the other tasks that
@ -25,7 +25,7 @@ a data race the *lower priority* task must use a critical section when it needs
to modify the shared memory. On the other hand, the higher priority task can
directly modify the shared memory because it can't be preempted by the lower
priority task. To enforce the use of a critical section on the lower priority
task we give it a *resource proxy*, whereas we give a mutable reference
task we give it a *resource proxy*, whereas we give a unique reference
(`&mut-`) to the higher priority task.
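The mechanism can be modeled on a single thread with a `Cell` tracking the dynamic priority (a simplified sketch; the real implementation writes the BASEPRI register):

``` rust
use std::cell::Cell;

// simplified model: raise the dynamic priority to the ceiling for the
// duration of the closure, then restore it
fn lock<R>(priority: &Cell<u8>, ceiling: u8, f: impl FnOnce() -> R) -> R {
    let current = priority.get();
    if current < ceiling {
        priority.set(ceiling); // start of critical section
        let r = f();
        priority.set(current); // end of critical section
        r
    } else {
        // dynamic priority is already high enough: no critical section needed
        f()
    }
}

fn main() {
    let priority = Cell::new(1); // task with static priority 1
    let r = lock(&priority, 2, || {
        assert_eq!(priority.get(), 2); // inside the critical section
        42
    });
    assert_eq!(r, 42);
    assert_eq!(priority.get(), 1); // restored afterwards
}
```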
The example below shows the different types handed out to each task:
@ -33,12 +33,15 @@ The example below shows the different types handed out to each task:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
static mut X: u64 = 0;
struct Resources {
#[init(0)]
x: u64,
}
#[interrupt(binds = UART0, priority = 1, resources = [X])]
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy
let mut x: resources::X = c.resources.X;
let mut x: resources::x = c.resources.x;
x.lock(|x: &mut u64| {
// critical section
@ -46,9 +49,9 @@ const APP: () = {
});
}
#[interrupt(binds = UART1, priority = 2, resources = [X])]
#[interrupt(binds = UART1, priority = 2, resources = [x])]
fn bar(c: foo::Context) {
let mut x: &mut u64 = c.resources.X;
let mut x: &mut u64 = c.resources.x;
*x += 1;
}
@ -69,14 +72,14 @@ fn bar(c: bar::Context) {
}
pub mod resources {
pub struct X {
pub struct x {
// ..
}
}
pub mod foo {
pub struct Resources {
pub X: resources::X,
pub x: resources::x,
}
pub struct Context {
@ -87,7 +90,7 @@ pub mod foo {
pub mod bar {
pub struct Resources<'a> {
pub X: rtfm::Exclusive<'a, u64>, // newtype over `&'a mut u64`
pub x: &'a mut u64,
}
pub struct Context {
@ -97,9 +100,9 @@ pub mod bar {
}
const APP: () = {
static mut X: u64 = 0;
static mut x: u64 = 0;
impl rtfm::Mutex for resources::X {
impl rtfm::Mutex for resources::x {
type T = u64;
fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@ -111,7 +114,7 @@ const APP: () = {
unsafe fn UART0() {
foo(foo::Context {
resources: foo::Resources {
X: resources::X::new(/* .. */),
x: resources::x::new(/* .. */),
},
// ..
})
@ -121,7 +124,7 @@ const APP: () = {
unsafe fn UART1() {
bar(bar::Context {
resources: bar::Resources {
X: rtfm::Exclusive(&mut X),
x: &mut x,
},
// ..
})
@ -158,7 +161,7 @@ In this particular example we could implement the critical section as follows:
> **NOTE:** this is a simplified implementation
``` rust
impl rtfm::Mutex for resources::X {
impl rtfm::Mutex for resources::x {
type T = u64;
fn lock<R, F>(&mut self, f: F) -> R
@ -170,7 +173,7 @@ impl rtfm::Mutex for resources::X {
asm!("msr BASEPRI, 192" : : : "memory" : "volatile");
// run user code within the critical section
let r = f(&mut implementation_defined_name_for_X);
let r = f(&mut x);
// end of critical section: restore dynamic priority to its static value (`1`)
asm!("msr BASEPRI, 0" : : : "memory" : "volatile");
@ -183,23 +186,23 @@ impl rtfm::Mutex for resources::X {
Here it's important to use the `"memory"` clobber in the `asm!` block. It
prevents the compiler from reordering memory operations across it. This is
important because accessing the variable `X` outside the critical section would
important because accessing the variable `x` outside the critical section would
result in a data race.
It's important to note that the signature of the `lock` method prevents nesting
calls to it. This is required for memory safety, as nested calls would produce
multiple mutable references (`&mut-`) to `X` breaking Rust aliasing rules. See
multiple unique references (`&mut-`) to `x` breaking Rust aliasing rules. See
below:
``` rust
#[interrupt(binds = UART0, priority = 1, resources = [X])]
#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy
let mut res: resources::X = c.resources.X;
let mut res: resources::x = c.resources.x;
res.lock(|x: &mut u64| {
res.lock(|alias: &mut u64| {
//~^ error: `res` has already been mutably borrowed
//~^ error: `res` has already been uniquely borrowed (`&mut-`)
// ..
});
});
@ -223,18 +226,22 @@ Consider this program:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
static mut X: u64 = 0;
static mut Y: u64 = 0;
struct Resources {
#[init(0)]
x: u64,
#[init(0)]
y: u64,
}
#[init]
fn init() {
rtfm::pend(Interrupt::UART0);
}
#[interrupt(binds = UART0, priority = 1, resources = [X, Y])]
#[interrupt(binds = UART0, priority = 1, resources = [x, y])]
fn foo(c: foo::Context) {
let mut x = c.resources.X;
let mut y = c.resources.Y;
let mut x = c.resources.x;
let mut y = c.resources.y;
y.lock(|y| {
*y += 1;
@ -259,12 +266,12 @@ const APP: () = {
})
}
#[interrupt(binds = UART1, priority = 2, resources = [X])]
#[interrupt(binds = UART1, priority = 2, resources = [x])]
fn bar(c: foo::Context) {
// ..
}
#[interrupt(binds = UART2, priority = 3, resources = [Y])]
#[interrupt(binds = UART2, priority = 3, resources = [y])]
fn baz(c: foo::Context) {
// ..
}
@ -279,13 +286,13 @@ The code generated by the framework looks like this:
// omitted: user code
pub mod resources {
pub struct X<'a> {
pub struct x<'a> {
priority: &'a Cell<u8>,
}
impl<'a> X<'a> {
impl<'a> x<'a> {
pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
X { priority }
x { priority }
}
pub unsafe fn priority(&self) -> &Cell<u8> {
@ -293,7 +300,7 @@ pub mod resources {
}
}
// repeat for `Y`
// repeat for `y`
}
pub mod foo {
@ -303,34 +310,35 @@ pub mod foo {
}
pub struct Resources<'a> {
pub X: resources::X<'a>,
pub Y: resources::Y<'a>,
pub x: resources::x<'a>,
pub y: resources::y<'a>,
}
}
const APP: () = {
use cortex_m::register::basepri;
#[no_mangle]
unsafe fn UART0() {
unsafe fn UART1() {
// the static priority of this interrupt (as specified by the user)
const PRIORITY: u8 = 1;
const PRIORITY: u8 = 2;
// take a snapshot of the BASEPRI
let initial: u8;
asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
let initial = basepri::read();
let priority = Cell::new(PRIORITY);
foo(foo::Context {
resources: foo::Resources::new(&priority),
bar(bar::Context {
resources: bar::Resources::new(&priority),
// ..
});
// roll back the BASEPRI to the snapshot value we took before
asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
basepri::write(initial); // same as the `asm!` block we saw before
}
// similarly for `UART1`
// similarly for `UART0` / `foo` and `UART2` / `baz`
impl<'a> rtfm::Mutex for resources::X<'a> {
impl<'a> rtfm::Mutex for resources::x<'a> {
type T = u64;
fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@ -342,26 +350,24 @@ const APP: () = {
if current < CEILING {
// raise dynamic priority
self.priority().set(CEILING);
let hw = logical2hw(CEILING);
asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
basepri::write(logical2hw(CEILING));
let r = f(&mut X);
let r = f(&mut y);
// restore dynamic priority
let hw = logical2hw(current);
asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
basepri::write(logical2hw(current));
self.priority().set(current);
r
} else {
// dynamic priority is high enough
f(&mut X)
f(&mut y)
}
}
}
}
// repeat for `Y`
// repeat for resource `y`
};
```
@ -373,38 +379,38 @@ fn foo(c: foo::Context) {
// NOTE: BASEPRI contains the value `0` (its reset value) at this point
// raise dynamic priority to `3`
unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
unsafe { basepri::write(160) }
// the two operations on `Y` are merged into one
Y += 2;
// the two operations on `y` are merged into one
y += 2;
// BASEPRI is not modified to access `X` because the dynamic priority is high enough
X += 1;
// BASEPRI is not modified to access `x` because the dynamic priority is high enough
x += 1;
// lower (restore) the dynamic priority to `1`
unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
unsafe { basepri::write(224) }
// mid-point
// raise dynamic priority to `2`
unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
unsafe { basepri::write(192) }
X += 1;
x += 1;
// raise dynamic priority to `3`
unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
unsafe { basepri::write(160) }
Y += 1;
y += 1;
// lower (restore) the dynamic priority to `2`
unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
unsafe { basepri::write(192) }
// NOTE: it would be sound to merge this operation on X with the previous one but
// NOTE: it would be sound to merge this operation on `x` with the previous one but
// compiler fences are coarse grained and prevent such optimization
X += 1;
x += 1;
// lower (restore) the dynamic priority to `1`
unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
unsafe { basepri::write(224) }
// NOTE: BASEPRI contains the value `224` at this point
// the UART0 handler will restore the value to `0` before returning
@ -425,7 +431,10 @@ handler through preemption. This is best observed in the following example:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
static mut X: u64 = 0;
struct Resources {
#[init(0)]
x: u64,
}
#[init]
fn init() {
@ -444,11 +453,11 @@ const APP: () = {
// this function returns to `idle`
}
#[task(binds = UART1, priority = 2, resources = [X])]
#[task(binds = UART1, priority = 2, resources = [x])]
fn bar() {
// BASEPRI is `0` (dynamic priority = 2)
X.lock(|x| {
x.lock(|x| {
// BASEPRI is raised to `160` (dynamic priority = 3)
// ..
@ -470,7 +479,7 @@ const APP: () = {
}
}
#[task(binds = UART2, priority = 3, resources = [X])]
#[task(binds = UART2, priority = 3, resources = [x])]
fn baz() {
// ..
}
@ -493,8 +502,7 @@ const APP: () = {
const PRIORITY: u8 = 2;
// take a snapshot of the BASEPRI
let initial: u8;
asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
let initial = basepri::read();
let priority = Cell::new(PRIORITY);
bar(bar::Context {
@ -503,7 +511,7 @@ const APP: () = {
});
// BUG: FORGOT to roll back the BASEPRI to the snapshot value we took before
// asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
basepri::write(initial);
}
};
```

View file

@ -12,7 +12,7 @@ configuration is done before the `init` function runs.
This example gives you an idea of the code that the RTFM framework runs:
``` rust
#[rtfm::app(device = ..)]
#[rtfm::app(device = lm3s6965)]
const APP: () = {
#[init]
fn init(c: init::Context) {
@ -39,8 +39,7 @@ The framework generates an entry point that looks like this:
unsafe fn main() -> ! {
// transforms a logical priority into a hardware / NVIC priority
fn logical2hw(priority: u8) -> u8 {
// this value comes from the device crate
const NVIC_PRIO_BITS: u8 = ..;
use lm3s6965::NVIC_PRIO_BITS;
// the NVIC encodes priority in the higher bits of a byte
// also bigger numbers mean lower priority
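// A concrete sketch of that encoding (reconstructed from the BASEPRI
// values used elsewhere in this text, where logical priorities 1, 2 and 3
// map to 224, 192 and 160 with `NVIC_PRIO_BITS = 3`; treat it as
// illustrative, not as the exact library code):
fn logical2hw_sketch(logical: u8, nvic_prio_bits: u8) -> u8 {
    ((1 << nvic_prio_bits) - logical) << (8 - nvic_prio_bits)
}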

View file

@ -11,21 +11,22 @@ initialize late resources.
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
// late resource
static mut X: Thing = {};
struct Resources {
x: Thing,
}
#[init]
fn init() -> init::LateResources {
// ..
init::LateResources {
X: Thing::new(..),
x: Thing::new(..),
}
}
#[task(binds = UART0, resources = [X])]
#[task(binds = UART0, resources = [x])]
fn foo(c: foo::Context) {
let x: &mut Thing = c.resources.X;
let x: &mut Thing = c.resources.x;
x.frob();
@ -50,7 +51,7 @@ fn foo(c: foo::Context) {
// Public API
pub mod init {
pub struct LateResources {
pub X: Thing,
pub x: Thing,
}
// ..
@ -58,7 +59,7 @@ pub mod init {
pub mod foo {
pub struct Resources<'a> {
pub X: &'a mut Thing,
pub x: &'a mut Thing,
}
pub struct Context<'a> {
@ -70,7 +71,7 @@ pub mod foo {
/// Implementation details
const APP: () = {
// uninitialized static
static mut X: MaybeUninit<Thing> = MaybeUninit::uninit();
static mut x: MaybeUninit<Thing> = MaybeUninit::uninit();
#[no_mangle]
unsafe fn main() -> ! {
@ -81,7 +82,7 @@ const APP: () = {
let late = init(..);
// initialization of late resources
X.write(late.X);
x.as_mut_ptr().write(late.x);
cortex_m::interrupt::enable(); //~ compiler fence
@ -94,8 +95,8 @@ const APP: () = {
unsafe fn UART0() {
foo(foo::Context {
resources: foo::Resources {
// `X` has been initialized at this point
X: &mut *X.as_mut_ptr(),
// `x` has been initialized at this point
x: &mut *x.as_mut_ptr(),
},
// ..
})

View file

@ -13,24 +13,20 @@ are discouraged from directly invoking an interrupt handler.
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
static mut X: u64 = 0;
#[init]
fn init(c: init::Context) { .. }
#[interrupt(binds = UART0, resources = [X])]
#[interrupt(binds = UART0)]
fn foo(c: foo::Context) {
let x: &mut u64 = c.resources.X;
static mut X: u64 = 0;
*x = 1;
let x: &mut u64 = X;
// ..
//~ `bar` can preempt `foo` at this point
*x = 2;
if *x == 2 {
// something
}
// ..
}
#[interrupt(binds = UART1, priority = 2)]
@ -40,15 +36,15 @@ const APP: () = {
}
// this interrupt handler will invoke task handler `foo` resulting
// in mutable aliasing of the static variable `X`
// in aliasing of the static variable `X`
unsafe { UART0() }
}
};
```
The RTFM framework must generate the interrupt handler code that calls the user
defined task handlers. We are careful in making these handlers `unsafe` and / or
impossible to call from user code.
defined task handlers. We are careful in making these handlers impossible to
call from user code.
The above example expands into:

View file

@ -19,7 +19,7 @@ task.
The ready queue is a SPSC (Single Producer Single Consumer) lock-free queue. The
task dispatcher owns the consumer endpoint of the queue; the producer endpoint
is treated as a resource shared by the tasks that can `spawn` other tasks.
is treated as a resource contended by the tasks that can `spawn` other tasks.
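The queue discipline can be sketched as a plain single-core ring buffer (a teaching model only; the real implementation uses `heapless`' lock-free SPSC queue so the two endpoints can live in different execution contexts):

``` rust
const CAP: usize = 4;

// teaching model of an SPSC ready queue holding task identifiers; one slot
// is kept empty to distinguish "full" from "empty"
struct ReadyQueue {
    buffer: [Option<u8>; CAP + 1],
    head: usize, // consumer end, owned by the task dispatcher
    tail: usize, // producer end, shared by the tasks that can `spawn`
}

impl ReadyQueue {
    fn new() -> Self {
        ReadyQueue { buffer: [None; CAP + 1], head: 0, tail: 0 }
    }

    fn enqueue(&mut self, task: u8) -> Result<(), u8> {
        let next = (self.tail + 1) % (CAP + 1);
        if next == self.head {
            return Err(task); // queue is full
        }
        self.buffer[self.tail] = Some(task);
        self.tail = next;
        Ok(())
    }

    fn dequeue(&mut self) -> Option<u8> {
        if self.head == self.tail {
            return None; // queue is empty
        }
        let task = self.buffer[self.head].take();
        self.head = (self.head + 1) % (CAP + 1);
        task
    }
}

fn main() {
    let mut rq = ReadyQueue::new();
    rq.enqueue(1).unwrap(); // `spawn` side
    rq.enqueue(2).unwrap();
    assert_eq!(rq.dequeue(), Some(1)); // dispatcher side, FIFO order
    assert_eq!(rq.dequeue(), Some(2));
    assert_eq!(rq.dequeue(), None);
}
```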
## The task dispatcher
@ -244,7 +244,7 @@ const APP: () = {
baz_INPUTS[index as usize].write(message);
lock(self.priority(), RQ1_CEILING, || {
// put the task in the ready queu
// put the task in the ready queue
RQ1.split().1.enqueue_unchecked(Ready {
task: T1::baz,
index,

View file

@ -47,7 +47,7 @@ mod foo {
}
const APP: () = {
use rtfm::Instant;
type Instant = <path::to::user::monotonic::timer as rtfm::Monotonic>::Instant;
// all tasks that can be `schedule`-d
enum T {
@ -158,15 +158,14 @@ way it will run at the right priority.
handler; basically, `enqueue_unchecked` delegates the task of setting up a new
timeout interrupt to the `SysTick` handler.
## Resolution and range of `Instant` and `Duration`
## Resolution and range of `cyccnt::Instant` and `cyccnt::Duration`
In the current implementation the `DWT`'s (Data Watchpoint and Trace) cycle
counter is used as a monotonic timer. `Instant::now` returns a snapshot of this
timer; these DWT snapshots (`Instant`s) are used to sort entries in the timer
queue. The cycle counter is a 32-bit counter clocked at the core clock
frequency. This counter wraps around every `(1 << 32)` clock cycles; there's no
interrupt associated to this counter so nothing worth noting happens when it
wraps around.
RTFM provides a `Monotonic` implementation based on the `DWT`'s (Data Watchpoint
and Trace) cycle counter. `Instant::now` returns a snapshot of this timer; these
DWT snapshots (`Instant`s) are used to sort entries in the timer queue. The
cycle counter is a 32-bit counter clocked at the core clock frequency. This
counter wraps around every `(1 << 32)` clock cycles; there's no interrupt
associated to this counter so nothing worth noting happens when it wraps around.
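The wrap-around can be handled with wrapping arithmetic: interpreting the wrapping difference of two snapshots as a signed number orders them correctly as long as they are less than `2^31` cycles apart. A sketch (illustrative, not the exact RTFM code):

``` rust
// returns true if snapshot `a` was taken at or before snapshot `b`,
// assuming they are less than 2^31 cycles apart
fn instant_le(a: u32, b: u32) -> bool {
    (b.wrapping_sub(a) as i32) >= 0
}

fn main() {
    // `b` was taken after `a` even though the counter wrapped around
    let a = u32::MAX - 10;
    let b = 5u32;
    assert!(instant_le(a, b));
    assert!(!instant_le(b, a));
    assert!(instant_le(7, 7));
}
```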
To order `Instant`s in the queue we need to compare two 32-bit integers. To
account for the wrap-around behavior we use the difference between two
@ -264,11 +263,11 @@ The ceiling analysis would go like this:
## Changes in the `spawn` implementation
When the "timer-queue" feature is enabled the `spawn` implementation changes a
bit to track the baseline of tasks. As you saw in the `schedule` implementation
there's an `INSTANTS` buffer used to store the time at which a task was
scheduled to run; this `Instant` is read in the task dispatcher and passed to
the user code as part of the task context.
When the `schedule` API is used the `spawn` implementation changes a bit to
track the baseline of tasks. As you saw in the `schedule` implementation there's
an `INSTANTS` buffer used to store the time at which a task was scheduled to
run; this `Instant` is read in the task dispatcher and passed to the user code
as part of the task context.
``` rust
const APP: () = {

View file

@ -14,6 +14,6 @@ There is a translation of this book in [Russian].
**HEADS UP** This is an **alpha** pre-release; there may be breaking changes in
the API and semantics before a proper release is made.
{{#include ../../../README.md:5:46}}
{{#include ../../../README.md:5:44}}
{{#include ../../../README.md:52:}}
{{#include ../../../README.md:50:}}

ci/expected/cfg.run Normal file
View file

@ -0,0 +1,2 @@
foo has been called 1 time
foo has been called 2 times

ci/expected/preempt.run Normal file
View file

@ -0,0 +1,5 @@
UART0 - start
UART2 - start
UART2 - end
UART1
UART0 - end

View file

@ -1,3 +1 @@
20000100 B bar::FREE_QUEUE::lk14244m263eivix
200000dc B bar::INPUTS::mi89534s44r1mnj1
20000000 T bar::ns9009yhw2dc2y25
20000000 t ramfunc::bar::h9d6714fe5a3b0c89

View file

@ -1,3 +1 @@
20000100 B foo::FREE_QUEUE::ujkptet2nfdw5t20
200000dc B foo::INPUTS::thvubs85b91dg365
000002c6 T foo::sidaht420cg1mcm8
00000162 t ramfunc::foo::h30e7789b08c08e19

View file

@ -1,3 +1,5 @@
foo
foo - start
foo - middle
baz
foo - end
bar

View file

@ -99,13 +99,14 @@ main() {
local exs=(
idle
init
interrupt
hardware
preempt
binds
resource
lock
late
static
only-shared-access
task
message
@ -117,6 +118,7 @@ main() {
shared-with-init
generics
cfg
pool
ramfunc
)
@ -160,7 +162,11 @@ main() {
fi
arm_example "run" $ex "debug" "" "1"
arm_example "run" $ex "release" "" "1"
if [ $ex = types ]; then
arm_example "run" $ex "release" "" "1"
else
arm_example "build" $ex "release" "" "1"
fi
done
local built=()

View file

@ -13,18 +13,18 @@ use panic_semihosting as _;
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
#[init(spawn = [foo])]
fn init(c: init::Context) {
hprintln!("init(baseline = {:?})", c.start).unwrap();
fn init(cx: init::Context) {
hprintln!("init(baseline = {:?})", cx.start).unwrap();
// `foo` inherits the baseline of `init`: `Instant(0)`
c.spawn.foo().unwrap();
cx.spawn.foo().unwrap();
}
#[task(schedule = [foo])]
fn foo(c: foo::Context) {
fn foo(cx: foo::Context) {
static mut ONCE: bool = true;
hprintln!("foo(baseline = {:?})", c.scheduled).unwrap();
hprintln!("foo(baseline = {:?})", cx.scheduled).unwrap();
if *ONCE {
*ONCE = false;
@ -36,11 +36,11 @@ const APP: () = {
}
#[task(binds = UART0, spawn = [foo])]
fn uart0(c: uart0::Context) {
hprintln!("UART0(baseline = {:?})", c.start).unwrap();
fn uart0(cx: uart0::Context) {
hprintln!("UART0(baseline = {:?})", cx.start).unwrap();
// `foo` inherits the baseline of `UART0`: its `start` time
c.spawn.foo().unwrap();
cx.spawn.foo().unwrap();
}
extern "C" {

View file

@ -5,6 +5,7 @@
#![no_main]
#![no_std]
use cortex_m_semihosting::debug;
#[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln;
use panic_semihosting as _;
@ -17,28 +18,36 @@ const APP: () = {
count: u32,
}
#[init]
fn init(_: init::Context) {
// ..
#[init(spawn = [foo])]
fn init(cx: init::Context) {
cx.spawn.foo().unwrap();
cx.spawn.foo().unwrap();
}
#[task(priority = 3, resources = [count], spawn = [log])]
fn foo(_c: foo::Context) {
#[idle]
fn idle(_: idle::Context) -> ! {
debug::exit(debug::EXIT_SUCCESS);
loop {}
}
#[task(capacity = 2, resources = [count], spawn = [log])]
fn foo(_cx: foo::Context) {
#[cfg(debug_assertions)]
{
*_c.resources.count += 1;
*_cx.resources.count += 1;
_c.spawn.log(*_c.resources.count).ok();
_cx.spawn.log(*_cx.resources.count).unwrap();
}
// this wouldn't compile in `release` mode
// *resources.count += 1;
// *_cx.resources.count += 1;
// ..
}
#[cfg(debug_assertions)]
#[task]
#[task(capacity = 2)]
fn log(_: log::Context, n: u32) {
hprintln!(
"foo has been called {} time{}",

View file

@ -29,6 +29,7 @@ const APP: () = {
hprintln!("UART0(STATE = {})", *STATE).unwrap();
// second argument has type `resources::shared`
advance(STATE, c.resources.shared);
rtfm::pend(Interrupt::UART1);
@ -45,14 +46,16 @@ const APP: () = {
// just to show that `shared` can be accessed directly
*c.resources.shared += 0;
// second argument has type `Exclusive<u32>`
advance(STATE, Exclusive(c.resources.shared));
}
};
// the second parameter is generic: it can be any type that implements the `Mutex` trait
fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
*state += 1;
let (old, new) = shared.lock(|shared| {
let (old, new) = shared.lock(|shared: &mut u32| {
let old = *shared;
*shared += *state;
(old, *shared)

View file

@ -1,4 +1,4 @@
//! examples/interrupt.rs
//! examples/hardware.rs
#![deny(unsafe_code)]
#![deny(warnings)]
@ -15,7 +15,7 @@ const APP: () = {
fn init(_: init::Context) {
// Pends the UART0 interrupt but its handler won't run until *after*
// `init` returns because interrupts are disabled
rtfm::pend(Interrupt::UART0);
rtfm::pend(Interrupt::UART0); // equivalent to NVIC::pend
hprintln!("init").unwrap();
}

View file

@ -11,14 +11,14 @@ use panic_semihosting as _;
#[rtfm::app(device = lm3s6965, peripherals = true)]
const APP: () = {
#[init]
fn init(c: init::Context) {
fn init(cx: init::Context) {
static mut X: u32 = 0;
// Cortex-M peripherals
let _core: cortex_m::Peripherals = c.core;
let _core: cortex_m::Peripherals = cx.core;
// Device specific peripherals
let _device: lm3s6965::Peripherals = c.device;
let _device: lm3s6965::Peripherals = cx.device;
// Safe access to local `static mut` variable
let _x: &'static mut u32 = X;

View file

@ -8,6 +8,7 @@
use cortex_m_semihosting::{debug, hprintln};
use heapless::{
consts::*,
i,
spsc::{Consumer, Producer, Queue},
};
use lm3s6965::Interrupt;
@@ -23,12 +24,9 @@ const APP: () = {
#[init]
fn init(_: init::Context) -> init::LateResources {
// NOTE: we use `Option` here to work around the lack of
// a stable `const` constructor
static mut Q: Option<Queue<u32, U4>> = None;
static mut Q: Queue<u32, U4> = Queue(i::Queue::new());
*Q = Some(Queue::new());
let (p, c) = Q.as_mut().unwrap().split();
let (p, c) = Q.split();
// Initialization of late resources
init::LateResources { p, c }

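This hunk drops the `Option` workaround because `heapless` gained a `const` constructor (`i::Queue::new()`), so the `static mut` queue can be split into its two endpoints directly in `init`. A toy, std-based picture of that producer/consumer split (illustrative stand-ins only, not the `heapless::spsc` types):

```rust
use std::cell::RefCell;
use std::collections::VecDeque;
use std::rc::Rc;

// Illustrative stand-ins for `heapless::spsc::{Producer, Consumer}`.
struct Producer<T>(Rc<RefCell<VecDeque<T>>>);
struct Consumer<T>(Rc<RefCell<VecDeque<T>>>);

// Like `Queue::split`: one endpoint per side of the channel.
fn split<T>() -> (Producer<T>, Consumer<T>) {
    let q = Rc::new(RefCell::new(VecDeque::new()));
    (Producer(Rc::clone(&q)), Consumer(q))
}

impl<T> Producer<T> {
    fn enqueue(&mut self, v: T) {
        self.0.borrow_mut().push_back(v);
    }
}

impl<T> Consumer<T> {
    fn dequeue(&mut self) -> Option<T> {
        self.0.borrow_mut().pop_front()
    }
}

fn main() {
    // In the example, `p` goes to the producing task and `c` to the consuming one
    // via `init::LateResources { p, c }`.
    let (mut p, mut c) = split();
    p.enqueue(42u32);
    println!("{:?}", c.dequeue()); // Some(42)
    println!("{:?}", c.dequeue()); // None
}
```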

@@ -26,12 +26,12 @@ const APP: () = {
debug::exit(debug::EXIT_SUCCESS);
}
#[task(resources = [shared])]
#[task(resources = [&shared])]
fn foo(c: foo::Context) {
let _: &NotSync = c.resources.shared;
}
#[task(resources = [shared])]
#[task(resources = [&shared])]
fn bar(c: bar::Context) {
let _: &NotSync = c.resources.shared;
}


@@ -24,14 +24,15 @@ const APP: () = {
}
#[task(binds = UART0, resources = [&key])]
fn uart0(c: uart0::Context) {
hprintln!("UART0(key = {:#x})", c.resources.key).unwrap();
fn uart0(cx: uart0::Context) {
let key: &u32 = cx.resources.key;
hprintln!("UART0(key = {:#x})", key).unwrap();
debug::exit(debug::EXIT_SUCCESS);
}
#[task(binds = UART1, priority = 2, resources = [&key])]
fn uart1(c: uart1::Context) {
hprintln!("UART1(key = {:#x})", c.resources.key).unwrap();
fn uart1(cx: uart1::Context) {
hprintln!("UART1(key = {:#x})", cx.resources.key).unwrap();
}
};


@@ -15,16 +15,16 @@ const PERIOD: u32 = 8_000_000;
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
#[init(schedule = [foo])]
fn init(c: init::Context) {
c.schedule.foo(Instant::now() + PERIOD.cycles()).unwrap();
fn init(cx: init::Context) {
cx.schedule.foo(Instant::now() + PERIOD.cycles()).unwrap();
}
#[task(schedule = [foo])]
fn foo(c: foo::Context) {
fn foo(cx: foo::Context) {
let now = Instant::now();
hprintln!("foo(scheduled = {:?}, now = {:?})", c.scheduled, now).unwrap();
hprintln!("foo(scheduled = {:?}, now = {:?})", cx.scheduled, now).unwrap();
c.schedule.foo(c.scheduled + PERIOD.cycles()).unwrap();
cx.schedule.foo(cx.scheduled + PERIOD.cycles()).unwrap();
}
extern "C" {

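Rescheduling from `cx.scheduled` instead of `Instant::now()` is what keeps this task truly periodic: each deadline is derived from the previous deadline, so start-up jitter does not accumulate. A host-side arithmetic sketch of the difference (the jitter value is made up for illustration):

```rust
const PERIOD: u64 = 8_000_000;

fn main() {
    // `scheduled`: next deadline keyed to the previous deadline (the example's approach).
    // `drifting`:  next deadline keyed to `now`, so every bit of lateness adds up.
    let (mut scheduled, mut drifting) = (0u64, 0u64);
    for _ in 0..3 {
        let jitter = 1_234; // the task starts a little late each period
        println!("scheduled = {:>8}  drifting = {:>8}", scheduled, drifting);
        scheduled += PERIOD;
        drifting += jitter + PERIOD;
    }
}
```

After three periods the drifting variant is already 3 × 1 234 cycles behind the grid, while the `scheduled`-based one is still exactly on it.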
examples/preempt.rs (new file, 37 lines)

@@ -0,0 +1,37 @@
//! examples/preempt.rs
#![no_main]
#![no_std]
use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use panic_semihosting as _;
use rtfm::app;
#[app(device = lm3s6965)]
const APP: () = {
#[init]
fn init(_: init::Context) {
rtfm::pend(Interrupt::UART0);
}
#[task(binds = UART0, priority = 1)]
fn uart0(_: uart0::Context) {
hprintln!("UART0 - start").unwrap();
rtfm::pend(Interrupt::UART2);
hprintln!("UART0 - end").unwrap();
debug::exit(debug::EXIT_SUCCESS);
}
#[task(binds = UART1, priority = 2)]
fn uart1(_: uart1::Context) {
hprintln!(" UART1").unwrap();
}
#[task(binds = UART2, priority = 2)]
fn uart2(_: uart2::Context) {
hprintln!(" UART2 - start").unwrap();
rtfm::pend(Interrupt::UART1);
hprintln!(" UART2 - end").unwrap();
}
};
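The new `examples/preempt.rs` demonstrates the scheduling rules: a strictly higher-priority handler preempts immediately, while an equal-priority one waits for the running handler to return. A toy host-side model that reproduces the ordering the example prints (assumption: a recursive stand-in for the NVIC, not how RTFM is actually implemented):

```rust
// Toy model of priority-based preemption.
struct Sched {
    running: u8,                        // priority of the active handler (0 = thread mode)
    pending: Vec<(u8, fn(&mut Sched))>, // pended but not yet run
    log: Vec<&'static str>,
}

impl Sched {
    fn pend(&mut self, prio: u8, handler: fn(&mut Sched)) {
        if prio > self.running {
            let prev = self.running;
            self.running = prio;
            handler(self); // strictly higher priority: preempt, run immediately
            self.running = prev;
            self.drain(); // priority dropped: serve whatever pended meanwhile
        } else {
            self.pending.push((prio, handler)); // same/lower priority: deferred
        }
    }

    fn drain(&mut self) {
        let running = self.running;
        while let Some(i) = self.pending.iter().position(|&(p, _)| p > running) {
            let (p, h) = self.pending.remove(i);
            self.pend(p, h);
        }
    }
}

fn uart0(s: &mut Sched) {
    s.log.push("UART0 - start");
    s.pend(2, uart2); // higher priority: preempts uart0 right here
    s.log.push("UART0 - end");
}

fn uart1(s: &mut Sched) {
    s.log.push(" UART1");
}

fn uart2(s: &mut Sched) {
    s.log.push(" UART2 - start");
    s.pend(2, uart1); // same priority: runs only after uart2 returns
    s.log.push(" UART2 - end");
}

fn main() {
    let mut s = Sched { running: 0, pending: Vec::new(), log: Vec::new() };
    s.pend(1, uart0);
    for line in &s.log {
        println!("{}", line);
    }
}
```

Note that `UART1` runs after `UART2 - end` but before `UART0 - end`: once `uart2` returns, the still-pending `uart1` (priority 2) outranks the resumed `uart0` (priority 1).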


@@ -23,29 +23,31 @@ const APP: () = {
rtfm::pend(Interrupt::UART1);
}
// `shared` cannot be accessed from this context
#[idle]
fn idle(_: idle::Context) -> ! {
fn idle(_cx: idle::Context) -> ! {
debug::exit(debug::EXIT_SUCCESS);
// error: `shared` can't be accessed from this context
// shared += 1;
// error: no `resources` field in `idle::Context`
// _cx.resources.shared += 1;
loop {}
}
// `shared` can be access from this context
// `shared` can be accessed from this context
#[task(binds = UART0, resources = [shared])]
fn uart0(c: uart0::Context) {
*c.resources.shared += 1;
fn uart0(cx: uart0::Context) {
let shared: &mut u32 = cx.resources.shared;
*shared += 1;
hprintln!("UART0: shared = {}", c.resources.shared).unwrap();
hprintln!("UART0: shared = {}", shared).unwrap();
}
// `shared` can be access from this context
// `shared` can be accessed from this context
#[task(binds = UART1, resources = [shared])]
fn uart1(c: uart1::Context) {
*c.resources.shared += 1;
fn uart1(cx: uart1::Context) {
*cx.resources.shared += 1;
hprintln!("UART1: shared = {}", c.resources.shared).unwrap();
hprintln!("UART1: shared = {}", cx.resources.shared).unwrap();
}
};


@@ -13,16 +13,16 @@ use rtfm::cyccnt::{Instant, U32Ext as _};
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
#[init(schedule = [foo, bar])]
fn init(c: init::Context) {
fn init(cx: init::Context) {
let now = Instant::now();
hprintln!("init @ {:?}", now).unwrap();
// Schedule `foo` to run 8e6 cycles (clock cycles) in the future
c.schedule.foo(now + 8_000_000.cycles()).unwrap();
cx.schedule.foo(now + 8_000_000.cycles()).unwrap();
// Schedule `bar` to run 4e6 cycles in the future
c.schedule.bar(now + 4_000_000.cycles()).unwrap();
cx.schedule.bar(now + 4_000_000.cycles()).unwrap();
}
#[task]

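In this example `bar` is scheduled 4e6 cycles out and `foo` 8e6, so `bar` fires first even though it was scheduled second. The timer queue serves entries by their scheduled `Instant`, which a std binary heap can illustrate (host-side sketch only, not the RTFM implementation):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Min-heap keyed on the scheduled instant (here: a plain cycle count).
    let mut timer_queue = BinaryHeap::new();
    let now = 0u64;
    timer_queue.push(Reverse((now + 8_000_000, "foo"))); // scheduled first...
    timer_queue.push(Reverse((now + 4_000_000, "bar")));
    while let Some(Reverse((at, task))) = timer_queue.pop() {
        println!("{} fires @ {} cycles", task, at); // ...but `bar` pops first
    }
}
```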

@@ -1,7 +1,5 @@
//! examples/smallest.rs
#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]


@@ -17,16 +17,20 @@ const APP: () = {
#[task(spawn = [bar, baz])]
fn foo(c: foo::Context) {
hprintln!("foo").unwrap();
hprintln!("foo - start").unwrap();
// spawns `bar` onto the task scheduler
// `foo` and `bar` have the same priority so `bar` will not run until
// after `foo` terminates
c.spawn.bar().unwrap();
hprintln!("foo - middle").unwrap();
// spawns `baz` onto the task scheduler
// `baz` has higher priority than `foo` so it immediately preempts `foo`
c.spawn.baz().unwrap();
hprintln!("foo - end").unwrap();
}
#[task]


@@ -7,7 +7,7 @@
use cortex_m_semihosting::debug;
use panic_semihosting as _;
use rtfm::cyccnt::Instant;
use rtfm::cyccnt;
#[rtfm::app(device = lm3s6965, peripherals = true, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
@@ -17,38 +17,39 @@ const APP: () = {
}
#[init(schedule = [foo], spawn = [foo])]
fn init(c: init::Context) {
let _: Instant = c.start;
let _: rtfm::Peripherals = c.core;
let _: lm3s6965::Peripherals = c.device;
let _: init::Schedule = c.schedule;
let _: init::Spawn = c.spawn;
fn init(cx: init::Context) {
let _: cyccnt::Instant = cx.start;
let _: rtfm::Peripherals = cx.core;
let _: lm3s6965::Peripherals = cx.device;
let _: init::Schedule = cx.schedule;
let _: init::Spawn = cx.spawn;
debug::exit(debug::EXIT_SUCCESS);
}
#[task(binds = SVCall, schedule = [foo], spawn = [foo])]
fn svcall(c: svcall::Context) {
let _: Instant = c.start;
let _: svcall::Schedule = c.schedule;
let _: svcall::Spawn = c.spawn;
#[idle(schedule = [foo], spawn = [foo])]
fn idle(cx: idle::Context) -> ! {
let _: idle::Schedule = cx.schedule;
let _: idle::Spawn = cx.spawn;
loop {}
}
#[task(binds = UART0, resources = [shared], schedule = [foo], spawn = [foo])]
fn uart0(c: uart0::Context) {
let _: Instant = c.start;
let _: resources::shared = c.resources.shared;
let _: uart0::Schedule = c.schedule;
let _: uart0::Spawn = c.spawn;
fn uart0(cx: uart0::Context) {
let _: cyccnt::Instant = cx.start;
let _: resources::shared = cx.resources.shared;
let _: uart0::Schedule = cx.schedule;
let _: uart0::Spawn = cx.spawn;
}
#[task(priority = 2, resources = [shared], schedule = [foo], spawn = [foo])]
fn foo(c: foo::Context) {
let _: Instant = c.scheduled;
let _: &mut u32 = c.resources.shared;
let _: foo::Resources = c.resources;
let _: foo::Schedule = c.schedule;
let _: foo::Spawn = c.spawn;
fn foo(cx: foo::Context) {
let _: cyccnt::Instant = cx.scheduled;
let _: &mut u32 = cx.resources.shared;
let _: foo::Resources = cx.resources;
let _: foo::Schedule = cx.schedule;
let _: foo::Spawn = cx.spawn;
}
extern "C" {