# Tips & tricks

## Generics
Resources may appear in contexts as resource proxies or as unique references
(`&mut-`) depending on the priority of the task. Because the same resource may
appear as *different* types in different contexts one cannot refactor a common
operation that uses resources into a plain function; however, such a refactor
is possible using *generics*.
All resource proxies implement the `rtfm::Mutex` trait. On the other hand,
unique references (`&mut-`) do *not* implement this trait (due to limitations
in the trait system) but one can wrap these references in the `rtfm::Exclusive`
newtype, which does implement the `Mutex` trait. With the help of this newtype
one can write a generic function that operates on generic resources and call it
from different tasks to perform some operation on the same set of resources.
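As a minimal sketch of the pattern (the `advance` function, the resource name
`shared` and the `u32` payload are made up for illustration; the full example
below shows the same idea in context):

``` rust
use rtfm::{Exclusive, Mutex};

// Works with any resource handle that implements `Mutex` over a `u32`:
// resource proxies do so directly; a unique reference (`&mut u32`) can be
// wrapped in `Exclusive` first.
fn advance(shared: &mut impl Mutex<T = u32>) {
    shared.lock(|shared| *shared += 1);
}

// From a lower priority task, which receives a resource proxy:
//     advance(&mut cx.resources.shared);
//
// From the highest priority task, which receives `&mut u32` directly:
//     advance(&mut Exclusive(cx.resources.shared));
```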
Here's one such example:
``` rust
{{#include ../../../../examples/generics.rs}}
```

``` console
$ cargo run --example generics
{{#include ../../../../ci/expected/generics.run}}```
Using generics also lets you change the static priorities of tasks during
development without having to rewrite a bunch of code every time.
## Conditional compilation
You can use conditional compilation (`#[cfg]`) on resources (the fields of
`struct Resources`) and tasks (the `fn` items). The effect of using `#[cfg]`
attributes is that the resource / task will *not* be available through the
corresponding `Context` `struct` if the condition doesn't hold.
The example below logs a message whenever the `foo` task is spawned, but only if
the program has been compiled using the `dev` profile.
``` rust
{{#include ../../../../examples/cfg.rs}}
```

``` console
$ cargo run --example cfg --release

$ cargo run --example cfg
{{#include ../../../../ci/expected/cfg.run}}```
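As a trimmed-down sketch of the idea (not a complete application; the resource
and task names are made up, and `debug_assertions` is used as the condition
because the `dev` profile enables it):

``` rust
// Inside the `#[rtfm::app]` item:

struct Resources {
    // This resource only exists when debug assertions are enabled ..
    #[cfg(debug_assertions)]
    #[init(0)]
    count: u32,
}

// .. and so does this task. Neither shows up in any `Context` struct
// when the condition doesn't hold, so release builds pay no cost.
#[cfg(debug_assertions)]
#[task(resources = [count])]
fn log_spawn(cx: log_spawn::Context) {
    *cx.resources.count += 1;
}
```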
## Running tasks from RAM
The main goal of moving the specification of RTFM applications to attributes in
RTFM v0.4.0 was to allow inter-operation with other attributes. For example, the
`link_section` attribute can be applied to tasks to place them in RAM; this can
improve performance in some cases.
> **IMPORTANT**: In general, the `link_section`, `export_name` and `no_mangle`
> attributes are very powerful but also easy to misuse. Incorrectly using any of
> these attributes can cause undefined behavior; you should always prefer to use
> safe, higher level attributes around them like `cortex-m-rt`'s `interrupt` and
> `exception` attributes.
>
> In the particular case of RAM functions there's no
> safe abstraction for it in `cortex-m-rt` v0.6.5 but there's an [RFC] for
> adding a `ramfunc` attribute in a future release.
[RFC]: https://github.com/rust-embedded/cortex-m-rt/pull/100
The example below shows how to place the higher priority task, `bar`, in RAM.
``` rust
{{#include ../../../../examples/ramfunc.rs}}
```

Running this program produces the expected output.
``` console
$ cargo run --example ramfunc
{{#include ../../../../ci/expected/ramfunc.run}}```
One can look at the output of `cargo-nm` to confirm that `bar` ended up in RAM
(`0x2000_0000`), whereas `foo` ended up in Flash (`0x0000_0000`).
``` console
$ cargo nm --example ramfunc --release | grep ' foo::'
{{#include ../../../../ci/expected/ramfunc.grep.foo}}
$ cargo nm --example ramfunc --release | grep ' bar::'
{{#include ../../../../ci/expected/ramfunc.grep.bar}}
```
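The relevant part boils down to stacking `link_section` on top of the task
attribute (a sketch; the section name `.data.bar` is illustrative and relies on
cortex-m-rt's default linker script placing `.data.*` input sections in RAM):

``` rust
// Runs from RAM: the `.data.*` section is copied to RAM at startup
// by the cortex-m-rt runtime.
#[inline(never)]
#[link_section = ".data.bar"]
#[task(priority = 2)]
fn bar(_: bar::Context) {
    // .. time-critical work ..
}

// Runs from Flash (the default placement).
#[inline(never)]
#[task]
fn foo(_: foo::Context) {
    // .. less critical work ..
}
```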
## Indirection for faster message passing
Message passing always involves copying the payload from the sender into a
static variable and then from the static variable into the receiver. Thus
sending a large buffer, like a `[u8; 128]`, as a message involves two expensive
`memcpy`s. To minimize the message passing overhead one can use indirection:
instead of sending the buffer by value, one can send an owning pointer into the
buffer.
One can use a global allocator to achieve indirection (`alloc::Box`,
`alloc::Rc`, etc.), which requires using the nightly channel as of Rust
v1.37.0, or one can use a statically allocated memory pool like
`heapless::Pool`.

Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes.
``` rust
{{#include ../../../../examples/pool.rs}}
```

``` console
$ cargo run --example pool
{{#include ../../../../ci/expected/pool.run}}```
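As a rough sketch of the moving parts, assuming the `heapless` v0.5 singleton
pool API (the pool name `P`, the buffer sizes and the helper functions are made
up; refer to the included example and the `heapless` documentation for the
exact calls):

``` rust
use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};

// A pool `P` whose blocks are 128-byte buffers.
pool!(P: [u8; 128]);

// Called once at startup (e.g. from `init`) to give the pool
// some static memory to manage.
fn seed_pool(memory: &'static mut [u8]) {
    P::grow(memory);
}

// Sender side: claim a block, initialize it and hand back an owning,
// pointer-sized `Box<P>` that is cheap to pass as a message payload.
fn produce() -> Box<P> {
    P::alloc().expect("pool exhausted").init([0; 128])
}

// Receiver side: use the buffer; dropping the box returns the
// memory block to the pool.
fn consume(buffer: Box<P>) {
    let _first_byte = buffer[0];
    drop(buffer);
}
```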
## Inspecting the expanded code
`#[rtfm::app]` is a procedural macro that produces support code. If for some
reason you need to inspect the code generated by this macro you have two
options:
You can inspect the file `rtfm-expansion.rs` inside the `target` directory. This
file contains the expansion of the `#[rtfm::app]` item (not your whole program!)
of the *last built* (via `cargo build` or `cargo check`) RTFM application. The
expanded code is not pretty printed by default so you'll want to run `rustfmt`
over it before you read it.
``` console
$ cargo build --example foo
$ rustfmt target/rtfm-expansion.rs
$ tail target/rtfm-expansion.rs
#[doc = r" Implementation details"]
const APP: () = {
#[doc = r" Always include the device crate which contains the vector table"]
use lm3s6965 as _;
#[no_mangle]
unsafe extern "C" fn main() -> ! {
rtfm::export::interrupt::disable();
let mut core: rtfm::export::Peripherals = core::mem::transmute(());
core.SCB.scr.modify(|r| r | 1 << 1);
rtfm::export::interrupt::enable();
loop {
rtfm::export::wfi()
}
}
};
Or, you can use the `cargo-expand` subcommand. This subcommand will expand
*all* the macros, including the `#[rtfm::app]` attribute, and modules in your
crate and print the output to the console.
``` console
$ # produces the same output as before
$ cargo expand --example smallest | tail
```
## Resource destructuring
When a task takes multiple resources, splitting up the resource struct can
improve readability. Here are two examples of how this can be done:
``` rust
{{#include ../../../../examples/destructure.rs}}
```
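A hedged sketch of the two styles, assuming resources named `a`, `b` and `c`
and a per-task `Resources` struct reachable as `foo::Resources`, as in the
included example:

``` rust
// Style 1: destructure the resources struct up front ..
#[task(resources = [a, b, c])]
fn foo(cx: foo::Context) {
    let foo::Resources { a, b, c } = cx.resources;
    // .. and use `a`, `b` and `c` directly from here on.
}

// Style 2: bind the struct once and access fields as needed.
#[task(resources = [a, b, c])]
fn bar(cx: bar::Context) {
    let resources = cx.resources;
    // use `resources.a`, `resources.b` and `resources.c` here.
}
```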