The current implementation always reads and writes BASEPRI on entry to and exit from an interrupt (this is done by `cortex-m-rtfm/src/export::run`, a trampoline that executes the actual task).
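For reference, a simplified illustration of the behaviour described above (a sketch of the idea only, not the actual body of `export::run`):

```rust
use cortex_m::register::basepri;

// Simplified sketch: BASEPRI is saved and restored around every task,
// whether or not the task ever locks a resource.
#[inline(always)]
pub fn run<F>(f: F)
where
    F: FnOnce(),
{
    let initial = basepri::read(); // unconditional read on entry
    f();
    unsafe { basepri::write(initial) } // unconditional write on exit
}
```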
Using the approach proposed here, we read BASEPRI if and only if we actually change it.
When restoring BASEPRI (in `lock`) we chose to restore the original BASEPRI value if we are at the outermost nesting level (the initial priority of the task). In this way we avoid unnecessary BASEPRI accesses and reduce register pressure.
We extend `cortex-m-rtfm/src/export::Priority` with additional fields storing `init_logic` (the initial priority of the task) and `old_basepri_hw`. The latter field is `None` on creation.
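A minimal sketch of the extended type, assuming `Cell`-based interior mutability as in the existing `Priority`; the field layout and the accessor names other than `get_old_basepri_hw` are illustrative assumptions:

```rust
use core::cell::Cell;

// Sketch of the extended `Priority` (lives in `rtfm/src/export.rs`).
pub struct Priority {
    // Current "logic" priority, used to optimize out redundant locks.
    inner: Cell<u8>,
    // Initial (static) priority of the task.
    init_logic: u8,
    // Hardware BASEPRI captured on the first lock; `None` until then.
    old_basepri_hw: Cell<Option<u8>>,
}

impl Priority {
    // The only public function; everything else stays crate private.
    #[inline(always)]
    pub unsafe fn new(value: u8) -> Self {
        Priority {
            inner: Cell::new(value),
            init_logic: value,
            old_basepri_hw: Cell::new(None),
        }
    }

    #[inline(always)]
    fn set(&self, value: u8) {
        self.inner.set(value)
    }

    #[inline(always)]
    fn get(&self) -> u8 {
        self.inner.get()
    }

    #[inline(always)]
    fn get_init_logic(&self) -> u8 {
        self.init_logic
    }

    // Monotonic update: once `Some`, the API never resets it to `None`.
    #[inline(always)]
    fn set_old_basepri_hw(&self, value: u8) {
        self.old_basepri_hw.set(Some(value));
    }

    #[inline(always)]
    fn get_old_basepri_hw(&self) -> Option<u8> {
        self.old_basepri_hw.get()
    }
}
```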
Locking at the highest priority is achieved through an `interrupt_free` and does not touch `BASEPRI` at all. Thus it manipulates only the "logic" priority (which is used to optimize out locks).
For the normal case, on entering a lock we check whether BASEPRI has already been read; if not, we read it and update `old_basepri_hw`. On exit we check whether to restore a logical priority (when inside a nested lock) or to restore BASEPRI from `old_basepri_hw` (when back at the task's initial priority).
We can safely `unwrap` the value returned by `get_old_basepri_hw: Option<u8>`, since every path leading to the `unwrap` has either just set the field to `Some` or found it already `Some`. Updating `old_basepri_hw` is monotonic: the API offers no way of resetting it to `None` (besides `new`).
Moreover, `new` is the only public function of `Priority`, so we expose nothing dangerous to the user. (Externally changing `old_basepri_hw` could lead to memory unsafety: an incorrect BASEPRI value may allow a task to start that should have been blocked, and once started, it is directly granted access to resources with the same or lower ceiling under SRP.)
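Building on the `Priority` sketch above, a hedged sketch of how the modified `lock` could look; the signature follows the shape of the existing `export::lock`, while `logical2hw` and the exact branch structure are assumptions:

```rust
use cortex_m::{interrupt, register::basepri};

// Map a logical priority to the hardware BASEPRI encoding.
#[inline(always)]
fn logical2hw(logical: u8, nvic_prio_bits: u8) -> u8 {
    ((1 << nvic_prio_bits) - logical) << (8 - nvic_prio_bits)
}

#[inline(always)]
pub unsafe fn lock<T, R>(
    ptr: *mut T,
    priority: &Priority,
    ceiling: u8,
    nvic_prio_bits: u8,
    f: impl FnOnce(&mut T) -> R,
) -> R {
    let current = priority.get();

    if current < ceiling {
        if ceiling == (1 << nvic_prio_bits) {
            // Highest ceiling: a plain interrupt-free critical section;
            // BASEPRI is untouched, only the logic priority changes.
            priority.set(u8::max_value());
            let r = interrupt::free(|_| f(&mut *ptr));
            priority.set(current);
            r
        } else {
            // On enter: read BASEPRI only once, on the first lock.
            if priority.get_old_basepri_hw().is_none() {
                priority.set_old_basepri_hw(basepri::read());
            }
            priority.set(ceiling);
            basepri::write(logical2hw(ceiling, nvic_prio_bits));

            let r = f(&mut *ptr);

            if current == priority.get_init_logic() {
                // Outermost level: restore the saved hardware BASEPRI.
                // The `unwrap` is safe: this branch is only reached after
                // the `is_none()` check above has set the field to `Some`,
                // and the API never resets it to `None`.
                basepri::write(priority.get_old_basepri_hw().unwrap());
            } else {
                // Nested lock: restore the enclosing logical priority.
                basepri::write(logical2hw(current, nvic_prio_bits));
            }
            priority.set(current);
            r
        }
    } else {
        // Ceiling not above the current priority: no lock needed.
        f(&mut *ptr)
    }
}
```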
The implementation mainly concerns two files: `rtfm/src/export.rs` (discussed above) and `macros/src/codegen/hardware_tasks.rs`. For the latter, the task dispatcher is updated as follows:
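A rough sketch of what the generated handler could look like after the change, for a hypothetical task `gpiob` bound to the `GPIOB` interrupt; the names, the priority value, and the exact `Context` construction are illustrative:

```rust
#[allow(non_snake_case)]
#[no_mangle]
unsafe fn GPIOB() {
    // Static priority of the task, as declared in the app.
    const PRIORITY: u8 = 2;

    // `Priority` lives on the handler's stack; `old_basepri_hw` starts as
    // `None`, so BASEPRI is only read if a `lock` actually needs it.
    let priority = rtfm::export::Priority::new(PRIORITY);

    // The task is called directly with its `Context`; no `run` trampoline.
    crate::gpiob(gpiob::Context::new(&priority));
}
```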
Basically, we create a `Priority` (on the stack) and use it to create a `Context`. The beauty is that LLVM completely optimizes out the data structure (and the related code) while still taking its implications for control flow into account. Thus both the locks and the initial read of BASEPRI are optimized at compile time, at zero cost.
Overall, with this approach we no longer need the trampoline (`run`). We reduce the overhead by at least two machine instructions (the additional read/write of BASEPRI) for each interrupt. It also reduces register pressure (as less information needs to be stored).
For GPIOB, there is a single read of BASEPRI (stored in `old_basepri_hw`) and just two writes: one when entering the critical section and one when exiting. On exit we detect that we are indeed back at the task's initial priority, so we restore `old_basepri_hw` instead of a logic priority.
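For concreteness, a task along these lines would produce the access pattern described above. This is a hypothetical example: it assumes the surrounding `#[rtfm::app]` module, a shared resource named `shared` that is also claimed by a higher-priority task, and `rtfm::Mutex` in scope.

```rust
#[task(binds = GPIOB, resources = [shared])]
fn gpiob(cx: gpiob::Context) {
    // This single, outermost lock gives: one read of BASEPRI (saved into
    // `old_basepri_hw`), one write raising BASEPRI to the ceiling, and one
    // write restoring `old_basepri_hw` when the closure returns.
    cx.resources.shared.lock(|shared| {
        *shared += 1;
    });
}
```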