Rust on an STM32 microcontroller

Marco Amann
Digital Frontiers — Das Blog
8 min read · May 31, 2022

Rust is a great language, but is it a good fit for embedded development? This post guides you through the setup of the embedded toolchain, and we will have a look at one surprising aspect of Rust in the embedded realm to start you off on your own projects.

The Hardware

In this post, we use an STM32F411RE as the reference hardware. Luckily, the Rust community supports a whole lot of processors. To check whether your particular model is supported, you need to make sure the following conditions are met:

  • The processor architecture has to be supported by the Rust compiler. You can check whether this is the case by searching for your architecture here.
  • Although support for the target architecture is technically enough to run your code, you probably want a few abstractions on top of manipulating raw registers yourself. A good starting point is a crate that was generated from the vendor-supplied SVD files, for example one like this.
  • To simplify using the processor further, an implementation of the embedded-hal traits greatly reduces overhead. For the processor used here, this would be this crate.
  • If you want to go a step further, you can search for (or write your own) board support crate that abstracts away things like buttons and LEDs for the specific board. An example that only needs a bit of adjustment would be this crate for the STM32F411RE-NUCLEO pictured above.

To make sure that the hardware is OK and you do not waste time hunting for errors in your Rust setup, quickly check that the vendor-supplied program runs, and perhaps even flash the chip once with a vendor-supplied firmware and toolchain, just to make sure the board is recognized correctly by your OS.

The Rust Toolchain

If you are on Linux or OSX, the setup is trivial and outlined below. If you are on Windows, good luck, there be dragons. You need to install the ST drivers and some parts of the VS BuildTools, but don’t trust the tutorials (including the ones I posted online); they are all partially wrong.

On Linux and OSX, you can get away with simply having a working Rust installation via rustup. If you want to use gdb as your debugger, consider installing gdb-multiarch, or on Mac armmbed/formulae/arm-none-eabi-gcc.

For all OSes, you need to have the correct target installed and need a few tools that can be installed with cargo:

rustup update
rustup component add llvm-tools-preview
rustup target add thumbv7em-none-eabihf
cargo install cargo-binutils cargo-embed cargo-flash cargo-expand

Make sure to actually have the newest version of Rust by running rustup update in advance; otherwise you can watch your computer compile the other tools twice.

Thanks to probe.rs (installed with cargo-embed as above), we have pretty much everything we need to start writing code.

Project Setup

In the sample repo, there is a sample project you can cd into and run to see whether your setup works. To do so, simply run cargo embed and watch it do its magic. You should see the LED start to blink, and Hello, world! printed on your terminal from the controller, followed by a bunch of increasing numbers.
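cargo embed reads its configuration from an Embed.toml next to the Cargo.toml. A minimal sketch, assuming the STM32F411RE target and RTT output (the exact values in the sample repo may differ):

[default.general]
chip = "STM32F411RETx"

[default.rtt]
enabled = true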

Let’s quickly go through the components used in the project so that you can create your own projects without copying a template. We will have a look at the actual code in main.rs at the end, since this is the most interesting part.

The file memory.x contains the desired memory layout for our chip. You have to look this up once and can then forget about it until you wish to change the hardware platform.
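For the STM32F411RE (512 KiB flash, 128 KiB RAM), the file looks roughly like this; double-check the values against the datasheet for your exact chip:

MEMORY
{
  /* Flash and RAM origins/sizes for the STM32F411RE */
  FLASH : ORIGIN = 0x08000000, LENGTH = 512K
  RAM   : ORIGIN = 0x20000000, LENGTH = 128K
}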

The Cargo.toml contains a few dependencies that are needed for the code to run. Besides the platform-specific ones, there is the rtt-target crate, allowing us to easily communicate between host and controller via RTT. This is responsible for the hello world print from earlier. Then there is the panic-halt crate, which defines the panic handler for our code. In case we panic, it simply halts the processor. Not great, not terrible. Other ready-made panic handlers are available but require a bit of setup.
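A dependency section along these lines is typical for such a project; the versions are illustrative and the sample repo may pin different ones:

[dependencies]
cortex-m = "0.7"
cortex-m-rt = "0.7"
stm32f4xx-hal = { version = "0.13", features = ["stm32f411"] }
rtt-target = { version = "0.3", features = ["cortex-m"] }
panic-halt = "0.2"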

The platform-specific crates contain all the magic we are going to use, from the implementation of the HAL to a linker script. Since this topic is quite involved, we will skip over them for now.
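One more piece such projects usually carry (and the sample repo presumably contains as well) is a .cargo/config.toml that selects the target triple and pulls in the cortex-m-rt linker script:

[target.thumbv7em-none-eabihf]
rustflags = ["-C", "link-arg=-Tlink.x"]

[build]
target = "thumbv7em-none-eabihf"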

The Code

We will use the sample code from here, but ignore all the boilerplate for now.

The annotations at the top of the file tell Rust that we do not have a main function (in fact we do have one, but we will get to that later) and that we do not want to use the standard library. This mode is hence called no_std and offers limited functionality. The reason for refusing to use std can be easily demonstrated if you think about threads: your operating system is responsible for creating, isolating and running threads. On the controller, we do not have an OS, so to make std work, we would have to provide an implementation for all such functionality ourselves.
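In code, these annotations are just two crate-level attributes, plus the panic handler import mentioned above:

#![no_std]
#![no_main]

use panic_halt as _; // panic handler: halt the processor on panic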

The #[entry] annotation marks that function as the entry point of our code. We could name the function however we wanted; the macro takes care of placing it at the correct position in the controller's memory. This explains the #[no_main] from earlier.

After a bit of boilerplate code, we take GPIO port A and split it into individual pins, select the one to which the LED is connected, and name it accordingly. We further create a delay that is based on the system clock. Horribly inefficient, but simple.

The macro rprintln! is a drop-in replacement for println! and uses RTT, thereby allowing us to print to the host's shell. In the loop, we use it again to format-print the current value of the counter. The loop further increments the counter, toggles the state of the LED pin, and waits a few milliseconds.
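Putting the pieces from the last paragraphs together, a minimal sketch of main.rs could look like this. It assumes the stm32f4xx-hal API (exact method names such as split, into_push_pull_output and toggle differ slightly between HAL versions) and uses the SysTick-based Delay from the cortex-m crate at the 16 MHz reset clock:

#![no_std]
#![no_main]

use cortex_m::delay::Delay;
use cortex_m_rt::entry;
use panic_halt as _;
use rtt_target::{rprintln, rtt_init_print};
use stm32f4xx_hal::{pac, prelude::*};

#[entry]
fn main() -> ! {
    rtt_init_print!();
    rprintln!("Hello, world!");

    // Take ownership of the device and core peripherals.
    let dp = pac::Peripherals::take().unwrap();
    let cp = cortex_m::Peripherals::take().unwrap();

    // Split GPIO port A into individual pins; PA5 drives the user LED on the Nucleo board.
    let gpioa = dp.GPIOA.split();
    let mut led = gpioa.pa5.into_push_pull_output();

    // Busy-waiting delay based on the SysTick timer (16 MHz is the HSI clock after reset).
    let mut delay = Delay::new(cp.SYST, 16_000_000);

    let mut counter: u32 = 0;
    loop {
        rprintln!("counter: {}", counter);
        counter += 1;
        led.toggle();
        delay.delay_ms(500);
    }
}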


Safety

Note that in the above code, we did not use any unsafe code. All the unsafe code is contained in libraries and (hopefully) well tested. This is one of the greatest features of Rust: even if we choose to use no_std, the safety features like the ownership system are still active. The same goes for components facilitating easy interaction with the ownership system like the Option type or a RefCell.

Please be aware that there is a lot going on under the hood to enable you to use a safe interface to interact with the hardware. Consider following this pattern by implementing unsafe features in encapsulated parts of the code that are well-tested and hard to use wrong.

A tricky situation

It is a recurring theme in my blog posts to ask whether Rust is too complicated for the given scenario, without giving an answer. I will do the same for this post, with the controller being in a tricky situation: an interrupt is happening.

Let me motivate the scenario: you press a button on the board (the green one in the rendering at the beginning of the post) and the LED should invert its state. We decide to approach this by using an interrupt. Once the button is pressed, the interrupt fires and our controller interrupts its current processing (given that the interrupt is currently allowed and enabled) to execute the interrupt handler (or ISR). In our case, this interrupt handler is a simple function. Now we have a bit of a problem: we need to share state between the main code and the interrupt handler. This is either the LED itself, which was created and configured in the main function, or some state indicating the on/off state of the LED, which is modified in the ISR and can be read in a loop in the main function to toggle the LED. If you already want to dive into the code, these and several other approaches are shown here.

I used the “forbidden concept” here: shared mutable state. In Rust, we try to avoid that as much as we can but sometimes we cannot get around it. In such cases, we use a selection of Mutex, RefCell and/or Channel.

But wait a moment: do we even need a Mutex if we are limited to one core? Even more: with only a single process executing on it? Sadly, we do: imagine you write something to a shared memory location when all of a sudden an interrupt fires, interrupting your write and reading partial data. This is the same problem we would face on a single-core processor with preemptive scheduling!

Luckily, the cortex_m crate provides us with a special implementation of a Mutex. This Mutex works by requiring us to provide it with a reference to a CriticalSection, a struct that can only be (safely) obtained if we execute the code in a closure running in an interrupt-free context. That way, we can make sure that no interrupt can interfere while we access the data behind the Mutex.

This is an incredibly valuable safety feature: instead of relying on the programmer remembering to do something like this, or on sheer luck, Rust forces us to use these primitives. The relevant pieces of code are shown below:
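A sketch of the pattern; the pin type alias assumes the stm32f4xx-hal naming, which differs slightly between HAL versions:

use core::cell::RefCell;
use cortex_m::interrupt::Mutex;
use stm32f4xx_hal::{gpio::{Output, PushPull, PA5}, pac::interrupt};

// The LED lives in a global so both main and the ISR can reach it.
// Option because it is only filled in at runtime, RefCell for interior
// mutability, Mutex so that every access requires a critical section.
static G_LED: Mutex<RefCell<Option<PA5<Output<PushPull>>>>> =
    Mutex::new(RefCell::new(None));

// In main, after configuring the pin:
//     cortex_m::interrupt::free(|cs| G_LED.borrow(cs).replace(Some(led)));

#[interrupt]
fn EXTI15_10() {
    cortex_m::interrupt::free(|cs| {
        if let Some(led) = G_LED.borrow(cs).borrow_mut().as_mut() {
            led.toggle();
        }
        // Clearing the EXTI pending bit is omitted here for brevity.
    });
}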

The interrupt handler EXTI15_10 uses quite a bit of syntax. You can reduce the verbosity of your code here by using a macro. Since the LED is never used anywhere else again, we make reasoning about the code simpler if we make use of a feature of the #[interrupt] macro: it allows us to define “scoped” global static variables that are safe to use in the ISR. That way, we can move the LED from G_LED into a “local global static”. This is shown here. Pay attention: by doing so we no longer require an interrupt-free context to use the variable, so this works best for state that is only “transferred” once from the main function to its ISR.
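A sketch of that variant, reusing G_LED and the pin type from the snippet above. It relies on the cortex-m-rt behaviour that a static mut declared at the top of a handler is handed to the body as a safe &mut, since the handler cannot preempt itself:

#[interrupt]
fn EXTI15_10() {
    // Handler-local state; only this ISR can see it.
    static mut LED: Option<PA5<Output<PushPull>>> = None;

    // On the first run, move the LED out of the shared Mutex; afterwards
    // no critical section is needed to use it.
    if LED.is_none() {
        cortex_m::interrupt::free(|cs| *LED = G_LED.borrow(cs).take());
    }
    if let Some(led) = LED.as_mut() {
        led.toggle();
    }
    // Clearing the EXTI pending bit is again omitted.
}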

Oh, and before you turn down Rust for embedded: you can also use atomics here, allowing for cleaner code, as sketched below. Frameworks like RTIC do many of these things for you behind the scenes, so your code is much less convoluted. This is a topic for another post.
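For a simple on/off flag, an atomic is enough on this Cortex-M4 target; a minimal sketch:

use core::sync::atomic::{AtomicBool, Ordering};

// Shared flag instead of sharing the pin itself.
static LED_ON: AtomicBool = AtomicBool::new(false);

// In the ISR: just flip the flag.
//     LED_ON.fetch_xor(true, Ordering::Relaxed);

// In the main loop: apply the flag to the pin.
//     if LED_ON.load(Ordering::Relaxed) { led.set_high() } else { led.set_low() }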

Summary

Using Rust for embedded development is great: it enforces the same safety guarantees you are used to while letting you target a wide variety of devices. While the code may quickly become convoluted when working with shared mutable state, you can (rather, have to) do so in a safe and sound way. To keep the complexity of your code at bay, you can make use of encapsulation, macros, or frameworks like RTIC.

If you have experience with embedded development with C-toolchains, you will find the setup experience refreshingly simple and well-documented.

Thanks for reading! If you have any questions, suggestions, or critique regarding the topic, feel free to respond or contact me. You might be interested in the other posts published in the Digital Frontiers blog, announced on our Twitter account.
