A Framework for LED Animations

Sam Friedman
Latch Engineering Blog
9 min read · Mar 30, 2022

At Latch, we take pride in our award-winning designs. For the user interface on the Latch Lens, our design team has created elegant animations from simple hardware consisting of 19 independently dimmable LEDs. It is the responsibility of the firmware team to bring these designs to life.

But writing user interfaces in C can be daunting. The code is often repetitive, difficult to read, and prone to errors. Minor adjustments can result in major bugs. For these reasons, modern user interfaces aren’t usually written in languages like C or Java, but instead are built using frameworks that employ declarative languages, like HTML for websites and XML for mobile apps.

What these UI frameworks have in common is that they split the UI into an engine that knows how to display things (e.g. a web browser), and data that specifies what to display (e.g. HTML). This reduces the code to a smaller, more testable, easier-to-maintain module, and restricts the visual information to a well-defined set of possible values. And while languages like HTML are overkill for many low-power embedded devices, we can still benefit from applying the same philosophy to our LED animations.

Describing Animations as Data

To separate the what from the how, we store our animations as data, rather than writing them as code. That begins by breaking them down into pieces. The fundamental unit of an animation is an action, which is the fading of an LED from one brightness to another, linearly, over some duration, with an optional delay before beginning the fade. We describe an action using a C struct:

struct anim_action
{
    led_id_e led;     /* which LED this action drives */
    int8_t   start;   /* starting brightness */
    int8_t   end;     /* ending brightness */
    int duration_ms;  /* length of the fade */
    int delay_ms;     /* delay before the fade begins */
};

If we put several actions together into an array, we get a sequence of actions that describes an animation. For example, here is the sequence for the animation that plays when a user unlocks their device successfully.
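A container for such an array might look like the following. This is a sketch: the `animation_sequence` type isn't shown in our firmware excerpts, and the `MAX_ACTIONS` capacity and `led_id_e` typedef here are assumptions made to keep the example self-contained.

```c
#include <stdint.h>

/* Assumed capacity; the real limit is not shown in this post. */
#define MAX_ACTIONS 32

/* Placeholder for the firmware's LED enum type. */
typedef uint8_t led_id_e;

struct anim_action
{
    led_id_e led;
    int8_t   start;
    int8_t   end;
    int duration_ms;
    int delay_ms;
};

struct animation_sequence
{
    int num_actions;                       /* entries actually used */
    struct anim_action actions[MAX_ACTIONS];
};
```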

static const struct animation_sequence unlock_anim =
{
    .num_actions = 26,
    .actions =
    {
        {.led = 0, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 1, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 2, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 3, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 4, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 5, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 6, .start = 45, .end = -1, .duration_ms = -1, .delay_ms = 0},
        {.led = 7, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 8, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 9, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 10, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 11, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 12, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 13, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 14, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 15, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 16, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 17, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 18, .start = 45, .end = 0, .duration_ms = 200, .delay_ms = 0},
        {.led = 1, .start = 45, .end = 0, .duration_ms = 300, .delay_ms = 900},
        {.led = 2, .start = 45, .end = 0, .duration_ms = 150, .delay_ms = 1150},
        {.led = 3, .start = 45, .end = 0, .duration_ms = 72, .delay_ms = 1232},
        {.led = 4, .start = 45, .end = 0, .duration_ms = 72, .delay_ms = 1314},
        {.led = 5, .start = 45, .end = 0, .duration_ms = 72, .delay_ms = 1396},
        {.led = 6, .start = 45, .end = 0, .duration_ms = 72, .delay_ms = 1478},
        {.led = 0, .start = 45, .end = 0, .duration_ms = 72, .delay_ms = 1560},
    }
};

You may notice that some actions have -1 set as their end brightness and duration. This indicates that the LED should be immediately set to the start brightness value, rather than faded over time.
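One way the engine might resolve a single action's brightness, including this sentinel case, is sketched below. This is a hypothetical helper, not the actual firmware code; the real engine also handles refresh-rate aliasing, which is glossed over here.

```c
/* Hypothetical helper: compute an action's brightness at a given
   elapsed time. end == -1 or duration_ms == -1 means "set the LED
   to start immediately" instead of fading. Returns -1 while the
   action's delay has not elapsed, so the caller can skip it. */
int action_brightness(int start, int end, int duration_ms,
                      int delay_ms, int elapsed_ms)
{
    if (elapsed_ms < delay_ms)
        return -1;            /* not yet active */
    if (end == -1 || duration_ms == -1)
        return start;         /* sentinel: immediate set, no fade */
    if (elapsed_ms >= delay_ms + duration_ms)
        return end;           /* fade complete: clamp to the end value */
    int delta = elapsed_ms - delay_ms;
    return start + (delta * (end - start)) / duration_ms;
}
```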

Finally, we need to know when to play the sequence. Our firmware is built on an event-driven model where tasks post events to topics on a publish-subscribe bus. Each animation plays in response to one or more topics, and so we just need to store a mapping from topics to sequences. Again using the unlock animation as an example:

static const struct animation_event unlock_success =
{
    .topic = "unlock/success",
    .sequence = &unlock_anim
};

Together, these two structs — the animation event and the animation sequence — make up a complete animation in our framework.

Storing animations as data doesn’t just simplify the code and speed development. It also allows us to change our animations at runtime rather easily — we simply update a pointer to point to a new set of animations. The Latch Lens is designed to go into a variety of physical lock products. If these products have different UX requirements, we can store multiple animation sets and choose between them based on product configuration. We can even load new animations at runtime, either over Bluetooth in the field, or over a serial connection during the prototype and development phase.
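That pointer swap can be as simple as the following sketch. The `animation_set` type and function names here are illustrative, not the actual firmware API.

```c
#include <stddef.h>

/* Illustrative: an animation set groups the event-to-sequence
   entries for one product configuration; fields are elided. */
struct animation_set
{
    int count;  /* number of animation events in this set */
};

static const struct animation_set *active_set = NULL;

/* Redirect all subsequent dispatches to a new set of animations,
   e.g. after reading product configuration or a Bluetooth upload. */
void animation_set_select(const struct animation_set *new_set)
{
    active_set = new_set;
}

const struct animation_set *animation_set_active(void)
{
    return active_set;
}
```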

The Animation Engine

Now that we’ve described our animations as data, we need to write the code that will run and display the animations. To accomplish this, we split the framework into two parts: a dispatcher that waits on events, and the animation engine that plays the sequence of actions in an animation. The dispatcher subscribes to each topic at initialization and then waits for events to come in. When the dispatcher receives an event, it compares the topic that the event was published to with the list of animations, and sends the corresponding sequence to the animation engine.
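The lookup itself can be sketched as a search over the event table. The names, stand-in types, and linear-search strategy below are assumptions; only the overall flow comes from the description above.

```c
#include <stddef.h>
#include <string.h>

/* Minimal stand-ins for the post's types; fields are elided. */
struct animation_sequence
{
    int num_actions;
};

struct animation_event
{
    const char *topic;
    const struct animation_sequence *sequence;
};

/* Sketch of the dispatcher's topic-to-sequence lookup: compare the
   event's topic against each registered animation in turn. */
const struct animation_sequence *
dispatch_lookup(const struct animation_event *table, size_t count,
                const char *topic)
{
    for (size_t i = 0; i < count; i++)
        if (strcmp(table[i].topic, topic) == 0)
            return table[i].sequence;  /* hand this to the engine */
    return NULL;                       /* no animation for this topic */
}
```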

The animation engine receives and stores a pointer to the sequence, and starts a refresh timer for the LEDs. When the timer fires, the engine iterates over each action in the sequence, checks the elapsed time against the action’s start delay and duration, and then sets the LED’s brightness to a linear interpolation between the start and end brightnesses. The process is roughly as follows.

animation_start(sequence)
{
    cur_sequence = sequence
    start_time = get_current_time()
    start_refresh_timer()
}

animation_refresh()
{
    elapsed_time = get_current_time() - start_time
    for each action in cur_sequence
    {
        // Check if this action is active
        if ((elapsed_time >= action.delay_ms) &&
            (elapsed_time < action.delay_ms + action.duration_ms))
        {
            // Linearly interpolate brightness
            range = action.end - action.start
            delta = elapsed_time - action.delay_ms
            bright = action.start + (delta * range) / action.duration_ms
            // Set LED brightness
            set_led_brightness(action.led, bright)
        }
    }
}

The simplicity of the animation engine follows from the design of the sequence data. This data is intentionally structured to be easy for a computer to process. Each action is independent of every other, so the framework does not need to maintain any state when iterating through the list of actions. And each action specifies only a single LED on which to act so there is only one loop to iterate.

While the actual code is a little bit more complex in order to account for time-aliasing due to the refresh rate, and to handle the special case where we set an LED instead of fading it, the overall structure remains the same. The actual C code running the animations is now a simple program that takes well-defined input, and testing the engine against common and edge-case values is straightforward. Together with pre-validation of the data describing the animations, we can reduce the likelihood of bugs significantly. And most importantly, the complexity and testability of the framework code is constant — no matter how many animations we add to the system.

A Domain-Specific Language for Animations

As mentioned, the animation sequence data is intentionally structured so as to keep the animation engine code as simple as possible. However, it’s not the easiest format for a human to digest or edit. For example, it’s easier for humans to reason about a sequence of events in relation to each other rather than in isolation: do action A, 200 milliseconds later perform action B, 50 milliseconds after that perform action C, etc. And it’s cleaner to be able to group LEDs together when they’re all performing the same action (e.g. “fade all of the number LEDs to 0 brightness” instead of “fade LED 0, fade LED 1, fade LED 2, etc.”).

To allow humans to interact with animations in the format that’s easiest for them, we specify animations in a domain-specific language based on YAML. At build time, a transpiler written in Python reads in the YAML specification and outputs a C file with the data structures described above, which is then compiled into the firmware binary. This format is so much easier to work with that we never actually write or modify the C struct definitions for the animations directly. The unlock animation we showed you above wasn’t written by hand; it was generated from the following YAML:

- name: unlock
  topic: unlock/success
  sequence:
    - set: {LEDs: [IND_0, IND_1, IND_2, IND_3, IND_4, IND_5, IND_6], brightness: 45}
    - fade: {LEDs: [NUM_0, NUM_1, NUM_2, NUM_3, NUM_4, NUM_5, NUM_6, NUM_7, NUM_8, NUM_9, ENTER, BACK], start: 45, end: 0, duration: 200}
    - pause: 900
    - fade: {LEDs: IND_1, start: 45, end: 0, duration: 300}
    - pause: 250
    - fade: {LEDs: IND_2, start: 45, end: 0, duration: 150}
    - pause: 82
    - fade: {LEDs: IND_3, start: 45, end: 0, duration: 72}
    - pause: 82
    - fade: {LEDs: IND_4, start: 45, end: 0, duration: 72}
    - pause: 82
    - fade: {LEDs: IND_5, start: 45, end: 0, duration: 72}
    - pause: 82
    - fade: {LEDs: IND_6, start: 45, end: 0, duration: 72}
    - pause: 82
    - fade: {LEDs: IND_0, start: 45, end: 0, duration: 72}

There are several differences between the YAML representation of an animation and the corresponding C struct. Setting an LED to a brightness value is semantically separate from fading it. LEDs performing the same action can be grouped together. There is a new pause directive that delays the start of the following actions, and actions’ start times are not specified explicitly. Taken together, this representation is much easier for a human to reason through. We can see that this animation involves turning on all of the indicator LEDs, fading out the number LEDs, pausing for 900 milliseconds, and then fading out the indicator LEDs one by one at an accelerating rate.

It’s also much easier to make modifications in this format. If we receive feedback that the numbers don’t fade out fast enough, we can adjust the duration of that fade for the entire group of LEDs instead of individually. If we want to extend the time between fading out the numbers and beginning the fade out of the indicator LEDs, we can increase the value of the pause, and the timing of all the actions that come after will be automatically adjusted — we don’t need to manually calculate the new start times for each action.
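Under the hood, the transpiler (a Python script) only needs a running time cursor to do this: each pause advances the cursor, and each action inherits the cursor's current value as its delay_ms. The arithmetic can be sketched as follows; the step type and function are illustrative, not the real tool.

```c
/* Illustrative sketch of how pause directives become absolute
   delay_ms values during transpilation. */
struct dsl_step
{
    int is_pause;  /* 1 = pause directive, 0 = set/fade action */
    int pause_ms;  /* pause length, used only if is_pause */
};

/* Writes one delay per non-pause step into delays_out;
   returns the number of actions emitted. */
int compute_delays(const struct dsl_step *steps, int n, int *delays_out)
{
    int cursor_ms = 0;
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (steps[i].is_pause)
            cursor_ms += steps[i].pause_ms;  /* shift later actions */
        else
            delays_out[out++] = cursor_ms;   /* action starts here */
    }
    return out;
}
```

Running this over the tail of the unlock sequence (pause 900, fade, pause 250, fade, pause 82, fade) reproduces the 900, 1150, and 1232 millisecond delays seen in the generated C above.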

Simulating Animations

Storing the raw data in YAML opens a range of other possibilities. We do range and value checking during the transpilation so we don’t have to do it in the embedded framework. More interestingly, we can build a simulator using Python and tkinter that takes in animations specified in YAML and displays them on the host PC. Because the simulator uses the exact same YAML source as the firmware binary, these animations have full fidelity to the animations played on the device. The ongoing maintenance costs of the simulator are very low; any additions or modifications to the animations on the device are immediately reflected in the simulator as well. Here, we simulate the unlock animation:

[Animation: the unlock animation playing in the desktop simulator]

The simulator, combined with the easy-to-use domain-specific animation language, allows us to completely short-circuit the back-and-forth between engineers and designers during development. Now, designers and product managers can create and edit animations on their own, see the results simulated in real time, and send the updated YAML back to the engineering team to be integrated directly into the build.
