The State of Technology in Embedded Systems

Jun 22, 2016

In this loftily-titled article we'll take a look at the technology used in software development for embedded systems, what's missing, and (maybe) what the future holds.

What is Embedded?

An embedded system is defined as a “computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints”. But what distinguishes embedded systems from other “computer systems” is not so much the context in which they operate, as the limitations incurred by it. Those are:

  • Limited Resources (Low ROM, RAM & CPU Performance)
  • High Reliability

Embedded systems range from ones tasked with blinking an LED to autonomous robots to the control systems inside modern cars. That's a wide range, so it's hard to make sweeping generalizations, but I'll try. Specifically, we'll look at the low-to-medium slice, as it's the one where the limitations are most apparent.

Hardware Improvements


There’s a steady trend of consolidation of silicon functionality into singular chips. An example of this trend can be seen when comparing Nokia phones with a similar feature set a decade apart:

Similar changes have happened to other kinds of embedded systems. An IC that integrates most of a system's functionality is called an SoC (System on Chip). Common examples of SoCs are the nRF51 (BLE and MCU on a single core), CC2640 (BLE plus a separate MCU), and CC3200 (Wi-Fi plus a separate MCU).

This kind of consolidation results in two things:

  • Less hardware for embedded systems is developed in-house
  • Software and overall system complexity are greatly reduced


Embedded systems, being a part of the whole hardware industry, have enjoyed a steady growth in silicon performance. Proliferation of 32-bit microcontrollers was a part of that trend. A less-noticeable, but arguably more important part of that has been the switch from the 8051 core to ARM Cortex-M.


The 8051, designed by Intel in the 80s, has long been a popular choice for microcontroller manufacturers, who used it as a basis for building their own optimized derivatives. This meant the industry had a wealth of 8051-flavored cores and no standardization across manufacturers.

Upgrading to a modern design brought higher performance, allowing work that would previously have required a fully-fledged microprocessor and thus enabling a whole new range of applications. It also increased efficiency and thereby lowered power consumption, and it brought other, less obvious advantages like better sleep modes, simpler memory interfacing, and overall ease of development.


Cortex-M hasn’t been the first 32-bit core on the embedded market, but over the years it has become more and more popular of a choice for microcontroller manufacturers, most of whom now offer at least one product family featuring it, along with older 8-bit and proprietary 32-bit versions.

The consolidation of the industry around Cortex-M is important for several reasons:

  • Silicon design reuse lowers MCU costs
  • Ability to use a wider range of software toolchains

Software Development Tools


As mentioned previously, for a long time most embedded systems were based on 8-bit microcontrollers with proprietary cores, many of which only supported IAR/Keil toolchains or, at worst, those supplied by the manufacturer. And since manufacturers are, at their core, hardware companies, the quality of those tools was below the level accepted in the software world. More specifically, the problem with those tools was that they were closed-source, Windows-centric, heavily GUI-based and very expensive (>$5k for a yearly license).

All of this changed with the arrival of ARM. While ARM provides its own toolchain for the Cortex-M series, there are plenty of free options, the most popular of which are GCC-based.

Beyond Toolchains

Perhaps the most noticeable thing to happen to the industry is Arduino, which serves both as an entry point for people new to the field and as a useful prototyping tool.

Starting as a toolchain (avr-gcc + IDE) for an 8-bit AVR and a basic development board, with a mission to enable "non-engineers to create digital projects", it has grown into a wide ecosystem, inspiring many others to create similar ones (compatible or not) and spawning a whole industry around hobbyist electronics.


Orthogonal to the trend with ARM, another technology with the potential to bring modern software development practices to the embedded world is LLVM.

LLVM is a compiler infrastructure agnostic to the choice of language and target. At a high level, LLVM and a typical C toolchain share the same compilation pipeline:

Source Code → Internal Representation → Binary

The difference lies in the internal representation: GCC's intermediate representations are tightly coupled to the compiler's internals and its targets, while LLVM IR is a well-specified, target-independent language. This is a huge deal, because it makes the toolchain much more modular, allowing different front-ends and back-ends to be combined (at least in theory).

With the standard approach, porting N languages to M targets would mean writing on the order of N × M compilers, since front-ends and back-ends aren't interchangeable between languages and targets. With LLVM you'd have to write N front-ends and M back-ends, roughly N + M components. For example, 5 languages across 10 targets means 50 monolithic compilers versus 15 LLVM components.

The reason this is important for embedded is the same as for the rest of the industry — creation and adoption of new languages is much easier, the only difference being that the embedded world has a lot less choice when it comes to languages.



C is the lingua franca of embedded systems for a reason — it fits the problem domain really well. It is:

  • Compiled & statically-typed
  • High-performance
  • Flexible (C allows dangerous but essential things like direct memory access, which peripheral control requires)
  • GC-free (hence deterministic, which matters for real-time systems, which many embedded systems are)

These intrinsic features led to the language's initial popularity, which in turn meant that all the compilers for MCUs were written for C. Experimenting with another language would therefore mean writing your own compiler for it: a very high barrier to entry.

And since everyone just uses C, silicon vendors supply their peripheral libraries in it as well: another barrier, although not as high as the previous one.


There have been many attempts to use other languages like Python, JavaScript and even Haskell in embedded, but none of them have been widely adopted because their execution model doesn’t fit resource-constrained systems as well as C’s.


Rust is a new (stable as of spring 2015) open-source systems programming language backed by Mozilla.

Rust is exciting because it is a truly modern systems language. Similar to C++ in being both close to the metal and high-level, it does everything you'd use C for, while offering the additional benefits of bullet-proof memory safety and programmer productivity. It's also simply more enjoyable to write than C++.


Established Players

There are roughly as many Real-Time Operating Systems (RTOS) for embedded as there were people who wanted to write one (more than a hundred), but the most popular are FreeRTOS and µC/OS-II. Most of them are equivalent in their feature set and provide things like:

  • Scheduler
  • Peripheral Control
  • Filesystem
  • Software Timers
  • Mutexes and Semaphores
  • Message Passing

These features are useful not only for systems with real-time constraints; they also increase developer productivity for most medium-to-high-complexity tasks, which is why an RTOS is used in 69%[1] of embedded systems.

New Players


Brillo is Google's OS for "IoT". It's built on Android and uses Weave as its high-level communication protocol. Not a lot of details are available yet, but it doesn't look like a direct competitor to the established players; it seems more of a Google-specific tool for building embedded devices.


Zephyr grew out of a commercial RTOS: Rocket, by Wind River (the company behind VxWorks, a popular RTOS for the higher end of embedded systems). The project has been open-sourced and is now under the umbrella of the Linux Foundation.

Despite the affiliation, it has nothing to do with the Linux kernel; it has a custom microkernel + nanokernel architecture, which makes it usable across the whole range of embedded devices and hence a contender to the traditional RTOSes. It is, however, still in its infancy and has very limited hardware support.


While not strictly an RTOS, the falling cost of extensible, performant hardware, combined with the addition of real-time capabilities to the kernel, means that Linux-based computers are taking over many of the roles traditionally held by embedded systems.


Artificial barriers in embedded systems (tooling, arcane architectures and languages) are slowly going away, while inherent constraints (limited resources, reliability) are here to stay.

Stay tuned for a more in-depth exploration of some of the topics mentioned in this article.

[1]: 2014 UBM Embedded Market Study

