Why and how did we create the Shadow Ghost and Boxes?

David Ngo
Published in Shadow Tech Blog
Feb 21, 2023
Our beloved Shadow Boxes & Ghosts

For years, the Shadow Ghosts and Boxes were part of our hardware ecosystem, our very own in fact. Since then, the software side has evolved a lot, while the hardware side essentially stood still. The feature coverage lagged far behind the desktop apps (macOS, Linux, and Windows), and the libraries became less and less compatible, which forced us into many hacks. To focus on added value, a decision was made: we will stop direct support at the end of this year; you can read more about that decision in the dedicated blog post. TL;DR: this frees up precious time for all teams to dedicate to our desktop and mobile applications. But let's rewind to the genesis.

Shadow's mission has always been to bring a high-end PC to anyone, regardless of their local hardware performance. With the Shadow Boxes and the Ghost, we built a plug-and-play solution. Basically, instead of a full computer with its own energy consumption, we brought our customers a small, low-power device with only the necessary inputs: USB ports, RJ45, audio jack, and a power supply. What does it bring? They get the device, plug it in, connect it to the Internet (RJ45 or Wi-Fi), and off they go: they authenticate and can access their Shadow PC. Sounds easy, right? But how did we manage to make it happen smoothly? Some steps were mandatory:

  • Provide an operating system
  • Ship our own application
  • Make this application start automatically
  • Keep our application in the foreground, with no way to escape from it
  • Keep our external components up to date

Here's the answer to all those requirements: build our very own distribution (aka distro) that covers them all.

How did we do it?

It starts with the hardware, as we design the board and its components. First keyword: BSP (Board Support Package), a hardware-specific layer of software that includes the boot firmware (basically the minimum needed to take the power supply, bring up the board, and initialize its components) and the device drivers (the interconnection between hardware and software).

Let's skip those steps, since they live deep in kernel space and there's far too much to talk about!

Let's say that, from this point, we have the board running a Linux kernel, but nothing else. That is in fact the minimum operating system: the kernel command line (found in /proc/cmdline) could simply start our application. That application would need third-party libraries, which we could also wire up from the same place. But in terms of scalability, handling everything in one file is very complicated. That's why we need an operating system (from Wikipedia: "An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs").
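
To make that first step concrete, the kernel's init= parameter can point PID 1 straight at a program. Here is a minimal sketch of what such a command line could look like; "shadow-launcher" is a made-up binary name used purely for illustration:

    # Inspect the kernel command line on a running board.
    # init= tells the kernel which program to start first;
    # /usr/bin/shadow-launcher is a hypothetical path, not our real binary.
    $ cat /proc/cmdline
    console=ttyS0 root=/dev/mmcblk0p2 rootwait init=/usr/bin/shadow-launcher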

Linux is deeply tied to C and C++, and code in those languages needs to be compiled (they are not interpreted, contrary to Python or JavaScript for instance).

When we learn those languages with a classic hello world program, we build the code directly on our local computer. That program is tightly bound to our local architecture. Let me explain: we build the program to run on this computer, but on a machine with a different architecture it cannot work, since the instruction set architecture is completely different. To make a comparison: "show something" might be door 12 on architecture A but door 4 on architecture B, so we need the right map to use the proper instructions.
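
As a sketch, this is what the classic flow looks like on a typical x86-64 host (output trimmed):

    # hello.c is the usual hello world; built natively, the binary targets the host's ISA.
    $ gcc hello.c -o hello
    $ file hello
    hello: ELF 64-bit LSB executable, x86-64, ...    # runs here, but not on an ARM board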

Building directly on the board

Who knows the instruction set architecture better than the board itself? It may sound appealing, but in terms of performance it falls completely short. Working with embedded systems rhymes (literally) with limited-resource systems (in French, we say it's a "rime suffisante").

The keyword is cross-compilation: A cross-compiler is a compiler able to create executable code for a platform other than the one on which the compiler is running. It’s part of the toolchain, along with the linker, libraries, and a debugger.
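
For example, assuming an AArch64 board and the aarch64-linux-gnu cross-toolchain installed on the host, the very same hello world can be built for the board from the comfort of our workstation (output trimmed):

    # Same source, different compiler: the binary now targets the board's ISA, not the host's.
    $ aarch64-linux-gnu-gcc hello.c -o hello-arm
    $ file hello-arm
    hello-arm: ELF 64-bit LSB executable, ARM aarch64, ...    # won't run on the x86-64 host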

Now we know how to work locally on the host and bring our application to the board. As said previously, building everything from scratch by hand is complicated, and build systems exist for exactly that (there is an answer for everything on Linux).

As the name strongly suggests, a build system helps assemble a system. There are several of them: Buildroot, OpenWrt for routers, Cisco's RDK-B, and… Yocto!

Why is Yocto the last on the list?

*drum roll* It fits all our needs (a minimal build workflow is sketched right after this list):

  • It builds toolchains
  • It builds bootloaders
  • It builds the kernel (we only embed the necessary dependencies)
  • It builds root filesystems
  • It builds an optimized image, with only the needed components at specific versions for specific hardware, which can for instance decrease the memory footprint
  • It uses shell and Python (two skills we already have)
  • The documentation is huge
  • There is a plugin for Eclipse (just kidding)
  • It supports the SoC we were using (and other families such as the Raspberry Pi; did you know Shadow runs on the Raspberry Pi?)
  • For CI/CD purposes, it builds its own tools instead of reusing the host's (unlike Buildroot)
  • It is module-oriented with inheritance principles (easy to manage several boards)
  • It can easily integrate external components.
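
To make that a bit more concrete, here is a minimal sketch of a stock Yocto (Poky) build on the host; our real layers, images, and machine names are not shown, and core-image-minimal is simply the reference image shipped with Poky:

    # Fetch the reference distribution (Poky) and pick a release branch.
    $ git clone -b kirkstone https://git.yoctoproject.org/poky
    $ cd poky
    # Set up the build environment (creates and enters the build/ directory).
    $ source oe-init-build-env build
    # Build a minimal root filesystem image for the machine selected in conf/local.conf.
    $ bitbake core-image-minimal

From there, the real work is layering our own board support, application, and image on top of that base.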

The main concern about Yocto is the learning curve; it's not really straightforward:

  • No UI (but that's OK, we love configuration files)
  • Some BitBake knowledge, like bblayers.conf or local.conf (see the sketch after this list)
  • Some do_* functions to master
  • Being familiar with Makefiles
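
For illustration, here are excerpts of the two configuration files mentioned above, which live under the build's conf/ directory; "meta-our-board" is a made-up layer name, but the variables are the standard ones:

    $ cat conf/bblayers.conf
    # Layers BitBake should parse; meta-our-board is a hypothetical custom layer.
    BBLAYERS ?= " \
      /home/build/poky/meta \
      /home/build/poky/meta-poky \
      /home/build/meta-our-board \
      "

    $ cat conf/local.conf
    # Per-build settings; qemuarm64 is one of the machines shipped with Poky.
    MACHINE ?= "qemuarm64"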

But we managed to overcome those "issues" (we're smart, aren't we?)

So now you know the "why" and the "how". What comes next? We'll figure it out in the next article!
