Continuous Deployment in the Internet of Things

Neil Ashford
AginicX
Aug 14, 2018

Over at AginicX, my job description is basically web development. So I was quite surprised when a fresh IoT project ended up in my backlog. Surprised, and a little concerned. The state of IoT development tooling doesn’t quite measure up to things like cloud consoles and Sentry. So I set out to configure Continuous Integration / Deployment just to make it feel a little more like home (and so that I could be more productive as a developer). Here’s how it went.

The Situation

I have a code base — written in rust, linking against two C dependencies. I have a Raspberry Pi 3B+, which my code is supposed to run on. The last thing I have is a desire to follow best practices in my development. I want to merge a pull request from my dev branch to my master branch, and I want my new code to start running automatically when that happens. Seems simple, right?

Part One — The Environment on the Pi

In most of my other development, everything I do ends up running in a docker container. All system dependencies are explicitly written out in a Dockerfile that I can track in version control, everything is sandboxed, and the build instructions section of my readme turns into docker build . Sure, docker imposes a little bit of overhead on the machine, but I figured I’d give this a shot. To get my app running in docker on the Pi, I needed two things: a host operating system and a docker image.

Part One A — The Host

I didn’t do too much digging around here, so there’s no guarantee the decision I made was perfect. I found hypriot, a project that provides a host operating system for Raspberry Pis running docker containers, and decided to run with it. So far, it has gone pretty well for me.
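
For reference, getting a hypriot image onto an SD card is nothing fancy. The release URL, version number and device path below are placeholders, so check the hypriot releases page and your own machine before running anything like this:

# download and unpack a release image (version and URL are placeholders)
curl -LO https://github.com/hypriot/image-builder-rpi/releases/download/v1.9.0/hypriotos-rpi-v1.9.0.img.zip
unzip hypriotos-rpi-v1.9.0.img.zip

# write it to the SD card (double-check the device path first)
sudo dd if=hypriotos-rpi-v1.9.0.img of=/dev/sdX bs=4M status=progress
sudo sync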

Part One B — The Image

The end goal was to pull from a base image, copy over my actual application from whatever build environment I use, and then deploy. The main task here was finding the base image to build on. This should have been pretty simple, as a lightweight run-time and good portability are among rust’s selling points. However, even though the run-time is pretty slim, there are still some things that my basic rust app needs in order to get off the ground. I did some research and experimentation into the startup process of my executable binary, and came up with the following list of requirements for my final image.

  • glibc — though musl exists, and can be statically linked to get some really slim docker images, I use some of the extra Linux-specific calls in glibc within my app, so a dynamically linked glibc makes the list of things to install.
  • ld.so — this was something I’d been aware of to an extent, but hadn’t really understood until its absence broke my application. This is (part of) what runs between your call to execv and the start of your fn main(), and it’s responsible for finding all the dynamic libraries your application links against and setting them up.
  • libpcap — this is a C library that I use within my application. The rust bindings to it currently only support dynamic linking, so the shared object needs to be dragged into my final built image.
  • cool-aginicx-application-1 — the app itself. Everything else, from root CA certificates to jemalloc, is contained in here.

Mostly pretty simple stuff: glibc and ld.so are base components of almost any Linux distribution, and my application is just a build artifact. The catch is that all of them need to be compiled for arm so that they’ll run on a Pi, and “compiled for arm” ends up being pretty complicated.
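
You don’t have to take that dependency list on faith, either. readelf can read arm ELF files from any host, so the compiled binary will tell you exactly what it expects at load time; the path here is the one the build further down produces:

# ld.so: the "program interpreter" baked into the ELF header
readelf -l target/armv7-unknown-linux-gnueabihf/release/program | grep interpreter

# shared objects the binary needs at runtime (glibc, libpcap, ...)
readelf -d target/armv7-unknown-linux-gnueabihf/release/program | grep NEEDED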

The first step was finding a base docker image which contains the basic features I need — glibc and ld.so for arm. Cross platform docker images are an interesting space right now, with two different patterns cropping up.

The recommended way of doing things is with docker manifests — you create a separate image per architecture you want to support, and then one big “meta-image” that just stores a map from CPU architecture to actual docker image. This is great, because everyone pulls from the same debian:stretch image and magically gets something that just works on their machine. Unfortunately it’s less great when you want to pull from debian:stretch and get something that works on a different machine to yours, because you’re cross compiling. Such a pull is currently only possible through an experimental option in the CLI / API, and not possible at all from a Dockerfile build that’s supposed to be reproducible.
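
For the curious, this is roughly what the manifest route looks like from the command line. Both of these were experimental docker features at the time of writing, so they may need enabling (or may have changed) on your install:

# inspect the manifest list behind debian:stretch - one entry per architecture
docker manifest inspect debian:stretch

# experimentally pull the arm variant onto a non-arm machine
docker pull --platform linux/arm debian:stretch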

The old way of doing things is to create a docker image per architecture, and use tags or organisations to differentiate between the images. To this end you’ll see things like the arm32v7 organisation on Dockerhub, which maintains a lovely debian image that will happily run on a Raspberry Pi 3B+. It’s reasonably lightweight and contains the things I need, so I decided to use it as my base image.
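
If you have a Pi handy, a quick sanity check confirms that this image runs natively on it:

# run on the Pi itself - should report armv7l
docker run --rm arm32v7/debian:stretch uname -m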

Part Two — Building the Image

While I could obviously just run docker build . on the Pi itself, and write a regular Dockerfile, that’s not a feasible solution when you want continuous deployment — turns out CircleCI doesn’t have a “Raspberry Pi” option in the build host list. After some research, I found a bunch of tutorials discussing ways to install Qemu and link it to the internals of docker through various forms of black magic. This seemed to require a decent amount of configuration on the host machine, which defeats the point of reproducible builds in docker. I went back to the drawing board, and worked to come up with a more elegant solution.

At its most fundamental level, docker doesn’t care what architecture an image was built on. The image itself is just a file system and the name of a command to run, which means there is no requirement for a virtual machine, or for the docker build to occur on the same architecture that the docker image is going to run on. This leads to the new build process: build the docker image on an x86 CPU (or any platform, really), and then run it on the Pi.
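
Concretely, the pipeline splits into a build half and a run half. The registry name below is a placeholder for wherever your images get pushed:

# on the build machine (x86, e.g. a CI container)
docker build -t registry.example.com/cool-aginicx-application-1:latest .
docker push registry.example.com/cool-aginicx-application-1:latest

# on the Raspberry Pi
docker pull registry.example.com/cool-aginicx-application-1:latest
docker run -d registry.example.com/cool-aginicx-application-1:latest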

So how do you structure a Dockerfile so that it can be cross compiled? The only difference versus a normal build is that RUN statements must be avoided in the arm stage, because RUN executes binaries from inside that image, and those are arm binaries that may not work on your build host architecture. Without RUN statements, all you’re really left with is COPY commands, so you break the file up into a multi-stage build: a build stage that matches your host architecture and is free to RUN whatever it likes, and a final arm stage that only COPYs artifacts in.

The Final Dockerfile

####################################################################
FROM rust:1.27 AS build

# system libraries (for arm)
RUN dpkg --add-architecture armhf
RUN apt-get update
RUN apt-get install -y libpcap0.8-dev:armhf

# cross compile toolchain
RUN rustup target add armv7-unknown-linux-gnueabihf
RUN apt-get install -y gcc-arm-linux-gnueabihf
COPY cargo-config $HOME/.cargo/config

# copy project across
WORKDIR /opt/
COPY src/ src/
COPY Cargo.toml .
COPY Cargo.lock .

# build
RUN cargo build --release --target=armv7-unknown-linux-gnueabihf
####################################################################

####################################################################
# this image is all for ARM - COPY commands are fine but don't RUN anything here
FROM arm32v7/debian:stretch AS run

# system libraries
COPY --from=build /usr/lib/arm-linux-gnueabihf/libpcap.so.0.8 /lib/libpcap.so.0.8

# copy project across
WORKDIR /opt/
COPY --from=build /opt/target/armv7-unknown-linux-gnueabihf/release/program .

# run
ENTRYPOINT /opt/program
####################################################################
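
The one file referenced above but not shown is cargo-config, which lives in the repo next to the Dockerfile. At a minimum it has to point cargo at the cross linker installed in the build stage; a minimal sketch of creating it looks something like this:

# create the cargo-config that the Dockerfile copies to $HOME/.cargo/config;
# it tells cargo which linker to use for the arm target
cat > cargo-config <<'EOF'
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
EOF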

Part Three — Turning Continuous Integration into Continuous Deployment

By this point, everything is set up so that a push to bitbucket corresponds to a new build artifact that can be run on a Raspberry Pi. From here, the next logical step is to get a deployed Raspberry Pi to automatically pull that docker image and start running it. To achieve this, I turned to Watchtower. This is a small program that automatically prompts docker to download newer versions of images at regular intervals. The plan moving forward is to either find or make a Raspberry Pi compatible docker image for this program, and add it to the docker-compose file.
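
Assuming a suitable arm image turns up (the image name below is the x86 one from the Watchtower docs, so treat it as a placeholder), wiring it up looks roughly like this, and translates directly into one more service in the docker-compose file:

# watchtower needs the docker socket so it can re-pull images and restart containers
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower --interval 300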

Conclusion

So where is the project now? A complete cross compilation build is set up in CI, and happens automatically when new code is pushed. A plan is in place to get the Raspberry Pi to then download and run that code automatically. And the only Raspberry Pi in the pipeline is the one we run the compiled artifacts on, so everything else can be ported to whatever architecture your pipeline would normally use.
