About a Matrix Creator, a Raspberry Pi and Azure.

How I built an end-to-end release pipeline for the Raspberry Pi using Azure DevOps and elbow grease.

Idea.

A couple of months ago, I figured I had way too much free time to spend. Of course, that’s notwithstanding the fact that my commute is nearly 3 hours a day, plus all the things I love and I’m involved in (my two sons, half-marathon training, scouts, helping an inventor with his project, etc.).

So I purchased a Matrix Creator because I liked the idea of fiddling with sensors, a bunch of very bright LEDs and 8 microphones. I got it from Element14 here.

Oooh… shiny! (Notice it’s my first time taking pictures of hardware)

This thing is sick: it has sensors for temperature (twice!), pressure, humidity, UV and altitude, an IMU, and a bunch of communication components such as Bluetooth, Zigbee, IR, NFC and Z-Wave. It’s compatible with the Raspberry Pi through its GPIO and MatrixOS.

I’ve never done this type of stuff before and I have absolutely no idea what to do with this.

Start.

For a living, I do software architecture, development and DevOps on Azure. I mostly code in NodeJS, C#, Go and, god forbid, Python (not that it’s a bad language, I absolutely love it, it’s just that I’ve never had to put anything I made in Python into production). I develop micro-services, React single-page apps, backend integrations and sometimes some mobile. I know Azure not quite inside out but nearly, and I’ve been setting up DevOps pipelines on Azure DevOps (formerly known as Visual Studio Team Services, or VSTS) for the past couple of years. Lately, I’ve become fond of Docker and Kubernetes and I’ve taken more than my share of crash courses in the form of “I know you could deploy this on Azure Kubernetes Service so let me figure it out for you as a proof of concept”. Yet, I’ve never had to push code to a Raspberry Pi, let alone in an automated fashion.

I code, I know Azure, Azure DevOps and Docker, and I would like to push apps to a Raspberry Pi device automatically.

Plan.

I’m fully aware that know-how is quite valuable, especially if you share it around. I figured I might as well set up a robust DevOps pipeline for this baby until I have a stroke of genius. Added bonus for you, because I’m writing this article at the same time. Everybody wins!

This pipeline should scale if that stroke of genius ever happens and I need to set up shop for thousands of devices, to finally swim in money happily ever after.

Thus my plan is to build NodeJS and C++ apps, automate container builds and deployment to devices using Azure IoT Hub and Azure DevOps, write about it on Medium and make money (somehow).

In all honesty, I had a plan to begin with but I wasn’t so sure it would float. I figured, however, that all I know and have learned over the last couple of years could be useful. Here it is in its grand entirety:

  1. Code the app: NodeJS or C++
  2. Build a Docker container for the app
  3. Deploy the container to the Raspberry Pi

Obviously, this is way oversimplified and it rests on a lot of assumptions (please keep in mind, I’ve never done this stuff before):

  1. Docker runs on the Raspberry Pi.
  2. NodeJS runs on the Raspberry Pi.
  3. Matrix offers facilities and SDKs to harness the device using NodeJS.
  4. Running containers can access the GPIO and the attached HAT, bonus points for SDK support.
  5. Azure DevOps hosted agents can build ARM32v7-based containers.

For points 1, 2, 3 and 4, some quick Binging (No, I, Don’t, Google) returned the answers I needed. First of all, Docker runs really well on the Raspberry Pi; I’ve even seen some examples of Docker Swarm orchestrating containers on several Pis. I also quickly found Docker images for Node on the ARM32v7 architecture. As for point 3, Matrix’s documentation is great and, through the use of ZeroMQ, it’s a breeze to interact with the sensors in NodeJS; any more advanced features, however, imply the use of more hardcore tools and C++ ninja skills. As for point 4, some quick pointers showed it’s indeed quite doable. Point 5, finally, remained somewhat nebulous: I figured out 3 options, one that didn’t work, one I didn’t like and, finally, a goldilocks moment. Follow along and you’ll see which.

I’ve figured out, technically, how to do most of the stuff on my plan.
PowerPoint Mastery Demo pt. 1
I build my apps in NodeJS or C++ in VSCode, build and push container images on Azure DevOps, and deploy them to my device through a release pipeline, an Azure Function and Azure IoT Hub.

Let me digress for a quick moment…

Whoa there partner, what’s all this gibberish you’re talkin’ about? I hear you. What’s Azure DevOps? What’s the purpose of an Azure DevOps agent? What is Docker? I’ll quickly answer these if you like; otherwise, just skip this and go to the next section.

  • Azure DevOps is a complete DevOps suite. It offers tools to manage teams, code, work items, continuous integration/deployment in buckets called projects. It’s free, easy and fun.
  • Build Pipelines are made of tasks that when run, generate a deployable build artifact.
  • Release Pipelines are made of stages of release with the intent of deploying applications (build artifacts) in target environments.
  • Azure DevOps Build Agents are either public or private servers or PCs (Mac, Linux or Windows) used to execute tasks of a pipeline (Build or Release). You can use those made available by Azure DevOps or roll your own. A pool is simply a collection of agents grouped by similarity/purpose.
  • Docker & Containers. Years ago we discovered how fun it was to create virtual machines. All the OSes to maintain, patches, anti-virus, ahhhh… nightmares. Docker simply virtualizes the OS instead of the hardware: run different independent applications on the same OS while keeping each unaware of its co-tenants. This simplifies deployments, improves their scalability by orders of magnitude and maximizes the hardware.
Roughly speaking, a container is equivalent to an app, just that the app comes bundled with its OS and configuration.
  • Azure IoT Hub is a PaaS offering on Microsoft Azure which simplifies device management and data collection through standardized communication protocols such as AMQP and MQTT. It acts as the backbone of any sizeable IoT project; although it’s quite possible to do IoT without it, scalability quickly becomes an issue and it gets more complicated to extract data useful for any Machine Learning wizardry.

Code.

Starting in my comfort zone, I just wanted to be able to access the sensors’ data and report it back to Azure IoT Hub. I got some data really quickly and built a quick Excel spreadsheet to graph it.

Woohoo! Data graphed in Excel FTW!

The Matrix Creator documentation is pretty complete on how to read the sensors’ data using Matrix Core. Core essentially pushes data to a ZeroMQ service and the app reads from it. ZeroMQ is a high-performance asynchronous messaging library aimed at use in distributed or concurrent applications.

My first app is a simple sensor reader which pushes telemetry data to Azure IoT Hub. It piggybacks on quickstarts and there’s plenty to learn from them.

Using NodeJS and starting from the Matrix Core boilerplate samples in their GitHub repository, I was able, within an hour, to read sensor data on the Raspberry Pi. Awesome. I also piggybacked on this Azure IoT Hub quickstart to push data to my hub. The app’s code, which reads sensor data and pushes it to Azure IoT Hub, is available here (https://github.com/efog/azure-iothub-matrixcreator-sensors-reader).
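To give an idea of the shape of things, here is a minimal sketch of how a sensor reading can be turned into the JSON telemetry payload sent to IoT Hub. The field names and the formatTelemetry helper are illustrative, not taken from the repo:

```javascript
// Shape a raw sensor reading into the JSON payload sent to Azure IoT Hub.
// Field names here are illustrative; the actual app defines its own schema.
function formatTelemetry(deviceId, reading) {
    return JSON.stringify({
        deviceId: deviceId,
        timestamp: new Date().toISOString(),
        temperature: reading.temperature, // degrees Celsius
        humidity: reading.humidity,       // relative humidity, percent
        pressure: reading.pressure        // kilopascals
    });
}

// With the azure-iot-device SDK, the string above gets wrapped in a Message
// and sent with client.sendEvent(new Message(payload), callback).
const payload = formatTelemetry('matrix-pi-01', {
    temperature: 22.4,
    humidity: 48.1,
    pressure: 101.3
});
console.log(payload);
```

The quickstart I followed does essentially this, just with its own schema and the SDK plumbing around it.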

Build.

So, I’ve got an app which reads sensor data and sends it to Azure IoT Hub. Hurray! But in terms of challenge, after developing for over 15 years, this is Hello World level, so let’s knock it up a notch!

This guy knows about knocking things up a notch!
I put this app in a container built on Azure DevOps and ran it on my Raspberry Pi, but it wasn’t straightforward.

People familiar with Docker know that it’s all about OS virtualization. Therefore, the platform where the container is built needs to match the target at the host OS and CPU architecture level. If you gals/guys had a quick look at the Dockerfile in my quick app, you probably noticed this line:

FROM arm32v7/node:10.7

The app will run in a container built for the arm32v7 architecture because Raspberry Pis run on an ARM32 CPU. My workstation is a lowly but trustworthy Pentium G4600 with a buttload of RAM. Obviously, the CPU architectures don’t match.
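For context, the rest of the Dockerfile is fairly standard Node fare; a minimal sketch along these lines (file names and layout are illustrative, not copied from the repo):

```Dockerfile
# Base image targeting the Raspberry Pi's ARM32v7 CPU
FROM arm32v7/node:10.7

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --production

# Copy the application source
COPY . .

CMD ["node", "index.js"]
```

The only unusual part is the FROM line; everything after it is the same as any NodeJS container.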

My first idea was to build using an Azure DevOps hosted Linux agent. Meh, I don’t know much about cross-platform compiling on Linux, but I know it’s quite doable on Windows to build .NET Core for ARM, so why not, albeit naively, try? Nope nope nope, dang, nope.

The more you know: You can’t build containers for arm32v7 apps on Azure DevOps hosted agents.

Next idea: build on the Raspberry Pi itself. It quickly became obvious that the Azure DevOps agent isn’t supported on this device… but it’s doable. Damian Brady got the agent running on a Raspberry Pi. See this awesome trick here: https://damianbrady.com.au/2018/08/17/running-a-build-release-deployment-agent-on-a-raspberry-pi/. However, I would like to reduce the number of moving parts in the process to a minimum; this is also not officially supported by Microsoft, and I don’t like the idea of having two classes of devices: those for dev and those, eventually, for release. Dev devices should be as close as possible to the real deal except for some configuration changes. That’s my view, mind you; I could see this plan float and scale well, so if Damian’s solution is good for you, go ahead. What I found later on just made things much easier.

You can build arm32 based container images on Windows 10 through some dark and arcane wizardry (not).

It’s possible to build arm32 containers on Windows 10. That’s really good news! I simply added my workstation as a private agent in Azure DevOps and BAM, I‘m building containers for my Raspberry Pi.

How hard is it to get this to work? Well, I don’t know. I keep an always up-to-date installation of Visual Studio 2017, and of Docker as well. I spoke of wizardry earlier: it isn’t. I just don’t grasp the intricacies and I can’t tell you what I’ve done in particular. Apparently the VM used to run Linux containers on my Windows 10 PC is able to run arm32 code, but if anyone has a thorough explanation, I’ll be more than happy to listen.

So I went on with this. I’m able to build without having to install an agent on the Pi. That’s sweet!

Deploy.

I have an app, a container and a Raspberry Pi. Does it run? Yup, it runs on the device like I intend it to. Remember how, earlier, I assumed the container would be able to access the GPIO and ZeroMQ? Time to validate this! Looks like the folks behind Docker have thought of everything for me in advance, like they had divine foresight. Here are the two secrets which unlock the gates to data and hardware for containers:


  1. Configure the container with NetworkMode set to “host”.
  2. Use the --privileged switch when running the container.

I’ve verified the containers can access the hardware and the ZeroMQ service, and now the sky’s the limit.
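Expressed as container-create options (the same JSON the Docker Engine API and libraries such as Dockerode accept), the two settings look roughly like this; the image and container names are illustrative:

```javascript
// Container-create options granting access to the host network and hardware.
// Image and container names are illustrative placeholders.
const createOptions = {
    Image: 'myregistry.azurecr.io/sensor-reader:latest',
    name: 'sensor-reader',
    HostConfig: {
        NetworkMode: 'host', // share the host network so the ZeroMQ ports are reachable
        Privileged: true     // expose host devices (GPIO, the Matrix HAT) to the container
    }
};

// Equivalent docker CLI invocation:
//   docker run --network host --privileged myregistry.azurecr.io/sensor-reader:latest
console.log(JSON.stringify(createOptions.HostConfig));
```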

The app I built above only needs to communicate with ZeroMQ. I’ve also built another app, in a container, which strobes the LEDs on the Matrix Creator. It’s built using C++ on a Resin image, and it shows that the “privileged” switch does the trick. I’ll talk about this in another article; I’ve yet to perfect my C++ skills and therefore have nothing to show… yet.

It’s alive!
Then I automated the deployment on a device using a simple orchestrator built with NodeJS and Azure IoT Hub.

Device twins are, in essence, really simple things. A twin contains properties which are either desired or reported, hence the name twin. Desired properties are generally set by the backend, and reported properties are brought back from the device. It’s then possible to query twins by reported/desired configurations or tags. This helps us answer questions like “Which devices run out-of-date containers?” or “Which devices are of the dev-device class?” For a better understanding of device twins, head to this article: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins
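To make this concrete, a twin carrying a desired container configuration could look like the sketch below. The twin’s top-level shape (tags, properties.desired, properties.reported) is standard; the containers property and its contents are my own convention, not part of the twin schema:

```json
{
  "deviceId": "matrix-pi-01",
  "tags": {
    "deviceClass": "dev-device"
  },
  "properties": {
    "desired": {
      "containers": {
        "sensor-reader": {
          "image": "myregistry.azurecr.io/sensor-reader:1.0.42"
        }
      }
    },
    "reported": {
      "containers": {
        "sensor-reader": {
          "image": "myregistry.azurecr.io/sensor-reader:1.0.41"
        }
      }
    }
  }
}
```

Here desired and reported disagree on the image tag, which is exactly the “out-of-date container” situation a twin query can surface.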

Inside look on a twin, lucky you!

The Azure IoT Hub SDK really simplifies access to and manipulation of twins and messaging in several ways. It manages connections using different protocols such as AMQP and MQTT, both of which are de facto standards for IoT software, and it keeps the connection alive, thus alleviating the burden of reconnecting, detecting disconnects and whatnot.

The orchestrator/agent I built leverages the NodeJS Azure IoT Hub device SDK, and the device twin carries the desired container configuration.

The agent listens for changes to the desired twin container configuration, queries the local Docker daemon and starts or stops containers accordingly. It leverages Dockerode to operate Docker, and it cleans dangling images so the Raspberry Pi stays fresh all day long. I’m happy to let you know you can grab the agent here: https://github.com/efog/azure-iothub-client-dockerorchestrator.
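At its core, such an agent is a diff between the desired configuration from the twin and the containers actually running. Here is a simplified, self-contained sketch of that reconciliation step (the real agent’s logic differs in its details, and also handles image pulls and cleanup through Dockerode):

```javascript
// Compare the desired container config (from the twin) against the running
// containers and decide what to start and what to stop. A container whose
// image changed appears in both lists: stop the old one, start the new one.
function reconcile(desired, running) {
    const toStart = Object.keys(desired)
        .filter(name => !running[name] || running[name].image !== desired[name].image);
    const toStop = Object.keys(running)
        .filter(name => !desired[name] || running[name].image !== desired[name].image);
    return { toStart, toStop };
}

const plan = reconcile(
    { 'sensor-reader': { image: 'reader:1.0.42' } },
    { 'sensor-reader': { image: 'reader:1.0.41' }, 'old-app': { image: 'old:1.0.0' } }
);
// sensor-reader restarts on the new image; old-app, no longer desired, stops.
console.log(plan);
```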

An Azure DevOps release pipeline task updates the device’s desired twin through an Azure Function and the orchestrator applies the changes.

The last mile, and thank you for following me up to this point, is to update my device’s desired twin every time a new container is pushed to my registry. I cobbled together an Azure Function to do just that. Over the years, I’ve built so many Functions that I have all the templates and automations on hand, so it took less than an hour to set this up. If you’re interested, it’s here: https://github.com/efog/azure.iothub.twinupdate.function.
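The Function’s job essentially boils down to building a desired-properties patch and applying it to the twin. A sketch of the patch-building part (the helper name and the containers schema are my own conventions, matching the twin layout the agent expects):

```javascript
// Build the desired-properties patch the Azure Function applies to a device
// twin after a successful release. Helper name and schema are illustrative.
function buildDesiredPatch(containerName, image) {
    return {
        properties: {
            desired: {
                containers: {
                    [containerName]: { image: image }
                }
            }
        }
    };
}

// With the azure-iothub SDK, the patch is then applied through the Registry
// client, e.g. registry.updateTwin(deviceId, patch, etag, callback).
const patch = buildDesiredPatch('sensor-reader', 'myregistry.azurecr.io/sensor-reader:1.0.42');
console.log(JSON.stringify(patch));
```

The release pipeline just calls the Function with the freshly pushed image tag, and the orchestrator on the device picks up the resulting twin change.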

Updating the twin using an Azure Function.

Wrapping Up.

All I can say is that it’s an exciting time to be a software tinkerer. Five years ago, I could only have dreamed of what I’ve built here. I have an app (https://github.com/efog/azure-iothub-matrixcreator-sensors-reader) which captures sensor data. I created a container for the app using Azure DevOps (ugh, I can’t get used to the name), Azure Container Registry and my Windows 10 PC as a build agent. I wrote an agent/orchestrator (https://github.com/efog/azure-iothub-client-dockerorchestrator) which updates the Raspberry Pi’s Docker service based on a twin configuration. I also wrote a twin-updating function (https://github.com/efog/azure.iothub.twinupdate.function) which is used by the release pipeline. Both the agent and the function leverage the awesome Azure IoT Hub SDK. I know most of the code I’ve put up here is not production ready due to a lack of unit tests and documentation, so my next efforts in industrializing this are to make the foundations stronger before I build more stuff for this platform. I’m also trying to figure out how to build a self-updating agent, because it’s still a bit limited and I don’t want to ssh into my Pi all the time to update it. If you have any ideas, please send them my way; I pay back in beer.

Thanks for reading along, I hope you enjoyed this piece!