In this series, I wanted to highlight easy ways to implement a few Quality Assurance (QA) processes when you are the only person working on Firmware in your team. If you feel that you don’t need them, or that making those processes truly work looks like an enormous amount of work, think again: we are in 2019.
At Equisense, I first wanted to speed up the development process by automating as many tasks as possible, like building new Firmware packages and deploying them to our beta users and final customers. Plus, you will see that many of those tasks are prone to human error, which is no longer acceptable for quality software.
As I’m working on the nRF52 series of microcontrollers in our product, you will encounter Nordic’s tools and implementations of libraries such as Device Firmware Upgrade (DFU)… but similar tools exist in many other SDKs.
Where we come from
Let’s give an overview of some of the tasks I used to run on my machine that needed to be automated to have a better CD process.
First, compilation. The cross-compiler, with associated paths defined in a common Makefile, has to be installed and configured on each machine. If another person had to work on the project, they may have installed a different version, using different settings, flags, etc., which in the end resulted in different binaries. The same goes for our code formatter (we use clang-format): a different version will output different code, even with the same config file.
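As a sketch of how this kind of version drift can be caught early, here is a small shell check that compares the installed toolchain against pinned versions. The pinned version numbers and the `check_version` helper are hypothetical examples, not part of our actual Makefile:

```shell
#!/bin/sh
# Hypothetical version pins -- adjust to your own toolchain.
GCC_PIN="7.3.1"
CLANG_FORMAT_PIN="8.0.0"

# Compare the first line of `tool --version` against the pinned version.
check_version() {
  tool="$1"; pin="$2"
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "MISSING $tool (want $pin)"
    return 1
  fi
  found=$("$tool" --version | head -n 1)
  case "$found" in
    *"$pin"*) echo "OK $tool $pin" ;;
    *) echo "MISMATCH $tool: want $pin, got: $found"; return 1 ;;
  esac
}

check_version arm-none-eabi-gcc "$GCC_PIN" || true
check_version clang-format "$CLANG_FORMAT_PIN" || true
```

Running such a check as the first step of a build (or baking the pinned versions into a Docker image, as below) turns “it compiles differently on my machine” into an immediate, visible failure.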
Then, DFU package generation. Before, one had to install a customized version of nrfutil from source, with associated paths. A Makefile command was then available to generate a new package carrying a new version, but some tools and files needed to be accessible at the right location.
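For reference, generating such a package with nrfutil boils down to a single command. The sketch below uses hypothetical values for the hardware version, SoftDevice requirement, key file, and hex path; adjust them to your project:

```shell
#!/bin/sh
# Hypothetical values: hw-version, sd-req, the key file and the hex path
# all depend on your project and SoftDevice.
APP_VERSION=42
ZIP="dfu_package_v${APP_VERSION}.zip"

if command -v nrfutil >/dev/null 2>&1; then
  # Build a signed DFU zip for an nRF52 application image.
  nrfutil pkg generate \
    --hw-version 52 \
    --sd-req 0xB6 \
    --application-version "$APP_VERSION" \
    --application _build/app.hex \
    --key-file keys/private.pem \
    "$ZIP" || echo "package generation failed"
else
  echo "nrfutil not installed; would have produced $ZIP"
fi
```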
Finally, deployment. Once a DFU package was generated as a zip file, one had to deploy it to our back-end, first for testing purposes, then for the full production release (we have staging, beta, and production steps in our deployment pipeline). That task wasn’t automated, and I did those steps using Postman: get the path to the hopefully right zip file, write down the Firmware version, then get a token to post to our server, carefully make sure that I didn’t mess up, click “POST”, and sometimes pray 🙏.
I needed a solution.
Docker has provided the flexibility I wanted for a few years now. Indeed, containers are really efficient for running specific tasks in isolation, with little overhead compared to your local machine. Copy-pasted from the Docker website:
Docker unlocks the potential of your organization by giving developers and IT the freedom to build, manage and secure business-critical applications without the fear of technology or infrastructure lock-in.
That was it.
So, I created a Docker image called equisense/nrf5-builder that you can pull from Docker Hub; the Dockerfile and an example are on my GitHub. It includes the ARM GCC cross-compiler, Python dependencies, and the nRF tools (nrfutil as a submodule). I created a Makefile to show how the image can be used along with the Makefile contained in the root directory.
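A containerized build then looks something like this. Only the image name comes from the article; the mount layout, the `make release` target, and the `RUN_IN_DOCKER` guard are assumptions for illustration:

```shell
#!/bin/sh
# The image name is from the article; the `make release` target and the
# RUN_IN_DOCKER guard are hypothetical.
IMAGE="equisense/nrf5-builder"

if [ "${RUN_IN_DOCKER:-0}" = "1" ] && command -v docker >/dev/null 2>&1; then
  # Mount the project into the container and build with its pinned toolchain.
  docker run --rm -v "$(pwd)":/project -w /project "$IMAGE" make release \
    || echo "containerized build failed"
else
  echo "set RUN_IN_DOCKER=1 to build inside $IMAGE"
fi
```

The point of the `-v "$(pwd)":/project` mount is that the sources stay on the host while the compiler, formatter, and nrfutil versions are frozen inside the image, so every machine (and the CI server) builds the exact same way.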
That image is pretty generic, so I customized it for our own usage in order to build another Docker image based on equisense/nrf5-builder; you should build your own on top of it if you work with nRF52 targets.
Automating builds and deployment
Here we are. Compiling and generating the firmware update package from a Docker container makes it possible to run those steps on a remote machine. By a remote machine, I mean a CI server, obviously.
We have been working with Bitbucket for a long time and, luckily, it includes a tool made for that purpose called Bitbucket Pipelines.
All you need to do is describe, in a YAML file, the steps to run on a new commit, on a specific branch or tag, or even manually. I didn’t want the code built on each commit but only when releasing a new version, so I added a Makefile command to increment the version and create a tagged commit. Once it is pushed, the Pipeline runs, and I can deploy the new firmware to our Staging server and then our Production server by simply clicking “Deploy” 🤩. I also keep several artifacts generated while building the packages, such as the .map file, and upload them to the Downloads tool integrated into Bitbucket. Pretty neat.
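A pipeline along those lines could be described roughly as follows. This is a hypothetical sketch of a bitbucket-pipelines.yml, not our actual file: the step names, make targets, and environment names are assumptions.

```yaml
# Hypothetical bitbucket-pipelines.yml sketch.
image: equisense/nrf5-builder

pipelines:
  tags:
    'v*':                       # run only on release tags, not every commit
      - step:
          name: Build DFU package
          script:
            - make release
          artifacts:
            - _build/*.zip
            - _build/*.map
      - step:
          name: Deploy to Staging
          deployment: staging
          trigger: manual        # the "Deploy" button
          script:
            - make deploy-staging
      - step:
          name: Deploy to Production
          deployment: production
          trigger: manual
          script:
            - make deploy-production
```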
Hidden under the “Deploy” button are the few steps to get the version and a token, and to post the zip file to our back-end using a simple curl command… remember the manual routine above? Gone 🌟.
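Those few steps can be sketched with plain curl. The endpoint, authentication flow, and form field names below are assumptions about a generic back-end, not our real API:

```shell
#!/bin/sh
# Hypothetical endpoint, credentials and field names -- not our real API.
API="https://firmware.example.com"
VERSION="2.7.0"
ZIP="dfu_package_v${VERSION}.zip"

if command -v curl >/dev/null 2>&1 && [ -f "$ZIP" ]; then
  # 1. Get a short-lived token from the back-end.
  TOKEN=$(curl -fsS -X POST "$API/auth" \
    -d "user=${DEPLOY_USER}" -d "pass=${DEPLOY_PASS}")
  # 2. Post the package along with its version.
  curl -fsS -X POST "$API/firmwares" \
    -H "Authorization: Bearer $TOKEN" \
    -F "version=$VERSION" \
    -F "package=@$ZIP" \
    || echo "upload failed"
else
  echo "nothing to upload ($ZIP missing or curl unavailable)"
fi
```

With the credentials stored as secured pipeline variables, this is exactly the kind of script a manual deployment step can run, so the only human action left is the click.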
I have to admit that so far I don’t use Docker on a regular basis on my machine but I should definitely integrate it into my development process and document the on-boarding process for future developers. A few Makefile targets to add and I’m sure I’ll have Docker to run transparently whenever possible…
Also, Bitbucket Pipelines may not be the best CI/CD tool, but it’s the simplest one for our needs. Depending on your Git hosting platform, you should check out GitLab CI/CD, CircleCI, Jenkins, or the new GitHub Actions. I also wanted to run tests on a local machine that has full access to our hardware in order to launch integration tests. I tried TeamCity, which I think can handle that job pretty well, but that’s currently not my top priority, and it implies some hard work if you want to test Bluetooth products (libraries for Bluetooth communication from a computer are buggy and most of the time can’t be ported across OSes).
Firmware development processes and quality can be improved by a slick delivery process, but also by catching more bugs. In the next article of this series, I’ll cover how we can quickly qualify almost any bug occurring on products in customers’ hands.
Please let me know if you know of better ways or have other tips for improving the Continuous Delivery process.