Introducing Jenkins to the SAFE Network

MaidSafe · Published in safenetwork · Jun 6, 2019

In the SAFE Buzz episode with our Testing and Release Manager Stephen (05:23), you may recall he spoke briefly about the implementation of Jenkins and the efficiencies it would bring. Currently, our build and release processes make use of SaaS products like Travis CI and AppVeyor. However, as with any technological solution, they're not perfect: builds are slow, the number of concurrent builds is limited, and there are barriers to building more complex things in a build pipeline. So let's talk about Jenkins.

What is ‘Jenkins’?

Jenkins is a leading open source automation server, widely used to enable continuous integration (CI) and to help facilitate continuous, automated delivery. We're going to use Jenkins to deliver an automated release process for all of our products and repositories, which gets the code into the hands of users fast. It also lets us gather feedback more quickly, enabling us to make changes almost instantly. Speed is our greatest ally when building the Network, and we'll always grab any tool that supports this ambition with both hands. As an example: a complex task such as building and deploying everything under safe_client_libs, which could take over 3 hours in Travis, now takes us between 30 and 45 minutes in Jenkins. You can't argue with that sort of maths!
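To make that concrete, here's a minimal sketch of what a declarative Jenkins pipeline for one of our crates could look like. It's illustrative only: the stage contents and the helper script name are assumptions, not our actual configuration.

```groovy
// Jenkinsfile (declarative pipeline) -- an illustrative sketch, not our real config.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Pull the source for the job.
                git url: 'https://github.com/maidsafe/safe_client_libs.git'
            }
        }
        stage('Build & Test') {
            steps {
                // Build the crate and run its test suite.
                sh 'cargo build --release'
                sh 'cargo test --release'
            }
        }
        stage('Release') {
            when { branch 'master' }
            steps {
                // Package and publish the artefacts (hypothetical helper script).
                sh './scripts/package-and-upload.sh'
            }
        }
    }
}
```

The whole flow, from a merged commit to released artefacts, runs without anyone touching a build machine by hand.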

So what other potential does Jenkins unlock for us?

It allows us to automate some of the 'business as usual' tasks (such as soak testing) which are essential but eat up team time. With Jenkins, we now simply open the Jenkins URL, click the soak test job, select a PR or commit for it to run against, and click to start, leaving the team free to spend time on other things.
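A job like that is simply a parameterised pipeline. As a rough sketch (the parameter names, repository and soak-test script are all hypothetical):

```groovy
// Parameterised soak-test job -- illustrative sketch only.
pipeline {
    agent { label 'linux' }
    parameters {
        // The branch, PR ref or commit hash the soak test should run against.
        string(name: 'GIT_REF', defaultValue: 'master',
               description: 'Branch, PR ref or commit to test')
        string(name: 'DURATION_HOURS', defaultValue: '8',
               description: 'How long the soak test should run')
    }
    stages {
        stage('Checkout') {
            steps {
                // Check out exactly the ref selected in the Jenkins UI.
                checkout([$class: 'GitSCM',
                          branches: [[name: params.GIT_REF]],
                          userRemoteConfigs: [[url: 'https://github.com/maidsafe/safe_vault.git']]])
            }
        }
        stage('Soak test') {
            steps {
                // Hypothetical script that drives the long-running test.
                sh "./scripts/soak-test.sh --hours ${params.DURATION_HOURS}"
            }
        }
    }
}
```

Anyone on the team can trigger it from the job's 'Build with Parameters' page and walk away.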

Jenkins also allows us to efficiently automate some of the steps in our current release processes across our repos. That helps rule out human error and improves reliability. And whilst it speeds up these processes, it also improves security by removing humans from the handling of keys, certificates, passwords and so on.
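The security gain comes from the Jenkins credentials store: secrets live in Jenkins, are injected into a job only for the steps that need them, and are masked in the console output. A hedged sketch of how a publish step might use it (the credential ID is invented):

```groovy
// Using the Jenkins credentials store in a release stage -- sketch only.
pipeline {
    agent { label 'linux' }
    stages {
        stage('Publish') {
            steps {
                // 'crates-io-token' is a hypothetical credential ID configured in Jenkins.
                // The secret is exposed to this block as an environment variable and
                // masked in the build log, so no human ever handles it directly.
                withCredentials([string(credentialsId: 'crates-io-token',
                                        variable: 'CARGO_TOKEN')]) {
                    sh 'cargo publish --token "$CARGO_TOKEN"'
                }
            }
        }
    }
}
```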

We want to be able to make use of internal hardware assets (in other words, physical machines in the office) alongside cloud services such as AWS as part of our build pipelines. Put simply, this allows the team to utilise existing idle hardware in the office and, when demand is high, to create temporary additional machines in the cloud to run jobs.
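In Jenkins terms this is mostly a matter of labels: office machines and temporary cloud instances register as agents with labels, and each stage requests the kind of machine it needs. A sketch with invented label names:

```groovy
// Mixing in-office hardware and cloud agents via labels -- sketch only.
pipeline {
    // No global agent; each stage picks the machine type it needs.
    agent none
    stages {
        stage('Build on office hardware') {
            // 'office && linux' is a hypothetical label for idle machines in the office.
            agent { label 'office && linux' }
            steps { sh 'cargo build --release' }
        }
        stage('Windows tests in the cloud') {
            // 'aws && windows' would match temporary cloud agents spun up on demand.
            agent { label 'aws && windows' }
            steps { bat 'cargo test --release' }
        }
    }
}
```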

We also want to use our internal hardware assets to support the creation and maintenance of resources used in our build pipelines (e.g. building a new version of a container that's used in a build pipeline). Let's explore this with an example. Say there was a job in Jenkins which built and deployed the latest version of one of our repos, e.g. SCL (safe_client_libs) or Safe Browser. For that job to run successfully, whether on a machine in the cloud or hardware in the office, it needs very specific software pre-installed before the job can start. This software differs according to what is being built or deployed, and it is also subject to change: for example, the latest stable Rust version may move from v1.35.0 to v1.35.1. So it's not inconceivable to imagine a situation where SCL needs to stay on Rust v1.35.0 while Safe Browser should be updated to use v1.35.1.
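With Docker-based agents that divergence is easy to express: each job's Jenkinsfile names the container image, and therefore the Rust toolchain, it wants. The image name and tags below are assumptions for illustration:

```groovy
// SCL's Jenkinsfile pinned to an image built with Rust 1.35.0 -- sketch only.
pipeline {
    agent {
        docker {
            image 'maidsafe/safe-client-libs-build:rust-1.35.0' // hypothetical image tag
            label 'docker'                                      // any agent with Docker installed
        }
    }
    stages {
        stage('Build') {
            steps { sh 'cargo build --release' }
        }
    }
}
```

The Safe Browser job would simply reference a different tag, say rust-1.35.1, and the two jobs never interfere with each other.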

As you can see, a number of factors need to be configured before our build and deploy process can begin. We can use Docker containers to give us pre-baked images of how each machine should look for each specific job. This removes the need to install and set up each of these software dependencies every time a new job is requested, and everything is wiped once the job has finished so it doesn't affect the next job running on that machine.

Each job has its own associated Docker container with a set version of Rust and various other software. Any machine which runs that job simply downloads that image, which is much quicker than installing the software from scratch. Once the machine has finished running the job, the container can be removed, leaving the machine in a vanilla state again, ready for the next job, or ready to be killed if it's in the cloud and no longer required. It is crucial that these Docker containers are frequently checked to ensure they have the correct versions of each item of software; this can be scheduled through Jenkins to automatically run and produce fresh, up-to-date containers.
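Keeping those images fresh can itself be a Jenkins job. Here's a hedged sketch of a scheduled pipeline that rebuilds and pushes a build container every night; the image name, Dockerfile path and credential ID are assumptions:

```groovy
// Nightly rebuild of a build container so it always carries current tooling -- sketch only.
pipeline {
    agent { label 'docker' }
    triggers {
        // Ask Jenkins to run this once a night, spread around 2am.
        cron('H 2 * * *')
    }
    stages {
        stage('Build image') {
            steps {
                // Build the container from a Dockerfile kept in the repo (hypothetical path).
                sh 'docker build -t maidsafe/safe-client-libs-build:latest containers/scl/'
            }
        }
        stage('Push image') {
            steps {
                // 'dockerhub-creds' is a hypothetical username/password credential in Jenkins.
                withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                                  usernameVariable: 'DOCKER_USER',
                                                  passwordVariable: 'DOCKER_PASS')]) {
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh 'docker push maidsafe/safe-client-libs-build:latest'
                }
            }
        }
    }
    post {
        // Leave the agent clean for whatever job runs next.
        always { sh 'docker logout || true' }
    }
}
```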

But what does this truly deliver?

So it's clear that, together, these changes all save time. But the real benefit is that they let QA/DevOps take real ownership of the build and release processes for all products. That's what DevOps is really about: they keep the CD processes running smoothly and make things production-ready, while the devs can get on with the application code. With a team focused here, the other developers aren't distracted by this time-consuming work.

But the biggest impact will be living true to our core values by making sure we have clear visibility of the status of our repos across the entire business. It's far more efficient and transparent if they are all in one place. This is typical of our open source attitude, where we believe the inclusion of everyone means faster innovation, better bug-catching and, in this case, simply everyone knowing what's going on. This single location (https://jenkins.maidsafe.net/) will be a space where our developers can go and run any number or type of job that has been set up in there for them, without having to go through manual set-up or worry about where it's running and how to run it. This takes responsibility away from the developers, allowing them to concentrate on what they do best: making great products.

What’s next?

The plan going forward is to gradually migrate the more complex jobs for each repo, such as release processes, to Jenkins. There are also some practicalities: it would take a fair amount of heavy lifting to move every job over to Jenkins, so we're beginning with the complex ones while making sure we focus on supporting the immediate next steps on the road to launch.


Building the SAFE Network. The world’s first autonomous data network. Privacy, security, freedom. Join us at https://safenetforum.org/