Continuous Pipelines for DevOps

The Definitive Disambiguation Guide

Matthieu FRONTON
7 min read · Aug 22, 2016

Judging from the many resources published by software companies out there, it’s quite difficult to find a consensus definition of Continuous Delivery and/or Continuous Deployment.

Each vendor seems to defend its own point of view, which is fair enough. But it is starting to muddy the meaning for the whole DevOps community.

How can we rebuild a clear and simple definition so we never get confused again?

You can either jump straight to the Conclusion section of this article, or jump in and let me take you on a journey, starting from the very basics.

Let’s go back to the 70s…

Automation vs Orchestration.

Automation

Prior to 1976, automated builds were performed by OS-dependent shell scripts.

In April 1976, Stuart Feldman released “make”, the first dependency-tracking, OS-agnostic build utility, which uses a file containing a set of directives: the Makefile.

Its main purpose was to automate the build of executable programs and libraries from source code.
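To make this concrete, here is a minimal Python sketch of what make does at its core: rebuild a target only when it is missing or older than one of its dependencies. The target, dependencies and command here are hypothetical, and real make does far more (rule chaining, pattern rules, …).

```python
import os
import subprocess

def build(target, dependencies, command):
    """Rebuild `target` only if it is missing or older than a dependency."""
    if os.path.exists(target):
        target_mtime = os.path.getmtime(target)
        if all(os.path.getmtime(dep) <= target_mtime for dep in dependencies):
            print(f"{target} is up to date")
            return
    subprocess.run(command, shell=True, check=True)

# Hypothetical rule, equivalent to this Makefile entry:
#   app: main.c util.c
#           cc -o app main.c util.c
build("app", ["main.c", "util.c"], "cc -o app main.c util.c")
```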

This automation strategy has since been generalized and spread to every stage of the software lifecycle:

  • Build stage: Automated Build (code),
  • Test stage: Automated Tests (unit/acceptance/user),
  • Release stage: Automated Release (version x.y.z),
  • Deploy stage: Automated Deployment (infra/server/app provisioning)

Some stages have dedicated tools to help with the automation process.

Automated Builds

Generate artifacts from source following directives.

Example tools are: Make, Maven, Gradle, MSBuild, …

Automated Tests

Get insight into the application’s behavior and reliability.

The testing stage can cover multiple areas. Example tools in these areas are listed below; a minimal unit test sketch follows the list:

Unit Tests: JUnit, pytest, RSpec, …

Load Tests: Apache JMeter, Gatling, Locust, …

Acceptance Tests: Cucumber, Selenium, Robot Framework, …
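As a tiny illustration of the unit test area, here is a self-contained Python example using the standard unittest module (add is a hypothetical unit under test). From the pipeline’s point of view, an automated test stage boils down to “run this command and check its exit code”.

```python
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # exits non-zero on failure, which is all the pipeline needs
```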

Automated Release

Create a release at the specified version.

Package your delivery and make the resulting resource(s) available for the next deploy stage.

With any build automation tool or raw scripting language, you package the delivery and upload it to a central repository server.

Here is a non-exhaustive list of repository servers: Sonatype Nexus, JFrog Artifactory, Apache Archiva, …
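As a sketch of this stage, the following Python script packages a hypothetical dist/ build output into a versioned tarball and uploads it with an HTTP PUT. The repository URL is made up, and every real repository server has its own upload API, so treat this as the general shape, not a recipe.

```python
import tarfile
import urllib.request

VERSION = "1.2.3"                    # hypothetical release version
ARCHIVE = f"myapp-{VERSION}.tar.gz"  # hypothetical artifact name
REPO_URL = f"https://repo.example.com/releases/myapp/{ARCHIVE}"  # hypothetical

# Package: bundle the build output into a versioned tarball.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add("dist/", arcname=f"myapp-{VERSION}")

# Upload: make the artifact available to the deploy stage.
with open(ARCHIVE, "rb") as f:
    request = urllib.request.Request(REPO_URL, data=f.read(), method="PUT")
    urllib.request.urlopen(request)
```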

Automated Deployment

Deploy a specific version to the target.

The deployment stage can cover three distinct steps:

Infrastructure Provisioning: Terraform, AWS CloudFormation, OpenStack Heat, …

Server Provisioning: Puppet, Chef, SaltStack, …

Application Provisioning: Ansible, Capistrano, Fabric, …

NOTE: Nowadays tools overlap on many features. For example, I am currently working on a project where I use Ansible for all three provisioning steps: infrastructure, server, and application.
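As a sketch of what such a deploy stage can look like, here is a minimal Python wrapper around ansible-playbook; the playbook name, inventory layout, and extra variable are hypothetical.

```python
import subprocess
import sys

def deploy(version, environment):
    """Run a (hypothetical) Ansible playbook to deploy `version` to `environment`."""
    result = subprocess.run([
        "ansible-playbook",
        "deploy.yml",                        # hypothetical playbook
        "-i", f"inventories/{environment}",  # hypothetical per-environment inventory
        "-e", f"app_version={version}",      # hypothetical version variable
    ])
    return result.returncode

if __name__ == "__main__":
    sys.exit(deploy(version="1.2.3", environment="staging"))
```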

Orchestration

Once you’ve automated two or more consecutive stages, you need to coordinate them.

Orchestration is about coordination: you define how and when you want to execute the automated scripts. Orchestration solutions let you define whether they run on a scheduled, triggered, or on-demand basis.

We can implement CI/CD pipelines on any orchestration solution: Jenkins, GitLab CI, Travis CI, GoCD, …

Yes, you can implement a CD pipeline even on your old Jenkins 1.x orchestration server! Remember, it’s just about defining how and when you run automated scripts…
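To show it really is just “how and when”, here is a minimal Python sketch of an on-demand pipeline: run hypothetical stage scripts in order and stop at the first failure. Every real orchestration solution adds scheduling, triggers, history, and a UI on top of this very idea.

```python
import subprocess

# Hypothetical stage scripts, one per automated stage.
PIPELINE = [
    ["./build.sh"],
    ["./test.sh"],
    ["./release.sh"],
    ["./deploy.sh"],
]

def run_pipeline(stages):
    """Coordinate the stages: run them in order, stop on the first failure."""
    for command in stages:
        print(f"Running stage: {command}")
        if subprocess.run(command).returncode != 0:
            print("Stage failed, aborting pipeline")
            return False
    return True

if __name__ == "__main__":
    run_pipeline(PIPELINE)  # on demand here; a scheduler or SCM hook could call it too
```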

So what’s all this hype around Continuous Delivery/Deployment?

It’s because orchestration solution makers keep adding capabilities (built-in or via plugins) to help you implement pipelines.

Here are the two most recommended capabilities:

Pipeline as Code

The really important change in the CI/CD field is that you should require the ability to code your pipeline and commit it to your SCM repository, along with your code.

Pipeline as Code is key to ensuring long-term maintainability.

Before “Pipeline as Code”, your code and your automation were committed to an SCM, but the pipeline itself was defined through a web UI, as multiple tasks with their own unversioned lifecycle, which made long-term maintainability difficult to ensure. You also needed access to the orchestration server to create your tasks.

With “Pipeline as Code”, your code, your automation, and your orchestration are all committed to the SCM. The pipeline has the exact same versioned lifecycle as the code, helping you ensure long-term maintainability.
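Here is a deliberately tool-agnostic Python sketch of the idea (real tools have their own formats, e.g. a Jenkinsfile or .gitlab-ci.yml): the pipeline definition lives in a hypothetical pipeline.json at the root of the repository, and the orchestrator executes whatever the checked-out revision defines.

```python
import json
import subprocess

# Hypothetical pipeline definition, committed next to the code:
#   {"stages": [{"name": "build", "command": "./build.sh"},
#               {"name": "test",  "command": "./test.sh"}]}
with open("pipeline.json") as f:
    pipeline = json.load(f)

# An old branch runs its old pipeline; a new branch runs its new one,
# because the definition is versioned with the exact same lifecycle as the code.
for stage in pipeline["stages"]:
    print(f"Stage: {stage['name']}")
    subprocess.run(stage["command"], shell=True, check=True)
```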

“You need to set up a new pipeline for your project”

then: you needed an account and the required ACLs on the orchestration server

now: that’s OK, just push the code

“You need to bugfix and package an old app version for security purposes”

then: chances are the tasks no longer fit your old app’s requirements; you can’t just run the current pipeline

now: that’s OK; pull, fix, and push the code, and the legacy pipeline will be started

“You work on a branch requiring a slightly different orchestration”

then: you have to duplicate all the tasks, making your orchestration dashboard more and more complex over time

now: that’s OK; push the new definition and the improved pipeline will be started

Remote Agents

Legacy orchestration servers used to require lots of CPU/memory/disk to ensure all your company’s tasks could be executed. Complex and constant work was required to operate these servers at a high quality level: plugin management, user management, version updates, capacity provisioning, monitoring, … (just ask your OPS team for more).

Legacy orchestration servers were complex for OPS and disappointing for DEV.

Before “Remote Agents”, tasks were executed on the orchestration server itself or on one of its slaves, in a raw, shared environment.

With “Remote Agents”, each task is executed on a remote agent, in a dedicated environment context.

“You want to build with a very specific component version that is not the one on the orchestration server”

then: if you update the raw environment, it impacts all the builds to come, not only yours but every build running on the system. So you need to manually set up a new context (virtualenv, version manager, virtual machine, container, …)

now: your agent itself can guarantee a fresh, dedicated context by design, using virtual machine or container technology (transparent for you)

“You want your orchestration server to start a deployment”

then: the orchestration server needs access to the targeted environment. This requires you to request an inbound whitelist on multiple IP/port/protocol combinations, which is often difficult to obtain and maintain, and opens up a security concern…

now: the orchestration agent can securely register with the orchestration server from within the targeted environment. This may require you to request an outbound whitelist on a single IP/port/protocol, which is often easy to obtain and defend, and less of an issue for the security team (I said “less”, not… “not”)
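As a sketch of why the outbound model is simpler network-wise, here is a hypothetical Python agent that polls the orchestration server for work over a single outbound HTTPS connection; the server URL and endpoint are made up for illustration.

```python
import json
import subprocess
import time
import urllib.request

SERVER = "https://orchestrator.example.com"  # hypothetical orchestration server

def fetch_job():
    """Ask the server for work over a single outbound HTTPS call."""
    with urllib.request.urlopen(f"{SERVER}/agent/next-job") as response:  # hypothetical endpoint
        return json.load(response)

while True:
    job = fetch_job()
    if job:
        # The job runs here, inside the targeted environment, so no
        # inbound access to this network is ever required.
        subprocess.run(job["command"], shell=True)
    time.sleep(10)
```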

Disambiguation

“Automation” != “Automatic”

Having an automated stage (automation) doesn’t mean it is automatically started (automatic promotion).

“Continuous” != “Automatic”.

Continuous Deployment doesn’t mean that each time someone pushes to the repository, the change is propagated all the way to production without manual validation between stages.

Continuous is the idea of continuity between multiple distinct sections. Like when you plug multiple pipes together to get a (…wait for it…) pipeline ;)

“Orchestration” != “Orchestration” (duh)

In this article I am focused on top-level Orchestration: coordinating your pipes (automated stages) to create your pipeline (continuous integration/delivery/deployment).

Be aware that you can speak of (and you will hear about) Orchestration at multiple sublevels of your pipeline. In the automated deployment stage, for example:

  • Ansible maps arbitrary IT processes to concrete, complex multi-tier workflow orchestration,
  • MCollective, used on top of Puppet, helps you build server orchestration and parallel job-execution systems,
  • Docker manages your container orchestration,

So each time you hear about Orchestration, first ask yourself what the Orchestration level is:

  • Pipeline level or Stage level?
  • Which Stage?
  • If in the Deployment Stage, is it in Server Provisioning or in Application Provisioning?
  • etc…

Orchestration can have multiple meanings at multiple levels.
So without context, it has no meaning at all.
— Captain Obvious

So if you’re asked to implement the DevOps Orchestration (Achievement Unlocked: “Buzzword Combo”) of company X or for project Y, well… just laugh and move on :)

Feedback Loop

Continuous Pipelines are not just about tooling.

You must not (!) forget the Feedback Loop to ensure constant improvement…

Conclusion

Typical Workflow

The typical software development process, from early thinking to end user, is the following: plan → code → build → test → release → deploy → operate.

Continuous Pipelines

Starting from an SCM to pull code from, we create scripts to build and test the delivery. Once each step is automated, we can coordinate them with an orchestrator: that is Continuous Integration.

Starting from a Continuous Integration workflow, we then automate the release of the tested artifacts into a ready-to-deploy version. Once each step is automated, we can coordinate them with an orchestrator: that is Continuous Delivery.

Starting from a Continuous Delivery workflow, we then automate the deployment to the targeted environment. Once each step is automated, we can coordinate them with an orchestrator: that is Continuous Deployment.

For many of us, that will be our (asymptotic?) target.

Finally

With all of the above in mind, I’m confident you’ll be able to define your needs and choose the tool that fits your company’s requirements. You’ll also be able to reach a consensus on definitions with your coworkers. I hope I did in the first place! :)

I finally came up with the clear and simple definition I was searching for in the first place:

Continuous Pipeline is the orchestration of automated stages in your software lifecycle.

Happy Pipelining.

