Next-Level Deployment Pipelines

Albert Starreveld
Jun 16, 2020 · 11 min read

“Pipelines”, that’s what the menu in Azure DevOps says. Nothing more, nothing less. They can be used for anything. They’re a great tool to deploy software to resources in the cloud. But a pipeline can do much more than that.

Companies race each other. Who ships something new to the internet first? Who can serve a new market first? Deploying quickly, and often, is a huge competitive advantage. The game is to get new software to the end-user as quickly as possible, preferably before the feature is even completed. That way, we can test assumptions and change our plan before we waste a fortune on building the perfect product that nobody needs. And that’s what pipelines are for: building relevant software, quicker!

Pipelines can do much more than just install things on cloud resources. They’re a tool to automate ops, development, test, and even administrative processes, to get feedback quicker and to bring value to the client as soon as possible!

Deploying more often seems to be the obvious answer. If only life were that simple… How many issues are caused by software updates? Continuously throwing newer software at a resource is reckless. You need to talk to your stakeholders before you install something new, and they want to be sure it won’t ruin their business. And that, too, is what pipelines are for!

This article outlines:

  • Things you can do with a pipeline
  • The stages a pipeline should have
  • Why you need different environments, and how your pipeline copes with that
  • How to deal with versions and environments

Part 1: Things you can do with a pipeline

Pipelines are much more than an installation vehicle; they automate business processes. Ultimately, a deployment pipeline automates as many deployment-related business processes as possible. For example:

Compiling and installing the software

The most common use of a pipeline is to compile source code. A pipeline pulls source code from a repository and compiles it. That results in artifacts that the pipeline installs on a cloud resource.
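In Azure DevOps, for example, a minimal build stage could look like the sketch below. It assumes a .NET project; the task names and paths differ per technology stack.

    stages:
      - stage: Build
        jobs:
          - job: Compile
            pool:
              vmImage: 'ubuntu-latest'
            steps:
              # Compile the source code that was pulled from the repository
              - task: DotNetCoreCLI@2
                inputs:
                  command: 'publish'
                  projects: '**/*.csproj'
                  publishWebProjects: false
                  arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
              # Keep the compiled output as an artifact for the later stages
              - task: PublishPipelineArtifact@1
                inputs:
                  targetPath: '$(Build.ArtifactStagingDirectory)'
                  artifact: 'drop'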

Testing the new version of the software

Quality assurance is an important part of the software delivery process. Everything from validating requirements to taking screenshots and generating test reports can be done by your pipeline.

This is just the tip of the iceberg, but pipelines typically contain these steps:

Does the new software do what it is supposed to do?
Before installing software on a production site, stakeholders would generally like to see it. They want to make sure their features have been implemented correctly before they’re installed on the production site. Perhaps they’ll open a browser, go to your application, click some buttons to see if the application does what they expect. That’s something your pipeline can do, too! Frameworks like Selenium, Cypress, or any other click-automation framework can automatically click through the application and read from the screen to see if anything broke. These frameworks can even take screenshots and generate a report. And if clicking through the application fails, you can use your pipeline to stop the installation.
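As a sketch: a UI-test step in an Azure DevOps pipeline could run Cypress headlessly against the freshly deployed environment. The TestEnvironmentUrl variable is a hypothetical name; point it at whatever your test environment is called.

    steps:
      # Click through the application; a failing scenario fails the stage
      # and stops the installation from going any further.
      - script: npx cypress run --config baseUrl=$(TestEnvironmentUrl)
        displayName: 'Click through the application'
      # Keep the screenshots Cypress took, so testers can see what broke
      - task: PublishPipelineArtifact@1
        condition: failed()
        inputs:
          targetPath: 'cypress/screenshots'
          artifact: 'ui-test-screenshots'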

Will an installation potentially be successful?
A pipeline has several stages, one per environment. But who says the environment contains every resource the application needs to run? You can use your pipeline to validate it. Use pre-installation checks to make sure your database is accessible, or to make sure that new third-party API you’re going to use hasn’t been blocked in the firewall. If it has, let your pipeline cancel the deployment; it’s going to fail anyway.
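A pre-installation check can be as simple as an inline PowerShell step. The host names below are placeholders; check whatever your application actually depends on.

    steps:
      - task: PowerShell@2
        displayName: 'Pre-installation checks'
        inputs:
          targetType: 'inline'
          script: |
            # Is the database reachable from this environment?
            $tcp = New-Object System.Net.Sockets.TcpClient
            $tcp.Connect('my-database.example.com', 1433)  # throws, and fails the stage, when blocked
            # Has the new third-party API been opened up in the firewall?
            Invoke-WebRequest 'https://api.thirdparty.example.com/ping' | Out-Null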

Has the installation been successful?
Pre-installation checks aren’t a guarantee of working software. They anticipate the expected problems. The real problem is the things you don’t expect. That’s what post-installation checks are for. Let your pipeline validate that the updated software is up and running after the installation. Just invoke an endpoint to see if it returns a 200 OK. If it doesn’t, you’ve probably broken something. Use your pipeline to automatically roll back and restore the environment!
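A minimal sketch of such a post-installation check, assuming the application exposes a health endpoint (the AppUrl variable is a placeholder):

    steps:
      - task: PowerShell@2
        displayName: 'Post-installation check'
        inputs:
          targetType: 'inline'
          script: |
            # Does the freshly installed version respond at all?
            $response = Invoke-WebRequest "$(AppUrl)/health"
            if ($response.StatusCode -ne 200) {
              Write-Error 'The new version is not responding; roll back!'
              exit 1
            }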

There are no out-of-the-box tools that validate the presence of resources in an environment. The development team usually builds a test assembly that contains the checks they need, or writes a PowerShell script that runs Pester tests in the pipeline.

Generating documentation and release notes

Every sprint, we complete user stories. They are linked to features. Once a user story is completed, we close the work items. When we create a pull request, we mention those work items.

Every release contains one or many pull requests. Through the version-control system, the features that are in the release can be traced back. That means you can generate release notes automatically! Several plug-ins are available that do just this.

The same goes for your functional documentation. If you write your tests carefully, they can be used to generate it automatically. Look into Cucumber!

Installing new cloud resources or running migrations

Not having the resources the software needs doesn’t have to be a problem. Why not have the pipeline install those resources during the deployment?
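In Azure DevOps, for example, a deployment stage could create or update the resources before installing the application itself. The service connection name and template path below are assumptions:

    steps:
      # Create (or update) the resources the application needs. Incremental
      # mode leaves resources that are not in the template untouched.
      - task: AzureResourceManagerTemplateDeployment@3
        inputs:
          deploymentScope: 'Resource Group'
          azureResourceManagerConnection: 'my-service-connection'
          subscriptionId: '$(SubscriptionId)'
          resourceGroupName: 'my-app-rg'
          location: 'West Europe'
          csmFile: 'infrastructure/azuredeploy.json'
          deploymentMode: 'Incremental'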

Asking stakeholders for approval

In many companies, there’s one more process, and it isn’t a technical one at all. Before updating an environment, someone needs to approve the installation. And these approvals need to be logged for compliance reasons.

Platforms like Azure DevOps contain steps you can include in your pipeline that send an e-mail to a list of users. The e-mail contains a link to a page with an approve button, where they can choose to either approve or reject the deployment.
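In a YAML pipeline, the ManualValidation task does just that. It has to run in an agentless job (pool: server); the e-mail address below is a placeholder.

    jobs:
      - job: WaitForApproval
        pool: server
        timeoutInMinutes: 4320   # give the stakeholders three days to respond
        steps:
          - task: ManualValidation@0
            inputs:
              notifyUsers: 'stakeholder@example.com'
              instructions: 'Please verify the release on the acceptance environment.'
              onTimeout: 'reject'  # no response means no deployment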

Is there anything you can’t do with a pipeline?

Pipelines can automate pretty much any process, ranging from generating e-mails to installing software to migrating databases. It wouldn’t surprise me if you could create a pipeline that makes a phone call or launches a rocket to Mars.

Part 2: What stages should a pipeline have?

Pipelines are much more than just a set of technical tasks. They are the automated version of a complete IT department (and more). They contain steps that automate multiple disciplines. It’s not just an ops thing. There’s a virtual software developer in that pipeline, and a tester, and somebody from ops, perhaps somebody from the legal department, and a test coordinator, and many more!

A pipeline has a purpose that’s much bigger than just installing software. It’s an automated process that allows you to ship things to production worry-free, without any manual work. Think of a pipeline as a series of quality gates. Consider this:

  • If the software doesn’t do what it is supposed to do, functionally, it should not be installed in any environment, and most certainly not in production!
  • If the installation of the software doesn’t work, we don’t want to install it in any environment, and most certainly not in production.
  • If the software doesn’t work well with the APIs it depends on, we might want to install it on a test environment to be able to analyze and fix the problem, but we certainly don’t want to install it on the production site.

All of the above can be solved by using the DTAP concept. DTAP is an abbreviation for four environments: Development, Test, Acceptance, and Production. It’s a concept that existed way before deployment pipelines, but it’s relevant nonetheless.

An environment is a group of resources an application runs on. Every environment has a purpose, and different people use different environments for different reasons:

  • The development environment is the place where work is done. If things are broken there, that’s okay; that’s what it’s for. It is the place where work is in progress. A developer might run a development environment on their local machine. It’s not uncommon to have a resource group in the cloud, either.
  • When the development team thinks the software is good enough to be tested, they install it in the test environment. That’s where the team makes sure the software works end to end, including the integration with third-party APIs and with other systems in the company.
  • The acceptance environment is a copy of the production environment. Typically, the ops team is in charge of it. Stakeholders use this environment to make sure the software does what they expect it to do, functionally. The installation procedure on the acceptance environment should be identical to the installation procedure on the production site. So if it fails there, expect trouble in production!
  • The production site. That’s where the money is made. Developers don’t have access to it, and neither do testers. Only end-users do.

The journey from the developer’s computer to the production site

A pipeline compiles source code. That source code becomes an artifact that will be installed on the environments of a DTAP street. The deployment pipeline steers that artifact, the package as it were, from one environment to the next.

Software travels from one stage to another
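In Azure DevOps YAML, that journey is a multi-stage pipeline: the build stage publishes the artifact once, and every later stage deploys that same artifact instead of rebuilding it. A simplified sketch, with assumed environment names:

    stages:
      - stage: Build
        jobs:
          - job: Compile
            pool:
              vmImage: 'ubuntu-latest'
            steps:
              # compile steps omitted; publish the result once
              - task: PublishPipelineArtifact@1
                inputs:
                  targetPath: '$(Build.ArtifactStagingDirectory)'
                  artifact: 'drop'
      - stage: Test
        dependsOn: Build
        jobs:
          - deployment: DeployToTest
            pool:
              vmImage: 'ubuntu-latest'
            environment: 'test'
            strategy:
              runOnce:
                deploy:
                  steps:
                    # A deployment job downloads the 'drop' artifact automatically.
                    # Install it on the test environment here.
                    - script: echo "installing $(Pipeline.Workspace)/drop"
      # The Acceptance and Production stages follow the same pattern,
      # deploying the very same 'drop' artifact.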

Don’t think of a deployment pipeline as a deployment vehicle; think of it as an information pipeline. Everybody in the software delivery process has questions about new versions of the software, and the deployment pipeline provides the answers. But not everybody is interested in every version of the software:

The build stage (for the development team):

  • Does the application build? Are there any coding errors?
  • What business rules have been implemented correctly?
  • What business rules haven’t been implemented correctly?

The development stage (for the development team):

  • Does the installation work at all?
  • Does a deployed version of the software run in any environment?

The test stage (for the development team):

  • Does the installation integrate with the systems it depends on?
  • Does the software work, end to end?

The acceptance stage (for the ops team and the stakeholders):

  • Are the stakeholders happy?
  • Will the installation on production go smoothly?
  • Will the software work on an environment that has the same specs as production?

The production stage (for the ops team and the stakeholders):

  • Has the installation been successful?

Create a single pipeline with multiple stages

Not all questions need answering right away. Not every build is going to production. And to make matters more complex, some information can only be collected in a particular environment. Testing the system end to end, for example, often results in updates to the data. That may be an issue in an acceptance or a production environment. That’s why every stage collects only some of the information.

Don’t make a pipeline per stage. How would you ensure everything works in production? And how would other teams be able to test whether their software integrates with yours? All of the questions above need answering to ensure a properly working version of the software on the production site. And the software needs to be installed in every environment to ensure it keeps working when other teams ship their software to production. Make one pipeline with multiple stages, and run the software through every stage, as in the sketch above.

TAP will do fine

Lately, many companies don’t bother to create both a development and a test environment. Development is done on the developers’ machines, and the development and test stages of the deployment pipeline are merged. The result? A pipeline with only a test, an acceptance, and a production environment.

The run-book

Remember when ops teams used to write run-books? “First run this script, then run that pipeline, then configure that server, and finally, go to that website to see if it worked. If it doesn’t, inform John and restore the previous version.”

A pipeline is a run-book. It is an automated, autonomous installation process that people can trigger without giving it any thought. It automates what used to be in the run-book, and much more.

Part 3: How to deal with versions and environments

Software development and operating IT environments are two completely different disciplines. Software development is all about source code and producing new features as soon as possible. Ops is about keeping an environment alive with compiled versions of the source code from a given point in time. Developers don’t have to do disaster recovery; ops people do.

Ops installs new software. When things go wrong, they are the ones who have to restore the environment as quickly as possible. And if that fails, the business starts losing money.

Imagine an IT environment that runs version A of the source code. A couple of days later, the build server gets updated and a new version of the compiler is installed. At some point, ops installs version B onto the production site, and that goes wrong. To restore the environment, they run a new build of version A, but it doesn’t compile anymore. Oops…

There’s version control, and then there’s version control. Developers use Git, but that isn’t going to work for ops purposes. Retain builds instead: save the compiled version of the software, too. That way, ops can skip the build step and be certain to install exactly the same version that used to run in a particular environment.
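In Azure DevOps, for example, a release pipeline can consume the retained artifact of an earlier run through a pipeline resource, so ops never compiles during a restore. The pipeline name below is an assumption:

    resources:
      pipelines:
        - pipeline: build              # local alias for the build pipeline
          source: 'my-build-pipeline'  # the pipeline that produced the artifact

    steps:
      # Download the 'drop' artifact of a retained run; no compiler
      # involved, so version A installs exactly as it did before.
      - download: build
        artifact: 'drop'
      - script: echo "installing $(Pipeline.Workspace)/build/drop"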

Backward compatibility

Installing a previous version of the software onto an environment isn’t going to work if the database schema is incompatible. And being able to restore a previous version is important: it’s “Plan B” when disaster strikes.

Spend some time figuring out a way to make every version backward compatible. When renaming a column of a table, keep the old column up to date too, for at least one version. Little things like that keep the software backward compatible.

Infrastructure is a deliverable, too!

Use your acceptance environment to make sure your software is compatible with the production site. Finding out the software isn’t compatible with the production site, on the production site, causes downtime. Testing it on the acceptance environment is the next best thing. To be able to draw any conclusions at all, the acceptance and production environments need to be more or less similar.

That leaves the problem of keeping multiple environments alike. To do that, you’ll need a blueprint of the environment to compare the others with. What better way of doing that than coding your infrastructure? Use CloudFormation templates, Terraform, or ARM templates, for example, to create a blueprint of an environment in code. They can generate the same environment over and over again, in multiple places.

These scripts are idempotent, so they, too, can run in a pipeline.
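A sketch with Terraform: the same few commands can run in every stage, and because applying the same code twice converges to the same environment, re-running them is harmless. Backend configuration for the Terraform state is omitted here.

    steps:
      - script: |
          terraform init
          terraform plan -out=tfplan
          terraform apply tfplan
        workingDirectory: 'infrastructure'
        displayName: 'Apply the environment blueprint'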

Summary

Build pipelines are more than deployment vehicles. They provide information about the fitness of the software, they provide functional information, and they’re a way to rehearse the installation on the production site.

Use a build to test the business rules of the software, in isolation. Use a separate environment to ensure the software integrates with the components it depends on. Use a test and an acceptance environment to perform both technical and functional checks.

Builds aren’t branches. Versioning code and developing new features is a different ball game from releasing software. When releasing software, move a compiled version of the software from one environment to the next. Don’t pull files from repositories during a deployment. Artifacts must be static to make ops easy.

Make sure environments are alike. That’s hard, especially when you’ve got a complex infrastructure. Or is it? Use Infrastructure as Code to replicate environments. It, too, can run in a pipeline.

Create meaningful packages. Separate build from release. Retain builds longer than your release cadence. That way you can run ’em through your deployment pipeline whenever you want.
