Why YAML is Important for Azure DevOps

Sebastian Schütze
RazorSPoint
Jan 26, 2020

Azure DevOps introduced the multi-stage pipelines UX experience as GA with sprint update 162 (not exactly GA in every sense, but it is a big step because it is now the default). With this feature, pipelines as code were introduced as well. You may say, "Whaaaaat? Pipelines as code?" Yes! In the past, Azure DevOps pipelines were UI-based tools; you could only configure them efficiently through the UI. Sure, the pipeline itself is stored as some sort of JSON object, but it includes a lot of metadata that depends on the environment where the pipeline is saved (Azure DevOps tenant, project, etc.).

To understand why Azure DevOps is moving towards supporting code-only pipelines, we have to understand the history, reasoning, and thoughts behind this development.

A Small History of Pipelines

Before I get to the point, let me paint a very, very short picture of how pipelines developed. Everybody knows the reason for and the need behind automated pipelines: everything was done manually in the past, and that was the biggest source of error. Following the article that talks about Jenkins and its evolution of pipelines, the history can be summarized as follows:

  1. manual deployment
  2. coded/scripted deployment only
  3. UX-based pipelines
  4. YAML-based pipelines

Putting aside exact dates, and the fact that YAML is not necessarily the most important or best thing for everybody, it is definitely in the hype phase. It has also somewhat proven itself (it is used by Azure DevOps, Kubernetes, OpenAPI, GitLab, CircleCI, and Jenkins plugins).

Pipeline as Configuration

The title matters, because I believe pipelines as configuration are better than pipelines as code!

YAML itself is not important, but it does what pipelines as code cannot do: separate logic from configuration, which is an important part of GitOps and, more generally, of good DevOps!

For people who don't know GitOps: it is basically the adoption of development-based DevOps principles into the operations world. You can read more about it at https://www.gitops.tech.

YAML-based pipelines are an attempt to separate code logic from configuration. As a developer, you want your code to follow best practices such as:

  • no hardcoded strings in code
  • write methods to do one thing and one thing only
  • make your code testable
  • make your code general enough so you can reuse it

What you would do is write a method and parameterize parts of it to make it reusable. The parameters are the configuration of your logic. This is important to reach a new level of quality.
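To make that analogy concrete, here is a minimal sketch of what this looks like in Azure Pipelines YAML (the file name, parameter names, and project path are made up for illustration): a step template takes parameters, just like a method does.

```yaml
# build-steps.yml - a reusable step template (hypothetical example)
parameters:
  - name: projectPath          # which project to build
    type: string
  - name: buildConfiguration   # the "configuration" for the build logic
    type: string
    default: 'Release'

steps:
  # The logic lives here, once, and is driven purely by the parameters above.
  - task: DotNetCoreCLI@2
    displayName: 'Build ${{ parameters.projectPath }}'
    inputs:
      command: 'build'
      projects: '${{ parameters.projectPath }}'
      arguments: '--configuration ${{ parameters.buildConfiguration }}'
```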

Pipelines as Code Are Not Very Scalable

I myself started right away with UX-based pipelines. Jenkins started with coded pipelines, and UX-based pipelines were created on top of that. Neither is good enough. UX-based pipelines can separate configuration from code logic, but they lack flexibility and cannot be properly versioned in repositories. Coded pipelines can be versioned and seem to offer flexibility and custom logic, but they also offer many possibilities for doing it wrong, especially when you mix the logic and the configuration. This is a problem!

Why is this a problem? If you mix these things, pipelines are only as good as the person who creates them. They are less reusable and tend to get outdated or not to be tested properly. Yes, you could argue that technically you can do all of this correctly, but I am talking about large-scale organizations where maybe 20 or more teams all create their own coded pipelines for their applications. That does not scale.

YAML pipelines, which are basically pipelines as configuration, separate code logic and configuration by design. Yes, you can still have tasks in the pipeline (e.g. PowerShell) where you run your own logic, but that is not the intention.
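As a rough sketch of that separation (again with made-up file and project names), the pipeline definition that lives next to your application code is then little more than configuration feeding the template from above:

```yaml
# azure-pipelines.yml - the "configuration" side (hypothetical example)
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Reuse the logic defined once in build-steps.yml; only the values differ per team.
  - template: build-steps.yml
    parameters:
      projectPath: 'src/MyApp/MyApp.csproj'
      buildConfiguration: 'Debug'
```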

In my opinion, YAML pipelines follow the KISS (keep it short and simple) principle. There are several things you want to keep in mind:

  1. When you search for a DevOps engineer, how fast can you find somebody who knows the industry standard versus your own custom framework?
  2. Do you really need to concentrate on debugging pipeline code?
  3. How much time do new employees need to understand your pipelines?

Another trend towards simplicity is moving from scripted to declarative approaches. The biggest example is infrastructure as code: before, you were writing PowerShell or other scripts to create your infrastructure; now you declare the desired state and let the tooling figure out the steps.

With declarative pipelines, it is much easier to standardize and scale pipelines at the enterprise level.
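One way this scales in practice (a sketch only; the repository, project, and template names are invented) is to keep the templates in one central repository and let every team's pipeline extend them:

```yaml
# azure-pipelines.yml in a team's repository (hypothetical example)
resources:
  repositories:
    - repository: templates              # alias used below
      type: git
      name: PlatformTeam/pipeline-templates

extends:
  # The central template defines the stages; teams only pass configuration.
  template: standard-ci.yml@templates
  parameters:
    projectPath: 'src/MyApp/MyApp.csproj'
```

A change to the central template then rolls out to every consuming pipeline, which is what makes standardization at that scale manageable.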

Summary

To summarize why things like YAML are better than purely scripted pipelines:

  • Declarative
  • More readable
  • More widely used in the industry, so you need to teach less (that's why JavaScript got so famous, and not because it is the best programming language)
  • Easier to reuse
  • Consciously reduced degrees of freedom (think of Markdown docs vs HTML or MS Office docs)

I think YAML is not well supported by IDEs yet, and maybe it is not the best language, but it is a good start! And by the way: people also said JavaScript was shitty (some still say that), but it evolved and many of its problems have been solved. And… it is accepted and adopted globally.

Originally published at RazorSPoint.

Sebastian Schütze
RazorSPoint

I am an Azure nerd with a focus on DevOps and Azure DevOps who converted from the big world of SharePoint and O365.