DevOps Patterns — Sharing Reusable Components

Avishay Balter
Microsoft Azure
Feb 13, 2020

This post presents a DevOps design pattern whose goal is to enable cross-organizational code sharing of integration and release management processes, and it further discusses the implementation details of the pattern using Azure DevOps. The target audience is the technical “doers” who face the challenge of maintaining standardization of these processes across their organizations. The organization can be a startup with a number of microservices developed by different teams, or a large enterprise where technical teams belong to different business units and have very little in common in terms of their technical stack, or even their application-lifecycle-management maturity.

The organization, on the other hand, would prefer them all to have the same release process, going through the same quality gates, testing phases, security scans, and so on. Moreover, such organizations would like to avoid “silos of DevOps”, to reduce the effort of developing and maintaining these automation processes to a minimum, and to share the work done by one team with all the others across the organization.

Recent industry adoption and tooling support for pipelines-as-code (GitHub Actions, Azure Pipelines YAML, Jenkins) allow teams to define their build and release processes in code files. The code is maintained, versioned, and shared in a standard Git model, which allows for advanced DevOps patterns that enable organizational sharing of process, whatever the underlying application is.

The code samples and patterns that are mentioned in this post are based on the work done in collaboration between Finastra and Microsoft’s Commercial Software Engineering team, as part of Finastra’s efforts to modernize operations within the organization.

Finastra is the world’s third-largest fintech company. Its goals are to provide open banking capabilities to all its customers by offering Open APIs on its platform, and to increase its services revenue.

In order to achieve these goals, Finastra has begun building a common platform, called FFDC (Fusion Fabric Dot Cloud) which serves this purpose and exposes the required APIs.

On top of the platform, and in conjunction with it, Finastra’s services are deployed in a CI/CD fashion into the cloud, providing the required functionality and serving the customers.

Some of the underlying services are being revamped and modernized with modern-day approaches like microservices and deployed as containers, while others retain their classic approach and are deployed as monolithic containers, or even as regular services inside virtual machines.

Finastra wants to create a seamless pipeline that supports all underlying architecture topologies — microservices and monoliths alike — and to produce a holistic approach to CI/CD for all its apps.

Working with the code sample

The GitHub repo contains code which demonstrates the post’s concepts with an example “shared project”.

The code in the repository can be used to run the provided Node.js application through Pull Request (PR), Continuous Integration (CI), and Continuous Delivery (CD) stages, using Kubernetes deployment and Helm packaging.

Other build/deploy routes in the pipeline are mocked as bash scripts echoing their stubbed functionality.

Understanding the Pattern

We intend to implement a “write-once run-everywhere” CI/CD pipeline that builds, publishes, and validates any application while keeping resource ownership and identity access control at the project level, thus allowing sharing of ALM (Application Lifecycle Management) processes across the organization.

Participants:

  • Projects — Projects contain code, resources, and identities. There are two types of projects, which are further explained below: a shared project (single) and application projects (many).
  • Template — Reusable code file. Templates are maintained in the shared project repository.
  • Pipeline code — Non-reusable code file that contains application-specific, or environment-specific parameters.
  • Pipeline instance — The imported form of a pipeline code file in the DevOps tool (Azure Pipelines).

Structure

Figure 1: Pattern Structure

A project provides scope and context. It is a container of:

  • Code — In one or more repositories.
  • Resources — Any environment components required for build, integration and test. These may include, for instance, connection endpoints to different testing clusters used in CD or authorized endpoints for specific scanning tools used in CI.
  • Pipelines — Automate processes such as pull request validations or CI/CD that builds and tests the application.
  • Identities — Who owns the project’s resources, and what their roles and access are.

Shared Project

The shared project contains mostly code in template files. That code is owned by a centralized team of DevOps engineers and maintained as an innersource repository (the use of open source-like development methodologies within the boundaries of an organization), accepting changes from the different development teams in the organization.

Apart from code in templates, this repository contains the components required to implement “DevOps-for-DevOps”, as explained later in the post.

Application Project

An Application project is owned and maintained by a single development team that builds a product or a service.

This project’s repository contains the application code, Dockerfile, Helm charts and other coded components which are required for CI/CD. Additionally, this repository contains one or more pipeline code files. There is no build or release logic implemented in these pipeline code files. Instead, they reference the templates in the shared repository.

Figure 2: Components in a project

Templates are building blocks

Templates are a way to define reusable DevOps functionality in code files, usually YAML.

Template parameters are resolved at runtime and are used to control flow within the template, or to gain access to a resource.

Templates are either:

  • An atomic piece of functionality that can operate anywhere, e.g., “build and push a Docker image.”
  • A composition of other templates and tasks that represents a process, e.g., “CI for a Node.js application that uses the build-and-push Docker image template.”

parameters:
...
steps:
- task: HelmDeploy@0
  displayName: Helm lint
  ...
- task: HelmDeploy@0
  displayName: Helm Install Plugin Kubeval
  ...
- task: HelmDeploy@0
  displayName: Helm Run Plugin Kubeval
  ...
- task: HelmDeploy@0
  displayName: Helm Remove Plugin Kubeval
  ...

Implementing an atomic functionality, “helm lint”, as a template, in helm-lint.yml

steps:
- task: Kubernetes@1
  displayName: login
  ...
- template: deploy-helm-native.yml
  parameters:
  ...

A composite template of build templates in deploy-to-environment.yml

Much like a class in software code, templates conform to the SRP and DRY principles (Single Responsibility and Don’t Repeat Yourself) and avoid “environmental side-effects” by using only template parameters, never agent state or build variables.

Template Hierarchy

Templates build on other templates in a composite hierarchy to describe processes out of the atomic units of functionality.

At the root folder of the shared project’s repository is a main template, which is a composite of all the templates in the repository. The main template implements a full CI/CD process, which every application in the organization goes through when building and delivering a new version.

The process delivers the application to a number of testing environments, each implementing a different set of functional or non-functional tests on the delivered application before moving it to the next one.
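As a rough sketch of what such a composition looks like (the paths and parameter names here are assumptions, not the sample’s exact layout), the main template composes stage-level templates along these lines:

# pipeline-template.yml — hypothetical skeleton of the main template
parameters:
  CI: {}
  FUNCTIONAL: {}

stages:
- stage: CI
  jobs:
  - job: Build
    steps:
    - template: Templates/Build/build.yml            # assumed path
      parameters:
        buildType: ${{ parameters.CI.buildType }}
- stage: Functional
  jobs:
  - job: Tests
    steps:
    - template: Templates/Test/functional-tests.yml  # assumed path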

Template parameterization patterns

Parameters of templates are, in a way, the API to the template. The creator of the template allows users of the template to provide different runtime arguments that control the way it works.

The root template’s parameters section is a list of JSON-like parameter-objects, named after each stage of the pipeline. This makes it easier to understand the purpose of each parameter.

PR: {
  enabled: '',
  gate: '',
  credscan: '',
  codescan: '',
  vulnerabilitiesscan: ''
}
CI: {
  enabled: '',
  azureSubscription: '',
  azureContainerRegistry: '',
  scan: '',
  gate: ''
}
AUTOMATION: {
  ...
}
FUNCTIONAL: {
  ...
}
...

Each stage of the pipeline can be enabled or disabled using the boolean property “enabled”, allowing for different pipeline instances to be created from the template.

For instance, when implementing a PR pipeline, only the PR stage is enabled, and the other ones are disabled.
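Inside the template, one way to honor the flag is conditional insertion at the stage level. A minimal sketch, assuming each stage is rendered only when its flag is set:

stages:
- ${{ if eq(parameters.PR.enabled, true) }}:
  - stage: PR
    ...
- ${{ if eq(parameters.CI.enabled, true) }}:
  - stage: CI
    ...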

Resource Injection

Using template parameters to inject environment resources allows the code in the shared repository to remain agnostic of the environment while still being able to access it.

The templates use parameters to define which service connections they require in order to operate, and referencing pipelines must provide these connections at runtime.

parameters:
  azureContainerRegistry: ''
...
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: ${{parameters.azureContainerRegistry}}

Injecting a service connection to an Azure Container Registry in docker-build-scan-push.yml

Control Flow

Typically, every stage can run in more than one way, depending on the application being delivered.

For instance, setting the CI stage’s buildType parameter to Node.js, Java, or Go causes the stage to run a different build template that fits the language.

steps:
- ${{ if eq(parameters.buildType, 'java') }}:
  - template: java/build-maven.yml
    ...
- ${{ if eq(parameters.buildType, 'nodejs') }}:
  - template: nodejs/build-npm.yml
    ...
- ${{ if eq(parameters.buildType, 'golang') }}:
  - template: golang/build-go.yml
    ...

Switching between different code languages in build.yml

In the same way, the deployType property selects the type of deployment used during CD:

  • helm-native — runs “helm upgrade” against a native Kubernetes deployment
  • helm-weblogic — operates the WebLogic controller using Helm
  • gitops — commits the updated Helm chart to the GitOps repository
  • fabrikate — uses Fabrikate as a GitOps frontend

parameters:
  deployType: ''
steps:
- ${{ if eq(parameters.deployType, 'helm-native') }}:
  - template: deploy-helm-native.yml
    ...
- ${{ if eq(parameters.deployType, 'helm-weblogic') }}:
  - template: deploy-weblogic.yml
    ...
- ${{ if eq(parameters.deployType, 'gitops') }}:
  - template: deploy-gitops.yml
    ...
- ${{ if eq(parameters.deployType, 'fabrikate') }}:
  - template: deploy-fabrikate.yml
    ...

Switching between deployment types in deploy-to-environment.yml.

The following diagram illustrates the behavior explained in this section.

At the template level, each stage, such as “CI” and “Test”, is broken into a sequence of sub-elements such as “build”, “test”, and “package”; each sub-element is a template with a parameter that can take different values.

At the pipeline level, when a pipeline sets a parameter’s value, the diagram colors the corresponding sub-element accordingly, indicating the runtime instance of the template created with that value.

These types of parameters control flow within a template. They are evaluated within the template, either by using the condition property on a task or a job, or by using conditional insertions.

- bash: |
    echo docker scan using 3rd party tool
  displayName: Docker Scan 3rd Party
  condition: eq(${{parameters.scan}}, 'true')

Using condition property to determine if scan task should run in docker-build-scan-push.yml

- job: Tests
  displayName: 'Functional Testing'
  steps:
  - ${{ if eq(parameters.Functional.testType, 'gatling') }}:
    - template: Templates/Test/gatling-functional-tests.yml
      ...
  - ${{ if eq(parameters.Functional.testType, 'postman') }}:
    - template: Templates/Test/postman-functional-tests.yml
      ...

Using conditional insertion to select between test types in pipeline-template.yml

Note that:

  • A reference to a template does not have a condition property. This means that composite templates that call other templates must use conditional insertion to control the flow between their sub-templates, as seen in the sample code above.
  • If a task requires a service connection (see the sketch after this list):
    - Skipping the task using the condition property still requires a valid service connection as input.
    - Skipping the task using conditional insertion does not require a valid service connection as input.
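
The practical difference can be sketched with a hypothetical scan task (the task, parameters, and display names below are illustrative, not from the sample):

# Skipped via the condition property: the task is still rendered at
# compile time, so a valid service connection must be provided.
- task: Docker@2
  displayName: Scan image
  condition: eq(${{ parameters.scan }}, 'true')
  inputs:
    containerRegistry: ${{ parameters.azureContainerRegistry }}

# Skipped via conditional insertion: the task is never rendered,
# so no service connection is required when scan is false.
- ${{ if eq(parameters.scan, 'true') }}:
  - task: Docker@2
    displayName: Scan image
    inputs:
      containerRegistry: ${{ parameters.azureContainerRegistry }}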

Implementing the pipeline

Application projects are owned and maintained by development teams. They contain application or service code and the code for pipelines that build, test, and deliver their application.

Additionally, a project contains the connectivity and access authorities (implemented as service connections in Azure DevOps) to various infrastructure or software components used during the process.

Template referencing from pipeline

Pipelines, which are written in YAML in the application project’s repository, directly reference the templates from the shared repository. When working with Azure Pipelines, this is achieved using YAML template referencing, which works for repositories hosted on GitHub or Azure Repos.
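In Azure Pipelines, a pipeline in another repository declares the shared repository as a repository resource and references templates through its alias (the demo code keeps everything in one repository and uses relative paths instead). A minimal sketch; the project and repository names are assumptions:

resources:
  repositories:
  - repository: shared                      # alias used in template references
    type: git                               # Azure Repos Git; use 'github' for GitHub
    name: SharedProject/shared-templates    # assumed <project>/<repo> name
    ref: refs/heads/master

stages:
- template: pipeline-template.yml@shared    # resolved from the shared repo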

Different pipeline types

Pipelines are project-specific DevOps code that builds on top of templates. All pipelines, in all projects, use the same template and provide inputs to the template’s API through parameters, to control how, and in which environment, it operates.

A pipeline can be imported into the tool and run as an automated workflow.

In a typical GitHub workflow project, a team maintains a pull request validation pipeline that is triggered when PRs are created, and a CI/CD pipeline that is triggered when a commit is merged to the master branch.

pr:
  branches:
  ...
- template: ../../../../pipeline-template.yml
  parameters:
    PR: {
      enabled: true,
      ...
    }
    CI: {
      enabled: false
    }
    AUTOMATION: {
      enabled: false
    }
    FUNCTIONAL: {
      enabled: false
    }
    NONFUNCTIONAL: {
      enabled: false
    }

Pull Request pipeline in demo application pr-azure-pipelines.yml
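
The CI/CD pipeline is the mirror image: it triggers on merges to the master branch and enables the later stages instead. A sketch in the same style (the exact contents of the sample’s ci-azure-pipelines.yml may differ):

trigger:
  branches:
    include:
    - master
...
- template: ../../../../pipeline-template.yml
  parameters:
    PR: {
      enabled: false
    }
    CI: {
      enabled: true,
      ...
    }
    AUTOMATION: {
      enabled: true,
      ...
    }
    ...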

Supporting the unsupported

The pattern does not presume to support every edge case that may arise when teams use out-of-the-box code from the shared repository. The design of the template provides ways to override internal behavior and inject external steps by using task insertions throughout the templates.

Ultimately, though, the goal of the organization, and of the teams within it, is to conform to standard methods. If, at some point in the template’s lifecycle, several teams require the same logic injected into the pipeline, it is raised as a PR to the main branch, where the code becomes shareable.

parameters:
  ...
  deploySteps:
  ...
steps:
...
- ${{parameters.deploySteps}}
...

Injecting deploySteps in deploy-to-environment.yml.
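
A team’s pipeline could then pass its own steps through that parameter; a hypothetical example (the injected step is illustrative):

- template: ../../../../pipeline-template.yml
  parameters:
    ...
    deploySteps:
    - bash: echo "running a team-specific smoke test"   # assumed custom step
      displayName: Custom Smoke Test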

Application Release Management

With Environments, Azure Pipelines enables better release management by providing both visibility into target environments and a way for deployments to access them. Supported targets include Kubernetes, App Service, Virtual Machines, and more.

In our case, environments are collections of Azure DevOps service connections to Kubernetes namespaces, each implemented as a service account scoped to a specific namespace in the cluster.

By using deployment jobs in our template, we can easily use the service connection offered by the environment resource in our pipeline.

- stage: Automation
  displayName: 'Deploy to Automation environment'
  jobs:
  - deployment: Init
    displayName: Initialize Environment Automation
    environment: ${{parameters.AUTOMATION.environmentNamespace}}
    strategy:
      runOnce:
        deploy:
          steps:
          - template: Templates/Deploy/deploy-to-environment.yml

Using the AUTOMATION.environmentNamespace parameter as the value that provides cluster access in pipeline-template.yml

AUTOMATION: {
  ...
  environmentNamespace: 'Automation.automation'
  ...
}

Injecting a concrete environment resource from our project, Automation.automation, as a value to the template in ci-azure-pipelines.yml

Conclusion

This post described the “Sharing Reusable DevOps Components” design pattern, which can be used to maximize the effect of DevOps groups within an organization, and explored its implementation using code-based methods in Azure DevOps.

The provided code sample is a good place for you to start building these processes with your teams, using any tool, provided it supports the elements of the pattern. However, note that your team’s existing pipelines are also valid candidates to kick off the process, refactoring their code to be more generalized.

In the collaborative, “code-with” customer engagement, Finastra and Microsoft’s Commercial Software Engineering teams worked together on refactoring existing pipelines and scripts with this model, and successfully created a coded foundation of DevOps components that today accelerates new teams to continuous workflows within Finastra.

You and your team can do the same!

Avishay Balter is a software engineer and architect @ Microsoft’s Commercial Software Engineering group where he collaborates with Microsoft’s customers and partners to build epic stuff! Follow him on Twitter or LinkedIn
