Bringing Order to Complex GitLab Workflows with Dynamic Child Pipelines

Mark Cooke
Dec 13, 2021 · 5 min read


We recently needed to refactor a multi-client, single-region Azure Landing Zone pipeline to support multiple regions.

Refactoring it would have been easy if we were happy to look after many permutations of complex GitLab .gitlab-ci.yml files.

Not content with taking a shortcut, we set out to understand how to utilise GitLab’s Parent-Child pipelines and, importantly, how to create them dynamically at runtime.

In this article, we’ll describe how we created a deployment process that let us easily replicate our Azure Landing Zone architecture across any region whilst reducing the amount of code we needed to maintain.

The Existing Codebase


The Landing Zone codebase was made up of a number of separate, but somewhat dependent Terraform configurations (with separate state files) that deployed core networking, storage, DNS and other shared services.

Code was then deployed into separate environments (from development through to production) — again, each with separate state files to limit blast radius.


Simple GitLab pipelines usually consist of a single root .gitlab-ci.yml file.

The codebase we were working with was already more complicated, requiring multiple files for each Landing Zone feature, with hardcoded jobs for each environment:
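A hypothetical excerpt of one such feature file (the job and file names here are illustrative, not the original codebase) shows the shape of the problem — every environment gets its own hardcoded copy of each job:

```yaml
# networking.yml (illustrative excerpt) — one hardcoded job per environment
dev_networking_plan:
  stage: plan
  script:
    - terraform plan -var-file=dev.tfvars

prod_networking_plan:
  stage: plan
  script:
    - terraform plan -var-file=prod.tfvars
```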

The Problem

A significant issue with this structure was that the .yml files were static; deploying new regions would have required duplicating jobs, resulting in very large .yml files full of repeated code.

Using Parent-Child Pipelines

GitLab provides two helpful features here: Parent-Child Pipelines and Dynamic Child Pipelines.

Parent-Child Pipelines allow you to create a GitLab artefact (a .yml file) in one job and consume that artefact in another, triggering a downstream (or “child”) pipeline within the original pipeline.

These child pipelines can run concurrently and act independently from their parents. A simple example is as follows:
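A minimal sketch of such a parent pipeline (generate-jobs.sh is a stand-in for whatever templating mechanism you use):

```yaml
stages:
  - setup
  - trigger

setup:
  stage: setup
  script:
    # Generate the child pipeline definition (any templating mechanism works)
    - ./generate-jobs.sh > gitlab_jobs.yml
  artifacts:
    paths:
      - gitlab_jobs.yml

trigger_jobs:
  stage: trigger
  trigger:
    include:
      - artifact: gitlab_jobs.yml
        job: setup
```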

In the example above, we have two jobs, setup and trigger_jobs.

1. setup generates a gitlab_jobs.yml file (in our case, using Terraform, but any templating mechanism would work) that:
  • Declares the stages for the child pipeline
  • Contains two simple GitLab jobs

2. The gitlab_jobs.yml file is made available as a pipeline artefact.

3. trigger_jobs consumes that artefact and, using the trigger: and include: keywords, triggers a child pipeline. The child pipeline then appears nested under the parent in GitLab’s pipeline view.
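The generated gitlab_jobs.yml might look something like this (stage and job names are hypothetical) — it declares its own stages and contains two simple jobs, as described above:

```yaml
# A generated child pipeline definition (hypothetical names)
stages:
  - build
  - test

child_build:
  stage: build
  script:
    - echo "building"

child_test:
  stage: test
  script:
    - echo "testing"
```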

This is all great; however, at this point you might be thinking: what’s the point? Good question. What if we could take this one step further and also dynamically generate the child jobs based on config?

We can.

Dynamically Generating Parent-Child Pipelines

To dynamically generate child jobs, we need to read some configuration and extend our templates to build multiple copies of our jobs, per region and per environment.

1. First, we dynamically gather the configuration we need for the given environment (note that the job in the example below extends .config — the script used by .config is shown in the next step):
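The per-environment config job in the root .gitlab-ci.yml might look something like this sketch (the job name, stage, and variable are assumptions, not the original code):

```yaml
# Hypothetical per-environment config job; the script it runs
# comes from the shared .config template (step 2)
dev_build_config:
  stage: config
  extends: .config
  variables:
    ENVIRONMENT: dev
```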

2. Using the script below, each <environment>_build_config job generates a list of regions (in our case — from a terraform.tfvars.json file in a separate Git repository) and emits them to a file (gitlab.tfvars.json):
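A sketch of what that shared .config template could look like, assuming jq is available in the runner image; CONFIG_REPO_URL and ENVIRONMENT are hypothetical variables standing in for the real config-repository URL and target environment:

```yaml
# Hypothetical shared template: fetch the environment's region list
# from a separate config repository and emit it as gitlab.tfvars.json
.config:
  script:
    - git clone --depth 1 "${CONFIG_REPO_URL}" config-repo
    # Extract the region list for this environment (assumes a 'regions'
    # key in the source terraform.tfvars.json)
    - jq '{regions: .regions}' "config-repo/${ENVIRONMENT}/terraform.tfvars.json" > gitlab.tfvars.json
  artifacts:
    paths:
      - gitlab.tfvars.json
```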

3. A templating mechanism (in our case we use Terraform — for consistency with the rest of the pipeline) creates .yml files containing the plan and apply GitLab jobs for each Landing Zone feature (DNS, Networking, etc.). Jobs are created for each region we found in our config — no need to manage the lengthy .yml files by hand.

We only create a single .yml file for the subscription-level-resources and any number of .yml files for the other Landing Zone features. An abbreviated example of the Terraform template is as follows:
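A hypothetical, abbreviated sketch of such a template, using the hashicorp/local provider (the resource names, file layout, and template variables are assumptions):

```hcl
# Render one job file per region for a given Landing Zone feature
variable "regions" {
  type = list(string)
}

resource "local_file" "networking_jobs" {
  for_each = toset(var.regions)
  filename = "dev-${each.value}-networking-jobs.yml"

  # jobs.yml.tpl would contain the plan/apply job definitions,
  # parameterised by region and feature
  content = templatefile("${path.module}/templates/jobs.yml.tpl", {
    region  = each.value
    feature = "networking"
  })
}
```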

4. The generated .yml files are concatenated together to create one ‘master’ .yml containing all jobs for each environment — meaning we only need to share a single artefact with the rest of the pipeline: e.g. dev-regional-terraform-jobs-master.yml.
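Step 4 amounts to a simple concatenation. A self-contained sketch, with stand-in job files in place of the Terraform-rendered output:

```shell
#!/bin/sh
# Hypothetical sketch of the concatenation step. The two printf lines
# create stand-in per-region job files; in the real pipeline these are
# rendered by Terraform.
set -eu

printf 'dev_uksouth_networking_plan: { script: [ echo plan ] }\n' > dev-uksouth-networking-jobs.yml
printf 'dev_ukwest_networking_plan: { script: [ echo plan ] }\n'  > dev-ukwest-networking-jobs.yml

# One 'master' file means only one artefact to share with the trigger job
cat dev-*-jobs.yml > dev-regional-terraform-jobs-master.yml
```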

5. The second job for each environment in our root .gitlab-ci.yml file consumes that artefact and in turn triggers a child pipeline. This then runs all GitLab jobs referenced in the ‘master’ .yml file:
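A sketch of that trigger job (the job names and stage are assumptions; dev_generate_jobs stands in for whichever job produced the ‘master’ artefact):

```yaml
# Hypothetical second per-environment job in the root .gitlab-ci.yml
dev_trigger_regional_jobs:
  stage: deploy
  trigger:
    include:
      - artifact: dev-regional-terraform-jobs-master.yml
        job: dev_generate_jobs
    strategy: depend  # parent waits for, and mirrors the status of, the child
```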

NOTE: Child pipelines are completely independent of their parents. Any variables, stages or artefacts must be re-declared or explicitly passed through to the child job.

6. The child pipeline contains our usual GitLab jobs (Lint, Plan, Apply, etc.), except that multiple copies of the jobs for each Landing Zone feature are created, one set for each of the required regions.


By utilising GitLab’s Parent-Child pipelines, and creating them dynamically, we’re able to create a truly elastic deployment process that allows us to easily replicate our Azure Landing Zone architecture across any region simply by making a small addition to a config file. On top of that, it requires very little ongoing operational maintenance.

Overall, by cleverly tweaking an existing process, we saved a huge amount of time and drastically reduced the number of static files we were maintaining in Git.

For those who have made it this far, thanks for taking the time to read the article; I hope it helps anyone on their journey to create a fully fledged CI/CD pipeline.

Citihub Digital, a Synechron Company

Recording the digital DNA of financial services.