Achieving DevSecOps — Part 4.2: Coding governance, Infrastructure as Code: Producers

mohit sharma
7 min read · Feb 18, 2024


It may be the most glorious code ever written, but if no one is ever going to use it, then it is worth no more than wasted time. Worthless!

We talked about how Operations could be the owners and producers of secure infrastructure templates for the Azure resources used within the organization such that when they are used they enforce good governance.

Bicep templates work only for Azure, and this is just one of several possible approaches

time to flex those “ARM” and “BICEP” templates

A notable capability of Azure Bicep is that it not only lets you code Azure resources with all the tags and properties necessary for security, but also lets you encapsulate those resources into modules; I will use that capability to establish a governance lifecycle for these resources.
These modules are then stored in a container registry, making them readily available for others to consume. In this article, I delve into the significance of this approach and how it sets new standards for cloud resource management.

Bicep supports clean and concise code, but the feature that makes it a good collaboration tool is that code can be shared as modules: reusable blocks of code that can be used across projects. This modular approach not only streamlines the development process but also enhances consistency and maintainability across deployments.
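As a minimal sketch of what such a module looks like (this is illustrative, not the repo's actual keyvault.bicep; the API version and settings are assumptions), a module is just a Bicep file with parameters in, a resource in the middle, and outputs at the end:

```bicep
// keyvault-sketch.bicep: an illustrative module, not the repo's actual file
@description('Name of the key vault')
param keyVaultName string

@description('Azure region; defaults to the resource group location')
param location string = resourceGroup().location

resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: subscription().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    // Security guardrails baked into the module, not left to the consumer
    enableRbacAuthorization: true
    enableSoftDelete: true
    enablePurgeProtection: true
  }
}

output keyVaultId string = keyVault.id
```

Once published to a registry, consumers reference it with a `br:<registry>.azurecr.io/...:<version>` module path instead of copying the file around.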

The task of creating Bicep resource template modules has several stages:
0. Create an empty (Azure DevOps) repository
1. Make a list of the resources used in the Azure estate
2. Create skeleton code for each resource
3. Research the preferred security settings for the resource
4. Consult the security team and the developers about the security best practices for that resource
5. Update the resource template with the agreed security (and other) settings
6. Publish the finished Bicep templates to a repository; I used an Azure DevOps repo
7. Create a pipeline that compiles these templates into modules and publishes them to an (Azure) container registry

The code is stored at: https://github.com/dashanan13/bicep-IaC-producer-demo

Once you explore the repository, you will notice that it serves as a template for what can be done and how to set things up with the future in mind.
We start off with one resource, “KeyVault”, under the modules folder, but essentially every resource is structured the same way. It has the following files:
- ‘keyvault.bicep’: the Bicep template that deploys a key vault with the specified settings and defines the parameters needed.
- ‘metadata.json’: a version-control JSON file.
- ‘pipeline.yaml’: the definition of the pipeline that test-deploys the resource via ‘keyvault.bicep’ and, if that works, compiles the Bicep file and publishes it to the container registry.

Apart from this, there is an Azure Container Registry in one of the resource groups on Azure. This container registry will house the modules and needs to be protected from any updates except those made by the pipelines.

VSCode shows the code + Azure Portal shows the Container registry

Starting with ‘keyvault.bicep’: it is a template based on the key vault Bicep documentation, and similar documentation exists for other resources too. The template is trimmed down to the bare essentials, but the idea is to develop it into something that has security guardrails baked in from the get-go.
Another thing to notice is that every configuration value is provided as a parameter or a variable rather than hard-coded, giving the user flexibility. On the other hand, parameters are often given constraints and allowed values, and always a description. These additions not only contribute to clean code but also serve as documentation, and on the consumer side they are very helpful for understanding the requirements.
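For instance (these parameters are hypothetical, not copied from the repo), Bicep decorators make the constraints self-documenting:

```bicep
@description('SKU of the key vault')
@allowed([
  'standard'
  'premium'
])
param skuName string = 'standard'

@minLength(3)
@maxLength(24)
@description('Short team name used in resource naming and tags')
param teamName string
```

A consumer who passes an out-of-range value gets an error at validation time, before anything is deployed.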

It is important to note that resource templates are not created at random; there is planning at play to decide the granularity they should address.
For example, Subnet is separated from Network because every team may need a different number of subnets, but the same split may not be justifiable for other resources.

Plan the granularity of the template, too big is too constrained but too small is useless!

The template only asks for things it cannot assume or create, like team names, and generates the values (variables) it can, like resource names and tags. This keeps the naming format consistent, makes the templates more usable, and gives Operations more predictable infrastructure.

variable to create a unique name in the proper naming format

Importantly, every resource template emits a standard set of outputs. This pays off later with less customization in the pipelines.
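A sketch of what the derived name, standard tags, and standard outputs can look like together (the naming convention, tag keys, and output names here are invented for illustration):

```bicep
param teamName string
param environment string

// Derived, not asked for: the module enforces the naming convention
var keyVaultName = take('kv-${teamName}-${environment}-${uniqueString(resourceGroup().id)}', 24)

// Standard tags stamped on everything the module creates
var standardTags = {
  team: teamName
  environment: environment
  managedBy: 'bicep-module'
}

resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: keyVaultName
  location: resourceGroup().location
  tags: standardTags
  properties: {
    tenantId: subscription().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
  }
}

// The same output trio from every module keeps the pipelines uniform
output resourceName string = keyVault.name
output resourceId string = keyVault.id
output resourceGroupName string = resourceGroup().name
```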

Next, we look at metadata.json. This one is the simplest: it just defines the versioning of the template, and its numbers guide the compilation into modules with version codes.
It has just two values:
- major: changes only when the template changes fundamentally, for example when adding a section for network constraints
- minor: changes with every new commit to the template code
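A minimal file along these lines (the repo's actual schema may differ):

```json
{
  "major": 1,
  "minor": 4
}
```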

More numbers are appended when the templates are compiled into modules, but those depend on the time of compilation and are added automatically.

Finally, we look at automating the compilation of templates into modules and distributing them via the container registry through a pipeline definition.
This is where I break down pipeline.yaml, the definition file for the automation pipeline described above.

The first section makes sure the pipeline is triggered automatically ONLY when that template changes, so that only that module is updated.
I defined the trigger for the master branch only and limited it to the module subfolder “keyvault”.
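In Azure Pipelines YAML, such a trigger looks like this (the folder path is an assumption about the repo layout):

```yaml
# Run only for master, and only when this module's folder changes
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - modules/keyvault/*
```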

The second section declares the values needed by the pipeline to run and the values needed for the parameters of my templates; no more, no less.

The third section defines the pool of machines that will run this pipeline. I have defined my own pool, but a Microsoft-hosted Azure pool works just as well; nothing fancy.

The actual fun happens in the next three sections. The third-to-last section lints the code and deploys the resources to Azure for testing. If this goes through, we proceed to the rest; if it fails, we go back to the drawing board, fix the errors, and come back stronger.

3 sections: deploy an RG, collect the RG name from its output, then use it to deploy the next resource

The first of these sections creates a resource group; the second uses a bash command to gather the outputs generated by the template that deployed the RG (its name and ID) and saves them to a runtime variable; and the third uses that output to supply the resource-group parameter for the key vault deployment.
This is probably the most technical section of this file, with ambiguous documentation around it, but it was fun once done :)
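In Azure DevOps terms, the handoff between the three sections usually looks something like this (the task types and the `##vso` logging command are standard; the variable names, file paths, and the `rgName` output name are invented for illustration):

```yaml
# 1) Deploy the resource group from its own template
- task: AzureCLI@2
  displayName: 'Deploy resource group'
  inputs:
    azureSubscription: $(serviceConnection)   # assumed service connection variable
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az deployment sub create \
        --location $(location) \
        --template-file modules/rg/rg.bicep \
        --parameters teamName=$(teamName) \
        --name rgDeploy

# 2) Read the RG name from the deployment outputs into a runtime variable
- task: AzureCLI@2
  displayName: 'Capture RG name'
  inputs:
    azureSubscription: $(serviceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      rgName=$(az deployment sub show --name rgDeploy \
        --query 'properties.outputs.rgName.value' -o tsv)
      echo "##vso[task.setvariable variable=rgName]$rgName"

# 3) Deploy the key vault into that resource group
- task: AzureCLI@2
  displayName: 'Deploy key vault'
  inputs:
    azureSubscription: $(serviceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az deployment group create \
        --resource-group $(rgName) \
        --template-file modules/keyvault/keyvault.bicep
```

The `##vso[task.setvariable]` line is what bridges step 2 and step 3: it turns a template output into a pipeline variable that later tasks can read as `$(rgName)`.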

This could all have been done in a single template, but again: correlation does not imply causation. The resources being deployed together does not mean they always belong together, hence the separate templates, and hence the need for three steps.

After this, the last section publishes the module by compiling it. Notice that it compiles only the keyvault module and not the RG, as that is a separate module with its own pipeline.
The dependency on the earlier section makes sure it runs only after the key vault deployment, the condition makes sure that deployment was a success, and the job then compiles the module and pushes it to the container registry named in the pipeline variables.
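The publish step can boil down to a single `az bicep publish` call that stitches the metadata version together with a build number; a sketch (variable names and paths are assumptions, and `jq` is assumed to be available on the agent):

```yaml
- task: AzureCLI@2
  displayName: 'Compile and publish module to ACR'
  condition: succeeded()   # only if the test deployment worked
  inputs:
    azureSubscription: $(serviceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Read major/minor from metadata.json, append the build id as the patch number
      major=$(jq -r '.major' modules/keyvault/metadata.json)
      minor=$(jq -r '.minor' modules/keyvault/metadata.json)
      az bicep publish \
        --file modules/keyvault/keyvault.bicep \
        --target "br:$(registryName).azurecr.io/bicep/modules/keyvault:$major.$minor.$(Build.BuildId)"
```

`Build.BuildId` is a predefined Azure Pipelines variable, which is what makes the last version number "sort of automatic", as mentioned earlier.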

Needless to say, this pipeline needs its associated identity to have write permission on the container registry.

Yes, this is it.
Next, we look at the consumer side of things, the developers using it.

Namaste!
