Building an Infrastructure as Code Platform with Terraform, Ansible, and GitLab, using MinIO for State Management

Douglas Piero Sironi
6 min read · Aug 24, 2024


Given the need to create infrastructure across multiple environments while ensuring standardization and effective monitoring, it becomes crucial to provision these environments securely. To achieve this, adopting an immutable infrastructure approach, where environments are provisioned as code, is essential.

The purpose of this article is to demonstrate one possible approach: using GitLab’s group and project structure to enforce templates and standards, Terraform to provision and maintain the infrastructure, and Ansible for software provisioning and configuration through a shared-roles model across repositories. To manage Terraform state, we use MinIO, since it allows the whole implementation to run on-premises.
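
To make the state-management piece concrete, here is a minimal sketch (not the article’s actual pipeline) of how a GitLab CI job could point Terraform’s S3-compatible backend at a MinIO bucket. The endpoint, bucket name, and image path are assumptions, the exact backend flag names vary slightly between Terraform versions, and credentials are expected to come from group-level CI/CD variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).

```yaml
# Hypothetical GitLab CI job storing Terraform state in a MinIO bucket through
# the S3-compatible backend. Endpoint, bucket, and image path are illustrative.
terraform-init:
  image: registry.example.com/infrastructure/dependencies-iac/terraform:latest
  script:
    - >
      terraform init
      -backend-config="bucket=terraform-state"
      -backend-config="key=${CI_PROJECT_PATH}/terraform.tfstate"
      -backend-config="endpoint=https://minio.internal.example.com"
      -backend-config="region=main"
      -backend-config="skip_credentials_validation=true"
      -backend-config="skip_metadata_api_check=true"
      -backend-config="force_path_style=true"
```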

Architecture Design

Figure 1: Infrastructure Platform Architecture Model

Step 1: The process always starts with the submission of a standardized issue that specifies the stack model to be used, whether firewall permissions are needed, and whether it is a new setup or just a resource upgrade.

Step 2: The operator reviews the issue and begins the process. All conversations and time spent are logged within the issue.

Step 3: A new project is initiated in GitLab, based on the infrastructure model that will be created. This project is placed within the corresponding group in GitLab, where it inherits the necessary environment variables for standardized infrastructure creation.

Step 4: Once the project is created, you only need to specify the IPs for the infrastructure to be provisioned in the environment defined in the issue (KVM, VMware). After planning with Terraform, the required resources are created, including labels when needed so that Veeam can perform backups based on its label policies. Upon completion, the state of the created infrastructure is stored in a bucket.
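
As an illustration of this step, the per-project configuration could be as small as a variables block handing the requested IPs and backup label to Terraform; the variable names and values below are hypothetical.

```yaml
# Hypothetical project-level variables passed to Terraform as TF_VAR_* inputs.
# Values come from the issue (requested IPs, environment, backup policy label).
variables:
  TF_VAR_vm_ips: '["10.10.1.21", "10.10.1.22"]'
  TF_VAR_environment: "staging"
  TF_VAR_backup_label: "veeam-daily"   # Veeam backs up VMs matching this label policy
```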

Step 5: The next step involves executing standard tasks for all servers, such as identifying them, updating packages, installing necessary utilities, and registering the host in Zabbix for basic monitoring of the operating system and the stack. Depending on the resource group, the appropriate access keys are assigned to the responsible teams. For example, DBAs receive access keys for database servers.
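
A rough sketch of this shared baseline as an Ansible play is shown below; the common and authorized_keys roles echo the default roles mentioned later in the article, while the zabbix-agent role and the host group are assumptions.

```yaml
# Sketch of a baseline play applied to every new server; role names illustrate
# the shared-roles model rather than the exact repository contents.
- name: Baseline configuration for all provisioned servers
  hosts: all
  become: true
  roles:
    - common            # hostname, package updates, base utilities
    - authorized_keys   # SSH keys for the team responsible for this resource group
    - zabbix-agent      # registers the host in Zabbix for OS-level monitoring
```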

Step 6: Based on the chosen model, the process of installing and configuring the entire stack is carried out. Similarly, users are created, and credentials are registered in Vault when necessary.
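
Following the same pattern, a stack-specific play layered on top of the baseline might look roughly like this; the roles, the host group, and the idea of a dedicated role writing credentials to Vault are illustrative assumptions.

```yaml
# Hypothetical stack play for a database model; roles and host group are examples.
- name: Install and configure the PostgreSQL stack
  hosts: databases
  become: true
  roles:
    - postgresql          # installs and configures the database engine
    - postgresql-users    # creates the application users
    - vault-register      # hypothetical role that stores the generated credentials in Vault
```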

Step 7: With the application now running in the new environment, stack-specific monitoring can be enabled by registering the new server in Consul; Prometheus, in turn, discovers where it needs to collect metrics from. Each stack has its monitoring dashboard already configured, varying only by the name of the project that was created.
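
To make the discovery flow concrete, a minimal Prometheus scrape configuration using Consul service discovery could look like the following; the Consul address, service name, and job name are assumptions.

```yaml
# Once the new server registers its service in Consul, Prometheus discovers it
# automatically through consul_sd_configs; no per-host scrape config is needed.
scrape_configs:
  - job_name: "nginx-stack"
    consul_sd_configs:
      - server: "consul.internal.example.com:8500"   # hypothetical Consul address
        services: ["nginx"]
    relabel_configs:
      - source_labels: [__meta_consul_service]
        target_label: service
```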

Step 8: The new infrastructure is delivered to the requester. In the case of databases, credentials are provided directly in Vault.

Project Structure

The folder structure in GitLab is organized as follows:

  • /infrastructure/: The main group, where global environment variables and default values should be stored.
  • /infrastructure/gitlab-models: Pipeline models, where we have two main projects:
      • ansible-pipelines: A project dedicated to maintaining the stacks and the composition of roles. In the image above, we see an example of common tasks; in the structure, it is located at /infrastructure/gitlab-models/ansible-pipelines/common-task/provision.yml.
      • terraform-pipelines: Pipelines for the available infrastructure models, such as vSphere, KVM, AWS, etc. In the image above, we have an example of a pipeline that resides within the terraform-pipelines group, such as kvm-terraform-pipeline.yml. As we can see, it is a GitLab CI model intended to be extended by a stack pipeline (a minimal sketch of this pattern follows this list).

  • /infrastructure/templates: In this group, we have the bootstrap projects, which will be used to create the stack models.

  • /infrastructure/provision/ansible/roles: In this project, we have the Ansible roles only, allowing us to centralize and update the roles in an isolated manner.
  • /infrastructure/dependencies-iac: This repository contains the platform’s dependencies, such as Dockerfiles for Terraform and Ansible, ensuring that the versions of the necessary tools and libraries are not altered.
  • /infrastructure/modules/: The modules created for Terraform are stored in this repository, with each project having its respective folder.

  • /infrastructure/on-premise/: This group is where the created infrastructures are maintained, segmented by environment, data center, stack, and project. In the image, we can see the hierarchy of groups and subgroups down to the final project. At each of these levels, we can override the variable values associated with the groups.
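
As referenced above for kvm-terraform-pipeline.yml, an extendable pipeline model could be sketched roughly as below; the stage names, job names, and image path are assumptions used only to illustrate the extends pattern, not the article’s actual file.

```yaml
# Hypothetical shape of an extendable Terraform pipeline model. A stack project
# includes this file and inherits the plan/apply jobs; the image comes from the
# dependencies-iac repository so tool versions stay pinned (paths are illustrative).
stages:
  - plan
  - apply

.terraform-base:
  image: registry.example.com/infrastructure/dependencies-iac/terraform:latest
  before_script:
    - terraform init -backend-config="key=${CI_PROJECT_PATH}/terraform.tfstate"

terraform-plan:
  extends: .terraform-base
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan

terraform-apply:
  extends: .terraform-base
  stage: apply
  when: manual            # apply only after the plan has been reviewed
  needs:
    - terraform-plan
  script:
    - terraform apply -auto-approve plan.tfplan
```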

How to use the platform

To simplify the use of the platform, we created a repository called issues-ops, where we provide an issue template that can be selected based on specific needs. This way, the infrastructure request is recorded right from the start.

Once the issue is created, the DevSecOps team can begin setting up the environment. To do this, they simply need to navigate to the appropriate group, in this case, infrastructure/on-premise/staging/dc1/loadbalancer/nginx, and create a new project based on a template. They should then provide the name of the project to be created and assign the necessary variables.

Within each template, the .gitlab-ci.yml file required for environment creation is already configured. In the case of NGINX, it is set up in this format.
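
The actual file is shown in the article’s image; as a hedged sketch, the include pattern it relies on might look like this, where the referenced projects match the structure described earlier but the exact file layout is an assumption.

```yaml
# Hypothetical stack-level .gitlab-ci.yml pulling in the shared pipeline models;
# project paths follow the group structure above, file names are assumptions.
include:
  - project: "infrastructure/gitlab-models/terraform-pipelines"
    ref: main
    file: "/kvm-terraform-pipeline.yml"
  - project: "infrastructure/gitlab-models/ansible-pipelines"
    ref: main
    file: "/common-task/provision.yml"
```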

In this setup, both the infrastructure creation templates and the Ansible templates are included, ensuring that the default roles are already integrated into these projects. Additionally, we provide steps to extend the model. If additional roles need to be installed, you can simply add the corresponding block, enabling a modular, building-block approach to configuration.
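
A hypothetical example of such a block is shown below; the hidden job name, stage, and variable are assumptions about how the shared template might expose its extension point, not its real interface.

```yaml
# Hypothetical extra role added on top of the imported Ansible model; the hidden
# job `.ansible-role` and the ANSIBLE_ROLE variable are assumed extension points.
certbot-role:
  extends: .ansible-role
  stage: configure
  variables:
    ANSIBLE_ROLE: certbot   # e.g. TLS certificates for the NGINX load balancer
```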

In the image below, we see the pipeline run that created the requested environment. You’ll notice that authorized_keys and common were executed even though they were not explicitly declared in the .gitlab-ci.yml. This is because the standard roles come from the imported Ansible template, ensuring that the default roles are applied across all projects.

Conclusion

The infrastructure platform has greatly contributed to maintaining and enforcing standards because it requires a predefined model to be planned, tested, implemented, and made available as a template before any new infrastructure can be created. This process ensures that whenever we need to provision resources in an environment, we are establishing consistent standards, versioning these environments, and ensuring they can be reliably reconstructed if necessary.

One of the main challenges is keeping the models up-to-date and validated, especially as applications evolve and operating system versions change. It’s crucial to remember that when using infrastructure as code, all changes should be made through it, ensuring proper configuration versioning and environment immutability. Failing to do so may cause the platform to revert the environment to its defined state, potentially overriding manual changes.

The model proposed in this article is versatile, applicable to both on-premises and multi-cloud environments, making it an effective solution for hybrid infrastructures.

Douglas Piero Sironi

Senior Software Architect | Cloud-Native Architecture, Google Kubernetes Engine (GKE) | DevOps Architect https://www.linkedin.com/in/odouglassironi