ACF: Automated Code Formation (Engineering Excellence)

Yadu Mathur
Walmart Global Tech Blog
6 min read · Aug 4, 2021
Figure 1: ACF custom header image

Code as a Template: Covering IaC, workflows, UI screens, DB schemas, and backend APIs

Introduction:

Code formation and code as a template are evolving very quickly. As we make progress towards engineering excellence, we need to evaluate the options for auto-deployment, Continuous Integration, and management of IaC (Infrastructure as Code) as well as CaaT (code as a template).

In this blog post, our goal is to walk through the design of the “Code Formation framework” and its different components. We created this framework to generate the code for managing infrastructure. The framework covers the creation of web UIs and backend APIs, and connects them together, driven by spec files and an asset builder.

Objective:

Our goal was to orchestrate the “micro application and infrastructure code”, which would then serve as a self-service platform for users to deploy and manage their cloud resources.

A micro application here consists of a micro front-end, the back-end microservices tied to it, and the Infrastructure as Code, including the Terraform modules and workflows.

To create a self-service platform for a service offering such as Azure Redis Cache (a cloud resource from Azure), we need the following components:

  • A UI to capture the specification from the user
  • Backend APIs to handle the request carrying the specification and pass it through
  • A database to store and fetch the specifications
  • Terraform code to deploy the cloud resources using the specifications
  • A workflow to integrate the ecosystem and all the other components

The Design:

The basic design of the “Code Formation framework” is built around six key components:

  1. Templates
  2. Spec File
  3. Entity-Relationship model
  4. Libraries
  5. Asset Builder
  6. Deployment
Figure 2: High-level view of “Code Formation” framework. ( k8 icon photo reference https://commons.wikimedia.org/wiki/File:Kubernetes_(container_engine).png, git icon photo reference https://iconscout.com/icon/git-free-opensource-distributed-version-control-system-square )

Let’s look at each of these components.

1. Templates:

Code templates are a core building block for every part of the framework. They are reusable code snippets that allow commonly used code fragments to be inserted quickly.

The IaC Terraform templates generate the Terraform main.tf, variables.tf, provider.tf, and outputs.tf files.

Below are the details on each one of them using Azure Redis Cache as an example.

variables.tf: Input variables such as location, subscription, cache tier, cache name, SKU, etc.

main.tf: The actual Terraform resource and module code that deploys the resources.

provider.tf: Consists of the required_providers block, which specifies the provider’s local name, source address, and version.

outputs.tf: The return values from the Terraform modules.
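
To make this concrete, below is a minimal sketch of what a Jinja2-templated main.tf and outputs.tf might look like for Azure Redis Cache. The placeholder names (service_name, environment) and the file layout are illustrative assumptions, not the actual ACF templates.

    # main.tf.j2 -- illustrative Jinja2 template that renders a Terraform main.tf
    # (placeholder names such as service_name and environment are hypothetical)
    resource "azurerm_redis_cache" "{{ service_name }}" {
      name                = "{{ service_name }}-{{ environment }}"
      location            = var.location
      resource_group_name = var.resource_group_name
      capacity            = var.capacity
      family              = var.family     # "C" for Basic/Standard, "P" for Premium
      sku_name            = var.sku_name   # Basic, Standard, or Premium
    }

    # outputs.tf.j2 -- return values surfaced back to the caller
    output "hostname" {
      value = azurerm_redis_cache.{{ service_name }}.hostname
    }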

Similarly, workflow templates are used to generate workflow YAML files, which are executed via Concord (https://github.com/walmartlabs/concord). The templates are written using Jinja2, and some of them can be customized.

Note: Sample Workflow YAML template

Figure 3: Snippet of an example workflow template
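
As a rough illustration, a Jinja2-templated Concord workflow could look something like the sketch below; the flow names, arguments, and placeholders are hypothetical and not the actual ACF templates.

    # concord.yml.j2 -- illustrative Jinja2 template for a Concord workflow
    # (flow names and arguments are hypothetical)
    configuration:
      arguments:
        serviceName: "{{ service_name }}"
        environment: "{{ environment }}"

    flows:
      default:
        - log: "Provisioning ${serviceName} in ${environment}"
        - call: provision            # delegate to the provisioning flow below

      provision:
        - log: "Running Terraform for ${serviceName}"
        # a real template would invoke the Terraform task/plugin here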

The objective was to make the framework a plug-in/plug-out application driven by the cloud resource requirement: all we have to do is add new templates or change existing ones.

Note: Sample application.properties template.

Figure 4: Example of an application.properties template.
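
As a rough illustration only, and assuming a Spring Boot style backend (the post does not spell out the backend stack), such a template might look like this; the property and placeholder names are hypothetical.

    # application.properties.j2 -- illustrative template; assumes a Spring Boot
    # style backend, and the property/placeholder names are hypothetical
    spring.application.name={{ service_name }}-service
    server.port={{ server_port | default(8080) }}
    spring.datasource.url=jdbc:postgresql://{{ db_host }}:5432/{{ db_name }}
    spring.datasource.username={{ db_user }}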

2. Spec File:

The spec file is the skeleton of the framework; it decides the outcome, the execution strategy, and the deployment sequence.

The spec file is placed in the spec repository. When a developer raises a pull request for a new flow against master, the code formation flow is triggered when the PR is merged.

Note: Example of a spec file that can generate the whole project.

Figure 5: Example of a Spec template.
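
To give a feel for what a spec might carry, here is a purely hypothetical sketch; the field names below are illustrative assumptions and not the actual ACF spec format.

    # spec.yaml -- purely illustrative sketch of a spec file; all field names
    # are hypothetical and only show the kind of information a spec carries
    service:
      name: redis-cache
      cloud: azure
      resource_type: azurerm_redis_cache
    components:
      ui: true             # generate the micro front-end
      backend_api: true    # generate the backend microservice
      terraform: true      # generate the IaC modules
      workflow: true       # generate the Concord workflow
    libraries:
      notifications: 1.2.0   # hypothetical versioned build-time library
    environments: [dev, stage, prod]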

3. Entity-Relationship Model:

The deployment has a defined ER model that is kept in sync across the CI process while generating the code for the backend APIs.

Note: Example of the entity-relationship model.

Figure 6: ER model
  • All the service details, including customer information, are stored in the customer service entity. Each customer service can have three service instances (dev/stage/prod) for a specific cloud service.
  • The service instance entity contains the details of the environment for every service.
  • The process entity keeps track of the type of process whenever any activity is performed on a service instance. The last operation done on the service instance is tracked by updating the respective process id in the service instance entity.
  • Since multiple processes can run in parallel, a scheduler lock entity is used to manage locks on the currently running process.
  • Using the region mapping entity, the provisioning form dynamically reflects the currently available regions.
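
For illustration only, the relationships above could be captured roughly as in the sketch below; the entity and field names are hypothetical and simply restate the description, not the actual ACF schema.

    # entities.yaml -- hypothetical sketch of the ER model described above
    customer_service:
      fields: [id, customer_name, cloud_service]
      has_many: service_instance        # up to one per dev/stage/prod
    service_instance:
      fields: [id, environment, last_process_id]
      belongs_to: customer_service
    process:
      fields: [id, type, status, started_at]
    scheduler_lock:
      fields: [id, process_id, locked_until]
    region_mapping:
      fields: [cloud_service, region, enabled]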

4. Libraries:

Most of the code is reusable, so libraries have been created in advance for build-time code generation, e.g. Slack and email notifications, UI mappers, custom fields, etc. There are runtime libraries as well, which are used directly in the cloud service code, such as pushing RBAC controls onto cloud resources, resource provisioning Terraform code, and resource deletion Terraform code.

Figure 7: CI/CD for library management

CI/CD of library management: Libraries are versioned and consumed as packages within the framework. Any update to a library produces a new version, and the developer can update the spec with the new library version, which re-triggers the pipeline to deploy the code.
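
Continuing the hypothetical spec sketch from earlier, a version bump might look like the fragment below; again, the field names are illustrative only.

    # spec.yaml (fragment) -- hypothetical illustration of bumping library versions;
    # merging this change re-triggers the pipeline
    libraries:
      notifications: 1.3.0    # was 1.2.0; picked up at build time
      rbac_controls: 2.0.1    # runtime library baked into the cloud service code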

5. Asset Builder:

Following are the steps the asset builder goes through:

  • Step 1: It creates the git repository for the project.
  • Step 2: It parses the spec file along with the templates to generate the code.
  • Step 3: It raises a pull request to the release branch of the newly created repo with the generated code.
  • Step 4: It runs the code through the unit test cases and the integration test suite.
  • Step 5: It publishes the code coverage to Sonar.
Figure 8: Asset builder flow. (git icon photo reference https://iconscout.com/icon/git-free-opensource-distributed-version-control-system-square )

6. Deployment:

The code is generated through the framework, and a pull request is created against the release branch of the git repo. This brings up the basic version of the application, which runs in K8s through auto-deployment. To ensure there is no manual intervention, all new changes have to be made via a spec update.

The files generated by the Code Formation engine are labeled and tagged as “managed files”, ensuring that the developer follows the proper CI process to rebuild and that any changes go through the code formation framework.

This brings a well-designed CI and CD process to the project, with standardization of git repositories, naming conventions, folder structures, build pipelines, and release management.

Kubernetes, an open-source platform for managing containerized workloads and services, is used to deploy these containers.

Note: Example of the K8s YAML template for deployment

Figure 9: Example snippet of a K8s deployment file template
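
A minimal sketch of such a Jinja2-templated Kubernetes Deployment is shown below; the names, labels, and placeholders are hypothetical, including the managed-by label used here only to suggest how generated files might be tagged.

    # deployment.yaml.j2 -- illustrative Jinja2 template for a Kubernetes Deployment;
    # names, labels, and placeholders are hypothetical
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ service_name }}-api
      labels:
        app: {{ service_name }}
        managed-by: code-formation        # hypothetical "managed file" tag
    spec:
      replicas: {{ replicas | default(2) }}
      selector:
        matchLabels:
          app: {{ service_name }}
      template:
        metadata:
          labels:
            app: {{ service_name }}
        spec:
          containers:
            - name: {{ service_name }}-api
              image: {{ image_repo }}/{{ service_name }}:{{ image_tag }}
              ports:
                - containerPort: 8080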

The Execution and Life Cycle:

There are two types of users of the Code formation framework.

Developer of Automated Code Formation: The developer is responsible for updating and managing libraries, creating templates, updating the asset builder to support new libraries, and adding new templates.

Consumer of Automated Code Formation: The consumer uses the code formation engine to manage the deployed application by writing the spec file and running the engine.

Figure 10: Overall flow of the code formation engine and the outcome. (git icon photo reference https://iconscout.com/icon/git-free-opensource-distributed-version-control-system-square )

The Result:

Our goal was to orchestrate the “micro application and infrastructure code” that would serve as a self-service platform for users to deploy and manage their cloud resources. We met our objective, and in addition we reduced the MTTM (mean time to market) for bringing up a cloud resource self-service platform from 26 weeks to less than 2 weeks.

The work showcased in this article was completed by a stellar team of Engineers.

Engineering Team: Yadu Mathur, Anton Sherkhonov, Ankita Jaiswal, Mounika Yerramalla, Srinidhi Korrapati, and Mahesh Pedavalli
