The Twelve-Factor App — a methodology for successful Microservices

SUJITH P V
Nov 6

Why the Twelve-Factor App Methodology?

Nowadays many teams are building lots of loosely coupled Microservices instead of the old monolithic applications to cater to their business requirements. Here we will focus on a methodology that helps everyone build decoupled, stateless, robust Microservices that can be managed and iterated on independently.

The Twelve-Factor App methodology was published by Adam Wiggins of the Heroku team in 2011.

As described at 12factor.net, the Twelve-Factor App methodology aims for applications that:

  • Use declarative formats for setup automation
  • Offer maximum portability across execution environments
  • Are suitable for deployment onto modern cloud platforms
  • Can scale up without significant changes to tooling or architecture

Currently I’m working on a Microservice built in Java with Spring Boot and Project Reactor, with GitHub as the code repository. We have deployed our service onto the AWS cloud on a Kubernetes platform, with CI/CD built in Jenkins. Now let’s deep dive into the Twelve-Factor App methodology and, for each guideline, relate how we followed it.

Codebase :

The Codebase guideline is the Twelve-Factor attribute which says each application’s code should live in its own versioned repository. Many of us already use GitHub as a versioned code repository. As per the guideline, we should have a single repository per application, i.e. we should not mix different applications or dependent applications inside the same repository. A single codebase per application makes it easy to wire CI/CD workflows to the code repository.

In my projects, we use GitHub as the source of truth for each service, and we have created CI/CD workflows in Jenkins that integrate with GitHub.

Dependencies :

As per this concept, we should not copy project dependencies directly into the code. Almost every language provides a packaging system for managing and distributing libraries. So in a nutshell, we should use a dependency management tool to fetch the libraries declared in a manifest from a central repository. This brings uniformity of dependent libraries across execution environments.

In my projects, we use Maven for resolving dependencies. We declare the modules we need in pom.xml, and Maven fetches the referenced artifacts from the Maven Central repository. We also pin the exact versions of our dependencies in pom.xml to avoid conflicts on version upgrades or new releases.
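A minimal sketch of such a manifest is shown below; the artifact choices and version number are illustrative only, not our actual pom.xml:

```xml
<!-- Illustrative pom.xml fragment; artifacts and versions are examples only -->
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
  </dependency>
  <dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <!-- pinning an explicit version avoids surprises on new releases -->
    <version>3.4.12</version>
  </dependency>
</dependencies>
```

Maven resolves these from Maven Central at build time, so no library jars ever need to be committed to the repository.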

Config:

As we know, every application relies on certain configuration to run across environments — for example database connection strings, dependent service endpoints, identity settings, etc. This concept says all application configuration should be kept separate from the application code and never hard coded inside it. The application should then work seamlessly in any environment simply by changing the configuration values. Depending on the language, configuration can live in .config/.yaml/.json/.properties files, with environment-specific values supplied from outside the codebase.

In my project, we keep our configuration in application.yml files that vary across environments. These environment-specific configurations are stored separately in the cloud and pulled down during the respective deployment process.
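A sketch of what such a file can look like is below; the property names, URLs, and environment variables are hypothetical, but the pattern of reading environment-specific values from the environment with local defaults is the standard Spring Boot idiom:

```yaml
# Illustrative application.yml; keys and values are examples only
spring:
  profiles:
    active: ${SPRING_PROFILES_ACTIVE:dev}

app:
  couchbase:
    connection-string: ${COUCHBASE_CONNECTION:couchbase://localhost}
  downstream:
    orders-url: ${ORDERS_SERVICE_URL:http://localhost:8081}
```

The same build artifact then runs unchanged in every environment; only the injected values differ.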

Backing Services :

Backing services are any dependent services the application consumes over the network to fulfil its functionality — for example caching services like Redis, Kafka queues, or Couchbase databases. As per this concept, each Microservice should treat dependent services as attached resources, addressed via configuration (for example a URL) and swappable by a configuration change rather than a change to the actual implementation. So in a nutshell, we should write our applications so that backends can be switched without any code change. This is readily achieved with dependency injection, selecting the implementation at runtime based on the configuration supplied to the application.

In my project, we interact with queueing systems like AWS SQS, with the queue endpoint added in the configuration file. We have written our application in an interface-driven way, so it has the flexibility to switch over to a different stack, such as Azure queues, for the same functionality just by changing the configuration.
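The interface-driven pattern can be sketched as below. This is a minimal illustration, not our actual code: the class and provider names are hypothetical, and the real implementations would wrap the AWS and Azure SDK clients where the comments indicate.

```java
import java.util.Map;
import java.util.function.Supplier;

// Abstraction over any queueing backing service.
interface MessageQueue {
    void send(String message);
    String provider();
}

// Hypothetical SQS-backed implementation (real code would call the AWS SDK).
class SqsMessageQueue implements MessageQueue {
    public void send(String message) { /* sqsClient.sendMessage(...) */ }
    public String provider() { return "aws-sqs"; }
}

// Hypothetical Azure-backed implementation with the same contract.
class AzureMessageQueue implements MessageQueue {
    public void send(String message) { /* queueClient.sendMessage(...) */ }
    public String provider() { return "azure-queue"; }
}

// Chooses the implementation from configuration at runtime, so switching
// providers is a configuration change, not a code change.
class MessageQueueFactory {
    private static final Map<String, Supplier<MessageQueue>> PROVIDERS = Map.of(
            "aws-sqs", SqsMessageQueue::new,
            "azure-queue", AzureMessageQueue::new);

    static MessageQueue fromConfig(String provider) {
        Supplier<MessageQueue> supplier = PROVIDERS.get(provider);
        if (supplier == null) {
            throw new IllegalArgumentException("Unknown queue provider: " + provider);
        }
        return supplier.get();
    }
}
```

In a Spring application the same effect is usually achieved with conditional beans, but the principle is identical: the business code depends only on the interface.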

Build, Release, Run :

As per this attribute, the Twelve-Factor methodology says we should have a clear separation between build, release, and run. The build stage converts the code repo into an executable bundle known as a build. The release stage combines the latest build with the configuration for a given execution environment. The run stage takes a release version and launches it in the execution environment.

In my project we adhere to this concept by having different pipelines for the build, release, and deploy stages. As soon as a developer opens a pull request, a sanity build job is triggered that runs code-quality checks, unit tests, coverage checks, etc. Once the PR is approved, a merge workflow merges the code to the master branch. Next we trigger a release workflow, which stamps the codebase with a version and creates a release tag in GitHub; at the same time, Docker images are pushed to AWS ECR. Finally, a separate deployment job takes a release version as input and deploys it onto the required environment.

Processes :

As per this concept, the app executes in the environment as one or more stateless processes. Our applications are usually scaled across multiple instances/processes with traffic spread by a load balancer, so any client request may land on any instance, and there is no guarantee that subsequent requests from the same client land on the same one. If our application relies on the local file system or in-process memory to maintain state, that becomes a bottleneck in cloud scaling scenarios. Any state that must be kept should live in a stateful backing service such as a database.

In our project, the applications are deployed onto a Kubernetes cluster with an Ingress controlling traffic to the containers. We have Horizontal Pod Autoscaling and Cluster Autoscaling in place to cater to increased load, and all state is maintained in a Couchbase database.
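The shape of a stateless handler can be sketched as below. The class names are hypothetical; the in-memory store stands in for Couchbase purely so the sketch is self-contained. The key point is that the handler itself holds no mutable state, so any pod behind the load balancer can serve any request.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Contract for the stateful backing service (Couchbase in our case).
interface SessionStore {
    void put(String sessionId, String value);
    String get(String sessionId);
}

// In-memory stand-in used here only so the sketch runs; production would
// use a Couchbase-backed implementation behind the same interface.
class InMemorySessionStore implements SessionStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String id, String value) { data.put(id, value); }
    public String get(String id) { return data.get(id); }
}

// The handler keeps no state of its own; every request reads and writes
// through the shared backing service, so instances are interchangeable.
class CartHandler {
    private final SessionStore store;
    CartHandler(SessionStore store) { this.store = store; }

    void addItem(String sessionId, String item) {
        String current = store.get(sessionId);
        store.put(sessionId, current == null ? item : current + "," + item);
    }

    String items(String sessionId) { return store.get(sessionId); }
}
```

Because two different `CartHandler` instances produce the same result against the same store, requests can land on any pod without sticky sessions.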

Port Binding :

Usually we deploy our applications into an external web server. As per this concept, however, each service should be self-contained and export its functionality by binding to its own configured port.

In our project we use Spring Boot, so by default our applications are self-hosted on an embedded Tomcat server.
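To show the idea without pulling in Spring, here is a dependency-free sketch using the JDK's built-in HTTP server; the class name and `/health` route are invented for illustration, but the principle (the process binds its own port rather than being dropped into an external server) is the same thing Spring Boot does with its embedded Tomcat:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A self-contained service that binds a port itself instead of being
// deployed into an external web server.
class SelfHostedService {
    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Binding to port 0 asks the OS for any free port, which is handy in tests; in production the port comes from configuration (e.g. `server.port` in Spring Boot).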

Concurrency :

In the Twelve-Factor app, processes are first-class citizens. The share-nothing, horizontally scalable nature of processes means that adding more concurrency is a simple and reliable operation: we scale out by running more processes rather than making a single process bigger.

Disposability :

This concept says that processes in Twelve-Factor apps should start up and shut down in minimal time, shutting down gracefully when asked. This helps in rapid deployment and elastic scaling of the applications.

In our project we use containerized deployments orchestrated by Kubernetes.
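Graceful shutdown can be sketched as below. Kubernetes sends SIGTERM before killing a pod, which the JVM surfaces via shutdown hooks; the worker here checks a flag so it can finish the message in flight and exit cleanly. The class is a hypothetical illustration, not our production code.

```java
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// A worker that stops accepting new work once shutdown is requested,
// e.g. via Runtime.getRuntime().addShutdownHook(new Thread(worker::stop)).
class GracefulWorker {
    private final AtomicBoolean running = new AtomicBoolean(true);

    void stop() { running.set(false); }

    // Processes messages one at a time, checking the flag between each,
    // so an in-flight message always completes before the process exits.
    int drain(Queue<String> messages) {
        int processed = 0;
        while (running.get() && !messages.isEmpty()) {
            messages.poll(); // handle one message here
            processed++;
        }
        return processed;
    }
}
```

Pairing this with a Kubernetes `terminationGracePeriodSeconds` long enough for one message gives clean rolling deployments.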

Dev/Prod parity :

This concept says to design the system for continuous deployment by keeping the gap between the development and production environments small. Matching the development environment to production as closely as possible rids the application of compatibility issues across execution environments.

In our project, containerization helps us achieve infrastructure compatibility between the development and production environments.
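A sketch of why this works: the same image definition runs everywhere, so the JVM version and OS layer are identical in development and production. The base image and jar path below are examples only, not our actual Dockerfile:

```dockerfile
# Illustrative Dockerfile; base image and jar name are examples only
FROM eclipse-temurin:17-jre
COPY target/app.jar /opt/app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

The image built once at the build stage is the exact artifact promoted through every environment, which is what keeps dev/prod drift out.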

Logs :

Every application generally uses logging to analyze its behaviour, and usually logs are written to text files on disk. Logs are better thought of as a time-ordered, aggregated stream of events. As per the Twelve-Factor methodology, an application should not attempt to write to or manage log files; instead it should emit its event stream to stdout. Separate agents should then capture the emitted logs and forward them to a central server for monitoring and analysis.

In our project, we use slf4j with Logback to write logs, and Splunk agents capture the stdout log events and push them to a central server.
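A minimal Logback configuration routing everything to stdout looks like the sketch below; the pattern string is illustrative, not our exact one:

```xml
<!-- Illustrative logback.xml: everything goes to stdout; a log agent
     (a Splunk forwarder in our case) ships the stream onward -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

With no file appender configured, the application never touches log files; rotation and retention become the platform's job, not the app's.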

Admin Processes :

The primary responsibility of any application is to cater to its business functionality, but every application also has certain administrative or management tasks. The Twelve-Factor methodology says to run such admin/management tasks as one-off processes. Ideally, the source code for admin/maintenance tasks should ship with the application codebase so it is versioned alongside it.

In our project, for example, we have created a few schedulers that purge Elasticsearch indexes periodically.
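The decision logic of such a purge task can be sketched as below. The index naming scheme (date-suffixed names like `app-logs-2021.10.01`) and the retention window are assumptions for illustration; the actual delete call to the Elasticsearch cluster is omitted.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.stream.Collectors;

// One-off admin process: pick the date-suffixed indexes that fall
// outside the retention window. Run as its own short-lived process
// (e.g. a Kubernetes CronJob), from the same codebase as the app.
class IndexPurger {
    private static final DateTimeFormatter SUFFIX = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    static List<String> expired(List<String> indexes, LocalDate today, int retentionDays) {
        LocalDate cutoff = today.minusDays(retentionDays);
        return indexes.stream()
                .filter(name -> {
                    String datePart = name.substring(name.lastIndexOf('-') + 1);
                    return LocalDate.parse(datePart, SUFFIX).isBefore(cutoff);
                })
                .collect(Collectors.toList());
    }
}
```

Shipping this class in the service's own repository keeps the admin logic versioned and released together with the application it maintains.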

Overall, in a nutshell, the Twelve-Factor methodology offers guidelines for building decoupled, stateless, robust Microservices that can be managed and iterated on independently, by keeping the above concepts in consideration.
