An Example of Development Workflow for Microservices on AWS

Mario Scalas
Published in The Startup · Oct 14, 2020 · 5 min read

This is a series of articles where I describe my own solution for creating, building and deploying microservices in a Kubernetes environment running on Amazon Web Services (AWS).

In this first article, I'm going to describe the main ideas and provide an overview of the selected tools and the reasons behind their selection. Keep in mind that I'm leveraging well-known tools and practices in a way that I find useful for me: I'm sharing it to get feedback from interested readers too, because you can always learn new and better ways to develop software.

Disclaimer

The focus of these articles is the workflow, that is, how we move code into a runtime environment quickly and automatically. Because I don't want to make this series too long, I will omit several real-world scenarios (like promoting builds to a production environment, handling multiple versions, different update techniques, and so on).

I'll keep things as straightforward as possible and stick to just one use case:

As a Developer, I want my code to be automatically deployed to the runtime environment so that I can save time and have a fast feedback loop.

I won't use any complex application use case, sorry! In fact, I will just show how to build a single plain Spring Boot-based microservice (that is, a "Hello World" REST API).

Additionally, I will use AWS and related services: this means that you need a valid AWS account and should be prepared to spend some money, since some of the services are not within the free tier (e.g., the EKS control plane costs 0.10 EUR/hour). I will not be responsible for any expense you may incur if you try this code in your AWS account.

The building blocks of a service infrastructure

When I start building a new application based on Kubernetes, I use a set of practices and the tools that enable them. To develop a new application I usually set up:

  • Source code repository (Git) — for hosting my source code, the Spring Boot App in this case;
  • Docker Image Registry — for hosting the Docker images that are built in the process;
  • Kubernetes cluster — for hosting my microservices;
  • CI/CD Pipeline — for building the Docker images from source code and pushing them into the cluster.

I've usually done this manually in the past: it's really tedious and error-prone work. So I gave up and looked for different approaches.

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools (Wikipedia)

AWS has its own IaC solution, called CloudFormation, which I personally find too complex and verbose for my tastes. In a nutshell, in AWS CloudFormation you describe the desired infrastructure state, called a stack (network, firewalls, virtual machines, disks, …), by coding the resources that you need into YAML (or JSON) templates that you then feed to AWS so that it will automatically create, update or delete your infrastructure (see this or google for CloudFormation template samples). Yes, it will automatically compute the steps required to go from the current state to the new state you have coded. It's just that cool!

If you change your infrastructure by changing its code, then you can avoid infrastructure drift, track your changes, benefit from deployment speed and repeatability, and (of course) roll back to a safe previous state (well, in almost all cases ;)).

Yet, I find YAML and JSON become extremely verbose and hard to understand, even when you use nested stacks to divide things into logical and reusable blocks. Luckily, someone at Amazon thinks the same and developed the Cloud Development Kit (CDK), a high-level abstraction over AWS resources written in TypeScript, with bindings for other languages (e.g., Java, Python and C#). Behind the scenes, CDK generates the same YAML you'd codify by hand, but it also provides you with additional features, like generating the minimum privileges that your resources need in order to run.
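To give an idea of how compact this can be compared to hand-written templates, here is a minimal sketch (stack and construct names are just illustrative, using the CDK v1 TypeScript packages available at the time of writing) that provisions a VPC with a single high-level construct:

```typescript
// lib/network-stack.ts — a minimal sketch, names are illustrative
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';

export class NetworkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // One high-level construct: CDK synthesizes subnets, route tables,
    // NAT gateways, etc. into the underlying CloudFormation template.
    new ec2.Vpc(this, 'AppVpc', { maxAzs: 2 });
  }
}
```

Running `cdk synth` prints the generated CloudFormation template, and `cdk deploy` applies it to your account.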

Don't ask me why but, even being a strong Java developer, I find TypeScript code easier to develop and understand (I tried Python too but… I can't really use it for anything other than Jupyter Notebooks ;)). Additionally, the setup is quite simple, requiring only the Node.js runtime to be present.

Let’s see the core infrastructure we are going to define and codify.

The component stacks

Cluster, repositories and CI/CD pipelines are the main building blocks

In my view we have three infrastructure stacks we need to define and manage:

  • The Cluster Stack — which defines the AWS EKS cluster and node group (the EC2 instances that will run your containers);
  • The Stateful Stack — which defines the Git source repository (AWS CodeCommit) and the Docker image repository (AWS ECR);
  • The CI/CD Stack — which defines how to build the Docker images from source code and how to deploy them into the cluster (using AWS CodePipeline and AWS CodeBuild, plus a few tools that we use in order to implement our deployment pipeline).

The Cluster Stack is the runtime environment: our microservices run in a cluster managed by EKS, using the Kubernetes deployment definitions that are provided within the microservice source code itself.

The Stateful Stack contains our source code repository and the registry for the Docker images that are deployed into the target cluster; in my case I have one source code repository and one image registry for each microservice (which is a common scenario, I guess).
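A sketch of what this stack could look like in CDK (the repository names are just placeholders for the hello-world service):

```typescript
// lib/stateful-stack.ts — a sketch, names are placeholders
import * as cdk from '@aws-cdk/core';
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as ecr from '@aws-cdk/aws-ecr';

export class StatefulStack extends cdk.Stack {
  // Exposed so the CI/CD stack can reference them later
  public readonly sourceRepository: codecommit.Repository;
  public readonly imageRepository: ecr.Repository;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Git repository hosting the microservice source code
    this.sourceRepository = new codecommit.Repository(this, 'SourceRepo', {
      repositoryName: 'hello-world-service',
    });

    // Docker image registry for the images built by the pipeline
    this.imageRepository = new ecr.Repository(this, 'ImageRepo', {
      repositoryName: 'hello-world-service',
    });
  }
}
```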

The CI/CD Stack implements our build and deployment strategy:

  1. how we fetch the source code (e.g., from which branch);
  2. which Skaffold profiles we would like to use when running the Skaffold-based image build;
  3. which tag we want to use when tagging the image and deploying it into the cluster.

We are going to implement all of these stacks using AWS CDK constructs, building up our desired result piece by piece.
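A minimal CDK app wiring the three stacks together could look like the sketch below; the module paths, stack names and the props passed between stacks are assumptions for the sake of the example:

```typescript
// bin/app.ts — a sketch of the CDK app composing the three stacks
import * as cdk from '@aws-cdk/core';
import { ClusterStack } from '../lib/cluster-stack';    // hypothetical module
import { StatefulStack } from '../lib/stateful-stack';  // hypothetical module
import { CiCdStack } from '../lib/cicd-stack';          // hypothetical module

const app = new cdk.App();

const clusterStack = new ClusterStack(app, 'ClusterStack');
const statefulStack = new StatefulStack(app, 'StatefulStack');

// The CI/CD stack needs to know where to pull sources from,
// where to push images to, and which cluster to deploy into.
new CiCdStack(app, 'CiCdStack', {
  cluster: clusterStack.cluster,
  sourceRepository: statefulStack.sourceRepository,
  imageRepository: statefulStack.imageRepository,
});
```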

Last notes about Skaffold as a CI/CD tool

Note that deployment into the cluster and the Kubernetes resource definitions are handled using Skaffold and Kustomize: I personally use these tools for the local development workflow in my team, so it makes sense (for me, and it's quite easy) to use the same techniques and tools when building a CI/CD pipeline too.

I've implemented a nested stack that runs Skaffold and the other required tools under CodeBuild: it's not the cleanest solution, so feel free to use CDK-only constructs even for Kubernetes resources, as you may do with cdk8s.
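As a rough idea of what running Skaffold under CodeBuild looks like, here is a simplified sketch of a build project with an inline buildspec. The profile name and the IMAGE_REPO_URI environment variable are assumptions, and a real project also needs kubectl and kubeconfig access to the EKS cluster, which I'm omitting here:

```typescript
// A fragment of the CI/CD stack (import at the top of the file,
// the rest inside the stack constructor)
import * as codebuild from '@aws-cdk/aws-codebuild';

const buildProject = new codebuild.PipelineProject(this, 'SkaffoldBuild', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_4_0,
    privileged: true, // required to build Docker images inside CodeBuild
  },
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: [
          // Download the Skaffold binary (Kustomize support is built in)
          'curl -Lo /usr/local/bin/skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64',
          'chmod +x /usr/local/bin/skaffold',
        ],
      },
      build: {
        commands: [
          // Build, tag, push and deploy according to the chosen profile
          'skaffold run -p ci --default-repo $IMAGE_REPO_URI',
        ],
      },
    },
  }),
});
```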

Conclusions

In the next article we will set up our first microservice, a simple Spring Boot-based microservice, and build and deploy it using Skaffold and Kustomize.

Stay tuned!

Mario Scalas — I'm a software engineer / solution architect / team coach and I'm passionate about software development practices, technologies and software processes.