Learnings from Building a Microservice

Naimun Siraj
Published in Teachable
7 min read · Mar 9, 2023

Last year, Teachable embarked on its microservices journey, and I was fortunate to be a part of the team that worked on one of the early services. Over the past eight months, I’ve been developing the Creator Email Service (CES), a personalized email service that lets us communicate with our creators in a way that is tailored to their interactions with our platform.

The service was written in Golang and leveraged the existing AWS Elastic Kubernetes Service (EKS) infrastructure of our parent company, Hotmart. The journey was challenging but fulfilling, and in this post, I’d like to share some of the key lessons and obstacles we encountered along the way. I’ll primarily focus on non-Golang-specific challenges, as the language-specific ones deserve a separate post.

Route to Hello World

The first task was to deploy an application with /helloworld (for testing) and /healthcheck endpoints in Kubernetes (K8s). Our microservice documentation was slim at the time, so we had to experiment through trial and error. This leads me to the first lesson, one that may be obvious but is hard to consistently put into practice.

Lesson 1: Don’t be afraid to dive into unfamiliar codebases.

Hotmart provided many custom GitHub Actions and Helm charts that made it easy to plug and play. However, if there were any complex issues, we were at a dead end. I didn’t want to bombard our infrastructure team with questions without putting in the work, so I sifted through these modules and tried to understand their inputs and outputs. On top of these modules, there was also a lot of sleuthing in infrastructure code, i.e. Terraform (HCL), to provision resources for our microservice.

The path to deployment was in sight:

  1. Set up the GitHub repo with an appropriate project structure, inspired by golang-standards.
  2. Wire up an HTTP server and the relevant endpoints (see the sketch after this list).
  3. Create a CI/CD pipeline with GitHub Actions.
  4. Deploy to staging!
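For step 2, a minimal version using Go’s standard library might look like the following sketch (the port and handler bodies are assumptions for illustration, not our actual implementation):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Test endpoint used to verify the service is reachable.
	mux.HandleFunc("/helloworld", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello world"))
	})

	// Endpoint for K8s liveness/readiness probes.
	mux.HandleFunc("/healthcheck", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```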

Most of my time was spent trying to understand custom GitHub Actions and ensuring the K8s infrastructure and GitHub workflow files were configured correctly. The GitHub Actions pipeline consisted of a few key things (sketched below):

  1. Building out the AWS resources (e.g. IAM roles, Vault secrets, domain, etc.).
  2. Dockerizing your application and pushing that image to AWS Elastic Container Registry (ECR).
  3. Deploying the application to K8s using the image created above.
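The Hotmart actions themselves are internal, but conceptually the workflow boils down to something like this (the checkout/AWS actions are real public actions, while the secret, image, and region values are illustrative):

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Authenticate with AWS so we can push images to ECR.
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }} # illustrative secret name
          aws-region: us-east-1

      - id: ecr
        uses: aws-actions/amazon-ecr-login@v1

      # Build the Docker image and push it to ECR, tagged with the commit SHA.
      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/ces:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/ces:${{ github.sha }}

      # In our setup, custom Hotmart actions handled the Helm deploy from here.
```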

For the most part, the errors thrown in the workflow pipelines were clear enough to iterate on quickly. The common errors I bumped into included incorrect file paths, missing attributes, or my favorite, indentation errors.

Working with AWS Resources

For the CES, we were building an event-driven service and needed a way to capture creator events (e.g. course publishing, subscription changes). We wanted to move away from our existing Kafka-based system as it was tightly coupled with other systems. Since our infrastructure is AWS native, it made sense to leverage Simple Notification Service (SNS), a managed notification service, as the initial hub to publish events via a topic. At this point, we could have had the topic push events to CES directly, but that would have coupled our service to SNS: if CES were ever down, we would miss events.

Our solution was to integrate a Simple Queue Service (SQS) queue that would be subscribed to our SNS topic and receive those events. CES could then poll the queue for events as they arrive. In the event of CES downtime, the events would sit in the queue, and once CES was back up (hopefully within a reasonable amount of time), it would read them.
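To make the polling loop concrete, here is a rough sketch using the AWS SDK for Go v2 (the queue URL is a placeholder and the message handling is reduced to a log statement; this is not CES’s actual code):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/ces-events" // placeholder

	for {
		// Long-poll: wait up to 20s for up to 10 messages per request.
		out, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: 10,
			WaitTimeSeconds:     20,
		})
		if err != nil {
			log.Printf("receive: %v", err)
			continue
		}
		for _, msg := range out.Messages {
			log.Printf("event: %s", aws.ToString(msg.Body)) // handle the creator event here

			// Delete only after successful processing; otherwise the message
			// becomes visible again after the visibility timeout and is retried.
			if _, err := client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: msg.ReceiptHandle,
			}); err != nil {
				log.Printf("delete: %v", err)
			}
		}
	}
}
```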

Creating the SQS/SNS resources with the appropriate permissions took a bit more time than expected, since every interaction crosses an IAM boundary. A few interactions required explicit permissions (see the Terraform sketch after this list):

  • CES reading from SQS
  • Fedora, our monolithic application powering Teachable, writing to SNS
  • Staging/Test environments writing to SNS
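Our actual Terraform lives in internal modules, but the shape of the SNS-to-SQS wiring looks roughly like the following (resource names and the policy are simplified illustrations):

```hcl
resource "aws_sns_topic" "creator_events" {
  name = "creator-events" # illustrative name
}

resource "aws_sqs_queue" "ces_events" {
  name = "ces-events" # illustrative name
}

# Deliver every message published to the topic into the queue.
resource "aws_sns_topic_subscription" "ces" {
  topic_arn = aws_sns_topic.creator_events.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.ces_events.arn
}

# Allow the topic (and only the topic) to write to the queue.
resource "aws_sqs_queue_policy" "ces" {
  queue_url = aws_sqs_queue.ces_events.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "sns.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = aws_sqs_queue.ces_events.arn
      Condition = { ArnEquals = { "aws:SourceArn" = aws_sns_topic.creator_events.arn } }
    }]
  })
}
```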

Lesson 2: Working with cloud infrastructure can be tedious, but rewarding.

Working with cloud infrastructure can feel like magic. There is a lot that is abstracted away from the end user, which is good, but it also requires you to trust that things are working as intended. Within a few hours, we had a production-ready pub/sub and queue system.

Dockerizing Your Application

A key aspect of application development is to understand that your application will be iterated upon by engineers in the future. With that in mind, it’s important to create a smooth and efficient path to continuous development, enabling faster iterations with minimal downtime.

That’s where Dockerization comes in. An application can have many dependencies and services that need to be up and running for it to function, such as a database, caching layer, application server, or background worker. Without Docker, maintaining a consistent developer experience can be challenging and cause significant friction. Inconsistent dependency versions and application configurations across test/staging environments are just a few examples of potential issues. With Docker, you can ensure a portable, scalable, and reproducible developer experience across the board.
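As an aside, containerizing a Go service usually takes only a small amount of Dockerfile, thanks to multi-stage builds. A generic sketch follows; in our case the runtime stage is actually a pre-built image maintained by Hotmart, so treat the base images and paths here as assumptions:

```dockerfile
# Build stage: compile the Go binary.
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: copy only the binary onto a slim base image.
FROM alpine:3.17
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```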

For the CES, the application binary is executed on a pre-built Linux image and depends on the following services, all of which can be Dockerized:

  • Postgres (existing public image)
  • Redis server (existing public image)
  • Redis worker (a separate compiled binary running on the pre-built Linux image maintained by Hotmart)

These services are all managed using docker-compose files, which make multi-container Docker applications very easy to work with. Engineers do not need to worry about the public images as they are maintained by the Docker community.
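For illustration, a docker-compose file for a stack like this could look roughly as follows (image tags, ports, and the worker image name are placeholders, not our real configuration):

```yaml
version: "3.8"
services:
  app:
    build: .            # the CES binary, built from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:14  # existing public image
    environment:
      POSTGRES_PASSWORD: postgres

  redis:
    image: redis:7      # existing public image

  worker:
    image: hotmart/redis-worker:latest # placeholder for the internal worker image
    depends_on:
      - redis
```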

As a result of Dockerizing the CES, we were able to onboard engineers to our project quickly: they were able to spin up the local dev environment within minutes and were submitting PRs within a few hours.

Lesson 3: Prioritize dockerizing your application early to speed up the development lifecycle.

Dockerizing your application can be an invaluable step in the development lifecycle, particularly when your application is complex and has a lot of moving parts. However, it may not always make sense to use Docker, especially for simpler applications that don’t require a lot of additional services or dependencies.

K8s + Helm Magic

Kubernetes has revolutionized the way we package, deploy, and manage containerized applications. For many engineers (myself included), the complexity of K8s can be daunting, especially if you haven’t worked in a containerized cloud-native environment.

When I first started poking around our infrastructure codebase to grok how we deployed applications, I learned that Hotmart has an extensive set of tools for deploying an application to K8s. Thanks to the pre-existing Hotmart Helm charts, we didn’t need to create K8s manifests (the blueprint for the application) from scratch.

Helm can be thought of as a package manager like yarn or pip, but instead of managing software packages on a single machine (i.e. your laptop), it manages K8s applications in a cluster of machines. Hotmart has an application Helm chart that comes equipped with pre-configured settings and allows for any custom configurations to be added via a separate YAML file. This file is then fed into a custom Github Action that runs the necessary helm commands to deploy the application on the K8s cluster. This includes creating/updating the Helm repository with your application, creating a new release with all your latest changes, creating/updating the relevant K8s manifests, etc.
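In essence, that custom action runs a handful of standard Helm commands; conceptually, something along these lines (the chart, release, namespace, and repository names are illustrative):

```sh
# Fetch the shared Hotmart application chart, then deploy/upgrade the release
# using our service-specific overrides.
helm repo add hotmart https://charts.example.com   # placeholder repo URL
helm repo update
helm upgrade --install ces hotmart/application \
  --namespace ces \
  --values values.yaml
```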

Lesson 4: Leveraging abstractions can help speed up development and deployment, but it’s important to understand what’s happening under the hood.

At first, I found it difficult to understand how Github Actions, Helm, and K8s fit together to result in a deployed application because it felt abstract and overwhelming. They obviously serve different purposes, but how they interacted wasn’t immediately apparent. That being said, I was relieved to learn that a lot of these abstractions were essentially shell commands (via AWS + Helm + K8s API) which are run by GitHub-hosted or self-hosted runners to get the necessary resources provisioned/deployed into AWS/K8s. Every team’s infrastructure setup is unique, so it’s essential to find the tools and processes that work best for your team and your company.

Summary

There were many other learnings along the way, including methods for observability, K9s, structured/unstructured logging, and more. These learnings are aimed at engineers who are new to microservices. They are pretty high-level, but can serve as helpful guides during development.

Building your first microservice can be an arduous task, but mostly because of all the unknowns. Once the unknowns are addressed and you have an outline of the bite-size milestones you need to achieve, the path to production will be clear!

P.S. Special thanks to our infrastructure wizards (including Hotmart engineers) who provided immense support on this journey. Also shoutout to the Growth team for all the love and support!
