Painkillers for Local Development of Microservices

The strategies and tools you need to work faster and smarter

Renzo Rozza
Apolitical Engineering
6 min read · Nov 29, 2022


Since its inception, the Apolitical platform has been expanding its reach to an ever-increasing number of public servants across the globe. Alongside this growth, the platform’s functionality has evolved to meet user needs. And to keep delivering on our user-centred mission, the engineering teams have grown continuously and continue to do so. (On an unrelated side note: if you are interested in joining our team, don’t hesitate to get in touch. We’re hiring!)

Now back to the more technical side, where growth is always easier said than done. The Apolitical platform is designed following a microservice architecture (also known as microservices). And as you may know, this architectural style structures an application as a collection of small services that are easy to maintain and test, loosely coupled, independently deployable, and much more.

The microservice architecture has many advantages, but it’s not always plain sailing. For instance, providing engineering teams with a good developer experience can sometimes be challenging. In particular, one of our biggest challenges at Apolitical was putting forward the right strategy and tools to iterate at speed. I’ll share more about how we managed to overcome this challenge later.

TL;DR: To stop doing tedious and repetitive tasks as part of our local development workflow, we implemented a monorepo with Git submodules to centralise access and standardise the synchronisation of multiple repos; we incorporated Tilt with Docker Compose to codify a common development path and ensure reproducibility with continuous feedback loops; and we extended interoperability by connecting locally-running services to third parties and databases on the cloud.

Figure 1. Diagram of local development platform

How we implemented a monorepo

A monorepo is, to a certain extent, a simple concept: one version-controlled code repo that holds many distinct projects with well-defined relationships.

A common misconception is that adopting a monorepo is the same as adopting a monolithic architecture. On the contrary: a monorepo simplifies code-sharing and cross-project refactoring, and it significantly lowers the cost of creating and maintaining microservices.

A simplified example of a monorepo’s structure would look like this:

/dev-platform
  /apis
    /api-1
    /api-2
  /uis
    /ui-1
    /ui-2

Simple, right? But… that’s not all! We want to take things one step further. And, rather than just moving all the code from every repo within our codebase to a monorepo, we decided to implement a monorepo with the use of Git submodules.

Submodules allow you to keep a Git repo as a subdirectory of another Git repo, so you can clone multiple repos into a monorepo while keeping their commit histories separate. Following the example above, dev-platform is the monorepo and is defined as its own Git repo. Then, api-1, api-2, ui-1, and ui-2 are added to dev-platform as Git submodules.

Commands executed inside a Git submodule, such as api-1, change the submodule’s Git history, not the parent’s (dev-platform). Conversely, commands executed inside the parent change the parent’s history, not the submodule’s. Projects are therefore nicely isolated at the repository layer. Still, the working trees put the source files side by side on your filesystem, allowing you to work as if everything is coming from one place.
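This history isolation is easy to see with a throwaway demo. The sketch below uses purely illustrative names and a local directory under /tmp; a real monorepo would add its submodules from remote URLs instead:

```shell
# Throwaway demo of submodule history isolation; all paths are illustrative.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

rm -rf /tmp/submodule-demo && mkdir -p /tmp/submodule-demo
cd /tmp/submodule-demo

# A standalone "api-1" repo with a single commit
git init -q api-1
git -C api-1 commit -q --allow-empty -m "api-1: initial commit"

# The monorepo, with api-1 added as a submodule
git init -q dev-platform
cd dev-platform
git -c protocol.file.allow=always submodule add /tmp/submodule-demo/api-1 apis/api-1
git commit -q -m "dev-platform: add api-1 as a submodule"

# Committing inside the submodule advances api-1's history only;
# the parent still has exactly one commit, recording which submodule
# commit it points at.
git -C apis/api-1 commit -q --allow-empty -m "api-1: new feature"
git -C apis/api-1 rev-list --count HEAD   # submodule history: 2 commits
git rev-list --count HEAD                 # monorepo history: 1 commit
```

(The `protocol.file.allow` override is only needed because this demo clones the submodule from a local path, which newer Git versions restrict by default.)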

How we redefined the development path

Implementing a monorepo was only the first part of improving the developer experience. In the second part, we decided to provide the engineering teams with the right tooling to locally orchestrate the required services to implement new functionalities faster.

For clarity, imagine that we have the following scenario: a feature request that requires changing the front-end ui-1.

In our old workflow, developers would have to:

  1. Clone the front-end ui-1 repo
  2. But also, clone the back-ends used by ui-1, in this case, the api-1 and api-2 repos
  3. Set up the right environment variables for each repo individually
  4. Then, spin up each service individually
  5. And finally, check the external dependencies. (Note that databases running locally may need to be populated at this point)

It became clear that this process was very prone to misconfiguration, and that it didn’t scale: there were too many tedious and repetitive tasks. It gets even worse when having to keep 20 or more repos up to date as the platform evolves at a very fast pace.

To overcome this, we introduced Tilt with Docker Compose into our local development workflow.

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to configure each service and its dependencies. In addition, Tilt adds a layer on top of Docker Compose to automate all the steps from a code change to external dependencies: watching files, building container images, and bringing your local environment up-to-date.

At this point, the example monorepo’s structure shown above would have been updated to look like this:

/dev-platform
  /apis
    /api-1
    /api-2
  /uis
    /ui-1
    /ui-2
  .env
  docker-compose.yml
  Tiltfile

The .env file defines the environment variables, which are used by the docker-compose.yml to define how each microservice (UI or API) is built and run. The Tiltfile simply watches the .env file for changes and then loads the services defined in the Compose file.
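As a rough sketch of how these files fit together (service names, paths, and ports here are illustrative, not our actual configuration), the docker-compose.yml might look like the fragment below, and the Tiltfile can be as short as two lines using Tilt’s built-in functions: `watch_file('.env')` followed by `docker_compose('docker-compose.yml')`.

```yaml
# docker-compose.yml (illustrative sketch; names and ports are placeholders)
services:
  api-1:
    build: ./apis/api-1
    env_file: .env
    ports:
      - "4001:4000"
  ui-1:
    build: ./uis/ui-1
    env_file: .env
    ports:
      - "3000:3000"
    depends_on:
      - api-1
```

`watch_file` re-executes the Tiltfile whenever .env changes, and `docker_compose` registers every Compose service as a Tilt resource, so the dashboard and rebuild loop come for free.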

Now, let’s go back to the previous scenario: a feature request that requires changing the front-end ui-1.

In our new workflow, developers would only have to:

  1. Clone the dev-platform repo
  2. Initialise the submodules with one command: git submodule update --init
  3. Set up the right environment variables for the dev-platform
  4. And finally, spin up all the services with one command: tilt up

It’s easy to see that there are fewer and simpler steps, but more importantly, these steps are the same no matter how many microservices (UIs or APIs) your application has. In addition, one of the most exciting features of Tilt is its live dashboard, which lets you look at all your microservices at once and get an instant feedback loop from your logs, broken builds, and runtime errors.

The cherry on the cake was integrating Traefik Proxy and Cloud SQL Proxy into the dev-platform. Both proxies can be defined as Docker Compose services. Traefik Proxy routes traffic to every service under the same local host domain (including adding self-signed HTTPS certificates). On top of this, Cloud SQL Proxy allows the services running locally to connect to the databases on the cloud. In short, these two proxies allowed us to locally replicate the same behaviour as deploying and running the services in one of the staging environments. As a result, the engineering teams get to spend more time coding without having to push changes to staging environments.
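To make this concrete, here is a hedged sketch of how the two proxies might appear as Compose services. The image tags, the instance connection name, and the routing label are all placeholders, not our real configuration:

```yaml
# Illustrative additions to docker-compose.yml (all values are placeholders)
services:
  traefik:
    image: traefik:v2.9
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  cloud-sql-proxy:
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
    # "my-project:europe-west1:my-instance" is a placeholder connection name
    command: ["--address", "0.0.0.0", "--port", "5432", "my-project:europe-west1:my-instance"]
    ports:
      - "5432:5432"
  ui-1:
    build: ./uis/ui-1
    labels:
      # Traefik routes http://ui-1.localhost to this container
      - "traefik.http.routers.ui-1.rule=Host(`ui-1.localhost`)"
```

With this in place, every local service gets a stable hostname behind Traefik, and anything that needs a database talks to the proxy on localhost:5432 exactly as it would in a staging environment.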

What we learnt

At Apolitical, we care about having a good developer experience for our engineering teams. But we also know that’s tricky to measure. Therefore, it’s sometimes hard to get buy-in from the wider company for these types of projects. Despite microservices being the best way to architect our web platform, it was clear that time and effort had to be put into designing the right local development strategy and tools to make our workflows more efficient.

Using a monorepo with Git submodules to structure all the code of our web platform plus using Tilt with Docker Compose to locally spin up services has completely transformed how we work. Our onboarding process for new joiners is a lot smoother than before, and lots of time has been shaved off from our development loops. Additionally, best practices have been codified to give our engineering teams a fast and reliable workflow.

If it sounds like this could work for you and your team, what are you waiting for?
