Golang at leboncoin

A journey in micro-service architectures

by Nicolas Barbey (backend developer)

leboncoin Paris office

If you follow the leboncoin engineering blog, you already know the big picture of our organization: feature teams, focused on the main website features, bring together every engineering expertise. Chapters allow people with the same expertise to meet and learn from each other, moving toward a common goal. To mirror the shape of the organization, we architect the back-end as micro-services.

But today, we want to zoom in from the IT-level architecture down to the architecture of a single micro-service, down to the line of code. I believe it matters as much as the bigger picture. We want to tell the story of our journey, from trying our hands at Go and micro-services to deploying more than 100 services on a daily basis. This is the view of my team, the identity feature team, which manages credentials and users' personal information. It does not reflect the view of the whole back-end chapter: we develop different features and have different requirements, so we may have different opinions on good practices in software development. But that is a good thing, we want diversity!

We will present multiple iterations of our service architecture to show how it evolved and why. Our choices may seem naive, but keep in mind that we started with zero knowledge of Go or micro-services, with a lot of legacy code (more than a million lines of it), and we had to keep the business running while still developing new features.

Without further ado, let us present our first service. This was three years ago now. It is the service responsible for handling authorization and authentication, so it is quite critical, with real performance and scalability requirements: it receives 10,000 requests per second at peak. Little tooling had been developed at the time and we deployed on virtual machines (we now use Kubernetes); testing was slow and inconvenient, and we had little observability of the system (logs, but no monitoring). We went straight to the point, with a simple transport layer (JSON over HTTP) followed by use cases, and a small library to reuse some code.

“To the point” architecture
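
As an illustration, here is a minimal sketch of what such a handler could look like in this first iteration; the names, table and query are hypothetical, the point is that the transport layer talks to PostgreSQL directly:

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// authenticateHandler decodes the credentials and queries PostgreSQL
// directly from the transport layer: simple and fast to write, but hard
// to unit-test and hard to refactor once the service grows.
func authenticateHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var creds struct {
			Email    string `json:"email"`
			Password string `json:"password"`
		}
		if err := json.NewDecoder(r.Body).Decode(&creds); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}

		// Hypothetical schema: the handler owns the SQL itself.
		var userID string
		err := db.QueryRow(
			`SELECT id FROM users WHERE email = $1 AND password_hash = crypt($2, password_hash)`,
			creds.Email, creds.Password,
		).Scan(&userID)
		if err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		json.NewEncoder(w).Encode(map[string]string{"user_id": userID})
	}
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/identity?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/authenticate", authenticateHandler(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```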

The main issues were testability (we had only end-to-end tests) and the lack of abstraction over the underlying technology (direct calls to PostgreSQL from the HTTP handler, for instance). It made the code hard to refactor and to understand as it grew. Another issue was that we had not fully gotten rid of the monolithic database that holds all of leboncoin's data (I heard it is one of the biggest PostgreSQL instances in France: the master runs on a machine with 2 TB of RAM and 3.8 TB of persistent storage). This is beautifully illustrated by the following sketch of two micro-services and their shared database (courtesy of Mathias Verraes).

This issue led to our next iteration on the architecture. We tried to figure out ways to extract data from our monolithic database into new, smaller databases, while of course keeping the legacy systems running.

For this new architecture we rewrote the code that handles the ad listings of professional accounts (called les boutiques), as it holds a reasonably small amount of data. We started by designing our new database schema according to our needs and used a view to emulate the old schema, so as not to break the legacy systems. At the time, this had to be done in the same database. We figured this would be a first step and that we would move the data to another machine later on (which we never had time to do, of course). As for the service architecture, we divided the code into three micro-services based on who would use them (users, administrators and our data team).

Client-based architecture

This made sense, as the three services had different performance, criticality and security requirements. But since the logic of the system was distributed across the three services and the database, the code became quite hard to understand and modify. The issue was made worse by the fact that the database code still lived in the legacy repository while the services lived in the new one. So much so that the actual architecture started to look more like a distributed poo pattern.

“distributed poo” architecture (courtesy of Alvaro Sanchez)

This is when we decided to try architectures that promote separation of concerns between the domain logic and the other concerns of software, such as talking to the database or sending metrics. We learned about three-tier architecture and domain-driven design. This happened during the GDPR project: we implemented this design for the service responsible for handling users' requests to take out their data. We were on a tight schedule, since we had to meet the legal deadline for GDPR. On the technical side we were also introducing Kafka for some communication between services. We improved our release process, but database deployment was still not optimal.

Three-tier architecture

In this architecture we separated the transport layer (HTTP handlers), the infrastructure pieces (the implementation of a store in PostgreSQL, for instance) and the domain logic. All of this was tied together by a big structure holding all the infrastructure instances. This structure was passed as a parameter to the use cases and HTTP handlers. It gave us dependency injection, since the big structure referenced only interfaces defined in the infrastructure packages.
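
A minimal sketch, with hypothetical names, of what such a big structure and its use of interfaces could look like:

```go
package identity

// UserStore and MetricsSender stand for interfaces defined in the
// infrastructure packages (names are hypothetical).
type UserStore interface {
	GetEmail(userID string) (string, error)
}

type MetricsSender interface {
	Incr(name string)
}

// Services is the "big structure": it holds every infrastructure instance
// and is passed to both HTTP handlers and use cases.
type Services struct {
	Users   UserStore
	Metrics MetricsSender
	// ...every other dependency of the service lives here too
}

// GetUserEmail is a use case. It receives the whole structure even though
// it only needs two of its fields, which is where the coupling comes from.
func GetUserEmail(svc *Services, userID string) (string, error) {
	svc.Metrics.Incr("get_user_email")
	return svc.Users.GetEmail(userID)
}
```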

This was a success: the code was much more readable, unit tests were easier to write, and we were able to switch from a Redis implementation to a PostgreSQL implementation without touching the domain logic, as sketched below.
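
For instance, a store interface with two interchangeable implementations could look like this (hypothetical names and schema, using the go-redis client purely as an example):

```go
package identity

import (
	"context"
	"database/sql"

	"github.com/go-redis/redis/v8"
)

// TokenStore is the interface the domain logic depends on: the use cases
// never know which technology sits behind it.
type TokenStore interface {
	UserID(ctx context.Context, token string) (string, error)
}

// Redis-backed implementation.
type redisTokenStore struct{ client *redis.Client }

func (s *redisTokenStore) UserID(ctx context.Context, token string) (string, error) {
	return s.client.Get(ctx, "token:"+token).Result()
}

// PostgreSQL-backed implementation, satisfying the same interface.
type postgresTokenStore struct{ db *sql.DB }

func (s *postgresTokenStore) UserID(ctx context.Context, token string) (string, error) {
	var userID string
	err := s.db.QueryRowContext(ctx,
		`SELECT user_id FROM tokens WHERE token = $1`, token,
	).Scan(&userID)
	return userID, err
}
```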
There were still some issues, though. The big structure introduced a lot of coupling and made it harder to split code into another micro-service, or to test use cases in isolation, because we had to instantiate all the infrastructure pieces.

At the end of this project, we were proud of how far we had come from the first Go service. We now had the concepts of use cases, infrastructure and transport, which made it easier for newcomers to understand our code, and simpler for us to implement new features and perform refactoring tasks. But we were not satisfied with the testing, which still required a lot of boilerplate, and splitting services remained hard.

In a following article we will describe the next iteration of our architecture in more depth. We will show how we achieved more decoupling in our code, and how we made it easier to test in a more isolated fashion.
