Behind Micro Frontends

Julien De Sousa
Published in ADEO Tech Blog
9 min read · Feb 10, 2022


It all started with the idea of sharing and globalizing a solution that was already performing well. How could we reuse everything we had learned about building powerful websites across all our business units? So we started on a journey to transform the Leroy Merlin France (LMFR) website into a solution with greater scalability and modularity, enabling us to reach a bigger audience and to innovate faster with new features.

We started from a technical stack that was already fine-tuned (among the best-performing French websites) but that had a few restrictions. We wanted to deliver software releases and infrastructure changes faster and more efficiently. The principles that guided us throughout this project were:

  • Reduce our time to market
  • Increase team autonomy
  • Reuse whenever possible
  • Increase performance, and above all scalability
  • Reach a level of availability matching our business requirements

We decided to move our infrastructure to GCP and to transform our software architecture.

Digital transformation of the LMFR website


With the increasing popularity of online shopping since around 2008, the Leroy Merlin France website has been expanding rapidly. We have big ambitions for the future!

The organization around the website in 2008 was as follows:

Single team and single code base, circa 2008

At the time, we had a single team working on the website on a single code base. Collaboration was easy and developments were fast.

When the number of people working on the site increased, we began to encounter mainly organizational difficulties. Many people were working on the same code base, on several features simultaneously. These difficulties led us to regroup into Feature Teams (FT) in order to distinguish the different functional areas of the website. A new way of working was born in 2014:

Multiple teams and single code base, circa 2014

This methodology allowed us to continue our development, but we ran into other difficulties such as:

  • A lot of coordination was required among the teams, especially for deployments, where we had to synchronize all the developments carried out by each FT on the single code base.
  • No real team autonomy, as the coupling between FTs remained very strong.

This context led us, in 2017, to rethink our way of working as well as our architecture choices, so that we could continue our future development and achieve our objectives.

What architecture design to use for our future LMFR website?

Different scenarios were imagined in order to solve our problem:

Multiple teams and multiple code bases: single app design
  • Separate the code, one code base per team
  • Requires the implementation of a build & release process
  • However, the teams are still not truly autonomous during deployment
Multiple teams and multiple code bases: multiple app design
  • Independence of teams during development, releases, and deployments
  • However, how do we deploy our applications on the server while preserving each app’s functions? How do we aggregate each application?
Multiple teams and multiple code bases: multiple apps and servers
  • Aggregation of the systems into web pages
  • Micro service and micro frontend architecture

This final answer is none other than what is known as micro-frontends, backed by micro services running on multiple servers.

Micro-Frontends (MFE)

What is MFE?

MFE is an architectural concept that extends micro services to the frontend world. It allows building feature-rich browser applications on top of micro services.

Who is talking about it and popularizing it?

The term Micro Frontends first came up in the ThoughtWorks Technology Radar at the end of 2016.


Who uses it?

Users of micro-frontends

What are Micro-Frontends?

Micro-frontends are a way to cut a large, imposing frontend layer into smaller, more manageable chunks, while being explicit about the dependencies between them. Technology choices, codebases, teams, and release processes should all be able to operate and evolve independently of each other, without excessive coordination.

Monolith frontend application

Monolith frontend

A single large application is called a monolith when it manages a multitude of features within a single code base. Generally, several teams work on the same application, and in particular on the same frontend, which makes coordination more complicated.

Micro-Frontend application

Micro frontend

HTML fragments managed by each application

A micro frontend is an application that handles a reduced set of features within a single functional domain.

This allows us to split our monolithic application into a set of small applications, each with their own code base and their own life cycle from development to deployment.

This also makes it possible to build an autonomous team around each of these applications, fully responsible for it.
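The split described above can be sketched in a few lines: each micro frontend is a module, owned by one team, that renders an HTML fragment, and a page is simply a composition of those fragments. This is a minimal illustration; the module names (header, productList) are hypothetical, not LMFR's actual applications.

```typescript
// Each feature team owns one module, in its own code base, that renders
// an HTML fragment (never a full page). Module names are hypothetical.
interface MicroFrontend {
  name: string;
  render(): string; // returns an HTML fragment
}

// Owned by a "navigation" team, deployed on its own life cycle.
const header: MicroFrontend = {
  name: "header",
  render: () => `<header><nav>Home | Catalog | Stores</nav></header>`,
};

// Owned by a "catalog" team, deployed independently of the header team.
const productList: MicroFrontend = {
  name: "product-list",
  render: () => `<section><ul><li>Drill</li><li>Paint</li></ul></section>`,
};

// A page is a composition of fragments; no team needs to know how the
// other teams produce their HTML.
function composePage(fragments: MicroFrontend[]): string {
  return `<html><body>${fragments.map((f) => f.render()).join("\n")}</body></html>`;
}

console.log(composePage([header, productList]));
```

The key property is that `composePage` only depends on the fragment contract, so either team can rewrite its internals, or even its framework, without touching the other.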

Benefits of Micro-Frontends

  • Facilitates reorganization of teams
Incorporating a new feature

When a new feature is added to the site, we build a new team which is free to design and build its own micro-frontend and make the new feature available on the site. The team sets its own development pace and delivers its features with total autonomy, unrestricted by the other feature teams.

  • Refine expanding functionality
Expanding functionality

During the life cycle of an application, its functionality can expand, making the corresponding micro frontend bigger and bigger. The team can easily decompose it into several smaller applications in order to further refine the functional domain of each.

  • Deploy a single update independently of other teams
Independent deployment

The life cycle of each application is now completely isolated, allowing each team to be completely independent for its intermediate releases and updates.

This also allows teams to implement continuous deployment of their application.

  • Fine tuning application performance
Managing the performance of each application

Each micro-frontend or feature can be scaled individually from the performance perspective.

  • Improving fault tolerance
Isolating faults of each application

Faults are now isolated, so a failure in one application does not negatively impact the others.

Kobi: Building an in-house aggregation solution

Each application now returns an HTML fragment developed by its feature team. These fragments must be aggregated into a full web page.

Application aggregator

Different solutions available on the market were studied, but none met all our needs. We therefore chose to develop our own aggregator. We decided on server-side aggregation because of:

  • SEO
  • Rewriting module issues to include CDN URLs
  • Richer fragment settings (TTL cache, fallback, timeout, etc.)
  • Error management and concept of primary fragment
  • Parallel loading of nested fragments
  • Chunked HTTP fragments (i.e. we start sending the response to the client while some fragments are not yet resolved, reducing overall loading time)

Our KOBI aggregator was conceived!
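The per-fragment settings listed above (timeout, fallback, primary fragment) can be illustrated with a small server-side aggregation sketch. This is an assumption-laden illustration of the concepts, not KOBI's actual code; in production each `fetch` would be an HTTP call to a feature team's application.

```typescript
// Sketch of server-side fragment aggregation with per-fragment settings.
// Field names and the overall shape are illustrative assumptions.
interface FragmentConfig {
  name: string;
  fetch: () => Promise<string>; // would be an HTTP call in production
  timeoutMs: number;            // per-fragment timeout
  fallback: string;             // HTML served when the fragment fails
  primary?: boolean;            // a failing primary fragment fails the page
}

// Reject the fragment promise if it does not resolve within `ms`.
function withTimeout(p: Promise<string>, ms: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function aggregate(fragments: FragmentConfig[]): Promise<string> {
  // Fragments are fetched in parallel, as with the nested fragments above.
  const parts = await Promise.all(
    fragments.map(async (f) => {
      try {
        return await withTimeout(f.fetch(), f.timeoutMs);
      } catch {
        // A failing primary fragment fails the whole page; any other
        // failure degrades gracefully to the configured fallback HTML.
        if (f.primary) throw new Error(`primary fragment ${f.name} failed`);
        return f.fallback;
      }
    }),
  );
  return parts.join("\n");
}
```

A real aggregator would additionally stream chunks to the client as fragments resolve and cache fragments per their TTL; both are omitted here to keep the sketch self-contained.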


Replatforming: a progressive migration with Kobi

So, the development of the KOBI solution began in 2017.

Once the base of the solution was delivered, we started to replatform our website page by page, progressively putting the new fragments into our pages while calling our monolithic legacy backend for the remaining content.

The first step was to redesign the site header and footer on all pages, while continuing to display our legacy content in the body of the pages:

Redesigning the header and footer on all pages

This marked a big turning point in the replatforming project because, from that day, all website pages have been served by KOBI. This process allows us to progressively replatform more and more elements, while elements that have not yet been replatformed fall back to the legacy code.
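The fallback mechanism described above can be sketched as a simple resolution step in the aggregator: serve a replatformed fragment when one is registered for a page element, otherwise call the legacy backend. The registry shape and names here are assumptions for illustration.

```typescript
// Progressive replatforming sketch: replatformed elements are served by
// their new fragment renderers; everything else falls back to legacy.
type Renderer = () => string;

// Elements already migrated to micro frontends (hypothetical registry).
const replatformed = new Map<string, Renderer>([
  ["header", () => "<header>new header fragment</header>"],
  ["footer", () => "<footer>new footer fragment</footer>"],
]);

// Stand-in for a call to the monolithic legacy backend.
function legacyRender(element: string): string {
  return `<div data-legacy="true">${element} from legacy</div>`;
}

function renderElement(element: string): string {
  const renderer = replatformed.get(element);
  return renderer ? renderer() : legacyRender(element); // legacy fallback
}
```

Migrating one more element then amounts to registering its new renderer, with no change to the pages that still rely on legacy content.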

Following the success of this first step, the replatforming of the rest of the site continued progressively. Today, 90% of the site has been replatformed, and the end of the project is expected this year.

Solution for deploying micro-applications

Application architecture based on micro-applications is only one of the prerequisites for a scalable, efficient and resilient solution.

We need a reliable, flexible and fast deployment tool.

Before talking about our current deployment solution, here is a history of our means of deploying the website.

Traditional hosting

From January 2011 to May 2013, we ran the modern version of the site on VMs and dedicated physical servers at a traditional hosting company. We had to wait several days for each order to be fulfilled.

For production releases, all the commands were written in a text file and the application RPMs were sent to the host by FTP. These deployments to VMs took up to two days.

From May 2014, we started automating our servers with the Puppet / Rundeck / Foreman stack.

We described each server type in Puppet in order to achieve consistency across installations. With manual operations removed, deployment time was reduced to half a day.

There were two huge advantages when we started to use Puppet:

  • We had the guarantee that all our servers were installed in a uniform way.
  • We were able to hand over to the feature teams, letting them deploy all the way to production themselves.

Migration to a datacenter with “on-demand” capacity

In November 2016, in order to respond more quickly to increases in user traffic, we decided to migrate the site infrastructure to a datacenter with “on-demand” capacity. We could request new physical servers to host our VMs within 4 hours. Thanks to Puppet, application migration was automatic.

Then, in June 2017, we started migrating our site to Docker containers. To host our APIs, we chose the Red Hat OpenShift solution, which gave us a Kubernetes hosting solution installed on-premise, as close as possible to our legacy services.

Last migration to the public cloud

Finally in 2019 it was decided to migrate the site’s infrastructure to the public cloud.

In January 2021, we therefore carried out our latest migration. The legacy VMs that had not yet been converted to APIs were reinstalled on Google Compute Engine, and all our APIs were moved from OpenShift to Google Kubernetes Engine. Turbine, a deployment application for Kubernetes clusters developed internally at ADEO, allowed a seamless migration for our developers: we could use the same deployment configurations on OpenShift and GKE.

Using Docker containers on GKE.


This project was a big leap forward for the website, allowing us to renew the technologies used and to provide much more independence to the teams in their day-to-day work.

Today, more than a hundred people work on the various parts of the website, with each team fully self-sufficient in the development of its components. The replatforming project also included the effort to make our applications cloud-ready, allowing us to deploy all of our components in the cloud.

Now that we’ve successfully moved to the cloud and are well on our way to embracing the micro services architecture, what comes next?

The ultimate goal of our transformation is to produce a platform that will serve all our business units, to leverage our business and take it to the next level.

Along the transformation journey, we expect our platform quality to rise to the so-called “Global Ready” level: a set of characteristics that each product must attain in order to support our business globally. This will be the focus of a future article, so stay tuned!