How tools and practices shape our engineering culture at leboncoin

15 min read · Sep 10, 2024

By Kévin Platel, Principal Architect

Just as the locals’ customs and everyday habits can be baffling when you move to a new country, so can the tools and practices you find at a new company leave you stumped when you join. Why this specific framework? Why this language? Why this CI/CD platform? Why this tool? Often, our answers focus on functionality and how to use it without considering the deeper reasons behind our choices.

When was the last time you truly reflected on why your team or company selected certain tools and how they’ve shaped your company’s culture? For me, this moment came during a massive merging project where I faced rapid and extensive changes.

At times like these, long-standing practices are challenged, and I found myself resisting these changes without really understanding why. Why do we do things this way? Is it just legacy and bad strategy that should be changed, or do these practices carry more meaning, influence, and impact than is obvious at first?

I discovered that, more often than not, every choice we made had a broader impact than we first thought, for better or worse, shaping the engineering culture. For example, a monorepo can change team dynamics, and a tool like Gerrit can influence the way we approach code reviews.

This is the journey I want to take you on: the development practices at leboncoin and how they’ve shaped the engineering culture of the company.

Where it all started…

Leboncoin was launched more than 15 years ago as a fork of the Swedish second-hand site Blocket: a monolithic application written in C and PHP on top of PostgreSQL, all within a single SVN repository. There was no distinction between the frontend and backend; everything was developed by the same people. This setup was practical at the time, with deployment handled in one operation for all components.

However, as our needs evolved — particularly amid the rise of mobile applications and user expectations — the monolith’s complexity grew. We shifted to a modular approach, with separate repositories for different tech stacks: Java for Android, Objective-C for iOS, Go for backend microservices, and JavaScript with React for the web.

But then came the big question: How do we organize all this? Do we continue with the monorepo approach?

At that point, we made the decision to change — for a simple reason: We were now organized by layer, meaning we had backend and frontend engineers in different teams and projects, mostly going through a V-model. So it was simpler for each technological stack to operate on its own and own its code. This is when we moved to a per-tech-stack repository, and we didn’t change this for the next eight years.

The path we have chosen

Now let’s jump to today.

Currently, leboncoin is organized into multidisciplinary, domain-focused teams, with each team comprising experts from different tech stacks. Each team is responsible for solving challenges within a specific domain, using separate repositories for each stack: web, Android, iOS, and backend.

This structure follows principles inspired by the Spotify model, where vertical teams focus on different slices of the customer journey, supported by horizontal technical stacks.

We use a common repository for shared practices and tools, holding guild meetings every two weeks with dedicated Stack Tech Leads to foster collaboration and standardization.

With all of this in place, a sense of common belonging and harmonization grew across the teams, making the shared repository their common home, where everyone had a room.

Everyone has their own objectives

As companies grow, the integration of new engineers brings diverse practices and challenges. At this stage, companies are making money but are also threatened by competitors.

When companies enter this phase, there’s a strong drive to be as efficient as possible, to develop missing features, and to move forward quickly. The CTO at leboncoin at the time, when I was a Tech Guild Leader, had a clear message: “I want to be able to focus and mobilize people on strategic projects.” Therefore, balancing the CTO’s desire for strategic focus with the engineers’ need for innovation became a key issue.

So I spent time with different teams, hanging out by the coffee machine to better understand their day-to-day concerns and stress points. Several observations struck me: How could teams that work alongside each other in the same offices live in such different worlds? How could the priorities of the CTO be so far from those of the engineers? How could these teams have such contrasting outlooks on issues and solutions?

Yes, it depends on management, the people, and other factors, but the company’s expectations for these teams were vastly different. One team, considered quite technical, was often left to operate independently by the business, embracing experimentation and risk as part of pushing new boundaries. In contrast, another team, which frequently interacted with Finance and Legal departments, was highly risk-averse, focusing on reassuring their cautious stakeholders.

This difference is both common and understandable. The domain each team operates in comes with distinct expectations, timelines, risk management approaches, and levels of trust, all of which contribute to creating a unique culture and response to challenges.

The impact was noticeable: One team could quickly and eagerly adopt new libraries and practices, while the other, under significant pressure, preferred to concentrate on established ways of working.

Ultimately, this could have created a huge gap between teams, leaving some so far behind that the CTO’s goals couldn’t be achieved. Most people would have preferred to join the more innovative teams — a pattern we also saw in other marketplaces owned by Adevinta (the former owner of leboncoin), where platform teams were perceived as a refuge from business pressures, offering the space to concentrate more on technical aspects.

We aren’t alone

Now, you might be wondering, where am I going with all this? I began by discussing monorepos, then touched on CTO project management, team culture, and innovation. Don’t worry — I’m getting to the point!

As we explored different approaches — especially for our frontend stacks like web, iOS, and Android — we tested various repository and collaboration models. We tried per-module repositories, per-team setups, dedicated expert teams for cross-cutting concerns, and rotating people on global projects.

Yet, despite these independent experiments, most of our stacks ended up adopting similar models. Reflecting on our current setup, we realized we had very few programming languages, almost one per stack, with each stack having its own repository. We were following a version of the Spotify model, with dedicated core teams addressing issues outside the traditional product roadmap.

So let’s look at some big companies’ wisdom to understand how they operate:

  • Spotify is organized in a multi-repo setup, with freedom up to a point. It uses a Golden Path approach and has gone through technology expansion, eventually standardizing to ensure maintenance and consistency.
  • Amazon uses a mix of large monolithic repositories for big systems and smaller repositories for well-defined services or small teams, with standardization processes that allow teams to deviate with strong justification.
  • Meta adopted a similar approach to Amazon but is more flexible with its technology choices.
  • Google is stricter about including new programming languages and tools. It operates one of the largest monorepos, reflecting a preference for unified build and test processes and strong cohesion for critical systems.
  • Palantir uses multiple repositories with different visibility, access, and dependencies, likely influenced by the diverse needs of its clients.

In my exploration of different tech giants’ code-management strategies, I found that these approaches often mirror their unique company cultures:

  • Zalando emphasizes team autonomy, using a mix of monorepo and multirepo strategies to fit its fast-paced fashion ecommerce environment.
  • Uber has consolidated its code into monorepos for certain platforms, promoting unified development practices across its varied services.
  • Airbnb seems to balance innovation with consistency, aligning with its goal of providing a seamless guest and host experience.

After these insights, I reevaluated our own operations and what had been working well. The guild setup facilitated shared practices, and with a shared repository it was natural for people to share libraries. We even had dedicated individuals keeping library versions up to date.

However, we began to drift as some folks wanted to explore beyond the established framework. Inspired by the open-source community, it seemed feasible to envision a world where libraries, services, and tools could evolve independently.

Everything is connected

It took me some time to fully grasp this idea, but eventually it clicked: Every aspect of a company shapes its culture, which in turn affects its decisions and its overall capabilities. What might seem like a purely technical decision — such as the choice of programming language, repository setup, or tooling — can have a profound impact on the company’s culture. We can view this as an extension of Conway’s law: The structure of your repositories, like other technical decisions, dictates certain communication patterns within your teams, just as strategic focus influences team alignment.

If technical choices have an impact on culture, then what is culture, and how do different setups influence it? Let’s explore this using the C4 model, which defines a set of abstractions for describing the static structure of a software system. In the C4 model, a software system consists of containers (applications and data stores) that house components, which are implemented by code elements (classes, interfaces, objects, functions, etc.).
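
For readers new to C4, here is a minimal sketch of those three levels expressed as Go types. This is purely illustrative — C4 is a modeling notation, not a code library — and the type names simply mirror its vocabulary:

```go
// Illustrative only: the C4 hierarchy as plain Go types.
package c4

// Component is a building block inside a container,
// e.g. a library, package, framework, or SDK.
type Component struct {
	Name string
}

// Container is a unit that runs — a mobile app, web app,
// binary, or Docker container — made up of components.
type Container struct {
	Name       string
	Components []Component
}

// System is a set of containers that together provide
// a coherent set of features to users or other systems.
type System struct {
	Name       string
	Containers []Container
}
```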

Now let’s mentally walk through different repository structures and their cultural implications.

One repository per component

First, let’s define what we mean by a component. Here I’m referring to the technology building blocks that make up an application or executable: libraries, packages, frameworks, or SDKs.

Now imagine a world where each component of an application (or container) is stored in a separate repository. This approach can be seen in some web applications where each UI element is its own library (held together with some necessary React glue code), or in Spring Boot applications where various parts are modularized. In this model, an application becomes like a vast collection of Lego bricks, with each piece being reusable and combinable to form the desired outcome.

The Unix philosophy often supports this model, emphasizing the creation of well-crafted interfaces and communication standards to enable composability. This model is common in open-source communities for several reasons. Firstly, collaboration is often asynchronous, driven by contributors working in their own time for the greater good. The community-owned nature of these projects means that contributions must meet the community’s standards and allow others to build upon them.

Time is another factor. Open-source projects, unlike private ones, are less pressured to deliver value within specific timeframes like marketing windows or quarterly results. The core tenet of open-source projects is to be beneficial for the community and its users, not necessarily for a board of investors.

These factors create unique constraints and goals, fostering a culture of thorough code reviews, shared ownership, and deep expertise. Contributors often don’t know how their work will be used, necessitating heavy planning and testing. The pace is dictated by the needs of the project, creating a culture where each contributor is an island, doing their work independently yet still connected to the broader project.
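
To make the composability idea concrete, here is a minimal, hypothetical Go sketch of what one such Lego brick could look like: a tiny library exposing a single well-defined function, so any application can wire it in without knowing its internals. The package and function names are invented for illustration.

```go
// Hypothetical example: a reusable "slugify" component that could
// live in its own repository and be imported as a versioned library.
package slug

import (
	"strings"
	"unicode"
)

// Make converts a title into a URL-safe slug. The narrow, well-defined
// contract is what lets unrelated applications compose this brick.
func Make(title string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(title) {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
			b.WriteRune(r)
		case unicode.IsSpace(r) || r == '-':
			b.WriteRune('-')
		}
	}
	return strings.Trim(b.String(), "-")
}
```

Published in its own repository and versioned independently, a brick like this can be reviewed, tested, and reused by contributors who never meet.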

One repository per container

Next let’s consider containers, which in the C4 model are units that run, such as mobile applications, web applications, binaries, or Docker containers. So what if each container had its own repository? This setup is common in operating systems, where different executables are installed through package managers and combined via OS abstractions.

Service-based architectures are another example, where each service interacts through APIs or other communication methods. In these models, containers are like electrical components wired together, each serving a dedicated purpose, managing its own life cycle, and possibly using different technologies and libraries. This gives teams the freedom to experiment with new approaches within each service.

Collaboration here is interface-driven. Boundaries are crossed through communication, requiring teams to discuss and agree on behavior, much like two parties agreeing on a contract. In open source, containers often expose their contracts (like APIs) for others to consume, or developers build upon these contracts with minimal agreement. This makes technologies like Protobuf, Avro, JSON Schema, and Thrift appealing for formalizing these contracts.
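
As a rough illustration of contract-driven collaboration, the hypothetical Go sketch below uses plain JSON over HTTP for brevity; in practice the types would often be generated from a Protobuf or JSON Schema definition. The shared request and response types are the contract, and each container implements or consumes them while keeping its internals private.

```go
// Hypothetical shared contract between two services. In practice this
// boundary would often be generated from a Protobuf or JSON Schema file.
package contract

import (
	"encoding/json"
	"net/http"
)

// AdRequest and AdResponse form the agreed-upon boundary: as long as
// both containers honor these types, their internals can diverge freely.
type AdRequest struct {
	AdID string `json:"ad_id"`
}

type AdResponse struct {
	Title      string `json:"title"`
	PriceCents int    `json:"price_cents"`
}

// Handler is one container's side of the contract; lookup is whatever
// internal logic that team chooses to hide behind the boundary.
func Handler(lookup func(id string) AdResponse) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req AdRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		_ = json.NewEncoder(w).Encode(lookup(req.AdID))
	}
}
```

The wire format matters less than the design choice: once both teams treat these types as the agreed boundary, each container can evolve its internals, libraries, and even language independently.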

Ownership is often at the team or community level, where knowledge and habits are built. The culture here fosters active collaboration, where dependencies between containers are well known, but each container is its own house, arranged according to the team’s preferences. Company-wide guidelines might still exist, akin to city regulations for public-facing buildings, but they require control and tools to enforce at scale.

One repository per system

Moving up the C4 model, we reach systems — sets of containers that interact and provide coherent features. Now imagine defining the cohesive unit at the system level, with one repository per system.

A system provides a set of features that a user or other systems can interact with. Think of SaaS platforms, Apple or Google Connect, payment systems, or mailing services. A system might not act alone; it often interacts with other systems to deliver final value to the consumer.

In this model, systems like Kubernetes, Chromium, Android, or Linux would each have their own repository. Collaboration here can be very open, depending on how inclusive the system is. These systems often feature well-defined SDKs, APIs, and interfaces for interaction. Components within the system are either exposed for external use or tailored to specific internal needs (e.g. WebRTC within Chromium).

There’s typically a distinction between insiders and external contributors, with insiders having different levels of access and trust, influencing collaboration styles. The culture here resembles a small village — self-sufficient but trading with others for resources it lacks, largely independent but with some external dependencies.

One repository for all systems

Finally, what if we placed everything in a single repository? This model changes the collaboration dynamics entirely. Multiple systems need to interact closely, creating a structure similar to the system-level model but within a larger context. External contributions shift, as both parties (internal and external) now share a common goal, with the closer proximity fostering a different kind of trust.

This culture is more like a small island, where everything must come from within, and space and resources are limited, requiring stronger collaboration to make things work.

Choosing the issues we want to solve

Let’s move from theory to practice.

Our company was acquiring other businesses to enhance the user experience across different verticals — renting a flat, buying furniture, shopping for clothes, purchasing a car, and so on. Technically, this meant striving for consistency across features while also enabling quick integration of new features from acquired companies into our main platform.

The leboncoin CTO wanted a setup that promoted strong collaboration, minimized waste and duplication, and allowed teams to focus. But we also needed to balance this with the engineers’ desire to innovate and experiment with new ideas.

So where did we need to draw the line?

This is where the previous collaboration models informed our decision. We wanted high consistency, reuse, and efficiency in production and testing without heavy R&D investment. We needed to focus on delivering user-centric features, avoiding waste along the way.

Ultimately we made a significant, and still somewhat controversial, decision: We would consolidate all services into a single repository, enforce the use of a single programming language, and create a unified setup for backend services. This approach aligned most closely with the all-systems-together model, with one key difference: The frontend was excluded, which brought its own set of challenges but was the most acceptable compromise.

This trade-off was one we — along with the other tech leads — accepted to push the company’s technical culture forward. It allowed us to:

  • Easily communicate on shared issues: With one technology stack and the same tooling, issues are similar across teams, making it easier to build understanding.
  • Propose common solutions: When everyone shares the same language and tools, solutions are more likely to fit everyone’s needs.
  • Provide help from senior engineers: Senior engineers can more easily review code, suggest improvements, and solve problems when they share the same context.
  • Facilitate internal mobility between teams: Engineers can move between teams more easily, as they only need to adapt to the business context, not a whole new tech stack.
  • Simplify large-scale refactoring: Changes to logging, observability, or naming conventions can be done atomically across all services, ensuring everything works before merging (see the sketch after this list).
  • Focus tooling investment on what matters: With everyone in the same environment, it’s easier to justify and benefit from investments in tooling.
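
To make the atomic-refactoring point concrete, imagine a shared logging package living alongside every service in the monorepo. The package path and API below are invented for illustration:

```go
// Hypothetical shared package, e.g. <monorepo>/internal/log.
// Every service imports this single copy, so changing its signature
// and updating all call sites can land in one atomic commit, with CI
// proving the whole repository still builds before merge.
package log

import (
	"log/slog"
	"os"
)

// New returns a JSON logger tagged with the owning service and team.
// Making "team" mandatory here is the kind of change that a multi-repo
// setup would roll out over weeks, one repository at a time.
func New(service, team string) *slog.Logger {
	return slog.New(slog.NewJSONHandler(os.Stdout, nil)).
		With("service", service, "team", team)
}
```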

Of course, these choices came with costs:

  • Reduced technological innovation: When working within a single repository, the goal is often to achieve high reusability and broad impact. To accomplish this, a certain level of customization is required to address challenges that traditional tools can’t solve. As a result, introducing new technology — like a programming language — requires re-customizing these tools to maintain the same level of functionality, which can be quite challenging.
  • Customized tooling for large use cases: Due to the size of the repository and the specific goals of such a setup, most tools designed for smaller projects aren’t a good fit. Additionally, you often need to tailor the experience to your unique needs, leveraging the repository’s large, global impact. This often leads to investing in custom tooling, which then needs to be managed like its own product.
  • Adapting work methods: Of course, this kind of setup isn’t something you’re typically prepared for in college, open-source projects, or small companies. Traditional methods that promote independence and smaller projects aren’t suited for this environment, requiring a deliberate effort to challenge and adapt your ways of working.
  • Intimidating changes to common components: For junior engineers, the prospect of changing libraries or tooling that could impact hundreds of services can be daunting. It often necessitates collaboration with more experienced engineers to guide them through the process. Additionally, in many setups, common components are not optional, requiring alignment among many stakeholders with differing opinions. This cultural aspect can present challenges for change, as some individuals may be reluctant to engage in or create conflict.

Despite these challenges, we’ve seen benefits, like being able to maintain the latest version of Go across all services for over a decade, or ensuring that services untouched for years still receive security updates regularly. However, there have also been downsides, like bugs affecting all services, complex changes requiring significant effort, and CI on common packages taking a lot of time.

But, more importantly, it has fostered the culture we were aiming for.

In the end, we are a community

Ultimately, this setup transformed our company into a cohesive community. Senior engineers could swiftly address issues and mentor others, while shared practices facilitated smoother collaboration. The unified environment fostered a sense of belonging, with teams working together toward common goals even if they had different day-to-day challenges.

People with the most seniority were able to make large-scale changes, fix issues, and help teams quickly, which allowed us to align practices and push teams under pressure to adopt new libraries or benefit from evolving standards. Since everyone spoke the same language, presentations or code reviews resonated with the entire team, inspiring others to adopt solutions or even improve upon them.

We were all living under the same roof, which meant we had to respect shared rules — even if individual teams had the freedom to design their services as they saw fit. This setup encouraged teams to collaborate, and sometimes even step in to help others when necessary. For instance, if you need to update consumers with the new version of an API, why not handle it yourself? After all, you’re the one who needs it most. Of course, the other team would need to validate the changes, but thanks to the new shared practices and setup, no one would feel out of their depth when helping out. This fosters a sense of shared ownership — because why wouldn’t you lend a hand?

I did see a lot of resistance to this model from newcomers, and that’s understandable. If you’ve lived alone your entire life, how would you feel about moving into a gigantic shared house? Integrating into a community is hard, as we see all too often in the news. But I firmly believe that sharing struggles, along with the good times, binds people together in a way that helps them understand what others are going through.

In my view, leboncoin has successfully created a sense of belonging through its growth — from fewer than 30 backend engineers when I joined to more than 100 now — thanks in part to this highly technical setup. But this is just one part of the story. The mono-backend-repo setup has also been adopted across other stacks, following its success internally.

Reflecting on our journey, it’s clear that every technical decision carries broader implications. As the saying goes, “You don’t remove complexity, you just move it around.” By choosing to consolidate our repositories, we tackled the complexity of integration and collaboration head-on. In doing so, we shaped our company’s culture and laid the groundwork for future innovation and autonomy.

For those navigating similar crossroads, I would offer this: Start by understanding the unique needs and goals of your organization. Consider how your technical decisions align with your desired company culture. Are you aiming for tight collaboration or fostering innovation through independence? Are you prioritizing speed or aiming for long-term stability?

Experiment, iterate, and be mindful of the trade-offs. What works for one organization may not work for another. The key is to align your technical strategies with your cultural objectives, ensuring that your tools and practices not only support your business goals but also foster the kind of environment where your teams can thrive.

In the end, the real success lies in creating a sense of community and shared purpose, where everyone, regardless of their role or experience level, feels connected to the larger mission. That’s how you build not just strong systems, but strong teams capable of achieving great things together.
