How we managed to modernize the app from the inside and out

Natalie Halimi
Published in Booking Product
19 min read · Mar 15, 2023


By Natalie Halimi and Stella Zhang

In this article, we share highlights and learnings from our journey of modernizing the codebase and design of the app. The article focuses on the mechanics of getting such a program off the ground, keeping it running, and proving impact.

Inspirational images from the app

Around two years ago, we began talking about how we might help other teams launch features and innovate more easily, while providing our customers with the best possible app experience.

At that point in time, the app was built on a 10+ year-old codebase, with patches of multiple coding languages, the UI and business logic layers tightly coupled, and many unnecessary dependencies between different components and libraries. Introducing even the smallest change could mean months of work involving multiple teams, and these changes often led to bugs. This, of course, led to a higher-than-reasonable time-to-market and a less-than-optimal customer experience.

On the design side, we had an outdated, off-brand, inconsistent experience. Previous attempts to change or fix these usually failed because of the tech complexities and because, given the scope of the problem, no one was really sure where to start.

To top it all off, using old technologies meant it was harder to recruit talent, which at the time was already challenging overall in the tech industry. It was clear that something needed to be done — and the sooner, the better.

In 2021, we began to modernize the app, primarily focusing on making it easier to ship features faster, making our app more stable and giving our customers a reliable, modern, on-brand, consistent experience.

So, where do you start when having to modernize an app of such caliber?

Before we dive in, it’s important to note that we believe modernization shouldn’t be a one-off program but rather a continuous evolution backed by built-in processes across any tech organization. As technologies and design trends constantly evolve, tech organizations must have such processes in place to keep up. However, if you didn’t have those in place for over 10 years, you might just need a centralized effort to help you get there…

To tell the story, we'll take you through the five major challenges we faced, how we solved them, and the key learnings we gathered from each.

Challenge #1: Defining what modernization work will be included in the program

The app is massive, with hundreds of screens and dozens of funnels and sub-funnels, each composed of hundreds of thousands of lines of code, and this is just the client side. In addition, there was much to consider when assessing what to focus our efforts on — from backend refactoring, through modularizing libraries, to client-side code and design refactoring. We spent a full quarter learning, testing out different approaches, and gathering insights. It was clear that to try and do it all at once would be highly difficult to manage and would take way too long. We had to zoom in on a smaller but impactful scope.

How we solved it

Defining the key problems we want to solve

To help us determine what to choose, we went back to our motivation for launching the program to begin with:

  • A longer-than-reasonable time to market and a poor development experience
  • The fact that our app was prone to bugs, ones that were difficult to troubleshoot and fix
  • Our customers were receiving an inconsistent, outdated, and off-brand design

Considering the above, we chose to focus our efforts on two main pillars:

  • Refactoring the client-side codebase by separating the business logic from the UI layer and moving to modern, standard frameworks
  • Refactoring the design by implementing the design foundation and making small UI adjustments aimed to reach a more consistent information architecture and a modern look and feel
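To make the first pillar a bit more concrete, here is a minimal sketch of what separating business logic from the UI layer can look like. This is illustrative Python, not the app's actual Swift/Kotlin code, and every name and value in it (the classes, the tax rate) is invented for the example:

```python
# Illustrative sketch only: business logic lives in a pure, UI-free class,
# while the view layer merely formats what the logic layer computed.
from dataclasses import dataclass


@dataclass
class Quote:
    nightly_rate: float
    nights: int


class PricingLogic:
    """Pure business logic: no UI concerns, trivially unit-testable."""

    TAX_RATE = 0.1  # hypothetical flat tax, for illustration only

    def total(self, quote: Quote) -> float:
        subtotal = quote.nightly_rate * quote.nights
        return round(subtotal * (1 + self.TAX_RATE), 2)


class PriceView:
    """Thin UI layer: depends on the logic layer, never the other way around."""

    def __init__(self, logic: PricingLogic):
        self.logic = logic

    def render(self, quote: Quote) -> str:
        return f"Total: EUR {self.logic.total(quote):.2f}"


view = PriceView(PricingLogic())
print(view.render(Quote(nightly_rate=100.0, nights=3)))  # Total: EUR 330.00
```

The point of the split is that `PricingLogic` can be tested and refactored without touching any UI code, which is what makes change cheap again.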

Understanding the scale of expected work

With the scope of our app modernization program at hand, it was time to figure out the effort and to try and estimate timelines. To better understand that, we went through a tedious exercise of mapping out the entire app, as you can see in the following illustration. Once we did that, we had a good sense of the amount of work that was expected — and it was massive!

Breaking down the app into smaller, digestible parts

Starting from a smaller, but impactful scope

We mentioned earlier that we spent a full quarter learning. The way we did this was through a small pilot that we ran for one quarter, testing out different working models and getting a sense of timelines and possible bottlenecks. By the end of that quarter, we had an implementation strategy and processes we believed would solve all identified bottlenecks. However, the strategy and processes were still hypotheses — they were created based on a very small sample of work. Given the expected disruption to the business and the large amount of resources and time that would be invested, we decided to test our hypotheses before launching the program across the entire organization.

We focused on one strategic funnel — the Accommodation look to book (L2B) funnel — which takes our customers from browsing through our accommodation inventory to booking their desired property. This area was small enough to begin our work on, while offering potentially large impact given its financial importance to the business.

We later learned that opting not to start with the entire app was the right decision, as it gave us time to expand our knowledge, optimize our processes, and, most importantly, gradually spread the word about app modernization while gaining the trust of our leaders and colleagues.

By the time we were ready to scale up to the entire app, we had around 45% of the Accommodation L2B funnel modernized, and we received positive feedback about the program from our peers and leadership. This made onboarding the rest of the organization to the program much easier as we proved app modernization could be done, we showed that we knew how to do it, and we had put in place enough validated processes, tools and documentation to make this type of work as systematic, seamless and frictionless as possible.

— · Our key learnings · —

💡Dedicating time for learning set us up for success

Dedicating a full quarter for learning wasn’t an easy decision, as it meant a delay of 3 months before launching the program. We chose this path because we started from a very ambiguous place, and we didn’t want to figure out critical aspects of the program while working with the entire org and possibly losing its trust if things went wrong or took too long. We relied heavily on collaboration throughout the program, and so trust was a big deal. By dedicating time to learn before officially launching the program, we got results faster and did things more efficiently once we actually launched the program.

💡Proper time estimates are difficult to make before work has begun

This program was one of a kind. We didn't have anything to draw knowledge from, internally or externally, so our initial estimates of the time it would take to modernize the entire app, or even one funnel within it, were way off. We estimated we could modernize 80% of the Accommodation L2B funnel in just under 6 months, and believed we could modernize the entire app within 1 year. In reality, after 6 months, we had 45% of the Accommodation L2B funnel modernized, and as for the entire app, our current estimates stand at 2.5 years in total. We learned that you can't really know in advance how much time such a program will take, so it's wise to make stakeholders aware that estimates are tentative and will become clearer after more substantial work starts.

💡Providing detailed and specific guidelines

Initially, our documentation was fairly broad on what it means for an area to be fully modernized. This led to gaps between what teams assumed was done and what we expected. Once we provided more specific guidelines on the bare minimum required when moving to the new, standard framework, things became easier: teams could plan their work better, and we could be more confident that the work that needed to be done actually was.

Challenge #2: A Massive, Complex Organizational Structure

The tech organization is as big as it is complex. Across multiple business units, there are several dozen teams, each focused on a different strategy, all pushing towards helping our customers experience the world. This leads to a very complex ownership model, where 10 or more teams sitting under different business units can own different parts of a single screen. This meant that when figuring out our modernization strategy, we had to answer the following questions:

  • How do we ensure every part of the app has a team accountable for modernizing it?
  • How do we make sure everyone commits to modernizing their area?

How we solved it

Securing accountability

We’ve already done a quite comprehensive work around mapping the entire app down to the level of a single component. We took advantage of that and worked to identify the owner of each area. Within that process, we onboarded teams to the program and gradually secured commitments for the work needed.

Work agreements were put in place to define the responsibilities of the different app product teams alongside our own responsibilities as a workgroup that is leading the program. We mapped out the accountability using the following RACI chart:

Using a RACI chart to form work agreements and secure accountability

This proved to be a very successful work model that helped us navigate the complex organization structure and the large number of stakeholders we worked with.

As the workgroup leading and facilitating the program, we committed that we would:

  • Provide guidance and support wherever needed
  • Make relevant educational materials available
  • Handle progress tracking and reporting
  • Help remove blockers and solve dependencies
  • And finally, pitch in with hands-on work (wherever necessary and possible)

A communication strategy for a large scale program

To kick-start the program, we broadcast a live stream to the entire app community and ran a roadshow, where we presented the plan to key stakeholders and decision-makers across the organization.

Over the course of almost two months, we had discussions with peers and leaders across the entire tech org. This way, we could onboard everyone, get their buy-in, and secure commitments for the year.

For the day-to-day stakeholder management, we strategically broke down our focus areas into different funnels, relying on our previously mentioned mapping of the app:

  • We opened communication channels for each funnel, allowing us to personalize our messages, which made them less likely to be ignored, as often happens with more generic communication.
  • Within each quarter, we focused on a single funnel, for which we:
    — Ran deep analyses to identify blockers, dependencies, and opportunities for either ensuring commitments are met or pushing for a larger scope, where possible
    — Initiated 1:1 meetings or personal messages with area owners to ensure app modernization is still a priority and risks are being mitigated

— · Our key learnings · —

💡Securing official commitments early on

When we kicked off the program, initially focusing on the Accommodation L2B funnel, our agreements with the teams were less formal and on a peer level. We ended up not reaching our targets. While understandable, given that our assessment was way off, we identified that this was partly because we didn't have official commitments in place. Our peers came with their best intentions, but considering org changes, reprioritization within each org, and the simple fact that people leave or move roles, this was not enough. As we scaled up our operation to the entire app, we learned from our mistakes and made sure we secured commitments at director/VP level for each area of the app. Keep in mind that this process took us the better part of two months, so make sure you take that into account when planning your roadmap.

💡A program manager is a must-have!

Adding a program manager to support us played a critical role in our ability to scale up the program to the entire app. Even with our smaller scope, working with only 16 teams, it was hard to juggle the program management aspects of our work alongside our other product management responsibilities. Just as we scaled up, Rosangela Fonseca joined us as our Technical Program Manager.

💡Succeeding together

One of’s core values is ‘Succeed Together’, and this value couldn’t have been more accurate when it came to this program. We depended on many teams to realize our vision. To secure their continuous collaboration, we had to empathize with their own priorities and challenges, be prepared to help as much as possible, accept it when they raised risks that might prevent them from fulfilling their commitments, and give credit where credit was due. This didn’t only help us succeed but also made the work enjoyable and helped strengthen collaborations across the app tech org.

Challenge #3: An experimentation-first culture

We are known in the industry for our experimentation-first culture. Every small change introduced on our storefronts is done through an A/B experiment. These experiments are defined with business and behavior metrics, measuring whether we improved or hurt the business and/or the customer experience. When modernizing the app, the expected number of changes was massive, translating into hundreds of experiments. We had to make sure these were run as efficiently as possible so that experimentation wouldn't become one of our major bottlenecks.

How we solved it

Determining the success criterion

When setting up an A/B experiment, you need to define the success criterion — the thing on which you'll base your decision. While defining our hypothesis, we realized that our success criterion shouldn't be an observable improvement in customer engagement or conversion, but rather holding the line — ensuring neither the customer experience nor the business is hurt. The following hypothesis definition clarifies our reasoning behind this decision:

The app modernization hypothesis
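To make the "hold the line" criterion concrete, here is a hedged sketch of how such a decision could be framed as a one-sided non-inferiority check on a conversion rate, using a normal approximation. The 0.5pp margin, the 5% significance level, and all the numbers are illustrative assumptions, not our real thresholds:

```python
# Hedged sketch: "hold the line" framed as a one-sided non-inferiority test.
# All thresholds here are invented for illustration.
import math


def holds_the_line(conv_c, n_c, conv_v, n_v, margin=0.005):
    """True if the variant's conversion is not worse than control's by
    more than `margin` (one-sided test at the 5% level)."""
    p_c, p_v = conv_c / n_c, conv_v / n_v
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_v * (1 - p_v) / n_v)
    # H0: p_v - p_c <= -margin; reject (i.e. "line held") when z > 1.645
    z = (p_v - p_c + margin) / se
    return z > 1.645


# Variant converts at 9.9% vs control's 10.0%, 100k users per arm:
print(holds_the_line(10_000, 100_000, 9_900, 100_000))  # True
```

Note the asymmetry with a regular superiority test: a small, statistically insignificant dip still counts as success here, as long as we can rule out a drop larger than the margin.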

Defining an experimentation setup strategy

We knew from experience that when running an A/B experiment, you want to set it up in a way that allows you to easily pinpoint issues, should they come up. In other words, there should be few changes introduced per experiment, to minimize the number of variables impacting the outcome. To get a statistically reliable metric that accounts for weekly trends, each experiment needs to run for a minimum of 14 days and might require several iterations. Considering this work needed to be done within a reasonable time, we had to devise an experiment setup strategy that could move us forward fast enough while safeguarding the business and the customer experience.

After testing out different scenarios, we ended up with a recommended approach. We knew that not every case would fit this flow and that iOS and Android might differ to some extent. However, we wanted to offer our app product teams a best-practice approach they could rely on. In the end, this approach was relevant for many of the cases we witnessed.

The main principles of this approach were: break down big chunks of changes into individual phases, and use the experiment runtime to continue developing the next big chunk of work.

Implementation strategy of app modernization
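The pipelining principle (developing the next chunk while the current experiment is in the field) can be illustrated with a toy timeline calculation. Phase durations here are hypothetical:

```python
# Toy illustration of the principle above: develop phase N+1 while phase N's
# experiment runs. Each phase is (development_weeks, experiment_weeks).

def sequential_weeks(phases):
    """Naive plan: develop, experiment, and only then start the next phase."""
    return sum(dev + exp for dev, exp in phases)


def pipelined_weeks(phases):
    """Recommended plan: use experiment runtime to develop the next chunk."""
    dev_done = 0  # when the team finishes each phase (dev work is serial)
    exp_end = 0   # when the experiment slot frees up
    for dev, exp in phases:
        dev_done += dev
        exp_start = max(dev_done, exp_end)  # experiment needs both ready
        exp_end = exp_start + exp
    return exp_end


phases = [(3, 2), (3, 2), (3, 2)]
print(sequential_weeks(phases), pipelined_weeks(phases))  # 15 11
```

Even in this tiny example the overlap saves four weeks; across hundreds of experiments, that difference is what kept experimentation from becoming the program's bottleneck.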

— · Our key learnings · —

💡Aiming for improvement isn’t always the best course of action

When introducing a change, most businesses will aim to see an improvement. However, there are other factors that are, in some cases, even more important and that will lead to bigger value for the business in the long run. In our case, had we gone with incremental improvements rather than holding the line, experimentation would have taken much longer, significantly increasing the length of the program and wasting far more resources and time.

💡When experimenting, it’s important to be flexible and adjust as you learn

Our suggested experiment setup strategy offered a good enough solution for most cases. However, our best results were achieved via a more personal approach — as we gained more experience with app modernization experiments, we could tailor an experimentation strategy per area to further improve our chances to succeed.

Challenge #4: Design as a major potential bottleneck

In our defined scope, an area would be considered to have a modernized design when:

  • The right design foundation was in place — meaning using the design building blocks that are part of the design systems of the company
  • The design language was implemented

When we started working, we still didn't have a design language defined. There was a rough idea of what it should look like, and we did have overarching app design standards, but we still had to translate each screen into the new, modern, unified design language. In addition, adoption of our Design Systems components was low at that point. We needed to make sure the Design Systems team and infrastructure were ready for a rapid increase in adoption.

How we solved it

Creating processes to scope the work and track progress

The workgroup designer (initially Catalin Bridinel, later Bruno Lopes) composed a process that would allow us to gradually take each product team from onboarding to delivery. This was crucial, as it helped us understand the scope of work expected around the design language and provided us with a base for measuring our progress on design deliverables. The following chart details this process:

A flowchart that demonstrates the collaboration process we’ve built to create a modern design for the entire app

Defining the new design language out of the teams’ vision

When translating our existing design to the new design language, we had two options:

  1. Looking at the current state of things
  2. Looking at the teams’ vision

With option number 1, all we had to do was go through the app, take snapshots of the current design and craft the new language on top of that. With option number 2, we had to take into consideration the short/medium term plans each product team had for their area and work together with them to adapt that to the new design language.

We decided to go with the second option. It made the most sense to us for two reasons. The first, and most important, was the assumption that a team dedicated to an area, which has been working on and consciously optimizing it for a while, will know what works best for that area. The second was that it saved us time in the long run: had we ignored the teams' plans, their design, development, and experimentation work would have been doubled, and their vision would have been delayed.

Preparing for an increased adoption rate

When we started our modernization program, the design systems were in a fairly mature place, with a competent team leading the scaling up of coverage, ongoing support, and communication. However, up until that moment, adoption of the design systems had been low. We weren't sure whether the tooling, processes, and documentation in place would be sufficient to handle the expected increase in adoption.

The Design Systems team (at the time led by Mauricio Zaquia) worked closely with us to optimize the team's support processes, aligned their roadmap for growing component coverage with the app modernization program's needs, and ensured relevant documentation was in place. This alignment produced a work agreement with expectations and responsibilities in place, which helped us a lot down the road as adoption of the design foundation grew at a rapid pace.

— · Our key learning · —

💡Establishing processes wherever possible

Building processes helped us achieve fast progress in many parts of our work and prevented major potential bottlenecks. The comprehensive process we built around design was the fruit of testing and iterating: collecting learnings, identifying patterns, and turning them into reusable workflows.

Challenge #5: Measuring progress and impact on the business

There were two main questions we needed to answer:

  • How can we track progress so that we can ensure we’re moving forward and can create reports that will be shared with our peers and leaders?
  • How can we prove app modernization is indeed positively impacting the business, as we claimed at the start of the journey?

How we solved it

Getting into the nitty-gritty for proper progress tracking

Once again, our tedious work around mapping the entire app came to our aid. By breaking down the app thoroughly from funnels to components, we could track progress from the biggest to the smallest area.

Initially, we relied on Google Sheets to track progress, asking teams to update their status per component and automatically translating that into a numerical value indicating progress. This tracking approach served us well while we were only working on the Accommodation L2B funnel. However, it proved harder to manage and prone to errors when we scaled our operation to the entire app. We ended up uploading the data we had in Google Sheets to program management software, where we could assign tickets to teams and automatically generate reports and alerts where needed.
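As a hypothetical sketch of that roll-up, per-component statuses can be mapped to completion fractions and averaged into a funnel-level progress number. The status names, weights, and components below are invented for illustration, not our actual tracking schema:

```python
# Hypothetical sketch of status-to-progress roll-up. Status names, weights,
# and component names are illustrative only.

STATUS_WEIGHT = {
    "not_started": 0.0,
    "in_development": 0.25,
    "in_experiment": 0.75,
    "fully_modernized": 1.0,
}


def funnel_progress(components):
    """Percentage of a funnel modernized, from per-component statuses."""
    if not components:
        return 0.0
    done = sum(STATUS_WEIGHT[status] for status in components.values())
    return round(100 * done / len(components), 1)


l2b = {
    "search_box": "fully_modernized",
    "search_results": "in_experiment",
    "property_page": "in_development",
    "booking_form": "not_started",
}
print(funnel_progress(l2b))  # 50.0
```

The same aggregation repeats one level up (funnels into the whole app), which is what let us report a single "X% modernized" number to leadership.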

Our automatic cadences included:

  • Direct messages to assigned owners, alerting them about upcoming and missed deadlines
  • Monthly and quarterly reports sent to different tiers of peers and managers, notifying them about upcoming commitments and progress made
  • A dashboard that was updated monthly, showing the current status of the program and allowing people to filter the information by area, department, or team

Finding the most efficient and reliable way to prove business impact

You might recall we mentioned earlier that most of the targets we set out to achieve weren't measurable per change introduced. Only after enough of the app was modernized could we look at the cumulative impact.

But why did we need to prove impact to begin with?

If you have experience in a big tech company, you know that a program or a project might be deprioritized before it reaches completion. This is especially true of long-term projects. When we started modernizing the app, the leadership team had our back, but we knew we had to prove impact at some point to keep the program top-of-mind and a high priority. In addition, while intuitively knowing our work was bringing a positive impact, we were curious to learn by how much. We considered these two variables to evaluate our success:

  • Development velocity — how fast can our product teams develop new features and ship them to market? Time-to-market has been on our radar from the very beginning, so it made sense to measure success by looking at the change in velocity.
  • Product quality — how many bugs or issues come up during or after the release of a feature? We knew these were costing us time and money, so showing an improvement there proves direct value to the company.

Getting actual numbers on development time or the number of issues raised was an impossible task, or at least one that would have cost us a lot of time and resources. Our goal was to find a reliable, measurable, and reusable framework we could use to measure impact periodically, with minimum effort. This led us to a survey, in which we asked our tech teams (developers, engineering managers, and product managers) to indicate to what extent they agreed with two statements connected to our chosen success criteria.

We also made sure we knew, for each respondent to the survey:

  • How advanced modernization is in their area — meaning, is enough of the codebase modernized to make a difference
  • How long they’ve been in the company — meaning, do they have experience with the old codebase with which they can compare their current experience
  • What is their role

The results were staggeringly positive — with only 60% of the app modernized (at the time of the first survey), 67% of respondents agreed development velocity has improved, and 70% agreed our app is more stable than before.
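As an illustration only (the sample data below is fabricated, not our survey results), segmenting agreement rates by the respondent attributes listed above could look like this:

```python
# Illustrative sketch with fabricated data: agreement rates from survey
# responses, segmented by the respondent attributes we tracked.

responses = [
    # (role, tenure_years, area_modernized_enough, agrees_velocity_improved)
    ("developer", 4, True, True),
    ("developer", 1, True, True),
    ("eng_manager", 5, True, False),
    ("product_manager", 3, False, True),
]


def agreement_rate(rows, predicate=lambda r: True):
    """Percent of respondents matching `predicate` who agreed (None if no
    respondent matches)."""
    subset = [r for r in rows if predicate(r)]
    if not subset:
        return None
    return round(100 * sum(r[3] for r in subset) / len(subset))


overall = agreement_rate(responses)
# Respondents with 2+ years of tenure can compare against the old codebase:
veterans = agreement_rate(responses, lambda r: r[1] >= 2)
print(overall, veterans)  # 75 67
```

Segmenting by tenure matters because only people who worked with the old codebase can meaningfully report a before/after difference; segmenting by modernization level separates perceived impact from anticipation.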

Showing perceived velocity and quality impact of app modernization on the app
Showing perceived velocity and quality impact of app modernization on the app

The survey covered our tech modernization impact; we still needed to show a positive impact from our design modernization. For the latter, we relied on our A/B experiments. We mentioned before that our success criterion was causing no harm. However, in many of our design language experiments, we saw significant improvements in conversion and customer engagement. As we kept track of all of those experiments, we could easily show the impact they generated.

— · Our key learnings · —

💡Automating as much as possible

Initially, we relied on a semi-automated process, counting on teams to update their progress using our app mapping sheet. This worked well while we were operating at a small scale. However, the moment we expanded our operation to the entire app, moving from 16 to almost 60 teams, managing this manual tracking became a nightmare. We had to chase after people, there were many mistakes in the status updates, and we found ourselves spending a lot of our time just correcting tracking errors. We would recommend moving to a fully automated system as early as possible. For us, it was Jira, as it was already widely used in the company. But really, it can be any project management software that allows you to assign tickets, track progress, and generate reports.

💡When running a survey, use the help of professionals

The results we’ve shared were actually from a second iteration of our survey. Our first iteration wasn’t as insightful. Initially, we worked alone, 2 Product Managers trying to figure out how to compose the survey and what would be the best structure for it. When creating the second one, we decided to get help from our data and research internal resources. This made a huge difference. Our second survey was so insightful that it not only provided us with good visibility on the impact of modernization but, it also provided some important learnings that helped us further improve our support and program management processes.

Wrapping it up

It’s important to mention that our success in driving the app modernization program, and the progress made so far, wouldn’t be possible without our track leaders (for the majority of the program) — Alexandru Litu, Thijs van As, Gustavo Contreras, Tara Nielsen and Puja Nanda, who believed in this vision from the beginning, helped push it through the organization, and always had our back. Having a company objective of strengthening our foundation also plays a major part in the ongoing success of this program. At the end of the day, most people understand the value of getting your tech and design up to date. Still, not many companies would be willing to commit to such a multi-year endeavor, and it’s great to work at a company that does just that.

As mentioned at the beginning of this article, modernization shouldn’t be considered as a one-off program. While we’re heading towards the last milestone that will take us to a 100% modernized codebase and design, our Core tech teams are already busy defining ways to ensure that we never again end up in a situation where such a massive one-off program is needed.



Natalie Halimi
Booking Product

Tech enthusiast who loves to solve complex problems in creative ways