In step with Brevo’s rebrand: rolling out our navigation revamp with user research

Claire Jin
Published in brevo-official
Oct 24, 2023

Decoding the Process: from initial insights to release and beyond.

Since its creation in 2012, Brevo (formerly Sendinblue) has grown significantly. Several companies have joined us, leading to the launch of new products and features. Little by little, we moved from an email marketing tool to a full CRM suite. Our platform soon encountered limitations in its information architecture, particularly in its navigation.

Mockup showcasing Sendinblue’s navigation structure.
Sendinblue navigation before the rebranding. Illustration by Sendinblue.
Wireframe displaying Sendinblue’s multi-level navigation: level 1 in the header, levels 2 and 3 in the sidebar, and tabs within the content acting as either level 3 or level 4.
Wireframe illustrating Sendinblue’s navigation levels. Illustration by Sendinblue.

Continuous user research showed that our navigation structure made it hard for users to discover what we had to offer. Moreover, our technical approach often felt out of reach for non-marketing professionals. We recognized the importance of highlighting our extensive product offerings and elevating the overall user experience. However, finding the right time to revamp a product’s navigation can be tricky, and such broad projects often get postponed. The transition from Sendinblue to Brevo, coupled with a complete update of our visual identity, gave us a great chance to tackle this issue. In this article, I’ll walk you through our user research strategy, with a focus on our navigation revamp.

A timeline of the different research and design phases described in the article.
Information Architecture timeline. Illustration by Brevo.

Laying the groundwork through initial research and framing

The initial phase of our project was dedicated to intensive data gathering. We started by leveraging our research repository, delving into valuable data from past projects. Drawing on the extensive knowledge within our organization, we gathered insights from various teams. For instance, the Support team provided us with relevant tickets, and in collaboration with the Marketing team, we leveraged a comprehensive 360-degree quantitative study to learn more about key user segments and patterns.

To streamline this process within the Product team, we organized a one-week data collection sprint. The objective was to gather crucial data from various tribes, specifically focusing on elements such as personas, user journeys, analytics, and current initiatives. Afterwards, we initiated a mapping exercise employing the Object-Oriented UX (OOUX) method.

Screenshot from a FigJam session showing various objects listed on post-its, organized with team comments.
OOUX online workshop. Brevo screenshot.

One of the standout insights from our OOUX analysis was our evaluation of the Add more apps menu item. Previously, this option directed users to a dedicated page that provided brief explanations of each app, where users could also activate and deactivate the apps. While this page was not solely for onboarding, it played a pivotal role during that phase. As we considered enhancing discoverability by placing the apps directly in the menu, we faced the challenge of potentially losing this explanatory step. Moreover, our dive into the conceptual model highlighted a recurrent issue: many objects had jargon-heavy names and closely overlapping definitions. This revealed a significant clarity gap in our navigation and in how we introduced objects.

Our tight schedule required close collaboration among team members. During this phase, we pinpointed knowledge gaps and inconsistencies, refining our project’s direction and research plan.

Decoding the user’s lexicon through a cloze test

Before moving forward, we needed to closely examine how users perceived and interacted with our content. This journey started with a cloze test, focusing on the main objects of the platform (often found in levels 1 and 2 of the navigation). The goal of this test was to assess understandability and clarity: given a definition of the object, do users naturally use the word we use to describe it? Is there any consensus on another term?

Screenshot of a cloze test, with a sentence describing a platform object and testers’ various answers for the missing word.
Screenshot of a cloze test question. Captured by the author.

Among the terms we evaluated, some proved highly problematic. For example, no participants, regardless of language, used the term workflow when discussing our automation feature. While workflow might technically describe a kind of automation, our users consistently referred to them simply as automations. Using this insight, and supported by previous research findings, we chose to simplify: within the automations feature, users now create and manage automations.
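
To give a concrete idea of how we read such results: a cloze test boils down to tallying the words participants supply for each blank and checking whether they converge on one term. Here is a minimal sketch of that tally; the answers below are hypothetical, not our actual study data.

```python
from collections import Counter

# Hypothetical cloze-test answers for one prompt (the blank describing
# our automation object), pooled across languages after translation.
answers = [
    "automation", "automation", "scenario", "automation",
    "sequence", "automation", "automation", "flow",
]

counts = Counter(a.lower() for a in answers)
total = sum(counts.values())

# Consensus share per term: how many participants converged on each word.
for term, n in counts.most_common():
    print(f"{term:<12} {n}/{total} ({n / total:.0%})")
```

A term that captures a large majority share signals consensus, while a flat distribution signals a naming problem worth escalating to the content team.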

Armed with the insights from the cloze test, we had rich discussions with the tribes. At times, it led to iterations with our content team, while at other times, it underscored the need to amplify user education and guidance to ensure comprehension.

Deciphering the user’s mental models through card sorting

With the output of the cloze tests, we needed to explore users’ mental models: how they group and categorize the different objects in a way that makes sense to them.

We conducted both moderated and unmoderated card sorting sessions. Moderated sessions allowed us to engage directly with users and glean deeper insights from their thought processes. Unmoderated sessions, on the other hand, offered participants more flexibility and were conducted on a larger scale in different languages. By blending these approaches, we gained a comprehensive understanding of our users’ perspectives.

Cards displaying platform content spread across a table, grouped into user-defined categories labeled with post-it notes.
In-person card sorting session. Photo by the author.

As expected, we found that the majority of participants were discovering many features for the first time, which led to many questions, curiosity, and sometimes frustration: “It’s amazing how much is in the tool; it’s a shame we aren’t aware.”

Certain categories emerged quite clearly from the card sorting. For instance, there was a new category covering technical features, and another one focused on business elements such as sales pipelines and payments.

We noticed users often wanted to place specific cards in several locations. This underscored the importance of ensuring an easy and intuitive flow between different applications to accommodate our users’ varied usage patterns.
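
For readers curious how such categories can be surfaced from raw sorts: a common analysis is to build a co-occurrence matrix (how often two cards land in the same group) and cluster it. The sketch below uses SciPy’s hierarchical clustering on hypothetical sorts; the card names and groupings are illustrative, not our study data.

```python
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical sorts: each participant's grouping of platform objects.
cards = ["Email", "SMS", "Automations", "Deals", "Payments", "API keys"]
sorts = [
    [{"Email", "SMS"}, {"Automations"}, {"Deals", "Payments"}, {"API keys"}],
    [{"Email", "SMS", "Automations"}, {"Deals", "Payments", "API keys"}],
    [{"Email", "SMS"}, {"Automations", "API keys"}, {"Deals", "Payments"}],
]

# Count how often each pair of cards was placed in the same group.
idx = {c: i for i, c in enumerate(cards)}
co = np.zeros((len(cards), len(cards)))
for groups in sorts:
    for group in groups:
        for a, b in combinations(group, 2):
            co[idx[a], idx[b]] += 1
            co[idx[b], idx[a]] += 1

# Convert co-occurrence to a distance and cluster hierarchically.
dist = 1 - co / len(sorts)
condensed = dist[np.triu_indices(len(cards), k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
for card, label in zip(cards, labels):
    print(card, "-> cluster", label)
```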

Tracing the user’s path through a tree test

Having gained important insights from our card sorting sessions, we advanced to the next research step: tree testing. This method enabled us to refine our emerging navigation by observing real user interactions and collecting feedback.

Screenshot depicting our tree test with structured navigation levels and their sub-levels.
Screenshot showcasing the design process of our tree test. Captured by the author.

Providing users with this simplified version of navigation without the influence of additional design or guidance was valuable. It allowed us to see whether this new structure aligned with users’ mental models before investing time in visual design and development. We identified areas that were working well and, more importantly, those that were confusing.

We noted several inefficiencies in the task flows, especially with items buried deep within multiple sublevels. For instance, some dashboards were particularly challenging to locate: “It’s something I want to see right when I go to this app; I shouldn’t be clicking again and again.” This sparked valuable discussions on hierarchy and on what qualifies as key information, helping us prioritize and implement specific changes and setting the stage for further refinement and subsequent testing in the following phases.
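
Tree-test tools typically summarize each task with two numbers: success rate (did the participant end on the right node?) and directness (did they get there without detours?). A minimal sketch of that computation, with hypothetical click paths:

```python
# Hypothetical tree-test logs for one task: the sequence of nodes each
# participant clicked while looking for an app's dashboard.
ideal = ("Marketing", "Email", "Dashboard")
paths = [
    ("Marketing", "Email", "Dashboard"),              # direct success
    ("Contacts", "Marketing", "Email", "Dashboard"),  # indirect success
    ("Marketing", "Email", "Templates"),              # failure
]

# Success: the participant ended on the correct node.
successes = [p for p in paths if p[-1] == ideal[-1]]
# Directness: they followed the ideal path with no backtracking.
direct = [p for p in successes if p == ideal]

print(f"success rate: {len(successes) / len(paths):.0%}")   # 67%
print(f"directness:   {len(direct) / len(successes):.0%}")  # 50%
```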

Gauging user interactions through click testing exploration

After refining our information architecture based on tree testing results, we initiated iterations on the new components within our design system, transitioning into the next phase: click testing. This prototype testing stage was crucial for visually evaluating our new information architecture and updated visual identity while our new name was still confidential.

We carried out this process collaboratively, involving designers, product managers, and other stakeholders. During the click testing phase, our focus was on the first two levels of navigation, the homepage, and overall patterns. We tried out different versions of the menu, playing around with the look and feel and the order of items to further enhance the user experience.

GIF from the click-testing prototype illustrating the different levels of navigation.
Anonymized click testing prototype, showcasing our new navigation. Prototype by Brevo.

By observing where users clicked to perform specific tasks, we gathered valuable feedback about our navigation, layout, and design choices. Overall, our new menu structure and UI received positive feedback, with comments such as, “it’s clearer and more modern than the old version.” The tests also demonstrated good discoverability and findability for apps and features that were previously unknown to most users. These insights enabled us to approach the launch more confidently, making informed decisions along the way.
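
One simple way to quantify what we observed in these sessions is to tally participants’ first clicks per task and compute the hit rate on the expected target. The sketch below is illustrative; task names, menu labels, and counts are made up.

```python
# Hypothetical first-click logs per prototype task.
first_clicks = {
    "Set up a new automation": ["Automations", "Automations", "Campaigns",
                                "Automations", "Contacts", "Automations"],
    "Find transactional emails": ["Transactional", "Campaigns",
                                  "Transactional", "Transactional"],
}
expected = {
    "Set up a new automation": "Automations",
    "Find transactional emails": "Transactional",
}

for task, clicks in first_clicks.items():
    hits = sum(c == expected[task] for c in clicks)
    print(f"{task}: {hits / len(clicks):.0%} first-click hit rate")
```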

Measuring impact during and beyond launch through A/B testing and post-implementation metrics

While our click testing was promising, removing Add more apps was a significant shift, so it was essential to validate these results in real-world contexts and watch for any unforeseen user behaviors. During the launch, we ran an A/B test comparing two versions of our new interface: one featuring the Add more apps tab and another offering direct access to apps via the menu. Over a month, this test assessed the business and usage impact of these variations. Based on the results, particularly regarding app discoverability, we confirmed our decision to remove the Add more apps tab from the menu.
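
For the statistically minded: a comparison like this typically reduces to a two-proportion test on a conversion metric such as app activation. A minimal sketch with statsmodels; the counts below are made up for illustration, not our actual results.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical one-month results: users who activated at least one app,
# per variant (A keeps the Add more apps tab, B exposes apps in the menu).
activated = [1180, 1420]   # activations per variant
exposed = [10000, 10000]   # users assigned to each variant

stat, p_value = proportions_ztest(count=activated, nobs=exposed)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value here would support the variant with direct menu access.
```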

Mockup showcasing Brevo’s navigation structure.
Brevo navigation after the rebranding. Illustration by Brevo.
Wireframe displaying Brevo’s multi-level navigation: levels 1, 2 and 3 in the sidebar, and tabs within the content acting as either level 3 or level 4.
Wireframe illustrating Brevo’s navigation levels. Illustration by Brevo.

At the same time, we deepened our understanding of how the rebranding was perceived through continuous, collective research. We defined a range of metrics to follow. Beyond business metrics, we evaluated general satisfaction with tools like NPS, CSAT, and CES. We also implemented tracking for more specific metrics. For instance, we closely monitor the feature discovery rate, which measures how quickly new users first engage with each feature, and the new feature adoption rate, which assesses how readily users adopt newly introduced elements. Additionally, we track usage metrics such as cross-feature usage, the frequency at which users switch between features within a single session, and we correlate feature usage to check whether the features we expected to be used together really are.
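
As a rough illustration of these usage metrics, here is a minimal pandas sketch over a hypothetical session-by-feature table; the feature names and values are made up.

```python
import pandas as pd

# Hypothetical per-session usage flags: 1 if the feature was used in
# that session. Real data would come from product analytics exports.
sessions = pd.DataFrame(
    {
        "email":       [1, 1, 0, 1, 0],
        "automations": [1, 1, 0, 0, 0],
        "deals":       [0, 0, 1, 1, 1],
        "payments":    [0, 0, 1, 0, 1],
    }
)

# Cross-feature usage: share of sessions touching more than one feature.
multi = (sessions.sum(axis=1) > 1).mean()
print(f"cross-feature sessions: {multi:.0%}")

# Pairwise correlation flags features that are (or aren't) used together.
print(sessions.corr().round(2))
```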

Conclusion

Our journey to reshape our SaaS product’s information architecture and navigation was both enlightening and challenging. Here’s what we’ve learned:

  • Strategic Alignment: ensuring that all stakeholders share a unified vision is vital. This approach nurtures collaboration and steers the project in the right direction.
  • Clear Scope: by precisely defining the initiative’s boundaries, you can create a roadmap that avoids extra steps and stays on track.
  • Resource Allocation: for complex tasks such as designing a new navigation structure and establishing a solid information architecture, the full attention of a dedicated core team is essential.
  • Mixed-Methods Approach: leveraging various research methods provides comprehensive insights, fostering alignment across teams.
  • Continuous Improvement: the journey doesn’t conclude with the launch. With continuous feedback, from metrics to interviews, you should continually refine and evolve your information architecture.

While this article has highlighted the different stages of progress, we are committed to continuing our efforts. At the moment, we’re actively iterating on content, refining object names for intuitive discovery and use, and introducing menu tooltips to clarify object functionalities and benefits. Stay tuned!

Further reading

  • Information Architecture: For the Web and Beyond by Louis Rosenfeld, Peter Morville, and Jorge Arango
    A comprehensive guide through information architecture, connecting foundational IA components and research to strategic design and implementation.
  • Mental Models: Aligning Design Strategy with Human Behavior by Indi Young
    A detailed exploration of leveraging mental models to understand user behavior and effectively inform design strategy.
  • Card Sorting: Designing Usable Categories by Donna Spencer
    A guide exploring how card sorting can streamline information architecture and align navigation with user expectations.

I’d like to thank Maud Laville and Cyril Leton for their insights.
