Brex’s AI-Powered Engine for Identifying Customer Insights

How Brex uses LLMs to scale customer-centric thinking.

Jonas Gebhardt
Brex Tech Blog
5 min read · Aug 22, 2023


One of Brex’s core values is to Inspire Customer Love. In practice, this requires deeply understanding the needs of every single customer and, ultimately, of every single person using Brex’s products. It is difficult to comprehensively process and understand large quantities of customer feedback while moving at the pace of a fast-growing technology company. The industry standard (which is really a workaround) is to rely on quantitative data, typically from product analytics. However, quantitative data is inherently limited: it tells us what happened and how often, but not why. Building customer-centric products requires actionable insights grounded in qualitative data.

Brex processes tens of thousands of qualitative data points each month, including surveys, in-product feedback, support chats, and transcripts from customer interviews and sales calls. These data points range in length from a tweet to an hour-long transcript, and, since Brex increasingly serves a global audience, they arrive in many different languages. Properly analyzing and synthesizing this feedback is time-consuming and error-prone manual labor. Luckily, recent advances in artificial intelligence, and specifically large language models (LLMs) such as GPT, have made it possible to deeply analyze and aggregate all customer feedback automatically and at scale.

During a recent Brexathon, we built a system to do exactly that: aggregate customer feedback from many different channels, use AI to extract topics, summarize data points into actionable insights, and route those insights to the right product teams. Since then, this tool has been used across all levels of the organization to discover customer insights and inform our product roadmap in real time.

High-level data flow: Using AI to categorize and route customer feedback to the right teams
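
To make that data flow concrete, here is a minimal sketch of what such a pipeline might look like. The channel names, topic labels, routing table, and the `call_llm` placeholder are illustrative assumptions, not Brex’s actual implementation.

```python
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM endpoint (e.g., via an internal proxy)."""
    raise NotImplementedError("wire this up to your LLM provider of choice")


@dataclass
class FeedbackItem:
    source: str                      # e.g., "survey", "support_chat", "sales_call"
    text: str                        # raw feedback, possibly not in English
    topics: list[str] | None = None  # filled in by extract_topics


# Illustrative mapping from extracted topics to owning product teams.
TOPIC_TO_TEAM = {
    "travel": "travel-team",
    "expense_management": "expense-team",
    "onboarding": "growth-team",
}


def extract_topics(item: FeedbackItem) -> FeedbackItem:
    """Ask the model for a short list of topic labels covering the feedback."""
    prompt = (
        "List the product topics mentioned in this customer feedback, "
        "one per line, as lowercase snake_case labels:\n\n" + item.text
    )
    item.topics = [t.strip() for t in call_llm(prompt).splitlines() if t.strip()]
    return item


def route(items: list[FeedbackItem]) -> dict[str, list[FeedbackItem]]:
    """Group categorized feedback by the team that owns each topic."""
    by_team: dict[str, list[FeedbackItem]] = {}
    for item in items:
        for topic in item.topics or []:
            team = TOPIC_TO_TEAM.get(topic)
            if team:
                by_team.setdefault(team, []).append(item)
    return by_team
```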

From hackathon demo to crucial planning tool

Earlier this year, we hosted a company hackathon focused on AI and scale. Internal customer research with our GTM and Customer Success organizations revealed a sizable opportunity to improve the product development lifecycle by building tools to extract customer insights across heterogeneous data sources.

A cross-functional team rallied around this idea, shipping an initial prototype in just four days. It consisted of a data visualization tool for segmenting and reviewing trends in the raw feedback, a set of AI-generated insights summarizing common themes across key segments, and, lastly, a conversational interface for asking product questions in natural language (e.g., “What are key feedback drivers for Brex travel?”).
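
As a rough illustration of the conversational piece, the sketch below grounds an answer in feedback snippets that match the question. The keyword filter and the `call_llm` parameter are stand-ins for the real retrieval and model layers, which the post does not detail.

```python
def answer_product_question(question: str, feedback: list[str], call_llm) -> str:
    """Answer a natural-language product question grounded in raw feedback.

    `call_llm` is a placeholder for whatever function forwards a prompt to the
    model; the naive keyword filter stands in for proper retrieval.
    """
    keywords = {w.lower().strip("?,.") for w in question.split() if len(w) > 3}
    relevant = [f for f in feedback if keywords & set(f.lower().split())]
    context = "\n- ".join(relevant[:50])  # cap how much feedback goes into the prompt
    prompt = (
        "Answer the question using only the customer feedback below.\n"
        f"Question: {question}\n\n"
        f"Feedback:\n- {context}"
    )
    return call_llm(prompt)
```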

Together, these features help product and engineering teams stay on top of incoming feedback relevant to their domain. And by surfacing common themes and trends over time, they let company leadership easily identify and prioritize the most important levers for improving our customers’ experience.

For example, the tool makes it easy to explore feature requests for a specific product area, to understand how user sentiment might vary between different roles and customer segments, or to identify bugs related to one of our Critical User Journeys.
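
For a sense of what those explorations look like once feedback is categorized, here is a small pandas sketch; the column names and values are hypothetical, not the tool’s actual schema.

```python
import pandas as pd

# A toy stand-in for the categorized feedback table; the schema is assumed.
feedback = pd.DataFrame([
    {"product_area": "travel",   "category": "feature_request", "role": "admin",
     "segment": "startup",    "sentiment": -0.2},
    {"product_area": "travel",   "category": "bug",             "role": "employee",
     "segment": "enterprise", "sentiment": -0.7},
    {"product_area": "expenses", "category": "feature_request", "role": "admin",
     "segment": "enterprise", "sentiment": 0.4},
])

# Feature requests for a specific product area...
travel_requests = feedback.query("product_area == 'travel' and category == 'feature_request'")

# ...and how sentiment varies across roles and customer segments.
sentiment_by_segment = feedback.groupby(["role", "segment"])["sentiment"].mean()

print(travel_requests)
print(sentiment_by_segment)
```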

Screenshot of the tool at the end of the hackathon, showing a spike in feedback during our response to SVB’s insolvency

The team was able to move quickly thanks to Brex’s mature data infrastructure and an internal framework for spinning up new AI applications within the bounds of our stringent security, privacy, and compliance requirements. In particular:

  1. Data from external tools such as Salesforce and various support vendors was readily available in Snowflake, our data warehouse, making it easy to focus on the business logic of unifying feedback across sources rather than on the data “plumbing” (see the sketch after this list).
  2. An internal AI engineering task force had prepared an in-depth prompt engineering guide that helped the team get up to speed with LLMs. Because we already understood the key strengths and limitations of LLMs, we were able to draw realistic boundaries for the scope of our demo.
  3. Working with external AI providers comes with many privacy and security concerns, which Brex takes extremely seriously. Prior to the hackathon, our Foundations org provided clear guidance defining acceptable usage of AI. We were also able to build on a blessed internal LLM proxy, which freed our team from having to evaluate vendors and worry too much about compliance, security, or privacy.
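
As referenced in the first point above, unifying feedback sources on top of Snowflake might look roughly like the sketch below; the table and column names are hypothetical, and credentials are elided.

```python
import snowflake.connector

# Hypothetical source tables already landed in the warehouse; the real
# pipeline unifies more sources and richer metadata.
UNIFY_FEEDBACK_SQL = """
SELECT 'survey'       AS source, submitted_at AS ts, response_text AS text FROM analytics.surveys
UNION ALL
SELECT 'support_chat' AS source, created_at   AS ts, transcript    AS text FROM analytics.support_chats
UNION ALL
SELECT 'sales_call'   AS source, call_date    AS ts, transcript    AS text FROM analytics.sales_call_transcripts
"""


def load_unified_feedback() -> list[tuple]:
    """Pull one unified list of (source, timestamp, text) rows from Snowflake."""
    conn = snowflake.connector.connect(account="...", user="...", password="...")
    try:
        cur = conn.cursor()
        cur.execute(UNIFY_FEEDBACK_SQL)
        return cur.fetchall()
    finally:
        conn.close()
```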

Since its initial launch, the tool has seen rapid internal adoption. It is now being used to inform product roadmaps, design documents, and improvements to our support experience. By analyzing support cases at the transcript level, it surfaces insights about customer needs at unprecedented granularity.

Summarization is a use case where LLMs really shine, and translation turned out to come for free: feedback submitted in Arabic and Hebrew was automatically translated into English without any special instructions. We also refined the prompts to capture nuances, such as distinguishing customers’ support experience from their product experience. Still, the AI managed to surprise us with unexpected capabilities: when a support conversation touched on multiple topics, it proactively attempted to apply multiple category labels, something the prior categorization system fundamentally couldn’t support.
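
A categorization prompt along these lines might look roughly as follows; the field names and label scheme are assumptions for illustration, not the exact prompts we use.

```python
import json

# Illustrative prompt: keep support-experience feedback separate from
# product feedback, and allow multiple labels per conversation.
CATEGORIZE_PROMPT = """\
You are labeling a customer support conversation.
Return a JSON object with two fields:
  "product_feedback": a list of product topics the customer raised
  "support_feedback": a list of comments about the support experience itself
A conversation may receive multiple labels in each list.

Conversation:
{conversation}
"""


def categorize(conversation: str, call_llm) -> dict:
    """Run the categorization prompt and parse the model's JSON response."""
    raw = call_llm(CATEGORIZE_PROMPT.format(conversation=conversation))
    return json.loads(raw)  # in practice, validate the output and retry on parse errors
```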

We are continuing investments in this tool, with special focus on a few areas:

  • Making evaluation of LLM outputs repeatable to enable confident iteration on prompts, including establishing a human-curated ground truth data set (see the sketch after this list).
  • Automating summarization of topics over time, to power higher-level reporting.
  • Enhancing quantitative SLO and Critical User Journey dashboards with key qualitative data points.
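
On the first bullet, a repeatable evaluation loop could be as simple as scoring the model’s labels against a human-curated set, roughly as sketched below; the examples and the overlap metric are illustrative.

```python
# A tiny, human-curated ground-truth set (illustrative examples only).
GROUND_TRUTH = [
    {"text": "The travel booking flow kept timing out.", "labels": {"travel", "reliability"}},
    {"text": "Support resolved my card issue quickly.",  "labels": {"support_experience"}},
]


def evaluate(categorize) -> float:
    """Score a categorization function by mean label overlap (Jaccard) with the curated labels."""
    scores = []
    for example in GROUND_TRUTH:
        predicted = set(categorize(example["text"]))
        expected = example["labels"]
        union = predicted | expected
        scores.append(len(predicted & expected) / len(union) if union else 1.0)
    return sum(scores) / len(scores)
```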

Driving corporate efficiency with internal AI applications

While customer-facing AI applications have been getting a lot of attention, there is huge potential for leveraging AI internally to improve processes and tools. This is especially relevant given the renewed focus on efficiency in the current macroeconomic environment.

The ROI of internal AI applications is also more predictable, since iteration cycles are fast and stakeholders are inherently aligned. By automating most of the manual labor required for customer research, we enable our teams to focus on shipping great product experiences.

Brex follows an Applied AI philosophy that prioritizes creating business value over costly foundational infrastructure investments. As such, we aim to make it simple and easy for employees to quickly build tools that accelerate Brex and continue to improve our customers’ experience. If you’re interested in building together, we are hiring.
