Designing a Modern Search Experience

A case study on the Global Search project at Rex and how we rebuilt our search to solve for our users’ biggest pain points and elevate our app experience.

Cody Lindsay Gordon · Published in Bootcamp · Jun 11, 2023 · 8 min read

A screenshot of an application showing a search experience and several results including contacts and properties.

My role in this project was Design Lead & Project Manager.

How It Started

After implementing our new customer feedback processes at Rex, I started to see trends appear in our feedback database. Now that feedback was properly categorised and weighted, a major theme began to surface: our search experience.

The volume of search-related feedback was higher than for any other feature by a significant margin. We knew search was a core part of our experience, so this raised alarms. I used this as evidence that we needed to begin this project to investigate and solve the problems with our search experience.

A screenshot showing the volume of feedback received for different categories. The volume is represented as a user impact score. The global search category has a score of 121, much higher than the next which has a score of 47.
Volume of feedback (measured as user impact score here)

Research & Discovery

Goals:

  • Learn more about customer problems with search
  • Gather more feedback on the search experience
  • Synthesise problem set

I started by diving into the feedback we had already received. Categorising this further gave us a list of specific problems our users had with the search experience.

A screenshot showing a list of different categories of feedback about the global search. Some examples are “Improve contact name matching” and “Search for contact by address”.
Feedback categorised into specific problems

User Research Sessions

With this base understanding of the problem set, I reached out to our users to get further insight into how people used search and what their experience was like. These sessions expanded our pool of feedback, as I could dive deeper into some of the issues raised, and also gave me a chance to have users demonstrate some of the issues in person.

A screenshot of a kanban board showing recordings of user interviews at various stages

Session Playback

Having users demonstrate their interactions is a start, but to see how users were actually interacting with the search, I used Fullstory to replay real sessions. This was key to identifying low-level interaction and usability issues that had not been reported by users.

Synthesis & Insights

There were many specific problems identified with the search experience, but two broad themes emerged.

There were major functional gaps between how users expected the search to work and how it actually worked (our functionality didn’t align with their mental model). For example:

  • Users expected to be able to search for a contact by their home address
  • Users expected to be able to find a contact by their name even if it was slightly misspelled (or a variant, such as Stephen vs Steven)
  • Results weren’t presented in an order users found useful or intuitive

There were also interaction problems due to the information displayed and how the user could interact with the results. For example:

  • It was unclear why some results were shown (fields that matched the search term were hidden)
  • Desired or useful information was not present in the results
  • It was difficult to open results in a new tab

Two groups of post-it notes outlining various functional problems and interaction problems

Scoping to Ship Fast

From this analysis, I found that solving the functional problems would have the greatest impact. I saw an opportunity to focus on a backend-only release that could address most of these problems without requiring frontend changes, meaning:

  • Getting improvements into the hands of users faster
  • Faster iteration cycles with no frontend dependency

With this approach we could split the work into two streams, shipping the backend improvements sooner while planning to release the frontend improvements at a later point.

Problem Statement

Search is core to how our users interact with and navigate our app. There are major functional gaps between what our users expect of our search and what it is actually capable of. Even minor inconveniences add up over many interactions and create a frustrating experience.

Hypothesis

Rebuilding our search infrastructure with a modern implementation that meets user expectations will have a significant positive impact on the overall user experience of our app.

Setting Baseline Metrics

To measure the current experience against any changes we made, and to ensure we were moving towards a positive outcome, I took baseline measurements of key metrics. This included events to track things like time to first result, number of searches before clicking, and abandoned search rate. In addition, an in-app survey was used to measure a user satisfaction score, which came in at 3.2 out of 5, very low compared to other areas of our app.

The CSAT score for the search experience was very low
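
As a rough illustration of how a metric like abandoned search rate can be derived, here is a toy sketch over raw search events; the event shape and field names are hypothetical, not our actual analytics schema.

```python
# A toy sketch of deriving the abandoned-search rate from raw events.
# The SearchEvent shape is hypothetical, not our actual analytics schema.
from dataclasses import dataclass

@dataclass
class SearchEvent:
    session_id: str
    query: str
    clicked_result: bool      # did the user open any result?
    ms_to_first_result: int   # latency until results rendered

def abandoned_search_rate(events: list[SearchEvent]) -> float:
    """Share of searches where the user never clicked a result."""
    if not events:
        return 0.0
    abandoned = sum(1 for e in events if not e.clicked_result)
    return abandoned / len(events)
```

Time to first result and searches-before-click can be computed from the same event stream.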

Redesigning the Search Engine

The major piece of work involved implementing a new search engine using Elasticsearch. I worked closely with a backend developer to iterate on different search parameters and relevance-scoring approaches to arrive at a result that solved the set of functional problems we had identified. Some of the parameters we looked at were:

  • What fields a search term would match against, and how differently formatted terms (such as a phone number) could target different fields
  • When we should apply fuzzy matching
  • What weighting was applied to different fields in order to get the most relevant results
  • How relevance could be modified by things like record ownership
  • Implementing keywords and exact matches for more specific searches

There were a lot of levers to pull, and we also had to keep performance in mind so we weren’t slowing down the indexing or the search itself. Because we were aiming for a backend-only release, we were able to iterate rapidly in this phase.
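
To make the levers concrete, here is a minimal sketch of the kind of query we iterated on, written with the official Elasticsearch Python client; the “contacts” index, field names, and weights are illustrative assumptions rather than our production configuration.

```python
# A minimal sketch of this style of query, using the official
# Elasticsearch Python client. The "contacts" index, field names,
# and weights are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def search_contacts(term: str, user_id: str):
    query = {
        "function_score": {
            "query": {
                "multi_match": {
                    "query": term,
                    # Weight name matches above address and phone matches.
                    "fields": ["name^3", "address^2", "phone"],
                    # Tolerate small misspellings ("Stephen" vs "Steven").
                    "fuzziness": "AUTO",
                }
            },
            # Nudge relevance upwards for records owned by the searcher.
            "functions": [
                {"filter": {"term": {"owner_id": user_id}}, "weight": 1.5}
            ],
            "boost_mode": "multiply",
        }
    }
    return es.search(index="contacts", query=query)
```

Each knob here (field boosts, fuzziness, the ownership weighting) corresponds to one of the parameters listed above.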

A (simplified) system diagram showing how different types of input match against fields on different record types
A screenshot showing an API call and the returned data
Initial testing was done via API calls with real databases

Release and Iteration of the Search Engine

We had done extensive testing in our own environments, but the only way to make sure this was going to work was to put it in the hands of real users. Because this was a high-risk change, I started small and reached out to a select group of users to test it out first. Rapid feedback was crucial, so a feedback button was added right into the search results.

This strategy paid off, as we were able to uncover unforeseen issues and address them quickly. I slowly added more users to the early access program and monitored the feedback channels and the metrics closely, tweaking the search parameters to see what impact they had.

Once we were satisfied that the new search was performing well and no new issues had been identified for some time, it was enabled for a new region every few days until it was fully released.

A dashboard with different graphs showing performance metrics
Performance metrics being monitored
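
The gate for this kind of staged rollout can be very simple. A hypothetical sketch, with account IDs and region codes invented for illustration:

```python
# Hypothetical sketch of the staged-rollout gate; account IDs and
# region codes here are invented for illustration.
EARLY_ACCESS_ACCOUNTS = {"acct_1042"}  # opt-in testers
ENABLED_REGIONS = {"AU", "NZ"}         # expanded every few days

def use_new_search(account_id: str, region: str) -> bool:
    """Decide whether a request is served by the new or legacy engine."""
    return account_id in EARLY_ACCESS_ACCOUNTS or region in ENABLED_REGIONS
```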

Things That Worked, Things That Didn’t

Some of the lessons learned during this rollout:

  • Fuzzy matching against email fields = bad idea
  • Auto-abbreviating road types only works sometimes (interchanging “street” and “st” makes sense, interchanging “chase” and “ch” doesn’t because “chase” is commonly used outside of addresses)
  • Matching against alternative name spellings = great idea (finding “Steven” when searching for “Stephen”; customers loved this 😍)
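
Name variants and the safe subset of road-type abbreviations can both be expressed as synonym filters in the index settings. A sketch under those assumptions, with illustrative values rather than our shipped configuration:

```python
# Illustrative index settings (not our shipped configuration) showing
# how name variants and safe road-type abbreviations could be handled
# with Elasticsearch synonym filters.
index_settings = {
    "settings": {
        "analysis": {
            "filter": {
                "name_variants": {
                    "type": "synonym",
                    "synonyms": ["stephen, steven", "jon, john"],
                },
                "road_types": {
                    "type": "synonym",
                    # Only abbreviate where the expansion is unambiguous:
                    # "st" <-> "street" is safe, "ch" <-> "chase" is not.
                    "synonyms": ["st, street", "rd, road", "ave, avenue"],
                },
            },
            "analyzer": {
                "contact_name": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "name_variants"],
                },
                "street_address": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "road_types"],
                },
            },
        }
    }
}
```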

Upgrading the Search Interface

The other stream of work was to fix the interaction problems with the search interface. I worked alongside another product designer to redesign the search interface to address the identified problems and to take advantage of the capabilities of the new search engine.

A screenshot of a search interface with comments overlaid outlining various interaction problems
The previous search interface and some of the problems that were identified

Some of the improvements include:

  • Highlighting of matching search terms to show why results have been returned
  • Pulling in data from related records when matching against those records (for example, the search matches the name of the property owner)
  • Using the new relevance scoring to display a varying number of results of each type of record
  • Improvements to the information displayed (based on the most common user requests)
  • Keyboard navigation and search tips for advanced queries

An application showing a search experience and several results including contacts and properties, animated to show search results being navigated using a keyboard.
The new search interface featuring keyboard navigation
Matching terms are highlighted to emphasise why a result has been returned
Related data is shown when it matches the search term
Categories allow the user to quickly jump to relevant sections when the search is broad
Search tips improve discoverability for advanced queries
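
The term highlighting maps directly onto Elasticsearch’s highlight API. A minimal sketch, reusing the hypothetical “contacts” index and field names from earlier:

```python
# Minimal sketch (assumed index and field names) of using the
# Elasticsearch highlight API so the UI can show why a result matched.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="contacts",
    query={"multi_match": {"query": "stephen", "fields": ["name", "address"]}},
    # Ask Elasticsearch to wrap matched terms in <em> tags per field.
    highlight={"fields": {"name": {}, "address": {}}},
)

for hit in response["hits"]["hits"]:
    # e.g. {"name": ["<em>Steven</em> Smith"]}; the UI renders these
    # fragments so users can see exactly what matched.
    print(hit.get("highlight", {}))
```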

Measuring the Impact

We knew we were on the right track, as we had been monitoring the usage metrics and feedback/satisfaction scores during the implementation process, but post-release we could gather a larger amount of data to properly measure the impact. The results showed huge improvements across the board:

  • It took users 38% less time to find the result they were looking for
  • The rate of abandoned searches dropped by 34%
  • Feedback related to issues with searching dropped to nearly zero
  • General app satisfaction also saw a small bump after release
  • User satisfaction with the search experience increased from 3.2 to 4.5 out of 5
A huge improvement from 3.2 to 4.5 out of 5
New search experience rated as “very cool”

Project Timeline

Core work on this project happened over a few weeks. The majority of the timeline was taken up by the staged rollout, but that phase was mostly monitoring for issues rather than active full-time work.

A timeline showing each stage of the project. 2 weeks were spent in the core research and design phase, while the staged rollout happened over a month and a half.

Takeaways

Scoping down to a backend-only initial release allowed us to move fast and iterate almost instantly. The search engine was crucial to the success of this project, so removing the frontend dependency really helped speed up the refinement of this experience.

You can never find all the bugs by testing in an isolated environment. The gradual rollout (starting with an opt-in program) minimised the impact of unforeseen issues and bugs on our customer base. This was another crucial element, as search was a key component in our users’ day-to-day processes.

Refining a high-use experience had a huge positive impact on overall customer satisfaction. On a macro level, we didn’t add new functionality; we already had the ability to search. But our users were happier: a major friction point in their workflows was now seamless. And we now had a search experience that was leagues ahead of our competitors.
