Telling a Story with User Behavior Data

Grace Greenwood
Jun 6, 2019 · 6 min read

How we derived an analytics approach that is working: theorize, segment out your users, and enrich your data with related metrics. Repeat.

TLDR: Data on how users interact with your app or website is easy to come by and exciting to review, but does not immediately answer one basic question: are users successfully navigating this thing I built? Using the review of a single form to set the stage, we demonstrate how pinpointing and combining specific behavioral data sets can help tell a story about your users, even when it doesn’t explicitly define them.


After launching an app or website, we immediately want to know how it is performing. Is it easy for my users to navigate around and find what they need? Are they happy and satisfied? At first, it may seem like quickly pulling data on user behavior will provide easy answers to these questions; just count pageviews on the site or app, track link and button clicks, and review the biggest referral pages or sites, and suddenly you have all the answers, right?

Right?!

Turns out understanding the user experience requires a deeper synthesis of data that goes beyond simple pageview tallies and session durations. At Braintree, our Knowledge Managers and Sites Designers have teamed up to create our own three-pronged approach to solving this problem and answering the ultimate question: is my app or website any good?

Learning on the job

One of our first attempts to understand the user experience through behavioral analysis centered around our Help Contact Form. As its name suggests, this is a form built into Braintree’s website that users fill out to contact our support teams for assistance. The design of the form encourages a simple user journey: select an issue from the provided list, fill out some general information, and submit the inquiry. The issue selected not only identifies the user’s core question, but also determines which support team the submission routes to and surfaces a list of curated links to suggested documentation related to that issue. By clicking on these links, users can find their own answers in our docs and potentially avoid having to submit the form and wait for a human response.
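
To make that routing behavior concrete, here's a minimal sketch of how an issue-to-team mapping might look; the issue names, team queues, and URLs below are hypothetical stand-ins, not our actual configuration:

```python
# Hypothetical sketch of the routing described above: each selectable issue
# maps to a support queue and a set of curated documentation links.
ISSUE_ROUTING = {
    "I have a question about a dispute": {
        "team": "disputes-support",
        "docs": ["https://example.com/docs/disputes"],
    },
    "Get help with something else": {
        "team": "general-support",  # catch-all queue
        "docs": [],                 # no curated links to suggest
    },
}

def suggestions_for(issue: str) -> list[str]:
    """Curated doc links to surface before the user submits the form."""
    return ISSUE_ROUTING.get(issue, {"docs": []})["docs"]
```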

To see how users interacted with the form, we first pulled data from Google Analytics and Mixpanel. While it was interesting to see the raw statistics — like how many people visited the form, clicked on certain things, submitted it, or bounced away — we soon realized that none of these metrics specifically indicated whether the form was well-received. In fact, we found ourselves in a common data conundrum of needing to identify which metrics should be seen as problematic. For example, was it bad if the conversion rate of users selecting an issue and then submitting the form was low? Did that imply users were giving up on a confusing form, or were they successfully clicking out of the form to find their own answers in our suggested documentation?
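
For readers who want to see the shape of the data we mean: here's a minimal sketch of that select-to-submit conversion metric, assuming events are exported from an analytics tool as one row per user action (the user IDs, event names, and numbers are illustrative):

```python
import pandas as pd

# Illustrative event export: one row per (user, event). Real exports from
# Google Analytics or Mixpanel have more columns, but the idea is the same.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "event": ["issue_selected", "form_submitted",
              "issue_selected",
              "issue_selected", "doc_link_clicked", "form_submitted"],
})

selected = set(events.loc[events["event"] == "issue_selected", "user_id"])
submitted = set(events.loc[events["event"] == "form_submitted", "user_id"])

# Share of issue-selecting users who went on to submit. A low value is
# ambiguous on its own: giving up and self-serving via docs look identical.
conversion = len(selected & submitted) / len(selected)
print(f"select-to-submit conversion: {conversion:.0%}")
```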

While the big pile of data in front of us didn’t immediately illuminate problem areas with the form, we suspected there were valuable nuggets buried in there somewhere. Digging them out of the raw data was an interesting journey, and through it we found ways to improve the experience for thousands of form users. Ultimately, we discovered three key steps for turning raw data into insight: theorize, segment our users, and enrich the data.

Step 1: Theorize

We started off by forming a simple hypothesis: if a user exhibits certain behaviors, then they have had a bad experience with the Help Contact Form.

While extremely vague, this hypothesis forced us to ask ourselves a few fundamental questions to determine what those bad-experience indicators might be:

  • What behaviors do we want our users to exhibit?
  • Why are those behaviors important?
  • What other behaviors could users opt for instead?

In the case of the Help Contact Form, the answer to one question fed into the next. We wanted users to either click on our suggested documentation links to find their own answers, or to have a painless experience reaching out to our support teams for quick assistance. This all starts with users partaking in the important behavior of selecting an issue on the Help Contact Form; by doing so, users allow us to either redirect them to relevant documentation or route their question to the team best equipped to help.

So what behavior could users partake in that would completely subvert this intended experience? This is where our vague hypothesis became a solid theory: users who selected the “Get help with something else” issue (which we’ll refer to as the generic “Other” issue) were having a bad experience on our site. These were the users who either read through our list of issues and didn’t find one that matched their actual problem, or those who (understandably) didn’t want to read through our robust list of issues in the first place.
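
With that theory in place, the segment is straightforward to express. Continuing the illustrative data from the earlier sketch (submission rows and issue labels are hypothetical):

```python
OTHER_ISSUE = "Get help with something else"

# One row per form submission; "issue" is what the user selected.
submissions = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "issue": [OTHER_ISSUE, "Disputes", "Refunds", OTHER_ISSUE],
})

# The theorized bad-experience segment: everyone who picked the catch-all.
dissatisfied = submissions[submissions["issue"] == OTHER_ISSUE]
print(f"'Other' selection rate: {len(dissatisfied) / len(submissions):.0%}")
```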

Steps 2 and 3: Segment and enrich the data

Identifying the measurable set of dissatisfied users gave us a whole new lens through which to explore the big bucket of behavioral data. Instead of looking at all the data for all of the users, we could segment out the users who were partaking in a behavior that indicated a negative form experience!

At this point, we enriched our original data set by pulling all of the support tickets that came from form submissions with the “Other” issue selected. Our support teams assigned reason codes to these tickets, effectively defining the exact topics our users had questions about when they reached our form and couldn’t find what they needed. If the topic was the same as one of the issues already existing on the form, we knew our issue-selection UX needed some work; if the topic was something we didn’t have listed on the form already, then we knew we needed to add some new issues. In our case, we saw a lot of both.
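
Here's a sketch of that enrichment step, continuing the hypothetical data above; the reason codes and the set of issues already on the form are made up for illustration:

```python
# Support tickets created by the "Other" submissions, with the reason codes
# our support teams assigned after reading them.
tickets = pd.DataFrame({
    "user_id": [1, 4],
    "reason_code": ["refund_status", "account_verification"],
})

EXISTING_ISSUES = {"refund_status", "disputes"}  # topics the form already lists

enriched = dissatisfied.merge(tickets, on="user_id")
for topic, count in enriched["reason_code"].value_counts().items():
    if topic in EXISTING_ISSUES:
        print(f"{topic} ({count}): already on the form -> fix issue-selection UX")
    else:
        print(f"{topic} ({count}): missing from the form -> add a new issue")
```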

Suddenly, we had a process for identifying the specific types of questions our Help Contact Form wasn’t helping users answer. And knowing this gave us a clear, measurable course of action: iterate on issue content and design changes to decrease the number of users selecting the “Other” issue on the form.

How’d we do?

For our Help Contact Form, we made incremental changes over the course of a year to improve both the user interface for finding issues and the quality of the issues themselves. This included:

  • Rewriting the issue content into quick snippets that would be easier to skim
  • Adding new issues for the common topics assigned to “Other” form submissions
  • Adding more (and more relevant) suggested documentation links for each issue
  • Redesigning issue selection with search

In December 2017, over half of the Help Contact Form submissions had the “Other” issue selected. By the first quarter of 2019, after multiple rounds of these improvements to the form, that share had dropped to just 25%. On top of this, the average number of form submissions per business day remained consistent from month to month, while both the number of users viewing the form and the number of users clicking into our suggested documentation links steadily increased.
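
The tracking metric behind these numbers is simply the monthly share of submissions with the “Other” issue selected. A sketch with illustrative figures (not our actual counts):

```python
# Watch one number over time: the share of "Other" submissions per month.
monthly = pd.DataFrame({
    "month": ["2017-12", "2018-06", "2019-03"],
    "submissions": [1000, 1020, 990],
    "other": [520, 390, 248],
})
monthly["other_rate"] = (monthly["other"] / monthly["submissions"]).round(2)
print(monthly)
```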

Given this, we can confidently say that our Help Contact Form users are now finding more of their own answers in our docs without waiting around for human assistance or getting derailed by a frustrating form experience.

Conclusion

Behavioral data is not enough to tell you exactly what’s going on with your app or website. Knowing the number of users clicking on something isn’t valuable until you can segment out the types of users doing the clicking into meaningful sets. Don’t be afraid to narrow down your data set or combine it with related metrics to search for trends. Similarly, don’t be too concerned if doing so doesn’t immediately reveal any life-changing information; just try again! Ultimately, you can only use data to tell a story about your users, not to define them.

We still have so much to learn, but we derived an analytics approach that is working: theorize, segment out your users, and enrich your data with related metrics. Repeat.
