Examining your Data-User Experience

Understanding your DUX

Jeremiah Coleman
Sep 24, 2019


Source: Monty Python and the Holy Grail

Everyone wants to benefit from data. There are so many ways to collect and use it, so why not? In theory, a big break in an industry is as simple as developing some “magic insight” and putting it in an app for users to see.

Why is it never actually that easy?

I believe teams often ignore their Data-User Experience until it’s too late, and the outcome is frustrated users who never got the “magic insight” they wanted.

Traditional UX is a very broad-ranging activity — your (many) goals include making it easy and intuitive for the user to perform in-app actions, building a sense of how those actions help them, drawing them into a routine of opening and using your app, and creating a positive mental association with the value you provide. No matter the purpose of your app, these are all essential goals. For some apps — social media, games, shopping — the end goal for both you and your user lives in the app itself. For data apps, the goal is very different.

In the context of a data app, you present data to the user and expect them to both make a decision and take action in the real world. It’s in this case that I find the traditional UX skill set slightly lacking. To head off the “magic insight” problem, we need to be honest from the start about the challenges of creating a valuable Data-User Experience.

What is DUX?

A Data-User Experience is a type of UX where the presentation of data is the main feast of the app. In a DUX, the user is presented with data in some form. This can be as simple as a few numbers or basic charts, or as complex as a score generated by a predictive machine learning algorithm. The user is expected to examine this data, make a decision that is (hopefully) informed by it, then take action on that decision. If all goes well, the user attributes the success of their decision and action to the data presented in the app.

How is this different from a traditional UX? Why does it matter? Most modern UX work is influenced by data. Developers and designers decide what they want the user to do, test different patterns, and make that action as intuitive as possible. In a DUX, you buck the trend by encouraging users to make their own decisions and take action.

(You can, and should, reduce whatever friction you can around the user taking action, but the ultimate point of a data app is to put that power in the user’s hands.)

If your app directly presents data to users, then you are applying some form of DUX. In this article, we’ll look at some high-level DUX patterns used by B2B vendors and in B2C apps. DUX is also quite applicable to internal data projects, initiatives, and applications within larger organizations, but we won’t dive into that today.

The DUX Matrix

This is the DUX matrix — a way I like to think about how our app presents data vs. the user investment required for payoff. Some quadrants are better than others, but getting there may require creative thinking (the benefit being users who actually get value from your app).

On the horizontal axis, we measure the transparency (or simplicity) of the displayed data. The far left is the most transparent, and moving right brings increasing levels of complexity and opaqueness (many vendors would market this as “secret sauce,” and users would consider it “black-box”).

Some aspects considered transparent:

  • Data from known/understood sources
  • Easy-to-follow and consume formats (simple totals/counts, gauges, bar/line charts)

And conversely, more black-box:

  • Data from proprietary data sources
  • Derived data (inferred) from less-transparent data
  • Non-intuitive predictive modeling

Transparency and simplicity serve one simple function: building trust in your product. The more easily the user can consume and understand what you show them, the easier it is to build the trust they need to actually do something with that data.

That’s not to say less transparency is always a bad thing — occasionally, it is a necessity! Just be aware that it will be much more difficult to clear the trust hurdle. You don’t have to stick to one side of the spectrum, either. If there is a black-box element to your data, try to balance it out and build trust with other supporting data that is easily consumed and understood.
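One lightweight way to apply this balancing act is to never ship an opaque score on its own, but to pair it with the plain, verifiable signals that fed into it. The Python sketch below is purely illustrative; the class, field names, and sample values are my own assumptions, not the schema of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class LeadInsight:
    """A single insight shown to the user: an opaque score paired with
    the transparent signals that support it (all names are illustrative)."""
    score: float  # black-box: output of a predictive model
    supporting_signals: dict = field(default_factory=dict)  # transparent: facts the user can verify

# "Why 0.87?" is hard to answer on its own; the supporting signals give
# the user something familiar to anchor their trust to.
insight = LeadInsight(
    score=0.87,
    supporting_signals={
        "pricing_page_visits_last_30d": 3,
        "emails_opened_last_30d": 5,
        "company_size": "200-500",
    },
)

print(insight.score, insight.supporting_signals)
```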

The vertical axis, friction, is much harder to measure and nearly impossible to control, but understanding it is critical to the success of your app. Friction boils down to one question: how much effort is required from the user to see success using your data?

An exercise-tracking app, for example, can show you data about your workouts, fitness level, and so on, but it can’t make you get up and exercise! That depends entirely on the user looking at the data, deciding what to do, then going and doing it.

On the other hand, take an app like Datally, a mobile data-usage monitor provided by Google. Compared to exercise, looking at your data-usage habits and cutting back (to save money) is fairly easy.

To some extent, you can UX away some friction, like automatically tracking exercise and congratulating the user. But that is a far cry from making the user exercise, which will always be hard.

Despite being difficult to change, friction is still incredibly important to understand. Failing to acknowledge the pre-existing friction in your target user base can leave your team treading water for months, frustrated about user engagement and success.
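To make the two axes concrete, here is a minimal Python sketch under my own assumptions: transparency and friction are rough 0-to-1 estimates, the 0.5 cutoffs are arbitrary, and the quadrant names simply mirror the matrix described above.

```python
def dux_quadrant(transparency: float, friction: float) -> str:
    """Place an app on the DUX matrix.

    transparency: 1.0 = simple, well-understood data; 0.0 = pure black box
    friction:     0.0 = acting on the data is trivial; 1.0 = very hard
    The 0.5 thresholds are arbitrary; the point is the quadrant, not the score.
    """
    horizontal = "transparent" if transparency >= 0.5 else "black-box"
    vertical = "high-friction" if friction >= 0.5 else "low-friction"
    return f"{vertical} / {horizontal}"

# Subjective scores for the examples above:
print(dux_quadrant(transparency=0.9, friction=0.9))  # exercise tracker: clear data, hard action
print(dux_quadrant(transparency=0.9, friction=0.2))  # Datally-style usage monitor: clear data, easy action
```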

B2B vs. B2C

Let’s take a look at a few data apps and where they fall on the DUX matrix. Like many other aspects of software development, Business-to-Business (B2B) and Business-to-Consumer (B2C) apps often follow different patterns.

Data apps intended for B2B consumption tend to lie in the upper half of the DUX matrix — implying that friction toward user action will always be somewhat high. There are several reasons for this:

  • No matter what data is being presented, any derived benefit requires challenging assumptions and (likely) changing existing processes
  • Change/action means coordinating among not one user, but several
  • Change increases risk to the business — what if these changes don’t work?

Lead-scoring systems are incredibly common, and much needed, in the B2B space. You give us some information about a person (or group of people), and we’ll tell you who is worthy of your time. These apps live in the upper-right corner of the DUX matrix — they are black-box because they almost always present some ambiguous “score” (usually a number or letter grade), and high-friction because they depend on sales reps actually calling those highly scored leads.

On the other hand, customer-engagement apps such as Mixpanel and Pendo tend toward the transparent side of the matrix. User retention and engagement graphs are quite simple to understand and easily traced back to the data that produced them. But taking action is time-consuming, often requiring further research and experimentation to actually achieve results.

B2C data apps tend to be a different animal — leaning almost entirely to the left side of the DUX matrix. Due to a smaller change surface (usually just one person/user), friction can range from low to high. But the most successful apps are very transparent in the data they display. After all, most people don’t have any incentive to change their life based on some ambiguous, secretly generated number. There’s also the requirement of appealing to both technical and non-technical audiences (in B2B data apps, it’s a safe assumption that users will have at least some experience interpreting data); keeping the data simple and transparent appeals to a broader range of people.

The most commonplace high-friction, high-transparency B2C apps are exercise trackers — my favorite of which is Google Fit. All exercise trackers are, by nature, high-friction. How in the world would you expect an app to force you to go exercise? Google Fit lowers the friction of using the app by automatically tracking some activities (walking, running, and biking), but that does not make those activities any easier. Exercise data is extremely transparent, usually some combination of “how long” and “how far,” which is a great indicator of progress (or lack thereof). “Heart Points” are a concept specific to Google Fit that ties into American Heart Association fitness recommendations — notably, Fit is very intentional about keeping this metric transparent. They don’t want to scare off users with some ambiguous black box.

The B2C world is also where we see the majority of high-transparency, low-friction apps. My favorite example is the emerging world of car insurance apps: install an app that tracks your driving habits, and get a discount when you meet certain milestones. Those milestones and your progress toward them are easy to understand, and there’s rarely anything standing in the way of better driving. Bonus: there’s a monetary incentive!

What about low-friction, non-transparent?

After diving into the common B2B and B2C categories, we can see that three quadrants of the DUX matrix are quite well covered. What about low-friction, black-box?

This category is probably where most data app providers see themselves — “I’m going to take data, apply an advanced ML algorithm, show the results to the customer, and it will magically solve their problem.” And most teams quickly realize it’s more complicated than that.

Why is it more complicated? There are two possible answers (which are not mutually exclusive).

First, the target problem area may actually have high friction to user action. Developers and designers often see the action as “just” — “just do X,” or “just do Y.” Initial interactions with a small set of customers may even seem to back up this assumption. But as the user sample size grows, it quickly becomes obvious that there is more friction than initially assumed.

(Not coincidentally, one of my former colleagues deemed “just” to be one of the most dangerous words in business, as it’s used to gloss over many details. Just don’t say it!)

The second answer goes back to the basic idea of data-user experience — in this context, we’re showing users data, then expecting them to take action. Most problem spaces that are truly low-friction AND have black-box solutions don’t actually present the user with data. Those apps just take action for the user. A great example is the ever-evolving Google Assistant. This ambitious app is backed by (presumably) tons of data, analytics, and algorithms (whether ML or if-then statements), and is packaged up in a way that “just” does things for the user (such as screening calls or making dinner reservations). But there is the key difference — the app is designed to eliminate user action, not inform it. Even then, adoption of such a convenient app is slow, as consumers are less likely to trust something they don’t understand.

Google Assistant is an exceptional example because it is built by one of the largest tech companies on the planet. Most developers do not have the same resources. It falls on us to honestly evaluate our technology, target problem space, and proposed solution, to make sure we’re not fooling ourselves or over-promising to our customers.

Conclusions

  • Traditional UX is necessary and valuable at every step of the product development process, but it falls short for data apps
  • Honestly evaluating the friction of your problem space will help you prepare for challenges in adoption
  • Ongoing evaluation of the simplicity of your data and its presentation will help build trust with users
  • Data apps can be amazing, but unless you can automatically take action on behalf of the user, your product will never be a “magic bullet”

Comments, concerns, questions, or anger? Feel free to reach out to me in a comment, on LinkedIn, or via email!
