The high stakes of designing data products

andrea b
high stakes design
4 min read · Oct 18, 2019


We need data analysis tools that make data more accessible to less technical audiences. But unfortunately, this is easier said than done.

As we outsource data processing and analysis to complex machine learning (ML) models, the design of effective data products — and their interfaces — becomes more challenging. More and more organizations are adopting data-driven practices, but not all decision-makers have the technical expertise to understand how complex machine learning models work.

Complex interfaces and other visuals that show too much noise can obscure important signals, causing confusion or frustration. But if the abstraction is too simplistic, it can hide complexities and uncertainties, encouraging overly confident interpretations of the underlying data. Finding an effective balance between these two modes of failure is not easy.

A data analysis software tool might be built on the most scalable and impressive back-end architecture; it might leverage the most powerful set of algorithms and models; but if its interface isn’t easy to use, most people won’t use it. And if data isn’t neatly packaged into an accessible and relevant “so what,” decision-makers won’t care.

This is why design is the last mile of all of the “big data” and “AI” infrastructure that the tech industry has been building for decades. If data is the new oil, the user interface (UI) or data visualization is the gas pump that allows consumers to fill up their tanks.

While many people think of design as merely the “look and feel” of a product, often, the more important aspects of the design have to do with how well a product anticipates and responds to people’s needs. Good design gives us products that are accessible, informative and useful; even better design promises us experiences that are intuitive, compelling, and a pleasure to use.

The shape and size of some gas pumps make them easier to hold, but the more important design innovation was the invention of the fuel pump itself. When you’re trying to fill up your car, even a heavy, ugly pump is more helpful than a beautifully detailed barrel with no hose.

Many interfaces to data analysis tools rely on some form of visualization, and designing these visuals is a high stakes design problem. In the context of decision-making, design choices — about how to represent information, what to include, and how much to show — can influence the way decision-makers see and make sense of data. Poorly designed interfaces can exacerbate users’ biases, or (inadvertently) lead them to draw the wrong conclusions.

Good visual interface design, however, requires more than a commitment to accuracy. A visualization must provide a useful abstraction of the underlying data — that is, an abstraction that removes noise and helps users understand the signals that are relevant to their decision.


Designers can learn from the growing discussion about “Explainable AI” and “Interpretable ML,” both of which aim to provide users more transparency into complex models and systems. However, while there is general consensus that AI is a “black box,” there is much less agreement about who should have the ability to look inside or how they might do so.

A range of emerging tools and technologies promise partial solutions, but each approach raises its own design challenges. What types of explanations do users want? How much control over systems do they need? How does this vary across different applications?

Someone who doesn’t understand how a model works may struggle to contextualize — and therefore, to trust — the information that model produces. Concerns about potential failures, liabilities, or unfair outcomes may also motivate calls for transparency. Either way, interface designers must navigate not only the calculus of model selection and the limitations of available training data, but also the tradeoff between adding complexity and maintaining clarity and usability.

What should designers do when faced with a choice between accuracy and usability? When and how does transparency help to build users’ trust? And when does simplification obscure important nuance? Or fool users into thinking they understand more than they do? What happens if users recognize an abstraction as an over-simplification? Do they feel deceived by the design?

How do we want to represent — and to see — the information on which we base our data-driven decisions? There are no easy answers, but we can’t afford to ignore the question. The stakes are too high.

The good news is that this question points to a vast new territory that is ripe for design intervention. But to improve today’s tools, we have to broaden the discourse around “Explainability” beyond the boundaries carved by computer science research agendas. We need to seek out new perspectives from a diverse design community with the collective expertise to invent new solutions.

— —

Image credits:
Photo of dashboard by Luke Chesser on Unsplash
Photo of Icelandic gas station by Mahkeo on Unsplash
Photo of neon sign by Austin Chan on Unsplash
Photo of “Drifter” (a project by Studio Drift) by Christian Fregnan on Unsplash




Andrea is a designer, technologist & recovering architect who is interested in how we interact with machines. For more info, check out: andreabrennen.com