Welcome to High Stakes Design, a new IQT Labs blog about design, data visualization, and machine learning.
Why? We are so glad you asked!
We need to design data products — artifacts and interfaces — that make data more accessible to less technical audiences. Good design gives us products that are accessible, informative, and useful; even better design gives us experiences that are intuitive, compelling, and a pleasure to use.
A data analysis software tool might be built on the most scalable and impressive back-end architecture; it might leverage the most powerful set of algorithms and models; but if its interface isn’t easy to use, most people won’t use it. This is why design is the last mile of all of the “big data” and “AI” infrastructure that the tech industry has been building for decades. If data is the new oil, design is the gas pump that allows consumers to fill up their tanks.
While many people think of design as merely the “look and feel” of a product, often, the more important aspects of the design have to do with how well a product anticipates and responds to people’s needs. The shape and size of some gas pumps make them easier to hold, but the more important design innovation was the invention of the fuel pump itself. When you’re trying to fill up your car, even a heavy, ugly pump is more helpful than a beautifully detailed barrel with no hose.
In the context of decision-making, data often becomes a high stakes design problem. Design choices — about how to represent information, what to include, and how much to show — can influence the way decision-makers see and make sense of data. Poorly designed data products can exacerbate users’ biases, or (inadvertently) lead them to draw the wrong conclusions.
Good design, however, requires more than a commitment to accuracy. It must provide a useful abstraction of the underlying data — that is, an abstraction that removes noise and helps users understand the signals that are relevant to their decision.
Complex interfaces that show too much noise can obscure important signals, causing confusion or frustration. If data isn’t neatly packaged into an accessible and relevant “so what,” decision-makers won’t care. But if the abstraction is too simplistic, it can obscure complexities and uncertainties, encouraging overly confident interpretations of underlying data. Finding an effective balance between these two modes of failure is not easy.
As data analysis is outsourced to automated processes and increasingly complex machine learning (ML) models, the design of effective data products is becoming even more challenging. More and more organizations are adopting data-driven practices, but not all decision-makers have the technical expertise to understand how complex machine learning models work.
In some cases, someone who doesn’t understand how a model works may struggle to contextualize — and therefore, to trust — the information that model produces. In other cases, third-party concerns — about potential failures, liabilities, or unfair outcomes — may motivate calls for transparency. In both situations, data product designers must navigate not only the calculus of model selection and the limitations of available training data, but also the tradeoff between giving users more transparency into complex systems and maintaining the clarity and usability of interfaces to (and explanations of) their data products.
The desire for transparency into artificial intelligence (AI) and ML systems raises substantial user interface (UI) and user experience (UX) design challenges, many of which are intertwined with the emerging technology areas of “Explainable AI” and “Interpretable ML.” Today, while there is general consensus that AI is a “black box,” there is much less agreement about who should have the ability to look inside or how they might do so.
A range of emerging tools and technologies promise partial solutions, but each new approach raises new design questions. What types of explanations do users want? How much control over systems do they need? How does this vary across different applications?
How should designers balance the (sometimes competing) aims of fidelity and usability? When does transparency help to build users’ trust? And when does simplification obscure important nuance? Or breed over-confidence? Or fool users into thinking they understand more than they do? And what happens if users recognize an abstraction as an over-simplification? Do they feel deceived by the design?
Over the coming months, we will investigate these design questions on this blog, through essays, case studies, tool analyses, proof-of-concept design projects, and conversations with a diverse set of stakeholders — researchers and technologists, designers and decision-makers, policy-makers and people affected by data-driven decisions. We’ll kick off with a post introducing dataviz.cafe, a catalog of open source data visualization tools curated by IQT Labs.
We can’t promise easy answers, but we are committed to investigating the tradeoffs and examining what is at stake. We hope you’ll join the conversation.
Andrea directs IQT Labs’ design & visualization group. Learn more about IQT and IQT Labs at www.iqt.org.