A well-designed data product provides a useful abstraction of underlying data — one that removes noise and helps people understand the signals that are relevant to them. Complex interfaces can confuse and frustrate users, but too much simplicity can obscure nuance and uncertainty, encouraging overconfident (or just plain wrong!) interpretations. As we outsource data analysis and interpretation to models that are increasingly complex and opaque, we raise the stakes for design. How does transparency help to build users' trust? When does simplification fool people into thinking they understand more than they do? What if we have to choose between accuracy and usability? Do people feel deceived when they recognize an abstraction as an oversimplification? Finding an effective balance isn't easy, but it's a challenge we can't afford to ignore: it's not only the benefits of AI that scale…it's also the cost of errors.