Auto-Generated Data Stories

andrea b · Published in high stakes design · Jan 8, 2020

Andrea speaks with Sean Byrnes, founder and CEO of Outlier.ai, about auto-generating “stories” from data, building users’ trust, and Maslow’s hierarchy of AI acceptance.

****

ANDREA: How did you get to where you are now? And why did you start Outlier?

SEAN: I went to undergrad in robotics and grad school for artificial intelligence. I graduated in 2001 and back then there weren’t a lot of jobs in AI. It was the depths of the dot-com bust. After a few years I started a company called Flurry. We did analytics for Android and iPhone mobile apps. The Android app store launched in 2008 and then all of a sudden Flurry became one of the largest data companies in the world. I think at our peak we were tracking software on 98 or 99% of smartphones. It was a massive dataset of every user interaction. We ended up building a big business on top of that, using AI. We could predict the next app that you would install with over 80% accuracy. It was bananas. And it wasn’t because the data science was so good, it was because the dataset was so comprehensive.

But customers always asked the same question: “What am I supposed to look for in all of this data?” Current tools focus on answering the questions that we know to ask. I became convinced that the next generation of tools is going to help us ask questions — they’re going to bring the questions to us. That’s the idea that Outlier was founded on.

ANDREA: How does Explainable AI fit into this? How do you define that term at Outlier?

SEAN: One lesson we learned early is that people aren’t ready to trust AI systems. Today, if I go online, buy a plane ticket and get an e-ticket, I’m fairly certain that if I show up at the gate I can get on the plane. But years ago, without a ticket in hand a lot of people were anxious. They weren’t ready to trust the system without talking to a person to confirm they had a seat on the plane. Today, we’re at that stage with AI. People aren’t ready to trust that they understand how it works.

We learned very early on that we needed to add explainability into Outlier. If Outlier explores tens of millions of dimensions of data and finds an insight, can it explain enough of that journey that you feel comfortable trusting it? And if so, is it something you can communicate to someone else?

ANDREA: By “you,” do you mean the user?

SEAN: Yes. Because you, the user, are going to take that insight and share it with somebody else and they’re going to ask, “How did Outlier know that?” It isn’t enough for you to understand it, it has to be easy for you to explain it to somebody else. You become the hidden evangelist for the insight.

We found we had to spend a lot of time in visualization development to come up with ways to represent relationships between things that a user without a degree in statistics or any background in data visualization could understand. We had a lot of false starts creating things that were too complicated. And we realized that in some cases, producing the most high-powered insight was not the best approach because there was no way to explain it. So we started to select the kinds of insights that can be explained.

The final thing we learned is that you need to talk to people in a language that they understand. We learned that if Outlier produced an insight with text that explained what was going on in human-readable language, it fit users’ expectations of what an insight looks like. That’s what a human would use — a graphic and an explanation. So today, the insights we generate have a chart and an English-language explanation. I think people know that if you can explain something in sentences, there’s a lot of understanding behind the scenes.

Outlier explains each AI-generated insight to business users via a chart and an automatically-generated narrative description.
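
As a rough sketch of what pairing a chart with a generated narrative can look like, the snippet below renders a detected change as a plain-English sentence from a template. It is an illustration only, not Outlier’s implementation; the Insight fields, wording, and numbers are invented.

```python
# Hypothetical illustration: rendering a detected change as a human-readable
# sentence to accompany a chart. Not Outlier's actual implementation.
from dataclasses import dataclass

@dataclass
class Insight:
    metric: str        # e.g. "daily active users"
    segment: str       # e.g. "visitors from Oakland, CA"
    pct_change: float  # relative change vs. the expected baseline
    days: int          # how long the deviation has lasted

def narrate(insight: Insight) -> str:
    """Render an insight as a plain-English sentence."""
    direction = "increased" if insight.pct_change > 0 else "decreased"
    return (
        f"{insight.metric.capitalize()} for {insight.segment} "
        f"{direction} by {abs(insight.pct_change):.0%} over the "
        f"last {insight.days} days, compared to the expected range."
    )

print(narrate(Insight("daily active users", "visitors from Oakland, CA", 0.42, 3)))
# Daily active users for visitors from Oakland, CA increased by 42% over the
# last 3 days, compared to the expected range.
```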

ANDREA: How generalizable is Outlier’s approach to explainability?

SEAN: Explainability depends a lot on who the reader is. I would argue the hardest part is probably coming up with a uniform definition of explainability given heterogeneous users. Does it mean explainability to an expert? Explainability to an average user? What is an average user? I don’t even know what that would mean. You can narrowly define what explainability is for certain products, or you can sample explainability for certain audiences, but there is no universal definition.

ANDREA: You mentioned that in some cases you reject high-powered insights in favor of another type of insight that is easier to explain. Can you say more about that?

SEAN: You have to choose the best way to present something. For example, there may be no way to explain a four-dimensional relationship between things. Say you’re shopping online for a trip to Italy. We might know that you’re going to buy these four things. But if we can’t explain how we know you need these four things, then telling you is irrelevant because you won’t act on it.

So what we actually do is show you a product and three recommended items that people also buy. The “also bought” list is not as powerful as showing somebody why a basket of items is related, but it’s easier to explain. We’re compromising on the richness of the insight to make sure that it’s accessible and actionable for the end user.
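
To make the compromise concrete, an “also bought” list can be produced from nothing more than item co-occurrence counts, as in the sketch below. This is a generic illustration, not Outlier’s or any retailer’s actual recommender; the basket data is invented.

```python
# Hypothetical sketch of an "also bought" recommendation based on simple
# co-occurrence counts. Easy to explain ("people who bought X also bought Y"),
# but far less expressive than a model of why a whole basket is related.
from collections import Counter

baskets = [
    {"guidebook", "adapter", "phrasebook"},
    {"guidebook", "adapter", "day pack"},
    {"adapter", "day pack", "phrasebook"},
]

def also_bought(item: str, baskets, k: int = 3):
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(k)]

print(also_bought("guidebook", baskets))  # e.g. ['adapter', 'phrasebook', 'day pack']
```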

ANDREA: You also said that many of your early visualizations were false starts. What did you try and why did it fail?

SEAN: Think about the financial section of the newspaper. Whenever they move beyond a line chart — to bubble charts or scatter charts — it falls flat. People don’t have the training and expertise to understand how to interpret them. And they have a short attention span. Everybody’s busy. If it requires 15 or 20 minutes of looking at a chart to really grasp it, that’s 15 or 20 minutes they don’t have. They need to be able to figure out what you’re trying to explain within seconds. Frankly, there are concepts that you cannot convey that quickly and that becomes the great challenge of what we do.

These days it’s not hard to plug your data into an open source anomaly detection system and generate hundreds or thousands of anomalies a day. And you can run another open source system and cluster those anomalies into dozens of clusters — which I guarantee nobody will understand. But building a high-fidelity system is enormously difficult. A lot of what we do in automated analysis is around fidelity — producing the most useful five or six insights out of your data. That’s so important because you can’t take the easy route of telling people everything. You can’t just give them a bunch of enormously complex things.

For more on the benefit of simplicity see this recent post.
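
As a toy illustration of that fidelity problem (my sketch, not Outlier’s pipeline; the data and thresholds are invented), a naive z-score detector flags every unusual point, and the real work is ranking those flags down to the handful worth a person’s attention:

```python
# Toy illustration of the fidelity problem: a naive z-score detector will
# happily flag a pile of "anomalies"; the hard part is surfacing only the few
# that matter. Invented data; not Outlier's pipeline.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(100, 10, size=10_000)        # a well-behaved metric
series[::500] += rng.normal(0, 60, size=20)      # sprinkle in some shocks

z = np.abs((series - series.mean()) / series.std())
anomalies = np.flatnonzero(z > 3)                # "everything unusual"
print(len(anomalies), "raw anomalies")           # easily dozens of flags

top_k = anomalies[np.argsort(z[anomalies])[::-1][:5]]   # keep the 5 most extreme
print("worth a human's attention:", sorted(top_k.tolist()))
```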

Let’s say you have tens of millions of dimensions of data. You boil that down into six key insights, each of which has a dozen dimensions. You have to dig into each one recursively, to try to extract the signal from the noise and make it easy to understand. There is no brute-force approach. And the bigger your dataset, or the more dimensions, the harder the problem.
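
One way to picture that recursive digging, as a sketch under invented assumptions rather than Outlier’s actual method: given per-record changes tagged with dimensions, repeatedly split along each dimension and descend into whichever slice explains most of the change.

```python
# Hypothetical sketch of a recursive drill-down: given a spike in a metric,
# split along each dimension and descend into the slice that explains the
# largest share of the change. Illustrative only; column names are invented.
import pandas as pd

def drill_down(df: pd.DataFrame, dims: list, metric: str = "delta"):
    """Return a path of (dimension, value) pairs explaining most of the change."""
    path = []
    while dims:
        best = None
        for dim in dims:
            contrib = df.groupby(dim)[metric].sum()
            top_value = contrib.abs().idxmax()
            share = abs(contrib[top_value]) / max(abs(contrib).sum(), 1e-9)
            if best is None or share > best[2]:
                best = (dim, top_value, share)
        dim, value, share = best
        if share < 0.5:          # no single slice dominates; stop here
            break
        path.append((dim, value))
        df = df[df[dim] == value]
        dims = [d for d in dims if d != dim]
    return path

# Example: rows are per-user changes vs. baseline, tagged with dimensions.
rows = pd.DataFrame({
    "city":    ["Oakland", "Oakland", "Denver", "Oakland"],
    "channel": ["facebook", "facebook", "search", "facebook"],
    "delta":   [40, 35, 2, 30],
})
print(drill_down(rows, ["city", "channel"]))
# e.g. [('city', 'Oakland'), ('channel', 'facebook')]
```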

ANDREA: I completely agree. Someone — or something — has to be able to translate that anomaly into an insight that is meaningful. And that’s not so easy. I’ve spent a lot of my career trying to do that translation piece between data science teams and business leaders.

SEAN: Business leaders are in charge of running the business. They need to be good at things besides statistics and data visualization.

ANDREA: Exactly. One thing that seems really novel about Outlier’s approach is the use of narrative. Why did you choose that as your explanation strategy?

SEAN: If you imagine a blip on a chart, a spike in the number of users on a website — that’s only the start of a question. You want to know: What else happened?

ANDREA: Or, why did that happen?

SEAN: But “why” is a rabbit hole. So, what else happened? Maybe a bunch of other things blipped. What did they have in common? If you ask enough questions, you eventually come to the point where you know what the answer might be. In Outlier, a “story” is enough of the narrative to help you know where the answer lives.

ANDREA: Can you give me an example?

SEAN: We have a spike in users on our website. That’s a top-level observation. If we give customers that insight, most will tell us, “that’s not actionable.” What they’re really saying is, “I don’t understand and I don’t know where to look to figure out what’s going on. And I’m busy.”

So we tell them that these other things changed, too. And it looks like they all had this common characteristic. And that hasn’t happened before. All of a sudden the narrative is composing a question that they can act on. Like, why did we see all these people coming in from Oakland, California via this Facebook campaign? That is actually the question, but we didn’t start there; we started somewhere else.

Now they have the information they need to triage. Do I care about Oakland? Or Facebook? Maybe they care because they just ran a new campaign. With enough questions we create a narrative, but the ending will always be missing because we are not in the answers business at Outlier. We don’t answer the questions. We lead customers as close to the answer as possible but the answer probably lives in some system of record — Salesforce, Marketo, Tableau, some other database. Our goal is to go down the rabbit hole enough that the question is fully formed. So it can be answered.

This Outlier story explains an insight about a new demographic trend — including the length of the trend, the relative increase over time and potential causes.

ANDREA: What would you say to people who are looking for explanations of how an underlying model is working?

SEAN: I think that they’re asking the wrong question. What they’re trying to ask is, “why should I trust the model?”

ANDREA: So, why should people trust Outlier?

SEAN: Because I’m a really honest guy! No, because you can run it and see for yourself — explainability through demonstration. Showing is better than explaining. If you find yourself having to explain how your system works, you’ve already lost. There’s no convincing through explanation — it doesn’t happen. Skepticism is too deep.

ANDREA: It’s interesting that you didn’t say transparency. Instead, you’re talking about demonstrating that your software is effective. I think that’s a really good insight. We need more transparency into some aspects of some systems, but that might not actually be enough to build trust.

SEAN: I think of it like Maslow’s hierarchy, but of AI acceptance. The hierarchy — which I think of as a staircase — is essentially our attack on the product user’s skepticism. Explainability is about working up that staircase of trust. When they ascend, they’ve fully adopted an AI solution.

For users the very first thing is, What the hell does this do? Does the product do something useful for me? If it doesn’t do something useful, no one will care about anything else. If it does something useful, then they will ask, Can I trust it? Why should I trust it? If we say we are going to save users three hours a week, we have to prove that they won’t have to spend those three hours checking the work. Then, once they trust it, the question is, How is this going to evolve? Maybe in 5 or 10 years when this is more widely accepted — like buying airplane tickets online — people will start higher in the hierarchy, but right now they all start at the bottom. And demonstration is the only way to address the lowest tier.

ANDREA: One last question: What do you think that policymakers need to understand about AI that they don’t understand today?

SEAN: First, that AI is not a problem we can ignore or avoid. AI systems are already making decisions for us. It’s already happened — past tense. The fact that we haven’t adjusted our expectations is irrelevant.

The second thing is that we cannot expect these systems to simplify to the point that people will understand them. Data is like a river. You dam it at one place and it just flows somewhere else. It’s too complicated to control deterministically. So, the big question is how do you make sure incentives are aligned, so that the river flows the way you want it to flow?

Tomorrow, machine learning systems will be more powerful than they are today. And they will be more powerful still a decade from now. To get ahead of that, what incentives will we need? Between the internet and the smartphone revolution, we have great examples of technologies that outstripped our incentives. As a result, they were widely abused. Facebook is a weapon of mass destruction. It was not built that way. That wasn’t what Mark Zuckerberg intended. It wasn’t even what most people thought of using it for in the first four or five years. But the incentives were aligned wrong. And as a result of those incentives, it was manipulated to the point where it is actually a weapon of mass destruction, and has been wielded as such. And now, can you fix those incentives? It’s probably too late. If you don’t align the incentives correctly from the beginning, these systems will grow to the point that you can’t dam the river because the river’s become an ocean.

If you’re a policymaker today, you have to start thinking about how to put the incentives in place so that this doesn’t become the next class of weapons of mass destruction that we can’t control.

****

This post is part of a series of interviews that IQT Labs is conducting with technologists and thought leaders about Explainable AI. The original interview with Sean took place on July 15, 2019; this Q&A contains excerpts that were edited for clarity and approved by Sean.

Product screenshots provided by Outlier.
Illustrations by Andrea.


Andrea is a designer, technologist & recovering architect, who is interested in how we interact with machines. For more info, check out: andreabrennen.com