AI-first design: Design thinking for cyborgs and centaurs
What does it mean to design products in the context of AI? The slides at the bottom of this post focus on the ways that AI technologies augment humans, creating what chess champion Garry Kasparov would call “centaurs” and what others might call “cyborgs.” This is true for both designers and the people they design for.
Computers can process a lot more data than humans can. But it’s humans who choose what a machine optimizes and it’s humans who are the most creative when machines get stuck.
For example, it would be a human business user who would say “I want to show one of 10 different offers to potential customers and I want that optimized for conversion”. Is conversion the right metric? What if urban millennials hate all 10 offers? It’s humans who will be sorting out answers to those questions.
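To make that concrete, here is a minimal sketch of what such an offer optimization might look like under the hood, using a simple epsilon-greedy bandit. The class and parameter names are hypothetical, not from any particular product; a real system would use a more sophisticated approach.

```python
import random

class OfferOptimizer:
    """Toy epsilon-greedy bandit: show one of n offers, optimize for conversion."""

    def __init__(self, n_offers=10, epsilon=0.1):
        self.epsilon = epsilon          # fraction of time we explore at random
        self.shows = [0] * n_offers     # how often each offer was shown
        self.conversions = [0] * n_offers

    def pick_offer(self):
        # Explore at random occasionally (or if we have no data yet)...
        if random.random() < self.epsilon or not any(self.shows):
            return random.randrange(len(self.shows))
        # ...otherwise exploit the best-converting offer so far.
        rates = [c / s if s else 0.0
                 for c, s in zip(self.conversions, self.shows)]
        return rates.index(max(rates))

    def record(self, offer, converted):
        self.shows[offer] += 1
        if converted:
            self.conversions[offer] += 1
```

Note what the machine does *not* decide here: a human chose conversion as the metric and a human chose the ten offers. If urban millennials hate all ten, no amount of optimization inside this loop will fix it.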
AI systems create and require large amounts of data. That gives designers a lot more insight into the ways people use their products. The system itself will often not care about outliers, but investigating outliers is a great use of a designer’s time.
That’s because outliers are not aberrations to designers, they are people who interact differently with products than everyone else. Such people open paths to possibilities: new understandings about a product and the people who use it.
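As a toy illustration of how you might surface those outliers from usage data (the user names and numbers below are invented), even a simple standard-deviation screen produces a short list of people worth going to talk to:

```python
import statistics

# Hypothetical per-user metric: cups of tea brewed per week.
usage = {"ana": 5, "ben": 6, "cho": 6, "dee": 7, "eli": 7,
         "fay": 8, "gus": 8, "hal": 9, "ivy": 6, "jay": 40}

mean = statistics.mean(usage.values())
stdev = statistics.pstdev(usage.values())

# Flag anyone more than two standard deviations from the mean --
# not to discard them, but to find out what they're doing differently.
outliers = [user for user, cups in usage.items()
            if abs(cups - mean) > 2 * stdev]
```

Whoever turns up in `outliers` is exactly the person a designer should be curious about: what are they doing with forty cups a week?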
The worst thing about focus groups is groupthink from people being too alike or too polite. Having access to a lot of data can help you create “unfocus groups” that pull from very different types of users. Constructive conflict is more inspiring and informative than a lot of head-nodding.
An example (not in the deck)
To understand what design looks like in AI, I’ll go to something that isn’t an AI — a teapot. (We have a compilation of definitions of artificial intelligence for laypeople in this post.)
If you’re going to make any kind of teapot, you need to think about who is going to use it and how. For example, if you make it too heavy, only people with strong wrists can use it. If you make the handle metal, it may get too hot to touch. Is it for individual cups or tea parties — should it be big or small?
A lot of the core insight of design thinking doesn’t change in an AI context: problem statements and their solutions are not deduced from logic. Rather, they come from observing, empathizing, and doing.
So what changes if we make an AI teapot? The first change is that an AI teapot is going to be collecting data. You’re going to learn all sorts of things about how it is used. You may drown in the data. This is a common problem in AI systems, whether they’re made for businesses or for consumers: you accidentally reduce the humans who are (or might be) your users into data points.
Many of the people we design products and services for are also pulled in multiple directions. The teapot whistles, the phone rings, it smells like the crumpets are burning, and by the way, have you checked Facebook or Instagram lately?
That is, attention is scarce all around: for designers and users alike. Traditional design thinking probably doesn’t have enough questions and methods devoted to figuring out the best time to ask for a human’s attention.
If you don’t consider when it’s actually worth drawing a user’s attention, then your product may be great in theory and usability labs, but problematic in the real world.
When it works, collecting data from users means that we can design products and services that adjust to the different ways that individuals use them: personalized recommendations. If you learn that I almost always make green tea, then you could default to a lower temperature than Captain Picard, who wants his Earl Grey (very) hot. In other words, you move away from “personas” (which were usually just lousy stereotypes) and closer to the actual users.
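A toy sketch of that kind of personalized default: learn the user’s most common tea from their brew history and preheat accordingly. The history is invented, and the temperatures are just commonly cited steeping guidance, not product specs.

```python
from collections import Counter

# Hypothetical brew log for one user: which tea they've made recently.
history = ["green", "green", "earl grey", "green", "green"]

# Commonly cited steeping temperatures in degrees Celsius.
temps = {"green": 80, "earl grey": 95}

def default_temp(history, fallback=90):
    """Default to the temperature for this user's most common tea."""
    if not history:
        return fallback  # no data yet: fall back to a generic default
    most_common, _ = Counter(history).most_common(1)[0]
    return temps.get(most_common, fallback)
```

The point isn’t the five lines of logic; it’s that the default now comes from what this user actually does rather than from a persona.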
Consider Netflix: if you know that I love mysteries, you should summon Sherlock to my home page. But you also want to try things you aren’t sure will work, both to give me some novelty and to learn more about what I do and don’t like. If you remember the early days of Pandora, it was easy to hit like-like-like on your favorite songs and end up in a teeny tiny puddle of songs that were way too repetitive. If your system only gives me what it’s pretty sure I’ll like, it’ll be hard for it to get smarter.
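One way to sketch that balance (the genres and affinity scores below are invented, not Netflix’s) is to reserve a slot in each recommendation row for something the system is less sure about:

```python
import random

# Hypothetical genre affinities learned from my viewing history (0..1).
affinity = {"mystery": 0.9, "sci-fi": 0.6, "comedy": 0.4,
            "documentary": 0.2, "horror": 0.1}

def build_row(n=4, novelty_slots=1):
    """Fill most of a row with safe bets, but keep a slot for exploration."""
    ranked = sorted(affinity, key=affinity.get, reverse=True)
    # Safe bets: the genres the viewer most probably likes...
    row = ranked[:n - novelty_slots]
    # ...plus a genre the model is unsure about, so the system keeps
    # learning instead of shrinking into a teeny tiny puddle.
    row += random.sample(ranked[n - novelty_slots:], novelty_slots)
    return row
```

Tuning `novelty_slots` is exactly the design decision in question: too few and the system stops learning, too many and the user stops trusting the row.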
It’s not design thinking if there aren’t iterations. Likewise, it’s not an AI system if it doesn’t get smarter.
Presentation slides from StartupFest (and a quick conclusion)
The slides below tackle some of these issues and give some additional ways of thinking about the centaurs/cyborgs that we and our users have become. The ethical issues of designing in AI contexts are briefly addressed, but for more on that I’d recommend this page on ethics and design or, even better, our practical guide to building ethics, privacy, and security into AI/machine learning projects/products/systems.
Tyler Schnoebelen (@TSchnoebelen) is principal product manager at integrate.ai. Prior to joining integrate, Tyler ran product management at Machine Zone and before that, founded an NLP company, Idibon. He holds a PhD in linguistics from Stanford and a BA in English from Yale. Tyler’s insights on language have been featured in places like the New York Times, the Boston Globe, Time, The Atlantic, NPR, and CNN. He’s also a tiny character in a movie about emoji and a novel about fairies.