Designing for Intelligence: What Do We Mean by Intelligent? (Part 1 / 5ish)

The age of intelligent software has arrived. From anthropomorphic assistants like Siri and Alexa, to events from your email that show up in your calendar, to contextual app suggestions on your home screen, today’s apps do more than just execute commands and respond to clicks: they adapt, predict, anticipate.

But when we say software is “smart,” what do we mean? It’s not just algorithmic sophistication, machine learning, or artificial intelligence. Some of the smartest products achieve their intelligence by brute force, while some of the most sophisticated systems come across as pointless.

For example, consider the task of entering a date while searching for flights. It can be smart, or dumb, in a variety of ways:

These examples vary in technical complexity, and the most complex aren’t always the most useful. The formatting error (1), common though it is, is very easy to fix (2). Avoiding dates in the past (3 & 4) is even easier — just a simple date comparison.

The last three examples are much more technically sophisticated, requiring machine intelligence and/or extensive data. But again, that doesn’t make them smart. One could implement (2) or (4) in a fraction of the time required for (5), with far better results. Even (6), which might be useful at times, wouldn’t have nearly the impact of those easier options.
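
To make the contrast concrete, here’s a minimal sketch of those easy wins in Python. It’s illustrative rather than a real flight-search implementation: the list of accepted formats is an assumed whitelist, and `parse_departure_date` is a hypothetical helper.

```python
from datetime import date, datetime

# Assumed whitelist of formats a user might plausibly type.
ACCEPTED_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%m/%d/%y", "%B %d, %Y", "%b %d, %Y"]

def parse_departure_date(text: str, today: date | None = None) -> date:
    """Example (2): accept several common formats instead of rejecting
    all but one. Example (4): refuse dates in the past."""
    today = today or date.today()
    for fmt in ACCEPTED_FORMATS:
        try:
            parsed = datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue  # didn't match this format; try the next one
        if parsed < today:
            raise ValueError(f"{parsed} is in the past")
        return parsed
    raise ValueError(f"Unrecognized date: {text!r}")
```

A few lines of standard-library code, no machine learning required, which is exactly the point.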

Defining Intelligence

So what does it mean to be smart? Software appears intelligent when it exceeds user expectations of its capabilities in ways that assist with task completion. Let’s pick that apart:

Exceeds User Expectations

A user approaches a product with multiple layers of expectations: her understanding of device and platform capabilities; the functionality she’s seen elsewhere in this product or other, similar products; and her mental model of technology overall. A Google Search user, for instance, may believe (based on past experience) that it requires a concise collection of keywords (“401k max 2017” rather than “What’s the max I can put in my 401k this year”), and has come to expect that results will begin appearing as soon as she starts typing. She’ll bring those expectations to other Search experiences, too.

Smart is relative to these expectations, and since software is still rife with basic “dumbness,” the bar for “smart” (for exceeding expectations) is low. The following exchange does not occur among humans:

  • You: “Hey, when are you coming to visit?”
  • Me: “March 6.”
  • You: “Wow, that’s next month! Nice!”
  • Me: “Dude, great work understanding that date, even though I didn’t include the year.”

But this is exactly example (2) above, and our low expectations might lead us to think of it as “smart” in software. Simple rules, careful error-handling, and thoughtful use of data can go a long way.
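
Here’s a hedged sketch of that year-inference rule in Python. The post doesn’t spell out the logic, so this is one plausible version, with `infer_year` as a hypothetical helper: treat a year-less date as its next occurrence.

```python
from datetime import date

def infer_year(month: int, day: int, today: date | None = None) -> date:
    """Resolve a year-less date like "March 6" to its next occurrence.
    (Leap-day handling is omitted for brevity.)"""
    today = today or date.today()
    candidate = date(today.year, month, day)
    if candidate < today:
        # That month/day has already passed this year, so assume next year.
        candidate = date(today.year + 1, month, day)
    return candidate
```

Asked for “March 6” in February, this returns this year’s March 6 (next month, as in the dialogue above); asked in July, it returns next year’s. One comparison is all the “understanding” required.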

Note, however, that exceeding expectations is often a form of breaking them, and requires that we manage them. For example, if Google replaced its instant auto-completing results with something unequivocally better, it would still need to help users through the transition. With many forms of intelligence, it’s even harder: Google has supported much more natural, sentence-like queries for a long time, but I suspect many of us still go with keywords.

Assists with Task Completion

Products don’t earn points just for knowing stuff. It’s technologically impressive when we can understand user intent, pull in relevant data, draw conclusions, and make predictions. But if the resulting insights aren’t helpful to the task at hand, we look like four-year-olds interrupting the grown-ups:

  • Mom: “Hey, wanna do a date night tomorrow at our usual spot?”
  • Dad: “Sounds great.”
  • App: “Mom! Mom! Mom! I looked at your calendar and you’re free then! And here’s an article on the top three date spots in New York!”
  • Mom: “That’s nice, dear.”
  • App: “And tomorrow is also your brother-in-law’s second cousin’s wife’s birthday.”

To move the science of machine intelligence forward, we sometimes have to start with research goals rather than product goals. And that’s fine… as long as we don’t confuse successful research with successful product strategy and launch solutions to problems that don’t exist.

But helpfulness isn’t binary, either. In the date-night example above, the offered assistance might be worthwhile in some cases and not others. If we can’t address that by improving the technology, we may be able to do so by tailoring the UX to the algorithm: tuning the experience so it compensates for the algorithm’s limitations.
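
The post leaves that tuning open-ended, but here is one plausible shape it could take, sketched in Python with assumed names and thresholds: gate each suggestion’s placement on the model’s confidence, so that a shaky prediction never interrupts anyone.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    confidence: float  # model's estimated probability that this helps

# Assumed policy: the more interruptive the placement,
# the more confident the model must be.
PLACEMENTS = [
    ("notification", 0.95),  # interrupts the user
    ("inline_card", 0.75),   # visible, but easy to ignore
]

def placement_for(suggestion: Suggestion) -> str:
    """Return the most interruptive placement the confidence justifies."""
    for placement, threshold in PLACEMENTS:
        if suggestion.confidence >= threshold:
            return placement
    return "on_demand"  # surfaced only if the user asks
```

Under a policy like this, the date-night tip might earn an ignorable inline card but not a notification, roughly the difference between “That’s nice, dear” and a welcome assist.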

Upcoming Posts

Designing for intelligence requires new approaches, just as designing for mobile did. On the one hand, it’s nothing more than the application of existing principles to a new medium. But that medium is different: probabilistic and uncertain, adaptive and personal, and error-prone in new ways.

This is the first of a five-ish part series exploring the topic. In future posts I’ll cover:

  • Avoiding technology for its own sake, and staying grounded in user needs
  • Embracing uncertainty
  • Co-evolving the UX with the algorithms
  • Rethinking research
  • Assistants and bots

…and whatever else comes up in the meantime, based on your feedback. I’ll add links to subsequent posts here as I publish them. Enjoy!
