Designing AI That Doesn’t Scare Your Parents
I was the first design hire for an R&D team, outnumbered 20:1 by computer vision and machine learning engineers.
In my second week, someone strung a banner above our cluster of desks after hours. A clip-art approximation of Michelangelo’s The Creation of Adam, only God was replaced by a robot. Just below their outstretched hands read ‘Zero Human Effort’ in deeply unserious WordArt type. It suggested a sanctity in automation, or perhaps our rebirth in the image of AI, all with the reverence of a yard sale.
The banner was easy enough for us to ignore, dangling just above eye level. But from across the office it hovered like a tooltip for anyone wondering what we were working on in R&D.
We were zeroing human labor, more or less. There was tension between the more and the less — with some leaping to replace users wholesale, and others looking for incremental ways to assist them. The closer you were to the UI, the more likely you sided with the latter.
This conflict has existed since at least 1968. That year, Doug Engelbart gave The Mother of All Demos — debuting the mouse, hyperlinks, and multiplayer UI. He presented a digital shopping list, adding and removing items, creating and collapsing hierarchy, then forgot to save and lost his progress.
In response to the demo, Marvin Minsky debuted trolling. He asked Engelbart:
“Why a shopping list? We’ll have machines that will do all this for us in just a decade.”
Minsky was a visionary himself. He imagined computers that think and operate like us, better than us. Engelbart was only interested in using tech to complement and amplify our natural abilities. AI — Artificial Intelligence vs. IA — Intelligence Augmentation.
Minsky’s dream seems inevitable, but when? Full human replacement is surely on the horizon, as it was in 1968. A target every bit as elusive as a literal horizon. In the meantime, humans will use the tools we build.
After all, I still use a shopping list.
IA aims to make humans as efficient as possible. This means automating some of the work, of course. The difference is in approach: do we remove the human from the task, or the task from the human? Both roads lead to full automation, given enough time — but the latter is a much nicer ride.
—
Below are some principles I settled into over a few years of trial and error. I’m not an HCI grad or an automation engineer, just a product designer who was thrown into the deep end. I imagine a lot more of us will be in that position soon.
Given that AI is the established catchall for all sorts of automation, I’ll use that term from now on, even if these ideas align more with IA.
Humans are robust, machines are accurate
I heard this from an automation engineer. It nails how friendly AI experiences are structured.
Any AI with an interface is a composite process: a mix of human and machine contributions that adds up to more than the sum of its parts. It’s important to assign each task to the right side.
Humans are good at:
· Reading context
· Thinking in the abstract
· Symbol manipulation
· Associative indexing
· Fuzzy logic
· Solving edge cases
A ribbon tied around a branch means nothing intrinsically, but a hiker will immediately understand it as proof that another human was there, even if they’ve never seen it done before. Humans are the right tool for an ill-defined task.
Machines are good at:
· Accuracy
· Sensing
· Operation speed
· Duty cycle
· Data processing and recall
· Validations
· Multi-pass trials
A human at an archery range always fires in the right direction, but they won’t consistently hit the bullseye. A machine will always hit the bullseye, even when it’s screen-printed on an employee’s uniform. A good composite process puts the human in charge of target selection and the machine in charge of accuracy.
Automate pain points first
You don’t have to memorize the lists above. If you watch a user work through a problem, it should be obvious which task is hardest. Automate just that. Then look for another. Avoid tasks they’re good at or enjoy, even if they are easy to automate.
Machines should react to humans
As a system becomes increasingly complex, the value of a user’s contribution depends on how much freedom they have to work in a disorderly way. Likewise, the flexibility of an AI determines the value of its contribution. This usually means reacting to the user. Avoid dictating a flow that suits the algorithm’s logic.
Predicting intent is not something I’ve had any success with in AI. Meet a user where they’re working, operate on their terms.
Build at human scale
Humans learn and work in small steps that build on each other. We can only process so much information at a time. There’s a natural distrust of anything that operates above our scale. If your AI feels like a black box, try breaking it into smaller interactions.
A basic capability is something we consciously do, made up of smaller unconscious actions. This is a good enough starting point for sizing interactions.
There is some engineering work in breaking up an algorithm into discrete services, if it wasn’t architected that way. Seems to me that, like web components, this only makes the work more useful.
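To make that concrete, here is a minimal sketch of the difference, with entirely hypothetical names: one opaque call versus the same capability split into steps the UI can invoke and present one at a time.

```typescript
// Hypothetical names throughout; a sketch, not a prescription.

interface Suggestion {
  start: number;       // where in the text the suggestion applies
  end: number;
  replacement: string; // what the machine proposes
  reason: string;      // why, in terms the user can judge
}

// Monolithic: one opaque call, one finished result to take or leave.
async function cleanUpDocument(doc: string): Promise<string> {
  return doc; // placeholder for the whole pipeline running end to end
}

// Discrete services: each step is small enough to show, accept, or undo on its own,
// and the UI decides when each one runs.
async function checkSpelling(sentence: string): Promise<Suggestion[]> {
  return []; // placeholder
}

async function checkGrammar(sentence: string): Promise<Suggestion[]> {
  return []; // placeholder
}

async function tightenWording(paragraph: string): Promise<Suggestion[]> {
  return []; // placeholder
}
```

The discrete version costs more plumbing, but each call lands at roughly the size of one basic capability.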
Basic Capability Example
In the face of crippling imposter syndrome, I’ve opened a text editor and started writing an article about AI. I don’t think about typing the letters, and I hardly notice the words. I think in statements; I’m trying to nail down a complete thought before moving on to another.
Letter-by-letter spelling corrections would make it hard to write anything coherent. Suggesting a correction as I finish typing a word is fine, but not ideal. I’m writing a statement, not a word, so the machine is interrupting me.
I’d rather the spell-checker hold suggestions until I pause to regroup. That’s the smallest task I’m performing consciously.
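A rough sketch of that behavior, assuming a hypothetical spellCheck service and showSuggestions UI hook: hold the machine’s turn until the typing stops for a beat.

```typescript
// spellCheck and showSuggestions are stand-ins for whatever service and UI you already have.
async function spellCheck(text: string): Promise<string[]> {
  return []; // placeholder for the real checker
}

function showSuggestions(suggestions: string[]): void {
  // placeholder for the real UI
}

const PAUSE_MS = 800; // roughly "I have stopped to regroup"; tune it to the writer, not the model
let pauseTimer: ReturnType<typeof setTimeout> | undefined;

function onKeystroke(currentText: string): void {
  // Every keystroke resets the clock, so nothing interrupts a statement in progress.
  if (pauseTimer !== undefined) clearTimeout(pauseTimer);
  pauseTimer = setTimeout(async () => {
    showSuggestions(await spellCheck(currentText));
  }, PAUSE_MS);
}
```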
Friendly AI is collaborative
When AI combines too many basic capabilities without checking in, you get a black box. Keep the feedback loop tight.
When helping a user with something in real time, normal feedback frequency applies: anything under 100ms will feel interactive.
In non-interactive tools, give the human something to do, even at the cost of slowing down. An engaging tool is a well-used tool. Losing user focus is a good way to invite mistakes and frustration.
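One way to keep that loop tight in a batch tool, sketched here with hypothetical names: run one step at a time and hand control back to the user between steps.

```typescript
// A sketch of a batch process with check-ins; every name here is hypothetical.

interface StepResult {
  summary: string; // what the machine just did, described in the user's terms
  output: unknown; // whatever the step produced, passed along to the next step
}

type Step = (input: unknown) => Promise<StepResult>;

async function runWithCheckIns(
  steps: Step[],
  input: unknown,
  confirm: (result: StepResult) => Promise<boolean> // the human's moment to steer or stop
): Promise<unknown> {
  let current = input;
  for (const step of steps) {
    const result = await step(current);
    // Slower than running end to end, but the user stays oriented
    // and mistakes surface one step deep instead of at the very end.
    if (!(await confirm(result))) break;
    current = result.output;
  }
  return current;
}
```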
Don’t show off
If a tool is meant to be used daily, it should feel like an extension of the user, disappearing behind the task. Surprises are best left for novelty experiences. Do you want users to be invested or impressed?
To the average user, AI presents as just another piece of UI. That UI can be specific and helpful, or broad and unwieldy. Focus on the task, not the tool.
Give the user control of scope
Algorithms handle complexity for us. Friendly ones are like an iceberg—they show only what we can manage. Let the user specify what they need, and when. Don’t make the mistake of mirroring the code in the UI. Normal progressive disclosure principles apply here.
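As a sketch of what that can look like at the API level (the names are hypothetical): the pipeline does whatever it does, and the user only chooses how much of it they want to see.

```typescript
// Hypothetical sketch: one analysis, three levels of disclosure chosen by the user.

type Scope = "summary" | "issues" | "full-report";

interface AnalysisView {
  scope: Scope;
  items: string[];
}

async function analyze(doc: string, scope: Scope = "summary"): Promise<AnalysisView> {
  // Placeholder for the real findings; the underlying work is the same for every scope.
  const everything = [`finding about "${doc.slice(0, 24)}"`];

  switch (scope) {
    case "summary":
      return { scope, items: everything.slice(0, 1) };  // just enough to act on
    case "issues":
      return { scope, items: everything.slice(0, 10) }; // the things worth a look
    case "full-report":
      return { scope, items: everything };              // everything the model saw
  }
}
```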
Allow fuzzy logic
Don’t force a user to operate like a machine; allow fudged inputs where it’s safe to. The AI should conform to how humans think, not the other way around.
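A small sketch of the idea (the mappings are invented for illustration): accept the phrasing people actually use and let the machine do the conversion.

```typescript
// A sketch of tolerating fudged quantities; the rules are invented for illustration.

function parseQuantity(input: string): number | undefined {
  const text = input.trim().toLowerCase();

  // Common fudge words map to sensible numbers.
  if (text === "a couple") return 2;
  if (text === "a few") return 3;
  if (text === "a dozen") return 12;

  // "about 12", "~12", and "roughly 12" all land on 12.
  const fuzzy = text.match(/^(?:about|around|roughly|~)?\s*(\d+(?:\.\d+)?)$/);
  if (fuzzy) return Number(fuzzy[1]);

  return undefined; // if it cannot be made sense of, ask the user rather than reject
}
```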
Slow down
It’s easy for AI to make a user feel like the weak link. If your tool takes the fun out of a task, they might decide not to do it at all.
Make tools that are fulfilling to use. Don’t be afraid to sacrifice functionality or speed in the process.
—
If you’d like to dig deeper, Engelbart’s “Augmenting Human Intellect” white paper is worth the read.