Designing Automation Systems to Be Calm: Five Principles
A Future for AI & Automation We Can Live With, Part III
Say you’re a smart home designer who wants to build an automated system that’s able to tell when someone in a large house is cooking. How would you do it?
I posed this challenge to developers and engineers at conferences, dinner parties, and meetups. “That’s simple,” one engineer told me, “just install a security camera in the kitchen to see if the stove is being used, or if there are people in the kitchen for a while.” Most of the responses were the same. Install a video camera. Use a privacy-invading machine-vision algorithm, and send the data to the cloud for analysis. Problem solved?
The next question I’d ask is this: “How would you build the same system on a budget of $40? Could you do it without algorithms, cloud computing and storage?” And then, “How would the system check for false positives? For instance, if a crowd of people is in the kitchen during a party, but not necessarily using the oven, would the alarm still go off?”
Use the Least Amount of Automation to Get the Job Done
The co-founder of my second startup created exactly this system when he was in college. His small budget forced ingenuity through constraint. Instead of using face recognition to determine who was in the kitchen, he looked for small signals that might indicate behavior. He eventually noticed that when the oven was turned on, the temperature in his kitchen increased by an average of 10 degrees. He installed a cheap temperature sensor next to the oven, and connected it to the house’s local area network and IRC. Whenever the temperature in the kitchen rose 10 degrees above the norm, the computer would say, “Yum! What’s cooking?” His system always worked.
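The whole detector fits in a few lines. Here’s a minimal sketch of that baseline-plus-threshold logic; the sensor reading and the IRC message are stubbed out, since the original hardware details aren’t specified:

```python
from collections import deque

class CookingDetector:
    """Flags 'cooking' when the kitchen runs ~10 degrees above its recent baseline."""

    def __init__(self, threshold=10.0, window=60):
        self.threshold = threshold
        self.readings = deque(maxlen=window)  # rolling baseline of recent readings

    def update(self, temp):
        # Compare the new reading against the average of recent ones,
        # so seasonal or daily drift doesn't trigger false alarms.
        baseline = sum(self.readings) / len(self.readings) if self.readings else temp
        self.readings.append(temp)
        return temp - baseline >= self.threshold

detector = CookingDetector()
for t in [68, 68, 69, 68]:   # normal kitchen temperatures
    detector.update(t)
if detector.update(79):      # oven heats the room past the threshold
    print("Yum! What's cooking?")  # in the original, this went out over IRC
```

No camera, no cloud, no machine learning: one sensor, one comparison, one message.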
All too often, we assume automated systems must be complex and imposing. We take the damage they can do to cultures and peoples for granted, as a necessary evil in the name of efficiency.
A Calm Technology approach aims to achieve more efficiency by making automation simple and unobtrusive — and searching for friction points where it is not. In the case of home automation, it should give us peace of mind. It might only let us know when a problem is critical, as when a pipe might burst from freezing, or that there’s a leak in the sink (or soon will be). But sensors running on a local box can tell us that — with no need for a network connection, let alone machine learning.
Here are four more Calm Technology design principles:
Improve Efficiency Before Introducing Automation
Efficiency is almost always the primary argument for implementing automation, but that can overlook two key factors:
- Automation often papers over deeper-rooted issues: If you’ve worked at a large company, you know that during a retreat or company-wide meeting, management might gain some crucial understanding of a systemic problem, and how to fix it. But then a “fire” starts, and the fix is back-burnered. Actually addressing underlying problems can reduce the cost of automation in the first place. Consider what’s crucial to the company, and don’t automate things that aren’t core to the company mission to begin with. For similar reasons, be wary of replacing personnel with automation, especially those at the bottom: they tend to be the people with the most direct insights on improving efficiency at a more substantial level.
- Automation inherently introduces new inefficiencies: How many times have you seen retail employees apologize on behalf of a slow chip card reader, or a confusing touchpad system? In all likelihood, they had no buy-in on the system, which was imposed from the top down with no trial period that would have let employees select the one that worked best. The problem is made worse when automation is imposed on the customer. I’ve stood in numerous automated grocery store checkout aisles where the line is longer than at the human cashiers, since customers struggle and stumble with the self-checkout user interface, until an employee must come help them complete the automated task a trained cashier once did well.
This last point takes us to the next principle:
Avoid Imposing Automation on the Wrong People
We must realize that automation systems are almost always created by people who never grew up with them and don’t use them on a regular basis, yet who still make assumptions about how people want to live.
I often think about an upper-middle-class startup CEO who discussed automation with a BART station ticket agent. An interviewer from the Marketplace podcast went with him to record the conversation. They descended the steps to the transit system and walked up to the window:
As if on cue, a frantic looking lady with a suitcase [ran up to the window]
The woman explained she was rushing to the airport and couldn’t figure out the ticket machine.
Landry [the station attendant] came out of her booth to help, and with the press of a few buttons handed the thankful woman a ticket and wished her a good day.
Then Landry turned back to us, smiling.
“See, everything can’t be automated,” she said. “A lot of people do feel better having another person there.”
San Francisco-based ticket counter agents make an average base salary of about $64,000, not including overtime, health care or pension benefits, which the startup CEO felt was “a lot of money for something he had in his pocket”.
Automation often caters unconsciously to the needs of able-bodied people, or younger people raised on smartphones, while ignoring everyone else.
Not everyone has a phone in their pocket, a country-wide data plan, or an inherent understanding of how to buy a transit ticket on their phone. Customer service agents often step in when the technology fails, or there is a human/technology mismatch. This is why the job of the ticket agent is so crucial, and should be well-paid.
If we are forced to automate, how can we ensure these systems will help those with lost phones, dead batteries, or no network access? What about visitors to the city? I’d rather talk with a ticket agent in a new city than bury my head in the slow-loading dystopia of an international data plan. And I still can’t understand how automation might provide a friendly smile across cultures. By stepping away from the assumption that everyone can access technology, we can understand the human-centric role of the ticket agent.
Recognize That Automation Can Never Account for Every Real World Scenario
The web is easy to analyze because every click and key tap can be captured. The real world requires countless sensors, and there will never be enough to calculate every condition that emerges from the millions of micro-interactions happening every hour in a single office building, let alone on a single city block. Instead we must design for failure, and with backups.
This point was painfully illustrated to me during a recent speaking trip. At a European airport, I had to pass through a corridor with two locked gates which wouldn’t open unless you waved your airline ticket at an automated sensor. But I had folded the ticket into my pocket during the flight, bending it just enough so the sensor couldn’t identify it. I was trapped in this limbo walkway between the two gates, and couldn’t get out until I saw a security camera. The only thing I could do was jump up and down, frantically waving until an actual human noticed my predicament.
When people assume they can design a perfect automation system, they forget the real world has all kinds of ways to induce failure. Automated system design must start by assuming (and imagining) the worst case scenarios. We can’t assume battery life will always be good, or that computing resources will always be available. We have to design technology that still works when it breaks.
When escalators lose power, they become stairs. Any automated system which doesn’t have a similar non-technical fallback is ripe for disaster.
Amplify What Humans Do Best — and Amplify What Machines Do Best
Humans excel at curation, context, service, creativity, compassion. Automation is at its best when helping us make a decision by presenting likely options (or multiple options, like Google), pre-filling forms, or making suggestions (as discussed in Part II, this is a centaur relationship between human and computer). Because automation can help discover patterns amid massive troves of data, people can leverage those patterns to help understand or predict trends. Through machine learning, credit card companies know precisely what percentage of fraud they can automatically flag, and what percentage doesn’t fit typical patterns; those cases are sent to humans.
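That centaur split can be sketched as a simple confidence router. The `score` here stands in for the output of a hypothetical fraud model (a probability between 0 and 1), and the thresholds are illustrative, not anything a real issuer publishes:

```python
def triage(fraud_score, auto_block=0.95, auto_clear=0.05):
    """Route one transaction: automate the clear-cut cases, escalate the rest.

    fraud_score is a hypothetical model output between 0 (clearly fine)
    and 1 (clearly fraudulent).
    """
    if fraud_score >= auto_block:
        return "block"          # fits the fraud pattern; safe to automate
    if fraud_score <= auto_clear:
        return "clear"          # clearly legitimate; safe to automate
    return "human_review"       # doesn't fit either pattern; send to a person

# The machine handles the obvious ends of the spectrum;
# the ambiguous middle goes to a human.
print(triage(0.99))  # block
print(triage(0.01))  # clear
print(triage(0.40))  # human_review
```

The design choice is that the machine never decides the ambiguous cases; it only narrows the pile a human has to look at.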
Automation is at its best when it helps inspire people to be more creative and gain new perspectives. It’s also for this reason that humans must always check the results of automation manually, to root out the human-created algorithmic biases that inevitably enter and propagate through a system. As my colleague Joy Buolamwini powerfully demonstrated, we can claim an “AI” is neutral, but in reality, it’s highlighting the unconscious biases of its creators.
There’s a certain, essential human honor to doing complex physical tasks and doing them well: it creates a “flow state” that seems essential to happiness. This is less true for white-collar workers and people in the knowledge/creative economy, but we still see it in roles that rely on motor coordination and acute awareness of the material environment: plumbers, woodworkers, and contractors, for example. It was key to my mother’s job satisfaction as a Master Control Operator. These are roles that don’t make sense to fully automate.
But a contractor using the latest in automation to determine the layout of electrical wiring in a house from the 1980s? That’s using technology as a tool, alongside us. This is how humans have always evolved — alongside technology.
How do you think we can make automation better, or use it alongside us as a helpful system, while respecting the human? Do you have a good example of one of these systems, past, present or future? I’d love to hear your thoughts on Twitter!
Article note: Microsoft’s Inclusivity toolkit has a process for determining whether a system makes sense for a variety of contexts and abilities.