Autonomy, Dark Patterns, and Death

Michael Moreau
Published in The Startup
Jun 12, 2019 · 6 min read
What are the dangers of having an incomplete picture of your plane?

Redundancy, Redundancy, Redundancy

Earlier this year, I interviewed a chemical engineer as part of a research exercise on how our users think about their equipment. I asked him about the kind of data he tracks, which led to a conversation about the sensors built into the system. He described a large chemical tank at the beginning of the process, where the major data point they were interested in was the level of material in the tank. The tank had multiple sensors just to measure this one value. In fact, it wasn’t just multiple redundant sensors of the same type; the sensors themselves used different techniques. One bounced sound waves off the top of the material and measured the distance, another measured the pressure underneath the tank and used that to calculate weight and volume, and a third was literally just a camera mounted on the tank so that an operator could do a manual, visual check.

This kind of redundancy in a chemical processing plant, not just in the number of sensors but in the methods of detection, is an absolute necessity to prevent potentially dangerous errors. That is why it came as such a shock to me, reading about the recent crashes in Ethiopia and Indonesia, that the airplane involved, a Boeing 737 Max, relied on a single sensor to determine whether a specialized algorithm should pitch the nose of the plane down, even going so far as to fight the pilots’ actions. Even more distressing, the Max didn’t come with a means of informing the pilots of what it was doing: that feature cost extra.
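To make the cross-checking idea concrete, here is a minimal sketch in Python. The sensor names, units, and tolerance are hypothetical, not taken from the plant described above; the point is simply that two independent measurement methods are compared, and disagreement escalates to the manual (camera) check rather than silently trusting either reading.

```python
# Minimal sketch of cross-checking redundant, heterogeneous tank-level sensors.
# Sensor names, units, and the tolerance are hypothetical, not from a real plant.

def reconcile_tank_level(ultrasonic_m: float, pressure_m: float,
                         tolerance_m: float = 0.2) -> dict:
    """Compare level readings from two independent methods; if they disagree,
    don't silently trust either one, flag the tank for a manual (camera) check."""
    if abs(ultrasonic_m - pressure_m) <= tolerance_m:
        return {"level_m": (ultrasonic_m + pressure_m) / 2, "manual_check": False}
    return {"level_m": None, "manual_check": True}

print(reconcile_tank_level(3.10, 3.05))  # methods agree: use the averaged level
print(reconcile_tank_level(3.10, 1.40))  # methods disagree: escalate to an operator
```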

The Dispiriting Commonality of Dark Patterns

When I have written about design and industrial machinery before, it has been in the context of failing to prevent errors. While this is an issue in all software, it’s especially important in the industrial domain, where mistakes can lead to damaged equipment or, even worse, loss of life. Don Norman describes this issue in “The Design of Everyday Things” with regard to the nuclear power plant incident at Three Mile Island: “the plant’s control rooms were so poorly designed that error was inevitable: design was at fault, not the operators.”

This is usually an issue of information: too much information is presented, information is not prioritized correctly, too many non-essential alerts are shown (resulting in alarm fatigue), or the consequences of an action are not properly conveyed to the user. What happened with the 737 Max, though, was not just bad design (although it was that, too): it was a dark pattern.

“Dark patterns”, a term coined by Harry Brignull, describes designs that trick users into doing something they didn’t want to do, to the benefit of the business. It’s something we have all had to deal with. Sometimes it’s a download button that isn’t actually the right download button, but it can also be much more surreptitious, such as an app that collects personal data through something that should otherwise be innocuous, like a game or a flashlight.

While most dark patterns are annoying and unethical, they rarely cross the line into being dangerous (although those exist too: see stalking apps, malware, etc.). Businesses, most businesses, don’t want to kill their users. But what happens when safety is compromised because of business desires?

This is hardly the first time we have seen a company make a poor design decision based on money. In the past, though, it has usually taken the form of a company choosing a cheaper option to save money, rushing a product forward without proper testing, or simply not thinking through the consequences of a new feature. What’s remarkable about this situation is that:

  1. It involved an autonomous system acting on behalf of the user
  2. The system didn’t inform the user of what it was doing unless the user paid in advance

This is a new kind of dark pattern. This isn’t tricking a user into making a poor choice; it’s a system that makes the choice on behalf of the user but doesn’t tell the user why, possibly doesn’t even tell them that a decision has been made, and doesn’t reveal the information it used to make that decision. This is a dark pattern in an autonomous system.

Side Note:

While not the central theme of this article, it’s telling that the two airlines affected by this design decision were from Ethiopia and Indonesia. Ethiopia’s GDP is about $80B, while American Airlines has an annual revenue of $44B [Source: Wikipedia]. I don’t believe it’s a stretch to say that cost-conscious airlines are going to be the first to decline “extras”, but when those extras are necessary to the safety of the people on board, it’s the most vulnerable people who will suffer.

Increased data is supposed to lead to increased safety. At what point does charging for that safety become unethical?

Autonomy and Ethics

When writing about ethics and autonomous vehicles, the focus is often on how an autonomous system makes a choice in an ambiguous situation, or one where there isn’t a “right answer”. Does an autonomous car perform an evasive maneuver that could kill the passenger in order to avoid hitting a pedestrian? These decisions are usually framed around something like the Trolley Problem: do you pull a switch on the train tracks, saving the lives of five people but killing one? Now, what if the autonomous system made the decision based not on a moral question but on which scenario would result in a less expensive out-of-court settlement for the auto manufacturer? What happens when the decisions an autonomous system makes are driven by business interests, and not by what’s in the best interest of the user?

To start, it’s important to specify what we mean by “autonomy.” Nathan Shedroff and Christopher Noessel discuss the difference between agency and autonomy in their book Make it So: Interaction Design Lessons from Science Fiction: “Agency … refers to a system’s ability to carry out known actions per predefined parameters. Autonomy refers to a system’s ability to decide to initiate new actions to help achieve a goal.”

Many of us are familiar with agency when we set the cruise control on our car to a certain speed. Some cars even have advanced safety features, such as Toyota’s “Pre-Collision System”, which, in certain situations, will apply the brakes for you in order to avoid a crash. Even Tesla’s Autopilot, which is capable of dodging another car that is merging into your lane, relies on agency and is assistive, not autonomous.

Planes have a lot of these kinds of features, too. A pilot I talked to described the ability to set a plane’s heading, throttle, and altitude (generally) while in flight. Larger commercial planes might even have an “autoland” feature which, while very advanced, still performs a specific function that requires input from the pilot. As we move closer to truly autonomous systems, how will they make decisions on our behalf?
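To make Shedroff and Noessel’s distinction a little more concrete, here is a small, hypothetical sketch in Python. The function names, thresholds, and scenarios are invented for illustration and don’t describe any real vehicle software; the contrast is simply between executing a known action within predefined parameters (agency) and initiating new actions in pursuit of a goal (autonomy).

```python
# Hypothetical illustration of the agency/autonomy distinction; names and
# numbers are invented for this sketch, not taken from any real system.

def hold_set_speed(current_speed: float, set_speed: float) -> float:
    """Agency: carry out a known action (hold the driver's chosen speed)
    within predefined parameters. It never decides to do anything else."""
    adjustment = set_speed - current_speed
    return max(min(adjustment, 5.0), -5.0)  # bounded correction only

def plan_trip(destination: str, congestion: float, battery: float) -> list:
    """Autonomy: decide to initiate new actions (reroute, stop to charge)
    in service of a goal the user stated only at a high level."""
    actions = []
    if battery < 0.2:
        actions.append("stop at charging station")  # the user never asked for this
    if congestion > 0.7:
        actions.append("take alternate route")      # nor for this
    actions.append(f"drive to {destination}")
    return actions

print(hold_set_speed(100.0, 105.0))                        # agency: small, bounded action
print(plan_trip("the office", congestion=0.9, battery=0.15))  # autonomy: new actions chosen
```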

Imagine that you asked your autonomous car to drive you to McDonald’s, but the manufacturer had a deal with Burger King, so the car drove you there instead. Or imagine that an electronic medical record system in a hospital changed a doctor’s order for generic ibuprofen to a brand-name painkiller (but didn’t tell anyone) because the software maker has a deal with the pharmaceutical company. What if, instead of ibuprofen, it prescribed a potentially addictive opioid?

Let’s say an oil rig has an autonomous safety system. The system detects a sudden spike in pressure coming from a pipe and can divert it in one of two ways: a) toward a worker, killing them, or b) toward an expensive piece of equipment, destroying it. What if it’s cheaper to pay a death settlement than to idle the rig and replace the equipment?

We can’t say for sure whether more information would have allowed the pilots to prevent those crashes, but these are questions that need to be asked. To paraphrase Don Norman, when an autonomous system is in use, some very important pieces of information need to be communicated to the people on whose behalf it is acting:

  1. That an action is being performed
  2. What the consequences of that action are
  3. What input is being used to decide to take that action
  4. How to stop that action immediately

And you can’t charge extra for that information.
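As a rough sketch of what this might look like in software, here is a hypothetical notification contract in Python. The class and field names are invented for illustration (they are not drawn from any real avionics system); the idea is simply that an autonomous action cannot be represented at all without carrying those four pieces of information.

```python
# Hypothetical sketch: an autonomous action must carry all four pieces of
# information before it can be surfaced. Names are invented for illustration.

from dataclasses import dataclass

@dataclass
class AutonomousActionNotice:
    action: str           # 1. that an action is being performed
    consequences: str     # 2. what the consequences of that action are
    inputs_used: list     # 3. what input was used to decide to take the action
    how_to_stop: str      # 4. how to stop that action immediately

notice = AutonomousActionNotice(
    action="Automatic nose-down trim engaged",
    consequences="Aircraft will pitch down and descend unless overridden",
    inputs_used=["single angle-of-attack sensor reading"],
    how_to_stop="Trim cutout switches; manual trim wheel",
)
print(notice)
```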

Original article that inspired this:
https://www.npr.org/2019/03/26/707050572/boeing-737-max-software-fix-and-report-on-fatal-crash-expected-this-week

“The Design of Everyday Things”:
https://www.nngroup.com/books/design-everyday-things-revised/

“Make it So: Interaction Design Lessons from Science Fiction”:
https://rosenfeldmedia.com/books/make-it-so/

Harry Brignull’s Dark Patterns:
https://www.darkpatterns.org/
