On designers and machines (and the people that live among them)

Fernanda Bonilla
Uncommon Design Strategy
Jan 16, 2018

Not so long ago, “design” was confined to very specific tasks. It was easy to understand that it was a designer who created the poster you had in your room, or the dress whose price you were sure should not have that many zeros next to it. But to most people, it would have been difficult to imagine that design could in fact have significant economic, social, cultural and environmental impact. Until recently, the common perception was that designers and their methodologies were simply there to make products more attractive.

However, since the 1960s, design has evolved and inserted itself into more influential conversations. It has moved from a discipline whose main purpose was to create objects to one that uses curiosity and openness to empathize with people, understand them and apply that understanding to business strategies and technology development.

Nowadays, as design ventures into new territories, it is confronted with more difficult problems than ever before. We live in a world that is vastly reliant on technology and in a time where most of us depend on computers for both banal and important tasks. It is in this scenario that designers have been called upon to collaborate with engineers and psychologists to help improve the machines that surround us and our interactions with them.

Designing with our cognitive needs in mind

One field where it is obvious how critical it is to understand what people need before developing new technologies is aviation. In 2018 we are so used to the idea of planes “pretty much flying themselves” that we seldom include autopilot in our discussions about the potential safety of self-driving vehicles. We trust that while airlines do have many (MANY!) failures, they are mostly just inconvenient, never unsafe. And for the most part, we are right to believe this. Air travel is incredibly safe, and a big part of that is due to automation.

However, this doesn’t mean that technology isn’t fixing one thing at the expense of another. In her article for The New Yorker, Maria Konnikova explains that ever since automation was first introduced into the cockpit, there have been concerns about the effects it would have on pilots. In this thorough exploration of the topic, she discusses the work that Earl Wiener, a pioneer of human-factors and automation research in aviation, published in the late 1970s and ’80s.

According to Konnikova, after analyzing the interplay among automation, pilot error and accidents, Wiener concluded that many of the innovations designed to fix human error had come with unintended consequences. His studies indicated that as pilots were being freed of some responsibilities, they were struggling to remain focused and this lack of attention was leading to accidents.

The fact that computers had improved flying in many ways –including making navigation easier and increasing airplane stability– was never in dispute. Nevertheless, Wiener worried that, because of the difficulty in identifying and assessing these new conditions, pilots’ susceptibility to boredom and complacency might prove to be even more problematic than the issues they replaced. In the end, he believed that companies were not looking at the overall picture and were not paying enough attention to pilots’ abilities and needs.

Konnikova also mentions Steve Casner, a research psychologist who discussed his ideas on the topic in an article published in Slate in 2014. In Dumbing It Down in the Cockpit, Casner talks about the results of the study he and his airline pilot collaborator, Richard Geven, conducted at NASA Ames Research Center. The participants were 16 Boeing 747–400 pilots who were required to fly a full-motion simulator. The intention was to test both their stick-and-rudder skills and their cognitive skills. While the former were found to be “a bit out of shape, but mostly intact,” the latter didn’t fare so well.

Casner’s conclusion was not that pilots struggled to keep track of where they were, or failed to realize that their airspeed indicator had become unreliable, because they were getting lazy. Rather, it was that what was being asked of them went against human nature. He cited mind-wandering experts, such as Jonathan Smallwood, and explained that our minds are restless: if they are not given something stimulating to think about, they naturally drift onto something else. In his own words:

If someone asks you to monitor a light that is known to turn red once every few hundred hours, you find it mostly impossible to not think about something else more pressing or interesting. Many have pointed out that “sitting and staring” at a computer that does our job for us is not something that creative, interactive, problem-solving humans are cut out to do.

He then talks about one of the most complicated dilemmas facing the people currently designing automation. He alludes to the work of cognitive scientist Edwin Hutchins, and explains that the road ahead seems to lead to systems that are “imperfect enough to require pilots to take over when problems arise, but too complex for pilots to understand and too reliable for them to successfully stare at for any length of time.” This leaves designers with two options: either make something that involves pilots in a more meaningful way (an autopilot that flies the plane with them instead of for them), or design systems so sophisticated that they eradicate the need for human pilots altogether.

The first would require us to change the way we think about automation and reassess the kind of tasks that we need pilots to concentrate on and the ones we should relieve them from.

The second, regardless of how desirable it might seem, is still far off. In the meantime, we cannot keep ignoring pilots’ cognitive needs while we still require them to be able to respond in case of emergencies. Casner proposes that a way to attend to these needs is to train pilots through the use of learning apps. He even calls on game designers to collaborate with researchers in order to identify the remaining needs that went unexplored in his study and respond to them with something that is actually fun to play with.

Understanding our emotional responses

Our restless minds are not the only thing that teams of designers, engineers and psychologists are struggling to understand and respond to. People’s propensity to form emotional bonds with whatever we interact with, from animals and plants to, now, robots, has also been getting a lot of attention recently.

In her paper “My Roomba is Rambo”: Intimate Home Appliances, Ja-Young Sung discusses a research study she conducted on how people built affection for their Roomba (a small automated vacuum cleaner). Through phone and email interviews, Sung and her team discovered that people gave their Roombas names, showed them off to their relatives (one participant recalled taking his Roomba to meet his parents) and felt incredibly protective of them.

Although the first version of this product was relatively faulty, four years after it was released 2 million units had been sold. People had adopted this clumsy machine into their lives to the extent that some users admitted to rearranging their furniture to make their houses more accommodating for their Roomba.

Why had people formed such an emotional connection with these machines? Well, some aspects of the Roomba were specifically designed to be cute and a little bit reminiscent of R2-D2. The first model even made sounds: when it accidentally bumped into a wall, it said “uh-oh.” But apparently even the fact that it didn’t work perfectly aided its adoption. Owners found the vacuum cleaner’s flaws endearing, and these imperfections helped them see the Roomba less as a machine and more as a pet.

But the Roomba is only one example of a machine that humans form emotional connections with. Our response to robots has proven to be so strong that some researchers fear soldiers might compromise outcomes on the battlefield because of the attachment they have to the EOD (explosive ordnance disposal) robots they operate.

This is why understanding what makes one machine more relatable than another is so important. Not only does it give an automated vacuum cleaner a competitive advantage, but it also helps people in life-or-death situations make the best decisions without being distracted by their emotions.

Thanks to the many proponents who have moved design into different environments and to the academics who have found multiple applications for its methodologies, its boundaries have been blurred. Nowadays, design itself has become difficult to define as it transforms from a discipline into a way of looking at the world and a humanistic approach to problem solving.

It is now more important than ever to participate in multidisciplinary teams and deepen our understanding of the impact our designs can have. In an era when so many world-shaping technologies are being created, and as machines become ever more intertwined with our lives, all of humanity’s quirks need to be taken into consideration and designed for.
