The machines’ hierarchy of needs
(This is an adaptation of a short talk I gave at the Copenhagen Institute of Interaction Design)
We humans invent tools and machines to augment our capabilities: we invented the axe to augment our physical capabilities, the car to augment our locomotion capabilities, and the calculator to augment our math and logic capabilities.
Machines augment us. But that’s only one way of looking at this relationship. We humans also augment machines.
When we take a Lyft or an Uber ride, we hire two machines (a smartphone and a car), and one human — the driver.
The smartphone tells the driver where to go, and the driver tells the car where to go. The driver is a link between a machine that gives instructions and a machine that takes instructions. At some point, the two machines will be able to talk to each other without the driver, and become one autonomous machine.
This isn't happening yet because cars are not good enough at understanding and reacting to the world around them. Cars need a human's senses, understanding and judgement to navigate our messy roads effectively without causing trouble. Today, we humans are augmenting cars.
We can design better machines if we know what they need.
As interaction designers, we focus on designing interfaces and relationships between people and technology, or humans and machines. We paint this relationship as one-to-one, but we tend to put our focus on the human side.
We often use a design approach called human-centered design, where we look at user pain points and user needs. To do that, one of the main skills of a designer is being able to empathize with people — understand their contexts, motivations and needs by putting ourselves in their place. And this makes sense, since we’re designing for people.
However, this is a rather anthropocentric way of looking at the relationship: we're also designing for machines. Shouldn't we try to understand machines' contexts, motivations and needs as we do with humans?
To explore this, I used a hierarchical framework like the one Abraham Maslow used to explain human needs and motivations.
The machines’ hierarchy of needs
Machines have five needs, with the most fundamental being at the bottom of the pyramid, and the most aspirational being at the top. The machine’s motivations to fulfill each of these needs share a common goal: to augment humans.
Similar to how Maslow describes human needs, multiple levels of motivation can occur at the same time, but one dominates the others and needs to be fulfilled before the machine can focus on the next one.
1. Be used
The first need for a machine is to be used. It’s like breathing, eating, or sleeping for humans. To be, machines need to be used.
2. Interact

The second need is to interact. By interacting with us and the environment, machines are able to accomplish more complex tasks.
Two things come with machines being interactive: they have states (On/Off, Playing/Recording, etc.), and an interface for the machine to communicate those states and for humans to modify them. Interaction design focuses on this particular need of machines.
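As a minimal sketch of this idea, a machine's states and the interface that reads and modifies them can be modeled as a small state machine (the states and button names below are hypothetical, chosen to mirror the On/Off and Playing/Recording examples above):

```python
from enum import Enum


class State(Enum):
    """The discrete states a simple recording machine can be in."""
    OFF = "off"
    PLAYING = "playing"
    RECORDING = "recording"


class Recorder:
    """A machine with states and an interface for communicating and modifying them."""

    def __init__(self):
        self.state = State.OFF  # the machine always starts powered off

    def press(self, button):
        """The interface: a button press moves the machine between states."""
        transitions = {
            ("off", "power"): State.PLAYING,       # powering on starts playback
            ("playing", "record"): State.RECORDING,
            ("playing", "power"): State.OFF,
            ("recording", "power"): State.OFF,
        }
        # Unknown combinations leave the state unchanged.
        self.state = transitions.get((self.state.value, button), self.state)
        return self.state


recorder = Recorder()
recorder.press("power")   # OFF -> PLAYING
recorder.press("record")  # PLAYING -> RECORDING
```

The interface here is the `press` method: it's both how a human changes the machine's state and how the machine reports the state it ends up in.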
3. Connect

The third need is to connect. Similarly to humans, machines are able to achieve more when they're connected. Once they're part of a network, they can exchange information and reach humans that are not in the same physical space, bringing interactivity to another level. With connectivity, machines can interact at scale.
4. Learn

The fourth need is to learn. With learning, machines become smart, able to make decisions by themselves. Most machines at this stage are connected, which makes learning more efficient since they can share patterns and learnings with each other. Being interactive is also key to their learning: our input helps them build criteria so they can make decisions that resemble the ones we'd make.
What’s next? They have one last need to fulfill before they can fully augment humans autonomously, their ultimate goal.
5. Understand

The machine's top-level need is to understand. They need to understand us (our needs, capabilities, intent), and the world around us and around them. That's where many machines are focusing their efforts today.
Let’s come back to the example of autonomous cars:
- They're able to drive by themselves because they're able to understand the world around them: roads, signs, pedestrians, other cars, etc.
- They're able to understand because in the past few years they've been learning a lot, both from their surroundings and from us: how we drive and how we react to different situations.
- They're able to learn really fast because they're connected. The data they collect is aggregated and the learnings are shared. Also, because a car is connected from day one, it knows as much as any other, more experienced car. This way, the first time a car sees a cyclist, it knows it's a cyclist, as opposed to a child, who needs to see a cyclist multiple times before being able to identify one.
- They've also been learning from us interacting with them: a) collecting data about how we drive and deal with different situations, and b) through human annotation of data they collect but don't yet understand, another example of us augmenting them.
Cars are moving up the pyramid of needs with a single goal: to augment human mobility without the need for another human.
The same way we designers try to understand people’s motivations and needs to better design for them, I believe that we will design better machines if we understand what they need, and why.
Photo credits: axe by Jeff Cress, rideshare app by Noel Tock.