by Alexandra Chang
As automated machines — driverless cars, robot vacuums, aisle-cleaning robots — become increasingly present, so do questions about how best to design these machines to interact with a world of people. Wendy Ju, Information Science, Jacobs Technion-Cornell Institute at Cornell Tech, studies this human-robot interaction. She spends a lot of time watching and parsing people’s actions.
“It’s delightful for me to watch people move around in a space,” says Ju. “We are not always thinking about the communicative signaling of our actions, and yet we know very well what we’re doing.”
Ju identifies and thinks about these signals and then uses them to inform design choices in machines. Her research goal is to understand what is required from various machines in order to create the seamlessness and ease people have with each other in day-to-day interactions.
Cars in Motion, and How People Interact with Them
One area ripe for research is the automotive industry. Ju has long studied how people interact with cars. In 2016, while at the Center for Design Research at Stanford, Ju worked on a project dubbed Ghost Driver. Her team observed how people on the street interacted with a driverless vehicle. How do they feel when they see a car with no driver? How can the car signal what it’s going to do?
The research required innovation in experiment design, since Ju wanted to observe how people interacted with the cars but not have anyone appear to be in the car. Her approach was to have the driver wear a car seat costume. “There’s a theater aspect to it,” says Ju. “But we can use that to design the interactions.”
Ju found that people pay much more attention to the motion of the car than to the driver. In crosswalks, pedestrians look at the wheel and then the bumper. When they sense that the car is going to stop, they don’t look further. If there is a breach in this interaction — for example, the car looks like it’s braking but eases up near the crosswalk — people will look up at the driver’s seat; but they still cross the road. “Walking down the road is so automatic that we do that when we are doing all sorts of other things,” says Ju. “Our priority is to safely get across the road, then post-analyze the situation.”
Prior to Ju’s study, the automotive industry had considered putting signs and lights on driverless cars to signal the car’s intentions to pedestrians. Ju’s experiment showed, however, that those would not be effective, since people determine what the car will do before they can see those signals. Instead, a driverless car would simply need to slow down as it approached a person for a seamless interaction.
In-Car Voice Interactions
In a continuation of automotive research, Ju has more recently studied the timing of voice interaction while driving. In a study called “Is Now a Good Time?” Ju and a team of researchers, including industry members from Toyota Research Institute, analyzed more than 60 drivers’ responses to 3,000 in-car voice interactions.
This study aimed to predict the best times to initiate in-car voice interactions, as well as times when such a system should not interrupt drivers. Currently, drivers must initiate voice interaction with a car’s system. As cars become smarter, they may begin initiating voice interactions, offering information such as the news or updates on the car’s maintenance. The timing of such interactions can be critical. “The commute can be boring, so people welcome activity,” says Ju. “But if you poorly time a voice interaction, it can be lethal.”
Ju’s study found that people don’t want to talk to a voice agent if they’re lost, have missed a light, or misunderstood a direction. “Once they’re not on the course they want, they don’t want to hear anything,” says Ju. Another time to avoid voice interaction is when a driver is coming to a stop sign or light. Yet, while drivers wait for a stop sign or light, it can be a very good time to interact. “Those are geographically close to each other, but we discovered they are situationally very different,” says Ju.
All of this research feeds back into the automotive industry, as Ju works closely with industry partners. “This is a moment where change is afoot in the auto industry,” says Ju. “There’s an important role academia plays in a very applied space. There are day-to-day implications for the work that we’re doing. It’s an exciting space to be in, and it’s easy to stay motivated.”
Robotic Chairs Could Do Many Things for Us
In another current project, entirely separate from cars, Ju is studying how people interact with robotic chairs. How can an automatically rearranging chair communicate its movements to a person in a shared space? And how will people navigate around such a chair?
“If you’re passing a chair in a hallway, you don’t run into it,” says Ju. The statement is almost funny in its simplicity, yet the humor indicates how implicitly we engage with everyday objects. For Ju, the next question was, “Is it more threatening if a chair has biological motion?”
A robotic chair could do many things — offer a person a seat, usher people to follow it, or rearrange itself in a space while navigating around people. Each of these tasks requires different types of gestures. In order to figure out how people will respond, Ju and her team strapped chairs onto robotic vacuums to control movement. They are now working on a hoverboard base.
“We do a lot of improvisation and exploration, because there are so many possible degrees of freedom,” says Ju. For example, in testing how a chair might communicate that it wants a person to move out of the way, the researchers had the chair simply pause in front of a person, move in a side-to-side gesture, or move in a forward-back gesture. They found that the pause and the side-to-side gesture were slightly less successful at getting people to move out of the way, while the forward-back gesture, although very clear, was often perceived as aggressive.
Ju also says that people do not feel comfortable passing a chair unless it is turned straight toward them. “The performed gaze makes them feel acknowledged and that they can move,” she says. “Everyone has the same reaction. From a purely logical perspective, it doesn’t make sense. The chair doesn’t really see you. But it works.”
“We’re going through the world with a lot of assumed rules,” says Ju. “They’re so assumed, we couldn’t even articulate them, but in the moment, we follow them. We have to teach machines these rules, and also how to change under different circumstances and environments.”