Why Don’t We Trust Robots?

Q&A with Dr. Neera Jain, assistant professor at Purdue University

--

It’s not our fault we’re nervous about the potential of robot overlords, but Purdue assistant professor Dr. Neera Jain is working on ways to help us feel more comfortable. The rise of automation and artificial intelligence, and their integration into everyday life, is driving an important conversation about how to improve human-machine relationships. Dr. Jain’s research looks at ways to measure trust in real time in order to design robots and machines that can sense humans’ levels of trust in them and adjust their behaviors accordingly. But how do you measure something as subjective as trust?

Dr. Neera Jain

Dr. Jain is an assistant professor at Purdue University’s School of Mechanical Engineering. She has a bachelor’s in mechanical engineering from the Massachusetts Institute of Technology, as well as a master’s and PhD from the University of Illinois at Urbana-Champaign.

We talked to Dr. Jain about all things robot and her research on measuring human trust.

Why is the relationship between humans and intelligent machines so important?

I think everyone has noticed the increasing prevalence of collaboration between humans and machines, particularly through self-driving vehicles and autopilot features in newer car models. Several companies are developing autonomous technologies that are having, and will continue to have, a significant impact on our lives. These technologies have the potential to help humans in their jobs and daily activities, such as robotic assistants that help elderly individuals in their homes or lift heavy objects on the manufacturing floor. But in order to have that impact, humans must be willing to engage and interact with these machines.

What are the challenges to attaining societal acceptance of robots and machines?

Humans, and society at large, have long shown reluctance to adopt new types of technology. Often, when a technology fails, it is because the engineers and others who created it did not study the factors, such as culture, that affect how humans perceive the technology. In other words, a lack of user-inspired design can be the pitfall of a new technology. My collaborator in this research, Professor Tahira Reid, specializes in incorporating human-centered considerations into human-machine systems and the engineering design process.

With robots and machines, there is the additional challenge that they embody some level of “intelligence,” and that can be scary for some people! When we ask humans to interact with machines that have the authority to make their own decisions, we may be giving up our own control in some situations and relying on the autonomous system to make the right decisions for us. Just as humans are wary of a human partner or teammate who might “drop the ball,” they are wary of robots that may do the same.

How do you measure human trust?

I would hesitate to say that we measure trust; I don’t know that we can ever be certain we have “measured” it. What we are trying to do is estimate human trust based on signals that we can measure.

My collaborators and I designed an experiment in which participants were told to imagine they were driving a vehicle with an intelligent sensor attached to the front that could detect obstacles. Participants could not see the road ahead; instead, they had to rely on the sensor, which reported either that the road ahead was clear or that there was an obstacle. The participants then had to decide whether to “trust” the sensor’s report and act accordingly. During these experiments, we collected psychophysiological data, measuring electrical brain activity and how the skin reacts to different emotional states, in order to draw our conclusions. We recently published our findings in a special issue of the ACM Transactions on Interactive Intelligent Systems (TiiS) on Trust and Influence.

While the model we have developed is technically only valid for the specific scenario in which we collected the human data, we are working to understand how far we can extend this approach.
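To make the idea of trust estimation concrete, here is a minimal Python sketch of an estimator of this general flavor: it maps features extracted from psychophysiological signals to a trust estimate between 0 and 1. The feature names, weights, and logistic form are illustrative assumptions for this article, not the model from the published study.

    import numpy as np

    def estimate_trust(eeg_features, gsr_features, weights, bias):
        """Map psychophysiological features to a trust estimate in [0, 1].

        `eeg_features` and `gsr_features` stand in for quantities extracted
        from brain-activity and skin-response signals; in practice the
        weights would be learned from labeled experimental data.
        """
        x = np.concatenate([eeg_features, gsr_features])
        score = np.dot(weights, x) + bias
        return 1.0 / (1.0 + np.exp(-score))  # logistic squashing to [0, 1]

    # Illustrative only: made-up feature values and weights.
    eeg = np.array([0.42, 0.13, 0.77])
    gsr = np.array([0.31, 0.05])
    w = np.array([0.8, -0.4, 0.3, -1.1, 0.6])
    print(f"estimated trust: {estimate_trust(eeg, gsr, w, bias=0.1):.2f}")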

Tell us about “self-correction” in robots. How will it help improve humans’ trust in them?

If a robot or machine has an estimate of a human’s trust or workload, it can respond accordingly to improve overall performance. For example, we recently designed a control algorithm for a virtual robot that helps the robot decide how to adjust its transparency, that is, the amount and type of information it shares with the human, based on the human’s estimated trust and workload. If the human is very distrusting, providing the human with more clarity (i.e., greater transparency) as to how the robot arrived at its recommendation can help to build trust. On the flip side, if a human is at risk of “over-trusting,” or blindly following the robot’s recommendation, providing high transparency can also help the human make a more informed decision. Alternatively, if the human’s workload is very high, the machine may try to show slightly less information so as not to overburden the human.
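As a rough illustration of the kind of logic such a controller might encode, the short Python sketch below maps trust and workload estimates to a transparency level. The thresholds, the three discrete levels, and the priority given to workload are assumptions made for this example, not the algorithm from Dr. Jain’s study.

    def choose_transparency(trust, workload,
                            low_trust=0.3, high_trust=0.8, high_workload=0.7):
        """Pick a transparency level for the robot's next communication.

        `trust` and `workload` are estimates in [0, 1]; the thresholds are
        illustrative placeholders, not values from the published work.
        """
        if workload > high_workload:
            return "low"     # a busy human gets less information, not more
        if trust < low_trust:
            return "high"    # explain the recommendation to rebuild trust
        if trust > high_trust:
            return "high"    # counteract over-trust with more information
        return "medium"      # trust appears reasonably calibrated

    print(choose_transparency(trust=0.2, workload=0.4))  # -> high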

This is just one example of how the machine can “respond” to the human based on their cognitive states. We are excited to continue researching different ways this can and should happen.

What will improved trust levels mean for human-machine collaboration?

Our goal is to improve trust calibration, that is, to help the human determine when they should trust the machine and when they should not. In doing so, we hope we can decrease a human’s mistrust of a machine or robot that stems from some misplaced bias, as opposed to distrust, which could be reasonable if a robot is behaving unreliably. Ultimately, we want machines to be tools that humans can partner with safely so that they truly have a positive impact on society.

--
