The Role of Trust in Simulation

Shield AI
3 min read · Dec 26, 2018

A conversation with Ali Momeni, Senior Principal Scientist and Director of User Services & Experience

This post is a continuation of our conversation On Human Computer Interaction. Read the first part here.

How does trust factor into what you do with simulation?

Trust is what we’re trying to create — the ability for a person to trust that a robot will do what is expected.

I think of a lot of the user experience team’s work as building trust. These interfaces have to feel intuitive and reliable. They have to make the user feel empowered to control complex systems, even when things are outside of their physical control. There’s a lot of nuance to that trust.

With simulation, I would say the trust equation is fairly transparent. Our goal is a system where, if something works in simulation, we can trust that it's going to work in real life; that's what we're striving for. And if something works in simulation but doesn't work in real life, there's an opportunity to learn. Because if you try something that you expect to work and it doesn't, there is insight in that as well. That's a data point: something you weren't able to model is affecting the robot's behavior. We want to find the edge cases that simulation missed.

What is your perspective on creating simulations that people could use for training? How does responsibility factor in when people will train on those simulations to protect lives?

We have to do it. The natural next step is to build upon this system to enable richer training experiences for our customers.

How do you see the development of those simulations progressing?

In the short term, we can create a very high-fidelity experience of what it feels like to be the robot, or to follow the robot on a mission. The next step is to create engaging experiences of what it feels like to launch the robot from a distance. What's much harder to reach is the team of people involved in a mission. The interpersonal interactions that would happen in a real use case will be very difficult to recreate in simulation. It's much easier to create a multiplayer video game in VR, and you see other organizations pursuing that. But those games are very controlled scenarios. A game made for the entertainment industry can have a lot of rules that we can't have.

Stay tuned for the next post on the Role of Trust.

We’re hiring! Join the Shield AI team!
