Prototyping the future at Element AI’s Applied Research Lab

Sharlene McKinnon
Dec 10, 2018 · 4 min read

Element AI’s new Applied Research Lab (ARL) is a key part of our efforts to understand the human side of artificial intelligence. Researchers in the lab are developing theories and testing prototypes to explore how humans and AI systems interact as AI transforms the future of work.

In the ARL, we take tough research questions and attempt to substantiate them using code, technology, and machines. We turn theory into action, and during the course of our work discover new questions, fail often, and move rapidly in a dynamic environment.

When we explore the interaction between humans and AI-enabled systems, we’re studying two different kinds: physical, involving systems or robots that use AI prediction to prevent problems in the physical world, and cognitive, where a system or robot is able to discern a human’s emotional or mental state.

Physical interaction with robots can improve safety and compliance. A robot working with food-preparation knives, for example, can anticipate danger and stop moving before it injures a person. In other situations, robots support a human worker’s activities, such as lifting heavy objects like construction plywood or car parts.

The common notion that robots of the future will function autonomously, without human companions, doesn’t describe the most likely outcome. In reality, the future of work will more likely mix human and robotic skills, blending the best of human intuition with the dexterity and strength of robots.

We’re already seeing this mixed-skills future in action. Canadian Tire added semi-autonomous vehicles to its distribution centre: the robots do the heavy lifting, while drivers have moved to other positions within the facility. At MIT, researchers used natural language to command autonomous forklifts. When a fully autonomous robot fails, production stops; by teaching robots to ask a human for help, we create a positive symbiotic relationship in which everyone benefits.

In the realm of cognitive interactions, a robot can assess a person’s mental state and attributes such as fatigue or attentiveness, then step in to assist when the person makes mistakes, is frail, or seems unable to perceive or respond properly to a potentially dangerous situation.

By building robots and AI that can perceive and react to certain emotional states, we can create a rich situational understanding that benefits people in need, such as helping children with autism spectrum disorders understand social situations. Researchers from Japanese conglomerate SoftBank did just that with Pepper, an emotional support robot.

Interestingly, Pepper’s creators ran into a challenge that all robot and AI creators face: user acceptance. SoftBank says Pepper’s curved design is meant to ensure a “high level of acceptance by users” because the company has found that a human-like shape can be a barrier.

The uneasiness people feel when a robot looks almost, but not quite, human (the so-called Uncanny Valley) is nothing new. It may well be biological, and figuring out a holistic approach to human-AI interaction is critical to the future of AI. To do that, we need to better understand the brain and what it means to be human.

How does all this relate to building the future in the Applied Research Lab?

One area we explore is accelerated human-in-the-loop model training. Training an AI model is a long and time-consuming process, especially when it involves natural language. We have experimented with augmented reality goggles to speed up the training process. This work came from one simple question: when you already have a good classifier, is there a way for a human to use augmented reality to accelerate the capture of new training information?
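The core idea behind that question can be sketched in a few lines of code. This is a minimal, illustrative sketch rather than our actual pipeline: it assumes an existing classifier with a scikit-learn-style predict_proba method, and it stands in for the AR-goggle interface with a hypothetical ask_human callback that a wearer might answer with a glance or gesture. The existing model pre-labels each example, and the human is only asked about the uncertain ones.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # accept the existing model's label above this score

def collect_labels(classifier, unlabeled_items, ask_human):
    """Pre-label items with the existing classifier; route only uncertain ones to the human."""
    labeled = []
    for item in unlabeled_items:
        probs = classifier.predict_proba([item])[0]
        guess = int(np.argmax(probs))
        if probs[guess] >= CONFIDENCE_THRESHOLD:
            # Confident prediction: keep the model's label without interrupting the human.
            labeled.append((item, guess))
        else:
            # Low confidence: show the model's best guess and let the human decide.
            labeled.append((item, ask_human(item, guess)))
    return labeled
```

Even in a loop this simple, the human’s time shifts from labelling everything to reviewing only the cases the model is unsure about, which is where most of the speed-up comes from.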

We also experiment with drones that capture video of subjects, with the goal of creating 3D models of people from 2D images. Again, the goal is to use existing trained models to increase the speed at which we can train more complex ones. Using such technology, we’re getting closer to creating 3D models that we can compress and send to people in other locations. This would allow for robust, real-time, augmented video interactions with people in 3D, like the holograms you see in movies such as Star Wars.
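To make the 2D-to-3D step concrete, here is a minimal sketch of one common approach, triangulation, under assumptions that go beyond the article: 2D keypoints detected by an existing trained model in two calibrated camera views (for example, two drone frames with known 3x4 projection matrices P1 and P2) are lifted into 3D points with a direct linear transform. It illustrates the general technique, not our specific system.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover the 3D point whose projections are pixel x1 in view 1 and x2 in view 2."""
    # Direct linear transform: each view contributes two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                # homogeneous solution with the smallest residual
    return X[:3] / X[3]       # de-homogenize to (x, y, z)

def triangulate_skeleton(P1, P2, keypoints1, keypoints2):
    """Lift every matched 2D keypoint pair into a 3D skeleton point."""
    return np.array([triangulate_point(P1, P2, a, b)
                     for a, b in zip(keypoints1, keypoints2)])
```

Repeated over every frame of drone footage, a sparse 3D skeleton like this is the kind of intermediate output that more complex 3D body models could then be trained against.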

Every day, we’re helping define the future of human-AI interaction and solving some of the toughest real-world problems in artificial intelligence and robotics. If that sounds interesting to you, click here to see our open positions. Join us in creating the future!

Element AI Lab

Scientists and developers at Element AI discuss the state of the art in artificial intelligence research and deployment.

Written by

Sharlene McKinnon

Geek | Anthropologist | Photographer | Dog Whisperer
