Perceptive Automata and Honda Xcelerator showcase self-driving technology at CES

We’ve had a very exciting year at Perceptive Automata, from launching out of stealth in July to our Series A funding in October. Our goal has always been to partner with as many automated driving ecosystem companies as possible to ensure that self-driving cars have the human intuition they need to drive safely and smoothly in our human-dominated road environments. To that end, we’re excited to announce that we’re collaborating with Honda Xcelerator at CES 2019.

Attendees visiting Honda Xcelerator’s exhibit at CES will get to sit in a ‘driving simulator’ with an interactive demo of our technology. Participants will experience what it’s like to encounter pedestrians, cyclists, and motorists from the perspective of a self-driving car as it tries to understand what humans might do next. Check out Honda’s teaser video!

Through this collaboration, we hope to demonstrate how Honda and other partners can use Perceptive Automata’s behavioral-science-based artificial intelligence, which detects and assesses the intent and awareness of pedestrians, cyclists, and motorists, to enable a safer and smoother driving experience for both humans and self-driving vehicles.

Our models use data from a vehicle’s sensors, including cameras and lidar, to make judgments in real time about what people might do next, allowing automated vehicles to act and adjust safely and smoothly on human-dominated roads, much as a human driver would.
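To make the idea concrete, here is a minimal sketch of what a per-frame intent-and-awareness pipeline could look like. All names (`HumanState`, `assess_frame`, `should_slow_down`), fields, and thresholds are hypothetical illustrations, not Perceptive Automata’s actual API, and the model itself is stubbed out:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HumanState:
    """Per-person estimate; field names and ranges are hypothetical."""
    track_id: int
    intent_to_cross: float   # 0.0 (no intent) .. 1.0 (certain intent)
    aware_of_vehicle: float  # 0.0 (unaware)  .. 1.0 (fully aware)

def assess_frame(detections: List[dict]) -> List[HumanState]:
    """Stub standing in for a learned model: maps raw sensor detections
    (e.g. camera crops plus lidar points) to intent/awareness scores.
    A real model would infer these from body-language cues; here we
    simply pass through placeholder scores."""
    return [
        HumanState(det["id"], det.get("intent", 0.5), det.get("aware", 0.5))
        for det in detections
    ]

def should_slow_down(states: List[HumanState]) -> bool:
    # Most caution is warranted when someone wants to cross
    # but probably has not seen the vehicle.
    return any(s.intent_to_cross > 0.7 and s.aware_of_vehicle < 0.3
               for s in states)
```

The planner can then consume these scores continuously, easing off the accelerator long before any change in the person’s motion is observable.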

At Honda Xcelerator’s exhibit, CES attendees will get a look at how our technology mitigates some potentially dangerous scenarios from the perspective of a self-driving car.

In the scenario above, our software can tell that the child is not aware of the car but still wants to run across the road to his mother, based on the same subtle body language cues that humans effortlessly use to ‘read’ other humans.

And our software can detect this well before the child’s motion changes from walking along the side of the road to running across the street. Typical physics-only systems have to wait until the sensors observe the child actually crossing before taking action, which reduces the time and distance available for the self-driving car to react.
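The value of earlier detection can be quantified with basic kinematics. The sketch below uses illustrative numbers (not figures from the source): at roughly 30 mph, recognizing intent even one second sooner buys the vehicle over 13 metres of extra buffer on top of its braking distance:

```python
# Illustrative numbers only; not measurements from Perceptive Automata.
SPEED_MS = 13.4          # ~30 mph, in metres per second
DECEL_MS2 = 6.0          # firm braking deceleration, m/s^2
EARLY_DETECTION_S = 1.0  # how much sooner intent is recognized

def braking_distance(speed_ms: float, decel_ms2: float) -> float:
    """Distance to stop from speed_ms at constant deceleration: v^2 / (2a)."""
    return speed_ms ** 2 / (2 * decel_ms2)

def extra_margin(speed_ms: float, lead_s: float) -> float:
    """Additional buffer gained by starting to react lead_s seconds earlier."""
    return speed_ms * lead_s

stop = braking_distance(SPEED_MS, DECEL_MS2)        # about 15 m to stop
margin = extra_margin(SPEED_MS, EARLY_DETECTION_S)  # 13.4 m of extra room
```

In other words, a one-second head start at city speeds nearly doubles the room the vehicle has to respond, which is the gap intent prediction aims to close.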

Another all-too-familiar road scenario is predicting the intention of cyclists before they act. In the above example, the cyclist hand-signals to indicate that she wants to turn left and move into the car’s path. Our software detects the cyclist’s intention in real time, before she starts moving in front of the car, so that the self-driving vehicle can slow down smoothly in anticipation of her likely action. Again, self-driving systems that rely on physics alone would have to brake later and much harder, because they can only react once they detect a change in motion.

We’re excited to show how our technology embedded in automated vehicles handles variations on these scenarios and many others that, up to this point, have been difficult and dangerous for ‘physics-only’ self-driving cars to handle.

Hundreds of innovative technologies will be presented at CES this year, and we’re ecstatic to be among them — especially in partnership with the Honda Xcelerator program. We’d also like to give a shout-out to the wonderful team at Honda Xcelerator for making this possible, especially Yusuke “Bob” Narita, Toshiyuki “KJ” Kaji, Raymond Zheng, Angela Rose, and Pavan Soni!

If you’re interested in following our journey, connect with us on Twitter @perceptive_auto. If you’d like to join us in pursuing our mission, check out our job openings.

Honda Xcelerator’s booth will be located at North Hall #7900. See you on the Strip!