Oryx Vision Eyes Fully-Autonomous Driving

Bessemer Venture Partners is actively investing in critical technologies for autonomous machines and services. Last week Oryx Vision came out of stealth, so I can finally talk about our interest in this sector.

Fully-autonomous vehicles aren’t expected to hit the market before 2020/2021. When they do, they will require depth sensors with a range and precision an order of magnitude greater than today’s mechanical scanning LiDARs (light detection and ranging) can deliver. In fact, vehicle manufacturers and suppliers say that even the newer “solid state” LiDAR technology will fall short of the performance demanded for mass-market deployment. They expect that LiDAR’s future may be restricted to 3D mapping and testing. While 3D mapping and testing are essential foundations for autonomous driving, that opportunity will be minor in comparison to the multi-billion-dollar component market.

We believe that the lack of a high-performance depth sensor is the primary reason big names in the industry are starting to hedge their claims that mass-market autonomous driving is around the corner. Without the requisite data, deep-learning-based autonomous brains cannot make the decisions a human driver could, impeding the market’s advancement.
Vehicle makers in Detroit, Germany, and Silicon Valley know that “99% autonomous” isn’t enough: most fatal accidents occur in the remaining 1%. Given that, it’s no surprise that Google cars are restricted to specific, low-speed, fully-mapped urban environments, and that Tesla recommends its Autopilot for two specific purposes: reliable highway driving and parallel parking. It also explains why Otto, with its narrow yet viable focus on long-haul trucking, was quickly acquired by Uber.

Everything outside simple urban and highway driving is extremely difficult for machines to navigate on their own. How does an autonomous car merge onto a highway with a fast-moving motorcycle coming from behind? It’s hard enough for human drivers to accomplish this routine maneuver! Want to drive autonomously in dense San Francisco fog? Even differentiating a bridge from a truck’s trailer can be challenging when facing the glaring sun. Partially-autonomous driving billed as fully-autonomous could result in horrible accidents, or subject drivers to the irritation of stop-and-start driving every time the system is challenged. Car makers know that deaths and driver frustration alike will be fatal to a mass-market rollout. Consumers expect nothing less than perfect performance.

Although autonomous vehicles will benefit from sensor fusion across multiple onboard sensors, including radar, infrared, and cameras, there must be a primary data source to feed the autonomous brain. Advanced depth-sensing devices will be an integral and strategic core of the autonomous vehicles that automakers are designing today. We believe the right technology and product stands to win a substantial portion of what will be a multi-billion-dollar component market as the first wave of commercially sold autonomous vehicles hits the market.

Unlike many other technology markets, autonomous driving is not a race to be first, because the earliest autonomous cars will likely be geo-fenced and restricted to heavily mapped regions. And although fully-autonomous vehicles won’t be mass market for several years, the key design decisions made now will set the industry on course.

Oryx Vision is bridging this technology gap to make autonomous driving a mass-market reality. Oryx uses nano-antennas operating at a safe 10µm wavelength to build a highly detailed depth representation of a car’s environment. At 10µm it can peer into the glaring sun and right through dense fog. Oryx’s superior technology will replace the bulky and expensive scanning LiDARs, such as those found atop Google cars.

Six years into development, the company has already demonstrated its technology and discussed implementation with most of the world’s leading car manufacturers and Tier-1 suppliers. Their enthusiastic reception compelled us to lead this Series A investment with a $16M check.

Oryx’s first-class R&D team is led by cofounder and CTO David Ben Bassat, an experienced entrepreneur in his own right. He spent the last six years perfecting and stabilizing the company’s core technology, which hinges on quantum mechanics and the wave-particle duality of light. Given his own experience in the automotive component industry at Vishay, David needed to be convinced of the technology’s manufacturing viability before seeking investment.

The company is led by someone we’ve bet on before. CEO Rani Wellingstein isn’t one to beat his chest based on personal reputation, but we’re big fans. He not only sold Intucell for $475M just 3.5 years after founding it, but also deployed Intucell’s technology at more than 10 of the largest cellular operators, making the company cash-flow positive on only $6M of venture funding.

Clearly, we believe in Oryx Vision, both the team and the technology. To make an early investment in this space, we needed certainty in the technology, confidence in the team’s ability to deliver product, and the patience to help them build a company years before the product hits the proverbial road. We think Rani and David will rapidly establish Oryx Vision as the premier sensor vendor in the autonomous automotive space.

Share your thoughts below or connect with me on Twitter.