Helping Make Robots Smarter

Welcoming Realtime Robotics to the Toyota AI Ventures portfolio

By James Kuffner, Chief Technology Officer, Toyota Research Institute

Visions of the future often feature robots. Whether working in factories, attending to humans, or piloting our cars, they compose a large part of our futurescape. While we imagine how robots could vastly improve our lives, one challenge roboticists inevitably confront is: how do we build robots that can move safely through our complex world? The field of robot motion planning has emerged with the goal of addressing this challenge.

In a factory or laboratory environment that is “structured,” robot motions can be pre-programmed to avoid obstacles. In homes and offices, the environment is “unstructured” and full of previously unknown objects. Because of this, the robot must compute safe motion plans online, avoiding obstacles in real time. This is a difficult problem that can demand enormous computational resources. While human beings learn as young children to quickly perceive and move through unstructured environments, modern robots still face formidable practical challenges in achieving safe, efficient motion planning.

That is why I am excited about Toyota AI Ventures’ recent investment in Realtime Robotics. I believe that the technology being developed by this Boston-based company could be game-changing for the field of robot motion planning. Founded by two Duke University professors, George Konidaris and Dan Sorin, Realtime Robotics focuses on reducing the time a robot takes to decide how to move through space. According to co-founders Konidaris and Sorin, their team has developed a specialized motion planning processor that enables robots to perform complex motion planning tasks up to 10,000 times faster than their predecessors, while using significantly less power [1].

What exactly does that mean for robots?

Both in scientific research and in business, ideas are sometimes “ahead of their time.” Limitations in technology, infrastructure, market size, or engineering feasibility can make a good idea impossible to implement. However, as science and technology progress, those limitations can be overcome, and suddenly the impossible becomes possible.

As an example, in the early days of computer graphics in the 1960s and 1970s, one of the important problems in image rendering involved developing efficient algorithms for visible surface determination to compute images involving large numbers of potentially occluding objects in a scene [2][3]. Early computers did not have sufficient memory to store large image raster buffers, so compact surface representations and vector-based rendering were often used. However, dramatic improvements in storage and memory technology enabled special purpose hardware with large image buffers to practically implement a new class of visible surface determination based on depth buffers, commonly known as Z-buffers [4]. Modern real-time graphics processors have been using variants of Z-buffers ever since. Z-buffer techniques are now a key, practical enabling technology for efficient, real-time graphics rendering. This important evolution happened because practical memory buffer limitations were overcome, and graphics systems could exploit the classic computing time vs. memory space tradeoffs.
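To make the Z-buffer idea concrete, here is a minimal Python sketch of depth-buffered rasterization: every pixel stores the depth of the nearest surface drawn so far, so visibility falls out of a simple per-pixel comparison, at the cost of a full-resolution depth buffer. The scene, depths, and characters below are invented for illustration, not taken from any real renderer.

```python
# Minimal Z-buffer sketch: trade memory (a per-pixel depth buffer)
# for a trivially simple visibility test. Illustrative toy example.

WIDTH, HEIGHT = 8, 4

# One depth value and one "color" (here, a character) per pixel.
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color = [["."] * WIDTH for _ in range(HEIGHT)]

def draw_rect(x0, y0, x1, y1, z, ch):
    """Rasterize an axis-aligned rectangle at constant depth z."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:   # closer than anything drawn so far?
                depth[y][x] = z   # record the new nearest depth
                color[y][x] = ch  # overwrite the pixel

draw_rect(0, 0, 6, 4, z=5.0, ch="A")  # far rectangle
draw_rect(3, 1, 8, 3, z=2.0, ch="B")  # near rectangle occludes part of A

for row in color:
    print("".join(row))
```

Note that draw order no longer matters: the per-pixel depth test resolves occlusion regardless of which surface is rasterized first, which is exactly what made hardware Z-buffers so attractive once memory became cheap.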

I believe that a similar evolution may be happening in the field of robot motion planning. State-of-the-art algorithms for motion planning in high dimensions typically involve online sampling-based tree search, such as RRT-Connect [5]. Back in 2002, Leven and Hutchinson proposed an algorithm based on using large memory buffers to store precomputed mappings between regions of the workspace and robot configurations [6]. Unfortunately, at that time, computing systems had neither sufficient memory nor sufficient computing resources to store mappings large enough to solve interesting problems of real-world complexity.
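A toy Python sketch of the precomputed-mapping idea behind [6]: offline, build a roadmap of robot motions and record, for each workspace cell, which roadmap edges sweep through it; online, collision checking reduces to table lookups followed by a graph search over the surviving edges. The roadmap, workspace cells, and swept-volume table below are invented for illustration; a real system would precompute millions of such entries.

```python
# Toy sketch of planning via precomputed swept-volume lookups.
# All data here is illustrative, not from any real robot.
from collections import deque

# Roadmap: nodes are configurations, edges are precomputed motions.
edges = [("start", "a"), ("a", "goal"),   # short route
         ("start", "b"), ("b", "goal")]   # detour route

# Precomputed offline: workspace cell -> edges whose swept volume hits it.
cell_to_edges = {
    (2, 3): {("a", "goal")},  # an obstacle here blocks the short route
}

def plan(occupied_cells):
    """Invalidate swept edges by table lookup, then BFS the pruned roadmap."""
    blocked = set()
    for cell in occupied_cells:
        blocked |= cell_to_edges.get(cell, set())
    graph = {}
    for u, v in edges:
        if (u, v) not in blocked:         # keep only collision-free edges
            graph.setdefault(u, []).append(v)
            graph.setdefault(v, []).append(u)
    # Standard breadth-first search from "start" to "goal".
    queue, parent = deque(["start"]), {"start": None}
    while queue:
        node = queue.popleft()
        if node == "goal":
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # no collision-free route survives

print(plan(set()))      # no obstacles: the short route is available
print(plan({(2, 3)}))   # obstacle prunes an edge, forcing the detour
```

The expensive geometric work (computing which motions sweep through which cells) happens entirely offline; the online cost is dominated by lookups and search, which is what specialized hardware can make extremely fast.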

Fast forward fifteen years: Realtime Robotics has developed a practical implementation of a variant of this technique in specialized hardware, exploiting recent advances in memory and computing technology. The result is the potential to deliver incredibly efficient, real-time motion planning for complex, real-world applications. Existing modern processors typically consume 200–300 watts of power and take anywhere from hundreds of milliseconds to tens of seconds to compute a safe motion plan. With Realtime Robotics’ technology, the same planning decision for a robotic arm with six degrees of freedom can often be made in less than a millisecond, using less than 10 watts of power.

I believe that Realtime Robotics’ fast and efficient motion planning processor opens up a whole array of possibilities for automation. Their technology enables new applications for industrial robots, as well as potential applications to automated vehicles, such as efficient navigation and motion planning in complex urban traffic environments.

All of us here at the Toyota Research Institute (TRI) are excited that our venture capital subsidiary Toyota AI Ventures has invested in Realtime Robotics. TRI and Toyota AI Ventures are both committed to improving human life through robot technology, and an investment in Realtime Robotics is a meaningful step in that direction.

References:

[1] http://pratt.duke.edu/news/robotic-motion-planning-real-time

[2] https://en.wikipedia.org/wiki/Hidden_surface_determination

[3] Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John. Computer Graphics: Principles and Practice (2nd ed.). Addison-Wesley, 1990. ISBN 978-0-201-12110-0.

[4] https://en.wikipedia.org/wiki/Z-buffering

[5] Kuffner, J.; LaValle, S. RRT-Connect: An efficient approach to single-query path planning. In Proc. IEEE Int’l Conf. on Robotics and Automation (ICRA 2000), April 2000.

[6] Leven, P.; Hutchinson, S. A framework for real-time path planning in changing environments. The International Journal of Robotics Research, 2002.
