Meet Realtime Robotics’ CEO Peter Howard and Chief Roboticist George Konidaris
Founded in 2016 by Duke University professors Dan Sorin and George Konidaris, Realtime Robotics makes motion planning processors that enable robots and autonomous vehicles to instantly react to their environments by computing how, and where, to move as their surroundings change. Toyota AI Ventures invested in Realtime Robotics (read the announcement here) because we believe in the team and think their technology could be game-changing, opening up new possibilities for automation.
In this Q&A, Chief Roboticist George Konidaris and CEO Peter Howard discuss how Realtime’s technology is enabling robots to generate collision-free motion in dynamic environments, unlocking the potential for complex use cases, like driving autonomous vehicles at normal speeds through urban environments. They also share tips for academics on vetting university policy governing intellectual property (IP) rights, and Peter offers advice on startup-building based on his experience taking two companies public and launching hundreds of products worldwide.
Tell us a bit about your backgrounds, and your work before Realtime Robotics.
George Konidaris: I am an academic who studies how to build intelligent machines. That interest has taken me all the way from Johannesburg, South Africa, where I was born, to U.S. academia, with a stop in the middle in the U.K. for a master’s degree. It’s been a fascinating ride. When we started Realtime, I was a professor at Duke, and the motion planning processor was one of the first collaborative projects I started there. It became very clear, very quickly that we had a real solution to an important problem on our hands. So, we became determined to actually make it happen, rather than just writing a paper about it.
Peter Howard: I have a particular passion and interest in business formation and the process of creating order and value out of formative chaos. My roles have included entrepreneur-CEO, investor, and board director. As CEO, I have founded and successfully grown five companies, leading two to IPOs, one to a strategic sale, and another to a major technology license. In between these roles, I have had the good fortune to support the launch of hundreds of innovative products while leading R&D and manufacturing services businesses in the US, Japan, EU, Singapore, and China.
How did you come to work together at Realtime Robotics?
GK: I was a professor at Duke in 2016. One day, I went to an engineering faculty meeting and struck up a conversation with Dan Sorin, a computer architect who became our co-founder. He seemed like a nice guy, so we went for lunch the next week to talk about how we might collaborate, and one thing led to another.
PH: In early 2017, Duke established an Office of Entrepreneurship and hired Bill Walker, a successful serial entrepreneur and Duke graduate, to lead it. After Bill had worked with Realtime’s founders for several months, they asked him to help identify suitable CEO candidates. Bill knew me through my acquisition of one of his ventures. He knew I had founded an earlier robotics company, and was still deeply involved in venture support and leadership, so he reached out to me for leads. After reviewing Realtime’s information, I was so excited that I flew out to Duke to meet the team. At the end of that meeting, I told them that I was interested in providing initial seed funding, and in joining as CEO.
George, in addition to your role at Realtime Robotics, you’re an assistant professor at Brown University. What advice would you give other researchers in a university setting who want to launch a startup?
GK: One of the things I’ve always loved about academia is the constant push to look beyond what’s possible right now — the attempt to imagine better technology, and then make the technology real. Robotics is an especially exciting field in that sense, and I’ve learned an incredible amount from the academic community about how to take a stray thought that pops into your head one day, and turn it into a real thing that makes something happen in the world.
I have been extremely fortunate in that Duke and Brown have both been very supportive of my work at Realtime, both in terms of being reasonable about IP rights and in viewing a research-based startup as a valid and important academic outcome. One piece of advice is that, when you are deciding which university to join, find out its policies and attitude. You never know when something you invent will turn out to be immediately applicable to a real problem, and it’s challenging enough to get that off the ground without having to battle university administrators. I know people at other universities who eventually just gave up out of frustration.
Continuing on that theme of applying ideas to the real world, what problem does Realtime Robotics solve?
GK: Fundamentally, motion planning is about how to generate movement on the fly. If the robot is in an environment — say a kitchen, with pots and pans and cupboards and counter space — and it wants to move its body to pick up a cup, how should it do so?
In motion planning, the robot perceives the objects around it, decides where it would like to be, and must find a trajectory through space that moves its body from where it is to where it wants to be — without smashing into any of the obstacles. This seems like an easy task — you do it hundreds, or even thousands, of times per day. But it turns out to be very computationally challenging. We in robotics have been trying to solve it for 30 years.
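To make the problem concrete, here is a deliberately tiny sketch of the perceive-then-plan loop George describes: a point robot on a 2D grid searching for a collision-free path around obstacles. This is an illustrative toy, not Realtime’s algorithm — real planners search a robot’s high-dimensional joint space, and `plan_path`, its grid world, and its set-based collision check are all hypothetical simplifications.

```python
from collections import deque

def plan_path(start, goal, obstacles, width, height):
    """Breadth-first search for a collision-free path on a toy 2D grid.

    A drastic simplification of motion planning: the "robot" is a single
    grid cell and collision checking is a set lookup. Real planners must
    do this in the high-dimensional configuration space of a robot body.
    Returns a list of cells from start to goal, or None if blocked.
    """
    if start in obstacles or goal in obstacles:
        return None
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk parent pointers back to the start to recover the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in obstacles and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    return None  # no collision-free path exists
```

For example, `plan_path((0, 0), (3, 0), {(1, 0), (1, 1)}, 4, 3)` routes the robot up and around the obstacle wall rather than through it. The hard part in practice is that every candidate step requires a collision check against the perceived scene, and those checks dominate planning time — which is exactly the computation Realtime’s processor accelerates.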
It’s also important to note that a robot that can’t generate motion in real time, in cluttered scenes that have not been arranged for its comfort, will never be useful outside of a very carefully controlled factory. Any home robot that can’t reach into your fridge to pull out a beer has failed in its primary job.
Our solution was to build a specialized processor — just as the GPU (graphics processing unit) is specialized for computer graphics — that dedicates custom-purpose circuitry to solving this very important problem. With just the right design — involving carefully co-designed software and hardware — we’ve found a solution that is widely applicable. Once our processor is in everybody’s robots, they’ll be much more useful and able to operate in any kitchen, any office, any factory — without requiring that the whole space be built around the robots’ inability to move without smashing into things.
PH: Interestingly, we have solved a problem that most robot system designers have simply accepted as a fact of life, so they employ groups of roboticists to create additional structure to tame it. The trouble with that approach is that any change to the resulting workflow requires recalling those scarce, expensive roboticists to make new tweaks, sometimes causing system users to just give up and sideline the robots. Our technology, paired with the right vision systems, gives system makers enormous flexibility to adapt to workflow changes in real time, all of the time.
Tell us more about your technology and some of its primary applications. What markets are you going after?
PH: We started with pick-and-place applications in internet retail. Robots in that space have already made strong inroads in carrying goods in, around, and out of warehouses, but are only beginning to penetrate pick-and-sort and pick-and-pack operations because of their greater complexity and variability.
We have two broad goals going forward: 1) we want to give autonomous vehicle (AV) makers a capability that enables these vehicles to drive safely at normal speeds in busy urban environments, and 2) we want to make increasingly flexible robots accessible to non-roboticists for an enormous range of applications.
Today’s AVs move slowly in urban environments. Enabling safe AVs that don’t clog urban roadways requires going beyond today’s AI-informed driving policies into real-time predictive analysis of circumstances changing from moment to moment. We are working to make that capability a reality and to put that tool in the hands of AV system makers.
To build a flexible robotic system today, even with our bottleneck-breaking motion planner, still requires sophisticated knowledge about vision systems, manipulators, task planning, robotic arms, and motion control. In other words, you still need a staff of PhD holders to knit it all together into a working system suitable for a given task or family of tasks.
Following the release of our motion planning solution, which we’ll announce in the near future, we will be working on expanding the solution envelope to encompass the thorniest remaining elements of the robotic stack, successively simplifying implementation. Our ultimate goal is to enable anyone with solid knowledge of a task they want to perform robotically to describe that task through software tools, which in turn generate design candidates with appropriate configuration files, removing the PhD requirement from the process.
Your team claims that Realtime Robotics’ processor can enable robots to perform complex motion planning tasks up to 10,000 times faster than other processors. For those not familiar with motion planning in robotics, why is that type of speed so significant?
GK: We’ve benchmarked against other software and GPU solutions. Prior to our work, GPU solutions were the best in class. The difference is significant because motion planning will have to happen several times for any single task the robot might have to do; waiting seconds for each component movement is just never going to be feasible. I’ve spent many days in the lab watching my robots take seconds between very simple movements, and I can tell you that’s very frustrating! We’ve essentially zeroed out that time.
It’s also worth emphasizing that a really robust and intelligent robot would not just make a single plan per movement. It might want to create ten, or a hundred, plans and evaluate the alternatives before actually committing to a move. That was inconceivable before — you used to have to architect the whole robot software around following a motion plan that took five or ten seconds to create per movement. Now, the robot can make a thousand plans and pick the best one. It completely changes the way we think about generating behavior.
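The shift George describes — from following one slow plan to generating many and choosing among them — can be sketched in a few lines. This is a toy illustration under assumed names: `path_length`, `best_plan`, and the length-based cost are hypothetical stand-ins, not Realtime’s actual evaluation criteria (a real system might score clearance, smoothness, or energy as well).

```python
import math

def path_length(path):
    """Total Euclidean length of a piecewise-linear trajectory (a list of 2D points)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def best_plan(candidates, cost=path_length):
    """Evaluate every candidate trajectory and keep the cheapest one.

    This pattern is only practical when planning itself is near-free:
    with a planner that takes seconds per query, scoring dozens of
    alternatives per movement was never an option.
    """
    return min(candidates, key=cost)
```

For instance, given two routes around an obstacle — `[(0, 0), (0, 2), (3, 2), (3, 0)]` (length 7.0) and `[(0, 0), (3, 3), (3, 0)]` (length ≈ 7.24) — `best_plan` returns the first. The point of a fast planner is that the candidate list can hold hundreds of entries without changing the robot’s cycle time.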
Peter, you’ve been part of several startups, including two that have gone public. What are some of the lessons you’ve learned that you’re applying at Realtime Robotics?
PH: I think the strongest lesson that prior startup experience has taught me, both my own and the many that I’ve supported as an outsourced partner, is to focus all resources on resolving the technical “unknowns” first. Then, engage engineering resources on completing a narrow initial product that fills a known need, even if it’s modest. Once this credibility beachhead is established, it is much easier to get investor support, hire top talent to expand sales reach, and fill out product offerings that disrupt existing markets and rapidly expand them.
What other area of robotics are you most excited about?
GK: There’s so much to be excited about in robotics and AI at the moment. In the near term, I think we’re close to general-purpose grasping solutions, which is something the industry sorely lacks. In the longer term, much of my own research is focused on understanding how robots can go from having to think and plan in terms of the high-dimensional input and output that is their innate sensorimotor space towards much more abstract reasoning and planning processes. We know we have to get there because most daily tasks are just wildly computationally infeasible by thinking at the level of pixels and motors. But there’s a huge open question about how to do it.
Any closing thoughts about what’s next for the company?
GK: I am confident that robotics — that is, the actual behavior of deployed robots out there in the world — will look very different when we’re done. It’s going to be a whole new ballgame.
PH: Fasten your seat belts! Over the next 24 months, we will deliver a series of releases of successively more powerful, more encompassing, and more accessible intelligent robotic solutions.