The past decades have seen tremendous advances in robotics, with many tech pundits predicting that a revolution in robotics, rivaling the PC revolution in scale, is just around the corner. Robots are starting to find economic success outside of their traditional place in factory automation, in areas such as homes, medicine, space, environmental monitoring, and military applications. Large companies, such as Google, Amazon, Toyota, and Softbank, and government funding agencies, such as NSF, DARPA, and the European Commission, are banking on future growth, with billions of dollars invested in robotics over the last few years. Technology gurus are conjuring up tantalizing visions of a more convenient, safer, and healthier robot-enhanced future, packed with self-driving cars, delivery drones, household helpers, medical assistants, environmental cleanup crews, and emergency first responders. With so much hype, a layperson has every right to ask, “What’s taking them so long?” (Or, to put it more bluntly, “Where’s my damn robot servant already?”)
This question is perfectly justified and important — not only to laypersons, but to funding agencies, investors, educators, and policymakers who need to make serious economic decisions. Is robotics the real deal, or just hype? When, where, and how large will the economic impacts be?
Below I will cover a few of the major technical hurdles to progress in robotics. I won’t discuss the economic future of the field and its impact on society… let’s save that for another time. It’s hard to predict how the current robotics boom will play out, as it will involve a complex weave of technological progress, scientific research, business decisions in industry, behavioral effects in technology adoption or technophobia, impacts on labor practices, and ventures into uncharted regulatory and legal territory. I would be skeptical of anyone who claims to divine such a future.
The Top 10 Hurdles
- Long battery life. Mobile manipulators still cannot work for more than a few hours without recharging, and drones last only a few tens of minutes.
- Strong, safe, precise, and light actuators. Actuators have nowhere near the same power density as biological muscle.
- Human-like sense of touch. Tactile sensing is finicky and low-resolution, and even with good sensors, we do not yet know how to build the reflexes to use them properly.
- Manipulating stuff: not just grasping. Robots should handle clothes, bags, and piles of objects, and scrub things. (No more mugs on tables, please.)
- Real-time optimal control / motion planning. Fast optimization with nonlinear dynamics and complex obstacles could be run in tight control loops, or used to make task-and-motion planning practical.
- Tractable probabilistic planning. Coping with uncertainty in motion and observation is still intractable in continuous spaces.
- Accurate understanding of the world. Robots need 3D maps that include physical (object shapes), semantic (identities), and dynamic properties (movement predictions) of objects.
- Predicting human responses to robot behavior. Algorithms for human-robot interaction (HRI) need to understand proximity, timing, and inter-subject variability.
- Making reinforcement learning precise and reliable. Despite recent advances, RL leads to much “sloppier” controllers than traditional techniques.
- “Small data” learning. Data is expensive in robotics, and we would like to learn from few examples, like humans.
- (+1 bonus hurdle) Integration of reliable systems. With so many complex software and hardware components, how can we engineer and certify a system to hit 99.99…% reliability?
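To make the uncertainty hurdle above concrete, here is a toy sketch of the basic machinery involved: a discrete Bayes filter tracking a robot’s position in a hypothetical five-cell corridor. The map, sensor model, and all probabilities below are made-up illustration values, not any real system. This works only because the state space is tiny; the same bookkeeping blows up in the continuous, high-dimensional spaces real robots inhabit, which is exactly why tractable probabilistic planning remains a hurdle.

```python
# Toy discrete Bayes filter for 1D robot localization.
# All parameters (p_move, p_hit, the corridor map) are illustrative.

def predict(belief, p_move=0.8):
    """Motion update: the robot tries to move one cell right,
    succeeding with probability p_move (otherwise it stays put)."""
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[i] += b * (1 - p_move)            # failed to move
        new[min(i + 1, n - 1)] += b * p_move  # moved right
    return new

def update(belief, world, measurement, p_hit=0.9):
    """Sensor update: weight each cell by the measurement
    likelihood, then renormalize (Bayes' rule)."""
    weighted = [b * (p_hit if world[i] == measurement else 1 - p_hit)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

world = ['door', 'wall', 'door', 'wall', 'wall']  # hypothetical corridor
belief = [1 / len(world)] * len(world)            # uniform prior

belief = update(belief, world, 'door')  # robot senses a door
belief = predict(belief)                # robot moves right
belief = update(belief, world, 'wall')  # robot senses a wall
```

Even here, the belief never collapses to a single cell; the robot must act sensibly under a whole distribution of hypotheses, and doing that optimally is the intractable part.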
With properly focused effort, I would expect that each item could be addressed and brought to a viable product within a decade or so of R&D.
What is no longer a technical hurdle?
Robotics has made enormous leaps over the last decades. A decade ago, when I was in graduate school, you could get a PhD (and even a coveted faculty position) in robotics without even touching a robot. Any serious system implementation would take years of team effort, and systems were so brittle that you prayed the camera was rolling the one time they did work. Today, system integration has gotten much easier, and it is now very hard to do high-impact research without demonstrating your ideas on a real system. Robots are cheaper, more reliable, and easier to use (for a researcher) than ever. However, as I mentioned above, the path from a lab demo to near-100% reliability is long and fraught with peril.
Robotics continues to borrow many advances from computing, the Internet, and mobile devices. Computers are cheap, small, and powerful, reducing barriers to entry and allowing more powerful algorithms to be run. High-bandwidth networks and specialized chips help us deal with copious amounts of data, e.g., high-resolution video. Storage is dirt-cheap, letting us store huge maps and experience databases. Infrastructure for offloading computation onto offboard resources, e.g., wireless networking and cloud computing, is beginning to be exploited when onboard CPUs and GPUs are not powerful enough.
Sensors are getting smaller, more reliable, and far cheaper. A major shift in robotics came from the development of reliable inexpensive depth sensors like the Kinect, as well as cell phone cameras and MEMS sensors like accelerometers.
The last two decades or so have also seen breakthroughs in algorithms, such as simultaneous localization and mapping (SLAM), 3D mapping, and motion planning. However, more complex forms of these problems remain unsolved.
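For a flavor of what the simplest form of “solved” motion planning looks like, here is a toy breadth-first-search planner on a hand-made 2D occupancy grid (the grid and names below are purely illustrative). Real robots plan in continuous, high-dimensional configuration spaces, which is where sampling-based planners such as RRT and PRM take over, and where the harder, still-open variants of the problem begin.

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid: a toy stand-in
    for the planners used on real robots. grid[r][c] == 1 marks an
    obstacle; returns a shortest list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []              # walk parent links back to start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))  # must detour around the wall
```

A grid discretization like this scales exponentially with dimension, which is why it cannot simply be applied to a 7-joint arm, let alone to planning under dynamics and uncertainty.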
Computer vision has seen remarkable improvement with the use of deep learning techniques. It is now surprisingly easy to get a vision system working “pretty well” — good enough for demos. Getting to 100% reliability is still a challenge, and vision remains somewhat unreliable outside of controlled conditions.
It is also much easier to prototype new robot designs with the advent of 3D printing and easier-to-use CAD software.