COGNITIVE ROBOTICS: LEARNING ENVIRONMENT PERCEPTION
For robots to act successfully in a goal-directed way, they must be able to perceive and understand their environment. While mapping an environment's geometry is a necessary prerequisite for many mobile robot applications, understanding the semantics of the environment will enable novel applications that require more advanced cognitive abilities.
Sven Behnke, Head of the Autonomous Intelligent Systems Group at the University of Bonn, is tackling this area of robotics by combining dense geometric modelling with semantic categorization to build 3D semantic maps of the environment. Sven's team has demonstrated the utility of semantic environment perception with cognitive robots in multiple challenging application domains, including domestic service, space exploration, search and rescue, and bin picking.
At the Machine Intelligence Summit in Amsterdam on 28–29 June, Sven will share expertise on methods that he and his team have developed for learning tasks such as surface categorization; object detection, recognition, and pose estimation; and the transfer of manipulation skills to novel objects. I asked him a few questions ahead of the summit to learn more.
Can you tell us a bit more about your work?
The Autonomous Intelligent Systems group at the University of Bonn investigates cognitive robotics and deep learning. We have developed multiple robotic systems for semi-structured domains, such as domestic service, search and rescue, aerial inspection, space exploration, bin picking, and playing soccer. The main focus of our work is perception, i.e., interpreting the measurements of robot sensors like 3D laser scanners and RGB-D cameras to obtain an environment model that is suitable for planning robot actions.
We also investigate efficient planning of locomotion and manipulation as well as learning in robotic systems. We integrate our developments in systems that perform complex tasks autonomously. The robots of our team NimbRo excelled at multiple robotics competitions and challenges, including the DARPA Robotics Challenge, RoboCup Soccer and @Home, the European Robotics Challenges, the Amazon Picking Challenge, the DLR SpaceBot Cup, and MBZIRC.
What do you feel are the leading factors enabling recent advancements and uptake of cognitive robotics?
Advances in sensing, like affordable RGB-D cameras and small 3D laser scanners, and in computing, like tiny PCs and programmable GPUs, form the basis for semantic perception of the environment. Semantic perception is enabled by the collection of large annotated data sets and by deep learning methods that perform tasks such as image categorization, object detection, pose estimation, and semantic segmentation. These semantic percepts are combined with efficient methods for simultaneous localization and mapping (SLAM) to obtain 3D semantic maps.
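To make the idea of fusing semantic percepts into a 3D map concrete, here is a minimal, hypothetical sketch (not the group's actual pipeline): depth pixels are back-projected into 3D, transformed into the world frame using a pose that a SLAM system would provide, and per-pixel semantic labels are accumulated as votes in a voxel grid. All names and parameters are illustrative assumptions.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

class SemanticVoxelMap:
    """Toy 3D semantic map: each voxel keeps per-class label vote counts."""

    def __init__(self, voxel_size=0.05, num_classes=10):
        self.voxel_size = voxel_size
        self.num_classes = num_classes
        self.votes = {}  # voxel index tuple -> vote count array

    def integrate(self, points_world, labels):
        """Accumulate one label vote per 3D point into its voxel."""
        idx = np.floor(points_world / self.voxel_size).astype(int)
        for key, lab in zip(map(tuple, idx), labels):
            v = self.votes.setdefault(key, np.zeros(self.num_classes, int))
            v[lab] += 1

    def label_of(self, point):
        """Return the majority-vote class of the voxel containing `point`."""
        key = tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))
        v = self.votes.get(key)
        return None if v is None else int(np.argmax(v))

# Usage: integrate two labelled points (identity camera pose assumed,
# i.e. camera frame == world frame) and query the resulting map.
m = SemanticVoxelMap(voxel_size=0.1, num_classes=3)
pts = np.array([[0.01, 0.01, 0.01], [0.02, 0.02, 0.02]])
m.integrate(pts, np.array([2, 2]))
```

In a real system the pose would come from SLAM, the labels from a semantic segmentation network, and the map would use a more scalable structure such as an octree; the voting scheme above only illustrates how geometric and semantic information can be fused per voxel.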
Another enabling factor is lighter, more compliant actuation, which allows for human-robot collaboration and physical contact. Advances in planning, e.g. through hierarchical, compositional methods, make the robust generation of adaptive robot actions in real time possible. Finally, better connectivity and cloud services embed cognitive robots into larger infrastructures.
What present or potential future applications of cognitive robots excite you most?
Currently, advanced driver assistance systems and self-driving cars are certainly the applications of cognitive robotics with the highest impact. Flexible industrial automation using collaborative robots is gaining momentum. I am most excited about cognitive service robots that combine robust mobility in semi-structured environments, human-like manipulation skills, and intuitive multimodal human-robot interaction. Such robots could revolutionize professional service industries like restaurants and health care, but also provide assistance and perform household chores in everyday environments.
Which industries do you feel will be most disrupted by cognitive robotics in the future?
All industries with repetitive human labor will be affected. Automation will increase substantially in industrial production, agriculture, transportation, and logistics. Professional services such as cleaning, retail, restaurants, and care facilities will also rely more and more on cognitive robotic assistants. Once cognitive robots become affordable, they will also provide assistance in our homes.
What developments can we expect to see in cognitive robotics in the next 5 years?
I expect an increase in capabilities and a decrease in costs, which will enable more and more application domains and create a cognitive robotics industry.
Another exciting development could be a tighter symbiosis between humans and cognitive robotic systems, not only to compensate for physical or cognitive deficits, but also to improve the quality of life and to augment human capabilities.
Sven Behnke will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28–29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more. Tickets are now limited, register to attend here.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Neal Lathia, Senior Data Scientist, Skyscanner; Alexandros Karatzoglou, Scientific Director, Télefonica; Ingo Waldmann, Senior Research Scientist, UCL; and Damian Borth, Director of the Deep Learning Competence Center, DFKI. View more speakers and topics here.
The Machine Intelligence Summit and Machine Intelligence in Autonomous Vehicles Summit will also take place together in Hong Kong on 9–10 November.
Opinions expressed in this interview may not represent the views of RE•WORK. As a result, some opinions may even go against the views of RE•WORK, but they are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.