RT/ Animal brain-inspired AI is a game changer for autonomous robots

Published in Paradigm · 29 min read · May 28, 2024

Robotics & AI biweekly vol.95, 14th May — 28th May

TL;DR

  • Researchers developed a drone utilizing neuromorphic image processing, mimicking animal brains’ efficiency in data usage and energy consumption, making it ideal for small drones due to its lightweight design and energy efficiency. The drone’s deep neural network operates up to 64 times faster and consumes three times less energy compared to traditional GPU-based systems, paving the way for drones to match the agility and intelligence of flying insects or birds.
  • Research could pave the way for a prosthetic hand and robot to be able to feel touch like a human hand. The technology could also be used to help restore lost functionality to patients after a stroke.
  • A new article highlights how artificial intelligence stands on the threshold of making monumental contributions to the field of sleep medicine. Through a strategic analysis, researchers examined advancements in AI within sleep medicine and spotlighted its potential in revolutionizing care in three critical areas: clinical applications, lifestyle management, and population health. The committee also reviewed barriers and challenges associated with using AI-enabled technologies.
  • A new machine-learning technique can train and control a reconfigurable soft robot that can dynamically change its shape to complete a task. The researchers also built a simulator that can evaluate control algorithms for shape-shifting soft robots.
  • Researchers have leveraged deep learning techniques to enhance the image quality of a metalens camera. The new approach uses artificial intelligence to turn low-quality images into high-quality ones, which could make these cameras viable for a multitude of imaging tasks including intricate microscopy applications and mobile devices.
  • A group of researchers has created an innovative method that combines central pattern generators — neural circuits located in the spinal cord that generate rhythmic patterns of muscle activity — with deep reinforcement learning. The method not only imitates walking and running motions but also generates movements for frequencies where motion data is absent, enables smooth transitions from walking to running, and allows for adapting to environments with unstable surfaces.
  • An ongoing research project aims to create adaptable safety systems for highly automated off-road mobile machinery to meet industry needs. The research has revealed critical gaps in compliance with public-safety legislation when mobile working machines are controlled by artificial intelligence.
  • Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.
  • The study highlights the rapid progress and transformative potential of AI in weather prediction.
  • Researchers are targeting the next generation of soft actuators and robots with an elastomer-based ink for 3D printing objects with locally changing mechanical properties, eliminating the need for cumbersome mechanical joints.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista
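
As a quick sanity check on that projection, here is a minimal compound-growth sketch. The 2018 baseline of roughly 41 billion USD is an assumption made for illustration; it is not quoted in the text above.

```python
# Minimal sketch: compound annual growth, assuming an illustrative 2018
# baseline of ~41 billion USD (the baseline is not stated in the article).
def project(value, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1 + cagr) ** years

baseline_2018 = 41.0   # billion USD, assumed for illustration
cagr = 0.26            # ~26 percent per year
print(f"Projected 2025 market: {project(baseline_2018, cagr, 7):.0f} billion USD")
# -> about 207, i.e. just under 210 billion USD, matching the figure cited above
```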

Latest News & Research

Fully neuromorphic vision and control for autonomous drone flight

by F. Paredes-Vallés, J. J. Hagenaars, J. Dupeyroux, S. Stroobants, Y. Xu, G. C. H. E. de Croon in Science Robotics

A team of researchers at Delft University of Technology has developed a drone that flies autonomously using neuromorphic image processing and control based on the workings of animal brains. Animal brains use less data and energy than current deep neural networks running on GPUs (graphics chips). Neuromorphic processors are therefore very suitable for small drones because they don’t need heavy, large hardware and batteries. The results are extraordinary: during flight, the drone’s deep neural network processes data up to 64 times faster and consumes three times less energy than when running on a GPU. Further development of this technology may enable drones to become as small, agile, and smart as flying insects or birds.

Artificial intelligence holds great potential to provide autonomous robots with the intelligence needed for real-world applications. However, current AI relies on deep neural networks that require substantial computing power. The processors made for running deep neural networks (graphics processing units, GPUs) consume a substantial amount of energy. This is a particular problem for small robots like flying drones, since they can carry only very limited sensing and computing resources.

Animal brains process information in a way that is very different from the neural networks running on GPUs. Biological neurons process information asynchronously, and mostly communicate via electrical pulses called spikes. Since sending such spikes costs energy, the brain minimizes spiking, leading to sparse processing.

Photo of the “neuromorphic drone” flying over a flower pattern. It illustrates the visual inputs the drone receives from the neuromorphic camera in the corners. Red indicates pixels getting darker, green indicates pixels getting brighter.

Inspired by these properties of animal brains, scientists and tech companies are developing new, neuromorphic processors. These new processors can run spiking neural networks and promise to be much faster and more energy efficient.

“The calculations performed by spiking neural networks are much simpler than those in standard deep neural networks,” says Jesse Hagenaars, PhD candidate and one of the authors of the article. “Whereas digital spiking neurons only need to add integers, standard neurons have to multiply and add floating point numbers. This makes spiking neural networks quicker and more energy efficient. To understand why, think of how humans also find it much easier to calculate 5 + 8 than to calculate 6.25 x 3.45 + 4.05 x 3.45.”
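
To make the contrast concrete, here is a small conceptual sketch of the two kinds of neuron update described in the quote. It is illustrative only and not the arithmetic actually implemented on Loihi or any specific chip.

```python
# Conceptual sketch (not the Loihi implementation): a digital spiking neuron
# only accumulates weights for synapses that actually received a spike,
# whereas a standard artificial neuron multiplies and adds floating-point
# activations for every input.

def spiking_neuron_update(weights, spikes, membrane):
    """Integer-style accumulation: add the weight only where a spike arrived."""
    for w, s in zip(weights, spikes):
        if s:                      # spike is a 0/1 event
            membrane += w          # addition only, no multiplication
    return membrane

def standard_neuron_update(weights, activations):
    """Dense multiply-accumulate over floating-point activations."""
    return sum(w * a for w, a in zip(weights, activations))

weights = [2, -1, 3, 1]
spikes = [1, 0, 1, 0]              # sparse binary events
activations = [0.61, 0.12, 0.87, 0.35]

print(spiking_neuron_update(weights, spikes, membrane=0))   # 5
print(standard_neuron_update(weights, activations))         # 4.06
```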

This energy efficiency is further boosted if neuromorphic processors are used in combination with neuromorphic sensors, like neuromorphic cameras. Such cameras do not make images at a fixed time interval. Instead, each pixel only sends a signal when it becomes brighter or darker. The advantages of such cameras are that they can perceive motion much more quickly, are more energy efficient, and function well both in dark and bright environments. Moreover, the signals from neuromorphic cameras can feed directly into spiking neural networks running on neuromorphic processors. Together, they can form a huge enabler for autonomous robots, especially small, agile robots like flying drones.
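
The per-pixel behaviour described above can be captured in a few lines. The sketch below is an idealized event-camera pixel model (log-brightness change against a threshold), not the firmware of any particular sensor, and the threshold value is an arbitrary assumption.

```python
# Minimal sketch of how an event (neuromorphic) camera pixel behaves:
# instead of reporting absolute brightness at a fixed frame rate, each pixel
# emits a signed event whenever its log-brightness changes by more than a
# threshold. Idealized model for illustration only.
import math

def pixel_events(brightness_samples, threshold=0.2):
    """Yield (+1 brighter / -1 darker) events from a brightness time series."""
    events = []
    ref = math.log(brightness_samples[0])
    for t, b in enumerate(brightness_samples[1:], start=1):
        delta = math.log(b) - ref
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            ref = math.log(b)       # reset the reference after each event
    return events

# A pixel that brightens, stays roughly constant, then darkens:
print(pixel_events([1.0, 1.3, 1.31, 1.32, 0.9, 0.6]))
# -> [(1, 1), (4, -1), (5, -1)] — sparse output, silent when nothing changes
```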

In the article, researchers from Delft University of Technology in the Netherlands demonstrate for the first time a drone that uses neuromorphic vision and control for autonomous flight. Specifically, they developed a spiking neural network that processes the signals from a neuromorphic camera and outputs control commands that determine the drone’s pose and thrust. They deployed this network on a neuromorphic processor, Intel’s Loihi research chip, on board a drone. Thanks to the network, the drone can perceive and control its own motion in all directions.

“We faced many challenges,” says Federico Paredes-Vallés, one of the researchers who worked on the study, “but the hardest one was to imagine how we could train a spiking neural network so that training would be sufficiently fast and the trained network would function well on the real robot. In the end, we designed a network consisting of two modules. The first module learns to visually perceive motion from the signals of a moving neuromorphic camera. It does so completely by itself, in a self-supervised way, based only on the data from the camera. This is similar to how animals learn to perceive the world by themselves. The second module learns to map the estimated motion to control commands, in a simulator. This learning relied on artificial evolution in simulation, in which networks that were better at controlling the drone had a higher chance of producing offspring. Over the generations of the artificial evolution, the spiking neural networks got increasingly good at control and were finally able to fly in any direction at different speeds. We trained both modules and developed a way to merge them together. We were happy to see that the merged network immediately worked well on the real robot.”
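
To illustrate the kind of artificial evolution mentioned in the quote, here is a hedged toy sketch: a population of controller gains is selected and mutated so that a simple 1-D "drone" velocity tracks a commanded value. This is not the authors’ actual training pipeline (which evolved spiking networks in a full simulator); every number here is an illustrative assumption.

```python
# Toy evolutionary loop: fitter controllers have a higher chance of
# producing (mutated) offspring, as described in the quote above.
import random

def fitness(gain, target=1.0, steps=50):
    """Reward controllers whose velocity tracks the target quickly."""
    v, error_sum = 0.0, 0.0
    for _ in range(steps):
        v += gain * (target - v) * 0.1     # simple proportional update
        error_sum += abs(target - v)
    return -error_sum                      # higher (less negative) is better

population = [random.uniform(0.0, 5.0) for _ in range(20)]
for generation in range(30):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                   # fitter controllers reproduce
    population = [p + random.gauss(0, 0.2) for p in parents for _ in range(4)]

best = max(population, key=fitness)
print(f"best evolved gain ≈ {best:.2f}")
```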

With its neuromorphic vision and control, the drone is able to fly at different speeds under varying light conditions, from dark to bright. It can even fly with flickering lights, which make the pixels in the neuromorphic camera send great numbers of signals to the network that are unrelated to motion.

“Importantly, our measurements confirm the potential of neuromorphic AI. The network runs on average between 274 and 1600 times per second. If we run the same network on a small, embedded GPU, it runs on average only 25 times per second, a difference of a factor ~10–64! Moreover, when running the network, Intel’s Loihi neuromorphic research chip consumes 1.007 watts, of which 1 watt is idle power that the processor consumes simply by being switched on. Running the network itself costs only 7 milliwatts. In comparison, when running the same network, the embedded GPU consumes 3 watts, of which 1 watt is idle power and 2 watts are spent on running the network. The neuromorphic approach results in AI that runs faster and more efficiently, allowing deployment on much smaller autonomous robots,” says Stein Stroobants, PhD candidate in the field of neuromorphic drones.
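
The figures in the quote can be checked with a few lines of arithmetic; the values below are taken directly from the quote, and the comparison is illustrative rather than a new measurement.

```python
# Quick arithmetic check of the quoted Loihi vs. embedded-GPU figures.
loihi_total_w, loihi_idle_w = 1.007, 1.0
gpu_total_w, gpu_idle_w = 3.0, 1.0

loihi_network_w = loihi_total_w - loihi_idle_w    # 0.007 W = 7 mW
gpu_network_w = gpu_total_w - gpu_idle_w          # 2.0 W

print(f"network-only power ratio: {gpu_network_w / loihi_network_w:.0f}x")   # ~286x
print(f"total power ratio:        {gpu_total_w / loihi_total_w:.1f}x")       # ~3.0x
print(f"speed-up range:           {274 / 25:.0f}x to {1600 / 25:.0f}x")      # ~11x to 64x
```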

“Neuromorphic AI will enable all autonomous robots to be more intelligent,” says Guido de Croon, Professor in bio-inspired drones, “but it is an absolute enabler for tiny autonomous robots. At Delft University of Technology’s Faculty of Aerospace Engineering, we work on tiny autonomous drones which can be used for applications ranging from monitoring crops in greenhouses to keeping track of stock in warehouses. The advantages of tiny drones are that they are very safe and can navigate in narrow environments, like in between rows of tomato plants. Moreover, they can be very cheap, so that they can be deployed in swarms. This is useful for covering an area more quickly, as we have shown in exploration and gas source localization settings.”

“The current work is a great step in this direction. However, the realization of these applications will depend on further scaling down the neuromorphic hardware and expanding the capabilities towards more complex tasks such as navigation.”

Spike timing–based coding in neuromimetic tactile system enables dynamic object classification

by Libo Chen, Sanja Karilanova, Soumi Chaki, Chenyu Wen, Lisha Wang, Bengt Winblad, Shi-Li Zhang, Ayça Özçelikkale, Zhi-Bin Zhang in Science

Research at Uppsala University and Karolinska Institutet could pave the way for a prosthetic hand and robot to be able to feel touch like a human hand. The technology could also be used to help restore lost functionality to patients after a stroke.

“Our system can determine what type of object it encounters as fast as a blindfolded person, just by feeling it and deciding whether it is a tennis ball or an apple, for example,” says Zhibin Zhang, docent at the Department of Electrical Engineering at Uppsala University.

He and his colleague Libo Chen performed the study in close cooperation with researchers from the Signals and Systems Division at Uppsala University, who provided data processing and machine learning expertise, and a group of researchers from the Department of Neurobiology, Care Sciences and Society, Division of Neurogeriatrics at Karolinska Institutet.

Drawing inspiration from neuroscience, they have developed an artificial tactile system that imitates the way the human nervous system reacts to touch. The system uses electrical pulses that process dynamic tactile information in the same way as the human nervous system. “With this technology, a prosthetic hand would feel like part of the wearer’s body,” Zhang explains.

The artificial system has three main components: an electronic skin (e-skin) with sensors that can detect pressure by touch; a set of artificial neurons that convert analogue touch signals into electrical pulses; and a processor that processes the signals and identifies the object. In principle, it can learn to identify an unlimited number of objects, but in their tests the researchers have used 22 different objects for grasping and 16 different surfaces for touching.
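Wait, before moving on, the middle stage of that pipeline (analogue signals becoming pulses) can be illustrated with a generic leaky integrate-and-fire encoder. The sketch below is an assumption-laden stand-in, not the circuit used in the paper; the threshold, leak, and pressure traces are made up for illustration.

```python
# Minimal sketch: converting an analogue pressure signal into a spike train
# with a leaky integrate-and-fire style encoder (generic illustration only).

def encode_pressure_to_spikes(pressure_trace, threshold=0.5, leak=0.9):
    """Accumulate pressure over time; emit a spike and reset when the
    accumulated value crosses the threshold."""
    membrane, spikes = 0.0, []
    for t, p in enumerate(pressure_trace):
        membrane = membrane * leak + p
        if membrane >= threshold:
            spikes.append(t)
            membrane = 0.0
    return spikes

soft_touch = [0.1, 0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.1]
firm_grasp = [0.5, 0.6, 0.7, 0.6, 0.5, 0.6, 0.7, 0.6]

print("soft touch spikes:", encode_pressure_to_spikes(soft_touch))   # sparse
print("firm grasp spikes:", encode_pressure_to_spikes(firm_grasp))   # dense
# A firmer grasp produces a denser spike train — timing patterns like these
# are what a downstream classifier uses to identify objects and surfaces.
```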

“We’re also looking into developing the system so it can feel pain and heat as well. It should also be able to feel what material the hand is touching, for example, whether it is wood or metal,” says Assistant Professor Libo Chen, who led the study.

According to the researchers, interactions between humans and robots or prosthetic hands can be made safer and more natural thanks to tactile feedback. The prostheses can also be given the ability to handle objects with the same dexterity as a human hand.

“The skin contains millions of receptors. Current e-skin technology cannot deliver enough receptors, but this technology makes it possible, so we would like to produce artificial skin for a whole robot,” says Chen.

The technology could also be used medically, for example, to monitor movement dysfunctions caused by Parkinson’s disease and Alzheimer’s disease, or to help patients recover lost functionality after a stroke.

“The technology can be further developed to tell if a patient is about to fall. This information can then be used either to stimulate a muscle externally to prevent the fall or to prompt an assistive device to take over and prevent it,” says Zhang.

Strengths, weaknesses, opportunities and threats of using AI-enabled technology in sleep medicine: a commentary

by Anuja Bandyopadhyay, Margarita Oks, Haoqi Sun, Bharati Prasad, et al in Journal of Clinical Sleep Medicine

In a new research commentary, the Artificial Intelligence in Sleep Medicine Committee of the American Academy of Sleep Medicine highlights how artificial intelligence stands on the threshold of making monumental contributions to the field of sleep medicine. Through a strategic analysis, the committee examined advancements in AI within sleep medicine and spotlighted its potential in revolutionizing care in three critical areas: clinical applications, lifestyle management, and population health. The committee also reviewed barriers and challenges associated with using AI-enabled technologies.

“AI is disrupting all areas of medicine, and the future of sleep medicine is poised at a transformational crossroad,” said lead author Dr. Anuja Bandyopadhyay, chair of the Artificial Intelligence in Sleep Medicine Committee. “This commentary outlines the powerful potential and challenges for sleep medicine physicians to be aware of as they begin leveraging AI to deliver precise, personalized patient care and enhance preventive health strategies on a larger scale while ensuring its ethical deployment.”

According to the authors, AI has potential uses in the sleep field in three key areas:

  • Clinical Applications: In the clinical realm, AI-driven technologies offer comprehensive data analysis, nuanced pattern recognition and automation in diagnosis, all while addressing chronic problems like sleep-related breathing disorders. Despite understated beginnings, the utilization of AI can offer improvements in efficiency and patient access, which can contribute to a reduction in burnout among health care professionals.
  • Lifestyle Management: Incorporating AI also offers clear benefits for lifestyle management through the use of consumer sleep technology. These devices come in various forms like fitness wristbands, smartphone apps, and smart rings, and they contribute to better sleep health through tracking, assessment and enhancement. Wearable sleep technology and data-driven lifestyle recommendations can empower patients to take an active role in managing their health, as shown in a recent AASM survey, which reported that 68% of adults who have used a sleep tracker said they have changed their behavior based on what they have learned. But, as these AI-driven applications grow ever more intuitive, the importance of ongoing dialogue between patients and clinicians about the potential and limitations of these innovations remains vital.
  • Population Health: Beyond individual care, AI technology reveals a new approach to public health regarding sleep. “AI has the exciting potential to synthesize environmental, behavioral and physiological data, contributing to informed population-level interventions and bridging existing health care gaps,” noted Bandyopadhyay.

The paper also offers warnings about the integration of AI into sleep medicine. Issues of data privacy, security, accuracy, and the potential for reinforcing existing biases present new challenges for health care professionals. Additionally, reliance on AI without sufficient clinical judgment could lead to complexities in patient treatment.

“While AI can significantly strengthen the evaluation and management of sleep disorders, it is intended to complement, not replace, the expertise of a sleep medicine professional,” Bandyopadhyay stated.

Navigating this emerging landscape requires comprehensive validation and standardization protocols to responsibly and ethically implement AI technologies in health care. It’s critical that AI tools are validated against varied datasets to ensure their reliability and accuracy in all patient populations.

“Our commentary provides not just a vision, but a roadmap for leveraging the technology to promote better sleep health outcomes,” Bandyopadhyay said. “It lays the foundation for future discussions on the ethical deployment of AI, the importance of clinician education, and the harmonization of this new technology with existing practices to optimize patient care.”

DittoGym: Learning to Control Soft Shape-Shifting Robots

by Suning Huang, Boyuan Chen, Huazhe Xu, Vincent Sitzmann in Submitted to arXiv

Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces, which could be deployed inside the human body to remove an unwanted item.

While such a robot does not yet exist outside a laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems. But how can one control a squishy robot that doesn’t have joints, limbs, or fingers that can be manipulated, and instead can drastically alter its entire shape at will? MIT researchers are working to answer that question.

They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task, even when that task requires the robot to change its morphology multiple times. The team also built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks. Their method completed each of the eight tasks they evaluated while outperforming other algorithms. The technique worked especially well on multifaceted tasks. For instance, in one test, the robot had to reduce its height while growing two tiny legs to squeeze through a narrow pipe, and then un-grow those legs and extend its torso to open the pipe’s lid. While reconfigurable soft robots are still in their infancy, such a technique could someday enable general-purpose robots that can adapt their shapes to accomplish diverse tasks.

A new machine-learning technique can train and control a reconfigurable soft robot that can dynamically change its shape to complete a task. The researchers, from MIT and elsewhere, also built a simulator that can evaluate control algorithms for shape-shifting soft robots. Credits: Image: Courtesy of the researchers; MIT News

“When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new,” says Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of a paper on this approach.

Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The research will be presented at the International Conference on Learning Representations.

Scientists often teach robots to complete tasks using a machine-learning approach known as reinforcement learning, which is a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal. This can be effective when the robot’s moving parts are consistent and well-defined, like a gripper with three fingers. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it would move on to the next finger, and so on. But shape-shifting robots, which are controlled by magnetic fields, can dynamically squish, bend, or elongate their entire bodies.
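
The trial-and-error idea in that paragraph can be shown in miniature. The sketch below uses simple hill climbing on a one-finger "gripper" angle rather than a full reinforcement-learning algorithm, and every quantity in it is an illustrative assumption rather than anything from the paper.

```python
# Toy illustration of trial-and-error learning: nudge a joint angle and keep
# the change only when it increases a reward (hill climbing, for brevity).
import random

def reward(angle, target=0.7):
    """Higher reward the closer the finger is to the grasping angle."""
    return -abs(angle - target)

angle = 0.0
for step in range(200):
    nudge = random.uniform(-0.05, 0.05)        # try a small random action
    if reward(angle + nudge) > reward(angle):  # keep it only if it helps
        angle += nudge

print(f"learned angle ≈ {angle:.2f} (target 0.7)")
```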

“Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” says Chen.

To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together. Then, after the algorithm has explored the space of possible actions by focusing on groups of muscles, it drills down into finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.

“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” Sitzmann says.

To enable this, the researchers treat a robot’s action space, or how it can move in a certain area, like an image. Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material point method, where the action space is covered by points, like image pixels, and overlaid with a grid.

In the same way that nearby pixels in an image are related (like the pixels that form a tree in a photo), the researchers built their algorithm to understand that nearby action points have stronger correlations. Points around the robot’s “shoulder” will move similarly when it changes shape, while points on the robot’s “leg” will also move similarly, but in a different way than those on the “shoulder.” In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.
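
A hedged sketch of the coarse-to-fine idea on a 2D action grid follows; it is not the authors’ implementation, and the grid sizes, action values, and correction scale are arbitrary assumptions chosen for illustration.

```python
# Coarse-to-fine sketch: actions live on a 2-D grid over the robot's body,
# learned first at a coarse resolution so neighbouring "muscles" move
# together, then refined with small per-muscle corrections.
import numpy as np

def upsample_nearest(coarse, factor):
    """Broadcast each coarse action to the block of fine cells it covers."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

# Stage 1: one action per 2x2 block of muscles (coarse policy output).
coarse_actions = np.array([[0.5, -0.3],
                           [0.0,  0.8]])

# Stage 2: a finer policy adds small per-muscle corrections on top.
fine_corrections = 0.05 * np.random.randn(4, 4)
fine_actions = upsample_nearest(coarse_actions, factor=2) + fine_corrections

print(fine_actions.round(2))
# Nearby cells share the same coarse value, so early random exploration moves
# whole muscle groups; later refinement tunes individual actuation points.
```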

After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym. DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach a target point. In another, it must change its shape to mimic letters of the alphabet.

“Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent certain properties that we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and interact with external objects,” Huang says. “We believe they together can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”

Their algorithm outperformed baseline methods and was the only technique suitable for completing multistage tasks that required several shape changes.

“We have a stronger correlation between action points that are closer to each other, and I think that is key to making this work so well,” says Chen.

While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work inspires other scientists not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.

Deep-learning enhanced high-quality imaging in metalens-integrated camera

by Yanxiang Zhang, Yue Wu, Chunyu Huang, Zi-Wen Zhou, Muyang Li, Zaichen Zhang, Ji Chen in Optics Letters

Researchers have leveraged deep learning techniques to enhance the image quality of a metalens camera. The new approach uses artificial intelligence to turn low-quality images into high-quality ones, which could make these cameras viable for a multitude of imaging tasks including intricate microscopy applications and mobile devices.

Metalenses are ultrathin optical devices — often just a fraction of a millimeter thick — that use nanostructures to manipulate light. Although their small size could potentially enable extremely compact and lightweight cameras without traditional optical lenses, it has been difficult to achieve the necessary image quality with these optical components.

“Our technology allows our metalens-based devices to overcome the limitations of image quality,” said research team leader Ji Chen from Southeast University in China. “This advance will play an important role in the future development of highly portable consumer imaging electronics and can also be used in specialized imaging applications such as microscopy.”

Researchers used deep learning techniques to enhance the image quality of a camera with a metalens integrated directly onto a CMOS imaging chip (left). The metalens manipulates light using an array of 1000-nm tall cylindrical silicon nitride nano-posts (right).

The researchers describe how they used a type of machine learning known as a multi-scale convolutional neural network to improve resolution, contrast and distortion in images from a small camera — about 3 cm × 3 cm × 0.5 cm — they created by directly integrating a metalens onto a CMOS imaging chip.

“Metalens-integrated cameras can be directly incorporated into the imaging modules of smartphones, where they could replace the traditional refractive bulk lenses,” said Chen. “They could also be used in devices such as drones, where the small size and lightweight camera would ensure imaging quality without compromising the drone’s mobility.”

The camera used in the new work was previously developed by the researchers and uses a metalens with 1000-nm tall cylindrical silicon nitride nano-posts. The metalens focuses light directly onto a CMOS imaging sensor without requiring any other optical elements. Although this design created a very small camera, the compact architecture limited the image quality. Thus, the researchers decided to see if machine learning could be used to improve the images.

Deep learning is a type of machine learning that uses artificial neural networks with multiple layers to automatically learn features from data and make complex decisions or predictions. The researchers applied this approach by using a convolution imaging model to generate a large number of high- and low-quality image pairs. These image pairs were used to train a multi-scale convolutional neural network so that it could recognize the characteristics of each type of image and use that to turn low-quality images into high-quality images.
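
The data-generation step can be sketched as follows. Here a simple blur kernel stands in for the convolution imaging model of the metalens (an assumption made purely for illustration); the real pipeline would use the measured optical response and then train a multi-scale convolutional neural network on the resulting pairs.

```python
# Sketch: pair each "high-quality" image with a degraded copy produced by a
# convolution imaging model (a 3x3 blur stands in for the metalens response).
import numpy as np

def degrade(image, kernel):
    """Convolve an image with a blur kernel to simulate optical degradation."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = np.full((3, 3), 1 / 9)                 # stand-in point-spread function
training_pairs = []
for _ in range(100):
    high = rng.random((32, 32))                 # placeholder "sharp" image
    low = degrade(high, kernel)                 # simulated metalens output
    training_pairs.append((low, high))          # the network learns low -> high

print(len(training_pairs), "low/high training pairs generated")
```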

“A key part of this work was developing a way to generate the large amount of training data needed for the neural network learning process,” said Chen. “Once trained, a low-quality image can be sent from the device into the neural network for processing, and high-quality imaging results are obtained immediately.”

To validate the new deep learning technique, the researchers used it on 100 test images. They analyzed two commonly used image processing metrics: the peak signal-to-noise ratio and the structural similarity index. They found that the images processed by the neural network exhibited a significant improvement in both metrics. They also showed that the approach could rapidly generate high-quality imaging data that closely resembled what was captured directly through experimentation.
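
For reference, one of the two metrics mentioned above can be computed in a few lines. The sketch below implements peak signal-to-noise ratio for images scaled to [0, 1] on synthetic data; the structural similarity index is more involved and is omitted here, and the noise levels are arbitrary assumptions.

```python
# Minimal PSNR sketch: higher values mean the test image is closer to the
# reference, which is why enhanced images score better on this metric.
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
reference = rng.random((64, 64))
noisy = np.clip(reference + 0.05 * rng.standard_normal((64, 64)), 0, 1)
denoised = np.clip(reference + 0.01 * rng.standard_normal((64, 64)), 0, 1)

print(f"PSNR before enhancement: {psnr(reference, noisy):.1f} dB")
print(f"PSNR after enhancement:  {psnr(reference, denoised):.1f} dB")  # higher is better
```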

The researchers are now designing metalenses with complex functionalities — such as color or wide-angle imaging — and developing neural network methods for enhancing the imaging quality of these advanced metalenses. To make this technology practical for commercial application would require new assembly techniques for integrating metalenses into smartphone imaging modules and image quality enhancement software designed specifically for mobile phones.

“Ultra-lightweight and ultra-thin metalenses represent a revolutionary technology for future imaging and detection,” said Chen. “Leveraging deep learning techniques to optimize metalens performance marks a pivotal developmental trajectory. We foresee machine learning as a vital trend in advancing photonics research.”

AI-CPG: Adaptive Imitated Central Pattern Generators for Bipedal Locomotion Learned Through Reinforced Reflex Neural Networks

by Guanda Li, Auke Ijspeert, Mitsuhiro Hayashibe in IEEE Robotics and Automation Letters

Walking and running are notoriously difficult to recreate in robots. Now, an international group of researchers has overcome some of these challenges with a new approach to imitating human motion that combines central pattern generators (CPGs) — neural circuits located in the spinal cord that generate rhythmic patterns of muscle activity — with deep reinforcement learning (DRL). The method not only imitates walking and running motions but also generates movements for frequencies where motion data is absent, enables smooth transitions from walking to running, and allows for adapting to environments with unstable surfaces.

We might not think about it much, but walking and running involve inherent biological redundancies that enable us to adjust to the environment or alter our walking/running speed. Given the intricacy and complexity of this, reproducing these human-like movements in robots is notoriously challenging.

Current models often struggle to accommodate unknown or challenging environments, which makes them less efficient and effective. This is because AI is suited for generating one or a small number of correct solutions. With living organisms and their motion, there isn’t just one correct pattern to follow. There’s a whole range of possible movements, and it is not always clear which one is the best or most efficient.

DRL is one way researchers have sought to overcome this. DRL extends traditional reinforcement learning by leveraging deep neural networks to handle more complex tasks and learn directly from raw sensory inputs, enabling more flexible and powerful learning capabilities. Its disadvantage is the huge computational cost of exploring vast input space, especially when the system has a high degree of freedom.

Another approach is imitation learning, in which a robot learns by imitating motion measurement data from a human performing the same motion task. Although imitation learning is good at learning in stable environments, it struggles when faced with new situations or environments it hasn’t encountered during training. Its ability to adapt and navigate effectively becomes constrained by the narrow scope of its learned behaviors.

“We overcame many of the limitations of these two approaches by combining them,” explains Mitsuhiro Hayashibe, a professor at Tohoku University’s Graduate School of Engineering. “Imitation learning was used to train a CPG-like controller, and, instead of applying deep learning to the CPGs themselves, we applied it to a form of reflex neural network that supports the CPGs.”

CPGs are neural circuits located in the spinal cord that, like a biological conductor, generate rhythmic patterns of muscle activity. In animals, a reflex circuit works in tandem with CPGs to provide adequate feedback that allows them to adjust their speed and walking/running movements to suit the terrain.
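
To give a feel for what a CPG produces, here is a hedged sketch of a generic two-oscillator model that settles into the alternating left/right rhythm described above. It is a textbook-style coupled-oscillator illustration, not the AI-CPG controller from the paper, and the frequency and coupling values are assumptions.

```python
# Generic CPG sketch: two coupled phase oscillators driving left/right "legs"
# half a cycle apart. A reflex network could then modulate this rhythm.
import math

def simulate_cpg(freq_hz=1.5, coupling=2.0, dt=0.01, steps=600):
    """Integrate two oscillators that settle into anti-phase;
    return the phase difference over time."""
    phase_l, phase_r = 0.0, 0.3            # start slightly out of sync on purpose
    diffs = []
    for _ in range(steps):
        # Each oscillator advances at the target frequency and is pulled
        # toward being exactly pi radians away from its partner.
        dl = 2 * math.pi * freq_hz + coupling * math.sin(phase_r - phase_l - math.pi)
        dr = 2 * math.pi * freq_hz + coupling * math.sin(phase_l - phase_r - math.pi)
        phase_l += dl * dt
        phase_r += dr * dt
        diffs.append((phase_l - phase_r) % (2 * math.pi))
    return diffs

diffs = simulate_cpg()
print(f"phase difference after settling: {diffs[-1]:.2f} rad (pi ≈ {math.pi:.2f})")
# The legs lock half a cycle apart; joint commands such as sin(phase_l) then
# give the alternating rhythm that feedback can adapt to terrain and speed.
```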

By adopting the structure of CPGs and their reflexive counterpart, the adaptive imitated CPG (AI-CPG) method achieves remarkable adaptability and stability in motion generation while imitating human motion.

“This breakthrough sets a new benchmark in generating human-like movement in robotics, with unprecedented environmental adaptation capability,” adds Hayashibe. “Our method represents a significant step forward in the development of generative AI technologies for robot control, with potential applications across various industries.”

A comprehensive approach to safety for highly automated off-road machinery under Regulation 2023/1230

by Marea de Koning, Tyrone Machado, Andrei Ahonen, Nataliya Strokina, Morteza Dianatfar, Fransesco De Rosa, Tatiana Minav, Reza Ghabcheloo in Safety Science

An ongoing research project at Tampere University aims to create adaptable safety systems for highly automated off-road mobile machinery to meet industry needs. Research has revealed critical gaps in compliance with legislation related to public safety when using mobile working machines controlled by artificial intelligence.

As the adoption of highly automated off-road machinery increases, so does the need for robust safety measures. Conventional safety processes often fail to consider the health and safety risks posed by systems controlled by artificial intelligence (AI).

Marea de Koning, a doctoral researcher specialising in automation at Tampere University, conducts research with the aim of ensuring public safety without compromising technological advancement by developing a safety framework specifically tailored for autonomous mobile machines operating in collaboration with humans. This framework intends to enable original equipment manufacturers (OEMs), safety and system engineers, and industry stakeholders to create safety systems that comply with evolving legislation.

Anticipating all the possible ways a hazard can emerge and ensuring that the AI can safely manage hazardous scenarios is practically impossible. We need to adjust our approach to safety to focus more on finding ways to successfully manage unforeseen events.

We need robust risk management systems, often incorporating a human-in-the-loop safety option. Here a human supervisor is expected to intervene when necessary. But in autonomous machinery, relying on human intervention is impractical. According to de Koning, there can be measurable degradations in human performance when automation is used due to, for example, boredom, confusion, cognitive capacities, loss of situational awareness, and automation bias. These factors significantly impact safety, and a machine must become capable of safely managing its own behaviour.

“My approach considers hazards with AI-driven decision-making, risk assessment, and adaptability to unforeseen scenarios. I think it is important to actively engage with industry partners to ensure real-world applicability. By collaborating with manufacturers, it is possible to bridge the gap between theoretical frameworks and practical implementation,” she says.

The framework intends to support OEMs in designing and developing compliant safety systems and to ensure that their products adhere to evolving regulations.

Why animals can outrun robots

by Samuel A. Burden, Thomas Libby, Kaushik Jayaram, Simon Sponberg, J. Maxwell Donelan in Science Robotics

Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.

“A wildebeest can migrate for thousands of kilometres over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don’t even seem to be there, and cockroaches can lose a leg and not slow down,” says Dr. Max Donelan, Professor in Simon Fraser University’s Department of Biomedical Physiology and Kinesiology. “We have no robots capable of anything like this endurance, agility and robustness.”

To understand why robots lag behind animals, and to quantify by how much, an interdisciplinary team of scientists and engineers from leading research universities completed a detailed study comparing various aspects of running robots with their equivalents in animals. The paper finds that, by the metrics engineers use, biological components perform surprisingly poorly compared to fabricated parts. Where animals excel, though, is in the integration and control of those components.

Alongside Donelan, the team comprised Drs. Sam Burden, Associate Professor in the Department of Electrical & Computer Engineering at the University of Washington; Tom Libby, Senior Research Engineer, SRI International; Kaushik Jayaram, Assistant Professor in the Paul M Rady Department of Mechanical Engineering at the University of Colorado Boulder; and Simon Sponberg, Dunn Family Associate Professor of Physics and Biological Sciences at the Georgia Institute of Technology.

The researchers each studied one of five different “subsystems” that combine to create a running robot — Power, Frame, Actuation, Sensing, and Control — and compared them with their biological equivalents. Previously, it was commonly accepted that animals’ outperformance of robots must be due to the superiority of biological components.

“The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents — and sometimes radically outperformed them,” says Libby. “But also what’s very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up.”

More optimistically for the field of robotics, the researchers noted that, if you compare the relatively short time that robotics has had to develop its technology with the countless generations of animals that have evolved over many millions of years, the progress has actually been remarkably quick.

“It will move faster, because evolution is undirected,” says Burden. “Whereas we can very much correct how we design robots and learn something in one robot and download it into every other robot, biology doesn’t have that option. So there are ways that we can move much more quickly when we engineer robots than we can through evolution — but evolution has a massive head start.”

More than simply an engineering challenge, effective running robots offer countless potential uses. Whether solving ‘last mile’ delivery challenges in a world designed for humans that is often difficult to navigate for wheeled robots, carrying out searches in dangerous environments or handling hazardous materials, there are many potential applications for the technology.

The researchers hope that this study will help direct future development in robot technology, with an emphasis not on building a better piece of hardware, but in understanding how to better integrate and control existing hardware. Donelan concludes, “As engineering learns integration principles from biology, running robots will become as efficient, agile, and robust as their biological counterparts.”

Do AI models produce better weather forecasts than physics-based models? A quantitative evaluation case study of Storm Ciarán

by Andrew J. Charlton-Perez, Helen F. Dacre, Simon Driscoll, Suzanne L. Gray, Ben Harvey, Natalie J. Harvey, Kieran M. R. Hunt, Robert W. Lee, Ranjini Swaminathan, Remy Vandaele, Ambrogio Volonté in npj Climate and Atmospheric Science

Artificial intelligence (AI) can quickly and accurately predict the path and intensity of major storms, a new study has demonstrated.

The research, based on an analysis of November 2023’s Storm Ciarán, suggests weather forecasts that use machine learning can produce predictions of similar accuracy to traditional forecasts faster, cheaper, and using less computational power. The University of Reading study highlights the rapid progress and transformative potential of AI in weather prediction. Professor Andrew Charlton-Perez, who led the study, said: “AI is transforming weather forecasting before our eyes. Two years ago, modern machine learning techniques were rarely being applied to make weather forecasts. Now we have multiple models that can produce 10-day global forecasts in minutes.

“There is a great deal we can learn about AI weather forecasts by stress-testing them on extreme events like Storm Ciarán. We can identify their strengths and weaknesses and guide the development of even better AI forecasting technology to help protect people and property. This is an exciting and important time for weather forecasting.”

Near-surface wind and MSLP structure on landfall and track of Storm Ciarán.

To understand the effectiveness of AI-based weather models, scientists from the University of Reading compared AI and physics-based forecasts of Storm Ciarán — a deadly windstorm that hit northern and central Europe in November 2023, claiming 16 lives and leaving more than a million homes without power in France.

The researchers used four AI models and compared their results with traditional physics-based models. The AI models, developed by tech giants like Google, Nvidia and Huawei, were able to predict the storm’s rapid intensification and track 48 hours in advance. To a large extent, the forecasts were ‘indistinguishable’ from the performance of conventional forecasting models, the researchers said. The AI models also accurately captured the large-scale atmospheric conditions that fuelled Ciarán’s explosive development, such as its position relative to the jet stream — a narrow corridor of strong high-level winds.

The machine learning technology underestimated the storm’s damaging winds, however. All four AI systems underestimated Ciarán’s maximum wind speeds, which in reality gusted at speeds of up to 111 knots at Pointe du Raz, Brittany. The authors were able to show that this underestimation was linked to some of the features of the storm, including the temperature contrasts near its centre, that were not well predicted by the AI systems.

To better protect people from extreme weather like Storm Ciarán, the researchers say further investigation of the use of AI in weather prediction is urgently needed. Development of machine learning models could mean artificial intelligence is routinely used in weather prediction in the near future, saving forecasters time and money.

3D Printing of Double Network Granular Elastomers with Locally Varying Mechanical Properties

by Eva Baur, Benjamin Tiberghien, Esther Amstad in Advanced Materials

EPFL researchers are targeting the next generation of soft actuators and robots with an elastomer-based ink for 3D printing objects with locally changing mechanical properties, eliminating the need for cumbersome mechanical joints.

For engineers working on soft robotics or wearable devices, keeping things light is a constant challenge: heavier materials require more energy to move around, and — in the case of wearables or prostheses — cause discomfort. Elastomers are synthetic polymers that can be manufactured with a range of mechanical properties, from stiff to stretchy, making them a popular material for such applications. But manufacturing elastomers that can be shaped into complex 3D structures that go from rigid to rubbery has been unfeasible until now.

“Elastomers are usually cast so that their composition cannot be changed in all three dimensions over short length scales. To overcome this problem, we developed DNGEs: 3D-printable double network granular elastomers that can vary their mechanical properties to an unprecedented degree,” says Esther Amstad, head of the Soft Materials Laboratory in EPFL’s School of Engineering.

The lab’s DNGE prototype ‘finger’ with rigid ‘bones’ surrounded by flexible ‘flesh’ © Adrian Alberola

Eva Baur, a PhD student in Amstad’s lab, used DNGEs to print a prototype ‘finger’, complete with rigid ‘bones’ surrounded by flexible ‘flesh’. The finger was printed to deform in a pre-defined way, demonstrating the technology’s potential to manufacture devices that are sufficiently supple to bend and stretch, while remaining firm enough to manipulate objects. With these advantages, the researchers believe that DNGEs could facilitate the design of soft actuators, sensors, and wearables free of heavy, bulky mechanical joints.

The key to the DNGEs’ versatility lies in engineering two elastomeric networks. First, elastomer microparticles are produced from oil-in-water emulsion drops. These microparticles are placed in a precursor solution, where they absorb elastomer compounds and swell up. The swollen microparticles are then used to make a 3D printable ink, which is loaded into a bioprinter to create a desired structure. The precursor is polymerized within the 3D-printed structure, creating a second elastomeric network that rigidifies the entire object.

While the composition of the first network determines the structure’s stiffness, the second determines its fracture toughness, meaning that the two networks can be fine-tuned independently to achieve a combination of stiffness, toughness, and fatigue resistance. The use of elastomers over hydrogels — the material used in state-of-the-art approaches — has the added advantage of creating structures that are water-free, making them more stable over time. To top it off, DNGEs can be printed using commercially available 3D printers.

“The beauty of our approach is that anyone with a standard bioprinter can use it,” Amstad emphasizes.

One exciting potential application of DNGEs is in devices for motion-guided rehabilitation, where the ability to support movement in one direction while restricting it in another could be highly useful. Further development of DNGE technology could result in prosthetics, or even motion guides to assist surgeons. Sensing remote movements, for example in robot-assisted crop harvesting or underwater exploration, is another area of application.

Amstad says that the Soft Materials Lab is already working on the next steps toward developing such applications by integrating active elements — such as responsive materials and electrical connections — into DNGE structures.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
