RT/ Snail-inspired robot could scoop ocean microplastics
Robotics & AI biweekly vol.87, 1st December — 14th December
TL;DR
- Inspired by a small and slow snail, scientists have developed a robot prototype that may one day scoop up microplastics from the surfaces of oceans, seas and lakes.
- Engineers developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts. The device could be used for studying right ventricle disorders and testing devices and therapies aimed at treating those disorders.
- An interdisciplinary research team is using artificial intelligence (AI) to identify microplastics faster and more accurately than ever before.
- In a step toward more autonomous soft robots and wearable technologies, researchers have created a device that uses color to simultaneously sense multiple mechanical and temperature stimuli.
- Unmanned Underwater Vehicles (UUVs) are used around the world to conduct difficult environmental, remote, oceanic, defense and rescue missions in often unpredictable and harsh conditions. A new study has now used a novel bio-inspired artificial intelligence solution to improve the potential of UUVs and other adaptive control systems to operate more reliably in rough seas and other unpredictable conditions.
- An inspection design method and procedure by which mobile robots can inspect large pipe structures has been demonstrated with the successful inspection of multiple defects on a three-meter-long steel pipe using guided acoustic wave sensors.
- Artificial hands can be operated via app or with sensors placed in the muscles of the forearm. New research shows that a better understanding of muscle activity patterns in the forearm supports more intuitive and natural control of artificial limbs. This requires a network of 128 sensors and artificial intelligence-based techniques.
- Autonomous vehicles require object detection systems to navigate traffic and avoid obstacles on the road. However, current detection methods often suffer from diminished detection capabilities due to bad weather, unstructured roads, or occlusion. Now, a team of researchers has developed a novel Internet-of-Things-enabled deep learning-based end-to-end 3D object detection system with improved detection capabilities even under unfavorable conditions. This study marks a significant step in autonomous vehicle object detection technology.
- Researchers report that they have developed a new composite material designed to change behaviors depending on temperature in order to perform specific tasks. These materials are poised to be part of the next generation of autonomous robotics that will interact with the environment.
- New technology often calls for new materials — and with supercomputers and simulations, researchers don’t have to wade through inefficient guesswork to invent them from scratch. The Materials Project, an open-access database founded at the Berkeley Lab, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies — think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.
- And more!
Robotics market
The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars).
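As a back-of-the-envelope check of those figures, the short Python sketch below derives the market size such growth would imply for 2018; the baseline is computed from the quoted numbers, not taken from the underlying report.

```python
# Back-of-the-envelope check of the growth figures quoted above (assumes the
# 2025 value and CAGR given; the implied 2018 baseline is derived, not sourced).
target_2025 = 210.0    # projected market size, billion USD
cagr = 0.26            # compound annual growth rate
years = 2025 - 2018    # growth horizon

implied_2018 = target_2025 / (1 + cagr) ** years
print(f"Implied 2018 market size: ~{implied_2018:.0f} billion USD")  # roughly 42
```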
Latest News & Research
Optimal free-surface pumping by an undulating carpet
by Anupam Pandey, Zih-Yin Chen, Jisoo Yuk, Yuming Sun, Chris Roh, Daisuke Takagi, Sungyon Lee, Sunghwan Jung in Nature Communications
Inspired by a small and slow snail, scientists have developed a robot prototype that may one day scoop up microplastics from the surfaces of oceans, seas and lakes.
The robot’s design is based on the Hawaiian apple snail (Pomacea canaliculata), a common aquarium snail that uses the undulating motion of its foot to drive water surface flow and suck in floating food particles.
Currently, plastic collection devices mostly rely on drag nets or conveyor belts to gather and remove larger plastic debris from water, but they lack the fine scale required for retrieving microplastics. These tiny particles of plastic can be ingested and end up in the tissues of marine animals, thereby entering the food chain, where they pose a health risk and are potentially carcinogenic to humans.
“We were inspired by how this snail collects food particles at the [water and air] interface to engineer a device that could possibly collect microplastics in the ocean or at a water body’s surface,” said Sunghwan “Sunny” Jung, professor in the department of biological and environmental engineering at Cornell University.
Jung is the study’s senior author. The prototype, modified from an existing design, would need to be scaled up to be practical in a real-world setting. The researchers used a 3D printer to make a flexible carpet-like sheet capable of undulating.
A helical structure on the underside of the sheet rotates like a corkscrew to cause the carpet to undulate and create a travelling wave on the water. Analyzing the motion of the fluid was key to this research. “We needed to understand the fluid flow to characterize the pumping behavior,” Jung said. The fluid-pumping system based on the snail’s technique is open to the air.
The researchers calculated that a similar closed system, where the pump is enclosed and uses a tube to suck in water and particles, would require high energy inputs to operate. The snail-like open system, by contrast, is far more efficient: the prototype, though small, runs on only 5 volts of electricity while still effectively sucking in water, Jung said. Due to the weight of a battery and motor, the researchers may need to attach a flotation device to the robot to keep it from sinking.
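For a feel of the kinematics involved, the snippet below sketches a carpet surface undulating as a traveling wave; the amplitude, wavelength and frequency are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's model): the carpet surface is
# approximated as a sinusoidal traveling wave h(x, t) = A*sin(k*x - w*t).
# The amplitude, wavelength and frequency below are assumed values.
A = 0.005                # wave amplitude, m
wavelength = 0.04        # m
freq = 2.0               # undulation frequency, Hz

k = 2 * np.pi / wavelength   # wavenumber
w = 2 * np.pi * freq         # angular frequency
wave_speed = w / k           # speed of the traveling wave along the carpet

x = np.linspace(0.0, 0.12, 200)    # positions along a 12 cm carpet, m
h = A * np.sin(k * x - w * 0.1)    # surface deflection at t = 0.1 s

print(f"wave speed: {wave_speed:.2f} m/s, peak deflection: {h.max() * 1e3:.1f} mm")
```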
Robotic right ventricle is a biohybrid platform that simulates right ventricular function in (patho)physiological conditions and intervention
by Manisha Singh, Jean Bonnemain, Caglar Ozturk, Brian Ayers, Mossab Y. Saeed, Diego Quevedo-Moreno, Meagan Rowlett, Clara Park, Yiling Fan, Christopher T. Nguyen, Ellen T. Roche in Nature Cardiovascular Research
MIT engineers have developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts.
The robo-ventricle combines real heart tissue with synthetic, balloon-like artificial muscles that enable scientists to control the ventricle’s contractions while observing how its natural valves and other intricate structures function.
The artificial ventricle can be tuned to mimic healthy and diseased states. The team manipulated the model to simulate conditions of right ventricular dysfunction, including pulmonary hypertension and myocardial infarction. They also used the model to test cardiac devices. For instance, the team implanted a mechanical valve to repair a natural malfunctioning valve, then observed how the ventricle’s pumping changed in response. They say the new robotic right ventricle, or RRV, can be used as a realistic platform to study right ventricle disorders and test devices and therapies aimed at treating those disorders.
“The right ventricle is particularly susceptible to dysfunction in intensive care unit settings, especially in patients on mechanical ventilation,” says Manisha Singh, a postdoc at MIT’s Institute for Medical Engineering and Science (IMES). “The RRV simulator can be used in the future to study the effects of mechanical ventilation on the right ventricle and to develop strategies to prevent right heart failure in these vulnerable patients.”
Singh and her colleagues report details of the new design in a paper. Her co-authors include Associate Professor Ellen Roche, who is a core member of IMES and the associate head for research in the Department of Mechanical Engineering at MIT, along with Jean Bonnemain, Caglar Ozturk, Clara Park, Diego Quevedo-Moreno, Meagan Rowlett, and Yiling Fan of MIT, Brian Ayers of Massachusetts General Hospital, Christopher Nguyen of Cleveland Clinic, and Mossab Saeed of Boston Children’s Hospital.
The right ventricle is one of the heart’s four chambers, along with the left ventricle and the left and right atria. Of the four chambers, the left ventricle is the heavy lifter, as its thick, cone-shaped musculature is built for pumping blood through the entire body. The right ventricle, Roche says, is a “ballerina” in comparison, as it handles a lighter though no-less-crucial load.
“The right ventricle pumps deoxygenated blood to the lungs, so it doesn’t have to pump as hard,” Roche notes. “It’s a thinner muscle, with more complex architecture and motion.”
This anatomical complexity has made it difficult for clinicians to accurately observe and assess right ventricle function in patients with heart disease.
“Conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnoses and inadequate treatment strategies,” Singh says.
To improve understanding of the lesser-known chamber and speed the development of cardiac devices to treat its dysfunction, the team designed a realistic, functional model of the right ventricle that both captures its anatomical intricacies and reproduces its pumping function. The model includes real heart tissue, which the team chose to incorporate because it retains natural structures that are too complex to reproduce synthetically.
“There are thin, tiny chordae and valve leaflets with different material properties that are all moving in concert with the ventricle’s muscle. Trying to cast or print these very delicate structures is quite challenging,” Roche explains.
In the new study, the team reports explanting a pig’s right ventricle, which they treated to carefully preserve its internal structures. They then fit a silicone wrapping around it, which acted as a soft, synthetic myocardium, or muscular lining. Within this lining, the team embedded several long, balloon-like tubes, which encircled the real heart tissue, in positions that the team determined through computational modeling to be optimal for reproducing the ventricle’s contractions. The researchers connected each tube to a control system, which they then set to inflate and deflate each tube at rates that mimicked the heart’s real rhythm and motion.
To test its pumping ability, the team infused the model with a liquid similar in viscosity to blood. This particular liquid was also transparent, allowing the engineers to observe with an internal camera how internal valves and structures responded as the ventricle pumped liquid through. They found that the artificial ventricle’s pumping power and the function of its internal structures were similar to what they previously observed in live, healthy animals, demonstrating that the model can realistically simulate the right ventricle’s action and anatomy. The researchers could also tune the frequency and power of the pumping tubes to mimic various cardiac conditions, such as irregular heartbeats, muscle weakening, and hypertension.
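To give a sense of how such rhythmic actuation can be commanded in software, here is a minimal Python sketch that generates inflation-pressure setpoints over a cardiac-like cycle; the period, systolic fraction and pressures are hypothetical placeholders, not the team’s control parameters.

```python
import numpy as np

# Illustrative sketch only (not the MIT control software): generate a pressure
# setpoint for the balloon-like actuators over the cardiac cycle. The cycle
# period, systolic fraction and peak pressure are hypothetical placeholders.
def pressure_setpoint(t, period=0.8, systole_frac=0.35, p_max=12.0):
    """Pressure command in kPa: smooth inflation during 'systole', deflated otherwise."""
    phase = (t % period) / period
    if phase < systole_frac:
        return p_max * np.sin(np.pi * phase / systole_frac) ** 2
    return 0.0

times = np.arange(0.0, 1.6, 0.2)                          # two cycles at 75 beats per minute
print([round(pressure_setpoint(t), 1) for t in times])    # kPa values sent to each tube
```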
“We’re reanimating the heart, in some sense, and in a way that we can study and potentially treat its dysfunction,” Roche says.
To show that the artificial ventricle can be used to test cardiac devices, the team surgically implanted ring-like medical devices of various sizes to repair the chamber’s tricuspid valve — a leafy, one-way valve that lets blood into the right ventricle. When this valve is leaky or physically compromised, it can cause right heart failure or atrial fibrillation and lead to symptoms such as reduced exercise capacity, swelling of the legs and abdomen, and liver enlargement.
The researchers surgically manipulated the robo-ventricle’s valve to simulate this condition, then either replaced it by implanting a mechanical valve or repaired it using ring-like devices of different sizes. They observed which device improved the ventricle’s fluid flow as it continued to pump.
“With its ability to accurately replicate tricuspid valve dysfunction, the RRV serves as an ideal training ground for surgeons and interventional cardiologists,” Singh says. “They can practice new surgical techniques for repairing or replacing the tricuspid valve on our model before performing them on actual patients.”
Currently, the RRV can simulate realistic function over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. They are also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients. And looking far in the future, Roche plans to pair the RRV with a similar artificial, functional model of the left ventricle, which the group is currently fine-tuning.
“We envision pairing this with the left ventricle to make a fully tunable artificial heart that could potentially function in people,” Roche says. “We’re quite a while off, but that’s the overarching vision.”
Leveraging deep learning for automatic recognition of microplastics (MPs) via focal plane array (FPA) micro-FT-IR imaging
by Ziang Zhu, Wayne Parker, Alexander Wong in Environmental Pollution
An interdisciplinary research team from the University of Waterloo is using artificial intelligence (AI) to identify microplastics faster and more accurately than ever before.
Microplastics are commonly found in food and are dangerous pollutants that cause severe environmental damage — finding them is the key to getting rid of them. The research team’s advanced imaging identification system could help wastewater treatment plants and food production industries make informed decisions to mitigate the potential impact of microplastics on the environment and human health. A comprehensive risk analysis and action plan requires quality information based on accurate identification.
In search of a robust analytical tool that could enumerate, identify and describe the many microplastics that exist, project lead Dr. Wayne Parker and his team employed an advanced spectroscopy method which exposes particles to a range of wavelengths of light. Different types of plastics produce different signals in response to the light exposure. These signals are like fingerprints that can be employed to mark particles as microplastic or not.
The challenge researchers often face is that microplastics come in wide varieties due to the presence of manufacturing additives and fillers that can blur the “fingerprints” in a lab setting. This often makes it difficult to distinguish microplastics from organic material, and to tell different types of microplastics apart. Human intervention is usually required to dig out subtle patterns and cues, which is slow and prone to error.
“Microplastics are hydrophobic materials that can soak up other chemicals,” said Parker, a professor in Waterloo’s Department of Civil and Environmental Engineering. “Science is still evolving in terms of how bad the problem is, but it’s theoretically possible that microplastics are enhancing the accumulation of toxic substances in the food chain.”
Parker approached Dr. Alexander Wong, a professor in Waterloo’s Department of Systems Design Engineering and the Canada Research Chair in Artificial Intelligence and Medical Imaging, for assistance.
With his help, the team developed an AI tool called PlasticNet that enables researchers to rapidly analyze large numbers of particles approximately 50 per cent faster than prior methods and with 20 per cent more accuracy. The tool is the latest sustainable technology designed by Waterloo researchers to protect our environment and engage in research that will contribute to a sustainable future.
“We built a deep learning neural network to enhance microplastic identification from the spectroscopic signals,” said Wong. “We trained it on data from existing literature sources and our own generated images to understand the varied make-up of microplastics and spot the differences quickly and correctly — regardless of the fingerprint quality.”
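As a rough sketch of the kind of model involved, the snippet below defines a small 1D convolutional network that maps an FT-IR spectrum to a polymer class; the architecture, spectrum length and class count are assumptions for illustration and are not the published PlasticNet.

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea behind PlasticNet (not the published model):
# a 1D convolutional network that maps an FT-IR spectrum to a polymer class.
# The layer sizes, number of classes and spectrum length are assumptions.
N_WAVENUMBERS = 1024   # points per spectrum (assumed)
N_CLASSES = 8          # e.g. PE, PP, PET, PS, ... plus "not plastic" (assumed)

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (N_WAVENUMBERS // 16), 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

spectra = torch.randn(5, 1, N_WAVENUMBERS)    # a batch of 5 dummy spectra
logits = model(spectra)
predicted_polymer = logits.argmax(dim=1)      # predicted class index per particle
print(predicted_polymer.shape)                # torch.Size([5])
```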
Multi-modal deformation and temperature sensing for context-sensitive machines
by Robert Baines, Fabio Zuliani, Neil Chennoufi, Sagar Joshi, Rebecca Kramer-Bottiglio, Jamie Paik in Nature Communications
Robotics researchers have already made great strides in developing sensors that can perceive changes in position, pressure, and temperature — all of which are important for technologies like wearable devices and human-robot interfaces. But a hallmark of human perception is the ability to sense multiple stimuli at once, and this is something that robotics has struggled to achieve.
Now, Jamie Paik and colleagues in the Reconfigurable Robotics Lab (RRL) in EPFL’s School of Engineering have developed a sensor that can perceive combinations of bending, stretching, compression, and temperature changes, all using a robust system that boils down to a simple concept: color.
Dubbed ChromoSense, the RRL’s technology relies on a translucent rubber cylinder containing three sections dyed red, green, and blue. An LED at the top of the device sends light through its core, and changes in the light’s path through the colors as the device is bent or stretched are picked up by a miniaturized spectrometer at the bottom.
“Imagine you are drinking three different flavors of slushie through three different straws at once: the proportion of each flavor you get changes if you bend or twist the straws. This is the same principle that ChromoSense uses: it perceives changes in light traveling through the colored sections as the geometry of those sections deforms,” says Paik.
A thermosensitive section of the device also allows it to detect temperature changes, using a special dye — similar to that in color-changing t-shirts or mood rings — that desaturates in color when it is heated.
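To make the sensing principle concrete, the sketch below shows one naive way such readings could be decoded, assuming a linear calibration between stimuli and colour-channel changes; the matrix and readings are made-up numbers, not EPFL’s algorithm.

```python
import numpy as np

# Illustrative sketch, not EPFL's decoding: assume (naively) that small
# deformations and temperature changes shift the measured red/green/blue
# intensities roughly linearly. With a calibration matrix C, the stimuli can
# then be recovered from an intensity change by least squares.
# C maps [bend, stretch, temperature] -> change in [R, G, B] (made-up values).
C = np.array([
    [0.8, -0.1,  0.0],
    [-0.2, 0.7, -0.1],
    [0.1,  0.1, -0.6],
])

baseline = np.array([1.00, 1.00, 1.00])   # channel intensities at rest
measured = np.array([1.06, 0.93, 0.97])   # intensities under load

stimuli, *_ = np.linalg.lstsq(C, measured - baseline, rcond=None)
bend, stretch, d_temp = stimuli
print(f"bend~{bend:.2f}, stretch~{stretch:.2f}, dT~{d_temp:.2f} (arbitrary units)")
```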
Paik explains that while robotic technologies that rely on cameras or multiple sensing elements are effective, they can make wearable devices heavier and more cumbersome, in addition to requiring more data processing.
“For soft robots to serve us better in our daily lives, they need to be able to sense what we are doing,” she says. “Traditionally, the fastest and most inexpensive way to do this has been through vision-based systems, which capture all of our activities and then extract the necessary data. ChromoSense allows for more targeted, information-dense readings, and the sensor can be easily embedded into different materials for different tasks.”
Thanks to its simple mechanical structure and use of color over cameras, ChromoSense could potentially lend itself to inexpensive mass production. In addition to assistive technologies, such as mobility-aiding exosuits, Paik sees everyday applications for ChromoSense in athletic gear or clothing, which could be used to give users feedback about their form and movements. A strength of ChromoSense — its ability to sense multiple stimuli at once — can also be a weakness, as decoupling simultaneously applied stimuli is still a challenge the researchers are working on. At the moment, Paik says they are focusing on improving the technology to sense locally applied forces, or the exact boundaries of a material when it changes shape.
“If ChromoSense gains popularity and many people want to use it as a general-purpose robotic sensing solution, then I think further increasing the information density of the sensor could become a really interesting challenge,” she says.
Looking ahead, Paik also plans to experiment with different formats for ChromoSense, which has been prototyped as a cylindrical shape and as part of a wearable soft exosuit, but could also be imagined in a flat form more suitable for the RRL’s signature origami robots.
“With our technology, anything can become a sensor as long as light can pass through it,” she summarizes.
Learning Adaptive Control of a UUV Using a Bio-Inspired Experience Replay Mechanism
by Thomas Chaffre, Paulo E. Santos, Gilles Le Chenadec, Estelle Chauveau, Karl Sammut, Benoit Clement in IEEE Access
Unmanned Underwater Vehicles (UUVs) are used around the world to conduct difficult environmental, remote, oceanic, defence and rescue missions in often unpredictable and harsh conditions.
A new study led by Flinders University and French researchers has now used a novel bio-inspired artificial intelligence solution to improve the potential of UUVs and other adaptive control systems to operate more reliably in rough seas and other unpredictable conditions. This innovative approach, using the Biologically-Inspired Experience Replay (BIER) method, has been published in the journal IEEE Access. Unlike conventional methods, BIER aims to overcome data inefficiency and performance degradation by leveraging incomplete but valuable recent experiences, explains first author Dr Thomas Chaffre.
“The outcomes of the study demonstrated that BIER surpassed standard Experience Replay methods, achieving optimal performance twice as fast as the latter in the assumed UUV domain,” he says. “The method showed exceptional adaptability and efficiency, exhibiting its capability to stabilize the UUV in varied and challenging conditions.”
The method incorporates two memory buffers, one focusing on recent state-action pairs and the other emphasising positive rewards. To test the effectiveness of the proposed method, the researchers ran simulated scenarios using a Robot Operating System (ROS)-based UUV simulator, gradually increasing the scenarios’ complexity. These scenarios varied in target velocity values and the intensity of current disturbances. Senior author Paulo Santos, Flinders University Associate Professor in AI and Robotics, says the BIER method’s success holds promise for enhancing adaptability and performance in various fields requiring dynamic, adaptive control systems.
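To make the two-buffer idea concrete, here is a minimal Python sketch of a dual replay buffer that favours recent transitions and positively rewarded ones; the buffer sizes and sampling mix are assumptions, and this is not the authors’ BIER implementation.

```python
import random
from collections import deque

# Minimal sketch of the dual-buffer idea described above (not the authors'
# BIER code): one buffer keeps the most recent transitions, the other keeps
# transitions that earned a positive reward; mini-batches are drawn from both.
class DualReplayBuffer:
    def __init__(self, recent_size=10_000, reward_size=10_000, mix=0.5):
        self.recent = deque(maxlen=recent_size)    # latest state-action pairs
        self.positive = deque(maxlen=reward_size)  # transitions with reward > 0
        self.mix = mix                             # fraction sampled from 'recent'

    def add(self, state, action, reward, next_state, done):
        transition = (state, action, reward, next_state, done)
        self.recent.append(transition)
        if reward > 0:
            self.positive.append(transition)

    def sample(self, batch_size):
        n_recent = int(batch_size * self.mix)
        n_pos = batch_size - n_recent
        batch = random.sample(list(self.recent), min(n_recent, len(self.recent)))
        if self.positive:
            batch += random.sample(list(self.positive), min(n_pos, len(self.positive)))
        return batch
```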
UUVs’ capabilities in mapping, imaging and sensor control are rapidly improving, including through Deep Reinforcement Learning (DRL), which is advancing the adaptive control responses to the underwater disturbances UUVs can encounter. However, the efficiency of these methods is challenged when faced with unforeseen variations in real-world applications.
The complex dynamics of the underwater environment limit the observability of UUV manoeuvring tasks, making it difficult for existing DRL methods to perform optimally. The introduction of BIER marks a significant step forward in enhancing the effectiveness of deep reinforcement learning methods in general.
Pipe inspection using guided acoustic wave sensors integrated with mobile robots
by Jie Zhang, Xudong Niu, Anthony J. Croxford, Bruce W. Drinkwater in NDT & E International
An inspection design method and procedure by which mobile robots can inspect large pipe structures has been demonstrated with the successful inspection of multiple defects on a three-meter-long steel pipe using guided acoustic wave sensors.
The approach, developed by a University of Bristol team led by Professor Bruce Drinkwater and Professor Anthony Croxford, was used to inspect a long steel pipe with multiple defects, including circular holes of different sizes, a crack-like defect and pits, along a designed inspection path that achieved 100% detection coverage for a defined reference defect.
In the study, they show how they were able to effectively examine large plate-like structures using a network of independent robots, each carrying sensors capable of both sending and receiving guided acoustic waves and working in pulse-echo mode. This approach has the major advantage of minimizing communication between robots, requiring no synchronization and raising the possibility of on-board processing to lower data transfer costs and hence reduce overall inspection expenses. The inspection was divided into a defect detection stage and a defect localization stage.
Lead author Dr Jie Zhang explained: “There are many robotic systems with integrated ultrasound sensors used for automated inspection of pipelines from the inside to allow the pipeline operator to perform required inspections without stopping the flow of product in the pipeline. However, available systems struggle to cope with varying pipe cross-sections or network complexity, inevitably leading to pipeline disruption during inspection. This makes them suitable for specific inspections of high value assets, such as oil and gas pipelines, but not generally applicable.
“As the cost of mobile robots has reduced over recent years, it is increasingly possible to deploy multiple robots for a large area inspection. We take the existence of small inspection robots as our starting point, and explore how they can be used for generic monitoring of a structure. This requires inspection strategies, methodologies and assessment procedures that can be integrated with the mobile robots for accurate defect detection and localization that is low cost and efficient.
“We investigate this problem by considering a network of robots, each with a single omnidirectional guided acoustic wave transducer. This configuration is considered as it is arguably the simplest, with good potential for integration in a low cost platform.”
The methods employed are generally applicable to other related scenarios and allow the impact of any detection or localization method decisions to be quickly quantified. They could be applied across other materials, pipe geometries, noise levels or guided wave modes, allowing the full range of sensor performance parameters, defect sizes and types, and operating modalities to be explored. The techniques can also be used to assess detection and localization performance for specified inspection parameters, for example, predicting the minimum detectable defect under a specified probability of detection and probability of false alarm.
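To make the pulse-echo localization idea concrete, below is a minimal Python sketch that estimates a defect position from round-trip times of flight measured at several robot positions; the wave speed, sensor positions and timings are synthetic values, not data from the Bristol study.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch of pulse-echo localization (not the Bristol processing
# chain): each robot measures a round-trip time of flight for an echo, which
# places the defect on a circle around that robot; a least-squares fit over
# several robots gives the defect position. All values below are assumed.
c = 3200.0                                                  # guided-wave group velocity, m/s
robots = np.array([[0.2, 0.1], [1.5, 0.3], [2.6, 0.15]])    # sensor positions, m
tof = np.array([8.1e-4, 3.9e-4, 7.2e-4])                    # round-trip times, s (synthetic)
ranges = c * tof / 2                                        # robot-to-defect distances, m

def residuals(p):
    # Mismatch between predicted and measured distances for candidate position p.
    return np.linalg.norm(robots - p, axis=1) - ranges

fit = least_squares(residuals, x0=robots.mean(axis=0))
print(f"estimated defect position: {fit.x.round(3)} m")
```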
Exploring Muscle Synergies for Performance Enhancement and Learning in Myoelectric Control Maps
by K. C. Tse, P. Capsi-Morales, T. Spiegeler Castaneda, C. Piazza in IEEE International Conference on Rehabilitation Robotics
Different types of grasps and bionic design: technological developments in recent decades have already led to advanced artificial hands. They can enable amputees who have lost a hand through accident or illness to regain some movements. Some of these modern prostheses allow independent finger movements and wrist rotation. These movements can be selected via a smartphone app or by using muscle signals from the forearm, typically detected by two sensors.
For instance, the activation of wrist flexor muscles can be used to close the fingers together to grip a pen. If the wrist extensor muscles are contracted, the fingers re-open and the hand releases the pen. The same approach makes it possible to control different finger movements that are selected with the simultaneous activation of both flexor and extensor muscle groups.
“These are movements that the patient has to learn during rehabilitation,” says Cristina Piazza, a professor of rehabilitation and assistive robotics at the Technical University of Munich (TUM). Now, Prof. Piazza’s research team has shown that artificial intelligence can enable patients to control advanced hand prostheses more intuitively by using the “synergy principle” and with the help of 128 sensors on the forearm.
What is the synergy principle? “It is known from neuroscientific studies that repetitive patterns are observed in experimental sessions, both in kinematics and muscle activation,” says Prof. Piazza. These patterns can be interpreted as the way in which the human brain copes with the complexity of the biological system: it activates pools of muscle cells together, including in the forearm. The professor adds: “When we use our hands to grasp an object, for example a ball, we move our fingers in a synchronized way and adapt to the shape of the object when contact occurs.” The researchers are now using this principle to design and control artificial hands by creating new learning algorithms.
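As a rough illustration of how synergies can be extracted from many-channel muscle recordings, the sketch below applies non-negative matrix factorization to dummy data with the same 128-channel layout; the number of synergies, the signals and the choice of method are assumptions, not the TUM team’s pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Minimal sketch of the synergy principle (not TUM's pipeline): non-negative
# matrix factorisation decomposes multi-channel muscle activity into a few
# synergies. The channel count matches the 128-sensor setup described above;
# the number of synergies and the dummy signals are assumptions.
n_channels, n_samples, n_synergies = 128, 2000, 4

rng = np.random.default_rng(0)
emg_envelopes = np.abs(rng.normal(size=(n_channels, n_samples)))  # rectified EMG (dummy)

nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = nmf.fit_transform(emg_envelopes)   # (128, 4): how strongly each sensor loads on each synergy
H = nmf.components_                    # (4, 2000): activation of each synergy over time

print(W.shape, H.shape)
```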
This is necessary for intuitive movement: When controlling an artificial hand to grasp a pen, for example, multiple steps take place. First, the patient orients the artificial hand according to the grasping location, slowly moves the fingers together, and then grabs the pen. The goal is to make these movements more and more fluid, so that it is hardly noticeable that numerous separate movements make up an overall process.
“With the help of machine learning, we can understand the variations among subjects and improve the control adaptability over time and the learning process,” concludes Patricia Capsi Morales, the senior scientist in Prof. Piazza’s team.
Experiments with the new approach already indicate that conventional control methods could soon be empowered by more advanced strategies. To study what is happening at the level of the central nervous system, the researchers are working with two films: one for the inside and one for the outside of the forearm. Each contains up to 64 sensors to detect muscle activation. The method also estimates which electrical signals the spinal motor neurons have transmitted.
“The more sensors we use, the better we can record information from different muscle groups and find out which muscle activations are responsible for which hand movements,” explains Prof. Piazza. Depending on whether a person intends to make a fist, grip a pen or open a jam jar, “characteristic features of muscle signals” result, according to Dr. Capsi Morales — a prerequisite for intuitive movements.
Current research concentrates on the movement of the wrist and the whole hand. It shows that most people (eight out of ten) prefer the intuitive way of moving the wrist and hand, which is also the more efficient way. But two out of ten learn to handle the less intuitive approach and, in the end, become even more precise.
“Our goal is to investigate the learning effect and find the right solution for each patient,” Dr. Capsi Morales explains. “This is a step in the right direction,” says Prof. Piazza, who emphasizes that each system consists of individual mechanics and properties of the hand, special training with patients, interpretation and analysis, and machine learning.
A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles
by Imran Ahmed, Gwanggil Jeon, Abdellah Chehri in IEEE Transactions on Intelligent Transportation Systems
Autonomous vehicles hold the promise of tackling traffic congestion, enhancing traffic flow through vehicle-to-vehicle communication, and revolutionizing the travel experience by offering comfortable and safe journeys. Additionally, integrating autonomous driving technology into electric vehicles could contribute to more eco-friendly transportation solutions.
A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around obstacles, pedestrians, and other vehicles across diverse environments. Current autonomous vehicles employ smart sensors such as LiDAR (Light Detection and Ranging) for a 3D view of the surroundings and depth information, RADAR (Radio Detection and Ranging) for detecting objects at night and in cloudy weather, and a set of cameras for providing RGB images and a 360-degree view, collectively forming a comprehensive dataset known as a point cloud.
However, these sensors often face challenges like reduced detection capabilities in adverse weather, on unstructured roads, or due to occlusion. To overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon from the Department of Embedded Systems Engineering at Incheon National University (INU), Korea, has recently developed a groundbreaking Internet-of-Things-enabled deep learning-based end-to-end 3D object detection system.
“Our proposed system operates in real time, enhancing the object detection capabilities of autonomous vehicles, making navigation through traffic smoother and safer,” explains Prof. Jeon.
The proposed system is built on the YOLOv3 (You Only Look Once) deep learning object detection technique, one of the most widely used state-of-the-art techniques for 2D visual detection. The researchers first used this model for 2D object detection and then modified the YOLOv3 technique to detect 3D objects. Using both point cloud data and RGB images as input, the system generates bounding boxes with confidence scores and labels for visible obstacles as output.
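For context on how such raw detections are typically turned into a final list of obstacles, here is a minimal Python sketch of confidence filtering and non-maximum suppression, the standard post-processing step for YOLO-style detectors; the thresholds and boxes are made-up values, and this is not the authors’ implementation.

```python
import numpy as np

# Illustrative post-processing for a YOLO-style detector (not the authors'
# code): keep boxes above a confidence threshold, then apply non-maximum
# suppression so each obstacle is reported only once. Boxes are [x1, y1, x2, y2].
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = np.array([[10, 10, 50, 60], [12, 12, 52, 62], [100, 80, 140, 150]], float)
scores = np.array([0.92, 0.85, 0.30])
print(nms(boxes, scores))   # -> [0]: the duplicate and the low-confidence box are dropped
```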
To assess the system’s performance, the team conducted experiments using the Lyft dataset, which consists of road information captured by 20 autonomous vehicles traveling a predetermined route in Palo Alto, California, over a four-month period. The results demonstrated that YOLOv3 exhibits high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracies for 2D and 3D object detection were an impressive 96% and 97%, respectively.
Prof. Jeon emphasizes the potential impact of this enhanced detection capability: “By improving detection capabilities, this system could propel autonomous vehicles into the mainstream. The introduction of autonomous vehicles has the potential to transform the transportation and logistics industry, offering economic benefits through reduced dependence on human drivers and the introduction of more efficient transportation methods.”
Furthermore, the present work is expected to drive research and development in various technological fields such as sensors, robotics, and artificial intelligence. Going ahead, the team aims to explore additional deep learning algorithms for 3D object detection, recognizing the current focus on 2D image development. In summary, this groundbreaking study could pave the way for a widespread adoption of autonomous vehicles and, in turn, a more environment-friendly and comfortable mode of transport.
Algorithmic encoding of adaptive responses in temperature-sensing multimaterial architectures
by Weichen Li, Yue Wang, Tian Chen, Xiaojia Shelly Zhang in Science Advances
Researchers report that they have developed a new composite material designed to change behaviors depending on temperature in order to perform specific tasks. These materials are poised to be part of the next generation of autonomous robotics that will interact with the environment.
The new study conducted by University of Illinois Urbana-Champaign civil and environmental engineering professor Shelly Zhang and graduate student Weichen Li, in collaboration with professor Tian Chen and graduate student Yue Wang from the University of Houston, uses computer algorithms, two distinct polymers and 3D printing to reverse engineer a material that expands and contracts in response to temperature change with or without human intervention.
“Creating a material or device that will respond in specific ways depending on its environment is very challenging to conceptualize using human intuition alone — there are just so many design possibilities out there,” Zhang said. “So, instead, we decided to work with a computer algorithm to help us determine the best combination of materials and geometry.”
The team first used computer modeling to conceptualize a two-polymer composite that can behave differently under various temperatures based on user input or autonomous sensing.
“For this study, we developed a material that can behave like soft rubber in low temperatures and as a stiff plastic in high temperatures,” Zhang said.
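As a loose illustration of what “soft rubber at low temperature, stiff plastic at high temperature” can mean in numbers, the sketch below models an effective composite stiffness with a simple rule of mixtures; the functional form, transition temperatures and moduli are assumptions for demonstration, not the study’s optimized design.

```python
import numpy as np

# Illustrative sketch only (not the paper's design): model the reported
# behaviour -- soft and rubbery at low temperature, stiff at high temperature
# -- as a smooth switch in effective modulus for each of two polymer phases,
# combined with a simple rule of mixtures. All numbers are assumptions.
def phase_modulus(T, E_soft, E_stiff, T_switch, width=5.0):
    """Effective modulus (MPa) of one polymer phase as a function of temperature."""
    stiff_fraction = 1.0 / (1.0 + np.exp(-(T - T_switch) / width))
    return (1 - stiff_fraction) * E_soft + stiff_fraction * E_stiff

def composite_modulus(T, vol_frac_A=0.5):
    E_A = phase_modulus(T, E_soft=5.0, E_stiff=1800.0, T_switch=45.0)   # polymer A (assumed)
    E_B = phase_modulus(T, E_soft=2.0, E_stiff=1200.0, T_switch=60.0)   # polymer B (assumed)
    return vol_frac_A * E_A + (1 - vol_frac_A) * E_B                    # rule of mixtures

for T in (20.0, 50.0, 80.0):
    print(f"T = {T:>4.0f} C -> E_eff ~ {composite_modulus(T):7.1f} MPa")
```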
Once fabricated into a tangible device, the team tested the new composite material’s ability to respond to temperature changes to perform a simple task — switch on LED lights.
“Our study demonstrates that it is possible to engineer a material with intelligent temperature sensing capabilities, and we envision this being very useful in robotics,” Zhang said. “For example, if a robot’s carrying capacity needs to change when the temperature changes, the material will ‘know’ to adapt its physical behavior to stop or perform a different task.”
Zhang said that one of the hallmarks of the study is the optimization process that helps the researchers interpolate the distribution and geometries of the two different polymer materials needed.
“Our next goal is to use this technique to add another level of complexity to a material’s programmed or autonomous behavior, such as the ability to sense the velocity of some sort of impact from another object,” she said. “This will be critical for robotics materials to know how to respond to various hazards in the field.”
An autonomous laboratory for the accelerated synthesis of novel materials
by Nathan J. Szymanski, Bernardus Rendy, Yuxing Fei, Rishi E. Kumar, Tanjin He, David Milsted, Matthew J. McDermott, Max Gallant, Ekin Dogus Cubuk, Amil Merchant, Haegyeom Kim, Anubhav Jain, Christopher J. Bartel, Kristin Persson, Yan Zeng, Gerbrand Ceder in Nature
New technology often calls for new materials — and with supercomputers and simulations, researchers don’t have to wade through inefficient guesswork to invent them from scratch.
The Materials Project, an open-access database founded at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) in 2011, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies — think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.
Now, Google DeepMind — Google’s artificial intelligence lab — is contributing nearly 400,000 new compounds to the Materials Project, expanding the amount of information researchers can draw upon. The dataset includes how the atoms of a material are arranged (the crystal structure) and how stable it is (formation energy).
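For researchers who want to pull such entries programmatically, a minimal sketch using the pymatgen MPRester client is shown below; the exact client, method names and the personal API key requirement depend on the installed version, so treat this as an assumption-laden example rather than official usage.

```python
from pymatgen.ext.matproj import MPRester

# Minimal sketch of querying the Materials Project, assuming the pymatgen
# MPRester client (exact client/method names vary by version; a personal API
# key from materialsproject.org is required).
API_KEY = "YOUR_MP_API_KEY"   # placeholder

with MPRester(API_KEY) as mpr:
    # Silicon (mp-149) as a well-known example entry.
    structure = mpr.get_structure_by_material_id("mp-149")
    print(structure.composition.reduced_formula)   # crystal structure retrieved from the database
```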
“We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”
To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning. GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 that they are adding to the Materials Project and predict are stable, making them potentially useful in future technologies.
Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper showed that the autonomous lab can quickly discover novel materials with minimal human input. Over 17 days of independent operation, A-Lab successfully produced 41 new compounds out of an attempted 58 — a rate of more than two new materials per day. For comparison, it can take a human researcher months of guesswork and experimentation to create one new material, if they ever reach the desired material at all. To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.
“We had this staggering 71% success rate, and we already have a few ways to improve it,” said Gerd Ceder, the principal investigator for A-Lab and a scientist at Berkeley Lab and UC Berkeley. “We’ve shown that combining the theory and data side with automation has incredible results. We can make and test materials faster than ever before, and adding more data points to the Materials Project means we can make even smarter choices.”
The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC). More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.
“We hope that the GNoME project will drive forward research into inorganic crystals,” said Ekin Dogus Cubuk, lead of Google DeepMind’s Materials Discovery team. “External researchers have already verified more than 736 of GNoME’s new materials through concurrent, independent physical experiments, demonstrating that our model’s discoveries can be realized in laboratories.”
The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.
“I’m really excited that people are using the work we’ve done to produce an unprecedented amount of materials information,” said Persson, who is also the director of Berkeley Lab’s Molecular Foundry. “This is what I set out to do with the Materials Project: To not only make the data that I produced free and available to accelerate materials design for the world, but also to teach the world what computations can do for you. They can scan large spaces for new compounds and properties more efficiently and rapidly than experiments alone can.”
By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas. Some show potential for use:
- in carbon capture (pulling carbon dioxide from the atmosphere)
- as photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen)
- as thermoelectrics (materials that could help harness waste heat and turn it into electrical power)
- as transparent conductors (which might be useful in solar cells, touch screens, or LEDs)
Of course, finding these prospective materials is only one of many steps to solving some of humanity’s big technology challenges.
“Making a material is not for the faint of heart,” Persson said. “It takes a long time to take a material from computation to commercialization. It has to have the right properties, work within devices, be able to scale, and have the right cost efficiency and performance. The goal with the Materials Project and facilities like A-Lab is to harness data, enable data-driven exploration, and ultimately give companies more viable shots on goal.”
Subscribe to Paradigm!
Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.
Main sources
Research articles