RT/ ‘Butterfly bot’ is fastest swimming soft robot yet
Robotics biweekly vol.63, 15th November — 29th November
- Inspired by the biomechanics of the manta ray, researchers have developed an energy-efficient soft robot that can swim more than four times faster than previous swimming soft robots. The robots are called ‘butterfly bots,’ because their swimming motion resembles the way a person’s arms move when they are swimming the butterfly stroke.
- Researchers have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.
- A multi-institution research team has developed an optical chip that can train machine learning hardware.
- Researchers used electrical pulses to watch nickel oxide undergo two responses, habituation and sensitization, bolstering the case for brain-inspired computing.
- Researchers have developed an alternative positioning system that is more robust and accurate than GPS, especially in urban settings. The working prototype that demonstrated this new mobile network infrastructure achieved an accuracy of 10 centimeters. This new technology is important for the implementation of a range of location-based applications, including automated vehicles, quantum communication and next-generation mobile communication systems.
- Sharing medical data between laboratories and medical experts is important for medical research. However, data sharing is often highly complex and sometimes even impossible due to strict data protection legislation in Europe. Researchers addressed the problem and developed an artificial neural network that creates synthetic x-ray images that can fool even medical experts.
- Researchers have created a ring-shaped soft robot capable of crawling across surfaces when exposed to elevated temperatures or infrared light. The researchers have demonstrated that these ‘ringbots’ are capable of pulling a small payload across the surface — in ambient air or under water, as well as passing through a gap that is narrower than their ring size.
- Bioengineers have developed a mechanically active adhesive named MAGENTA, which functions as a soft robotic device able to extend and contract muscles from the outside. In an animal model, MAGENTA successfully prevented muscle atrophy and supported recovery from it.
- Researchers have developed a new model to describe how biological or technical systems form complex structures without external guidance.
- Researchers have built a robot that can use a club to putt a ball into a hole on a putting green on most attempts.
- Robotics upcoming events. And more!
The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars):
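As a back-of-envelope check on that projection, the standard compound-growth formula is future = present × (1 + rate)^years. The base-year figure below is an illustrative assumption chosen to be consistent with the stated CAGR and 2025 target, not a number taken from the market report:

```python
def project(present, rate, years):
    """Project a market size forward at a constant annual growth rate (CAGR)."""
    return present * (1 + rate) ** years

# Illustrative: a ~$41.6B market in 2018 compounding at 26% per year
# for 7 years lands just under the $210B figure cited for 2025.
market_2025 = project(41.6, 0.26, 2025 - 2018)
print(round(market_2025, 1))  # 209.7
```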
Latest News & Research
by Yinding Chi, Yaoye Hong, Yao Zhao, Yanbin Li, Jie Yin in Science Advances
Inspired by the biomechanics of the manta ray, researchers at North Carolina State University have developed an energy-efficient soft robot that can swim more than four times faster than previous swimming soft robots. The robots are called “butterfly bots,” because their swimming motion resembles the way a person’s arms move when they are swimming the butterfly stroke.
“To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals — such as manta rays — are able to swim much faster, and much more efficiently,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots. The prototypes we’ve developed work exceptionally well.”
The researchers developed two types of butterfly bots. One was built specifically for speed, and was able to reach average speeds of 3.74 body lengths per second. A second was designed to be highly maneuverable, capable of making sharp turns to the right or left. This maneuverable prototype was able to reach speeds of 1.7 body lengths per second.
“Researchers who study aerodynamics and biomechanics use something called a Strouhal number to assess the energy efficiency of flying and swimming animals,” says Yinding Chi, first author of the paper and a recent Ph.D. graduate of NC State. “Peak propulsive efficiency occurs when an animal swims or flies with a Strouhal number of between 0.2 and 0.4. Both of our butterfly bots had Strouhal numbers in this range.”
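The Strouhal number mentioned above is a simple dimensionless ratio, St = f·A/U: flapping frequency times stroke amplitude, divided by forward speed. A quick sketch with illustrative numbers (not measurements from the paper):

```python
def strouhal(freq_hz, amplitude_m, speed_m_s):
    """Strouhal number St = f * A / U: flapping frequency times
    peak-to-peak stroke amplitude, divided by forward speed."""
    return freq_hz * amplitude_m / speed_m_s

# Illustrative values: a 1.5 Hz flap with a 4 cm stroke
# at 18.7 cm/s forward speed.
st = strouhal(1.5, 0.04, 0.187)
print(round(st, 2))  # 0.32 — inside the 0.2-0.4 peak-efficiency band
```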
The butterfly bots derive their swimming power from their wings, which are “bistable,” meaning the wings have two stable states. The wing is similar to a snap hair clip. A hair clip is stable until you apply a certain amount of energy (by bending it). When the energy reaches a critical point, the hair clip snaps into a different shape — which is also stable.
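Bistability of this kind can be pictured as a double-well energy landscape: two local minima (the two stable shapes) separated by an energy barrier that the snap must overcome. A toy potential, not the elastomer's actual mechanics:

```python
def energy(x):
    """Toy double-well potential V(x) = (x^2 - 1)^2: two stable
    states at x = -1 and x = +1, separated by a barrier at x = 0."""
    return (x ** 2 - 1) ** 2

# Both wells sit at zero energy; the barrier between them represents
# the critical amount of energy needed to snap from one state to the other.
print(energy(-1.0), energy(1.0))  # 0.0 0.0
print(energy(0.0))                # 1.0 (barrier height, arbitrary units)
```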
In the butterfly bots, the hair clip-inspired bistable wings are attached to a soft, silicone body. Users control the switch between the two stable states in the wings by pumping air into chambers inside the soft body. As those chambers inflate and deflate, the body bends up and down — forcing the wings to snap back and forth with it.
“Most previous attempts to develop flapping robots have focused on using motors to provide power directly to the wings,” Yin says. “Our approach uses bistable wings that are passively driven by moving the central body. This is an important distinction, because it allows for a simplified design, which lowers the weight.”
The faster butterfly bot has only one “drive unit” — the soft body — which controls both of its wings. This makes it very fast, but difficult to turn left or right. The maneuverable butterfly bot essentially has two drive units, which are connected side by side. This design allows users to manipulate the wings on both sides, or to “flap” only one wing, which is what enables it to make sharp turns.
“This work is an exciting proof of concept, but it has limitations,” Yin says. “Most obviously, the current prototypes are tethered by slender tubing, which is what we use to pump air into the central bodies. We’re currently working to develop an untethered, autonomous version.”
by Alexander Ziepke, Ivan Maryshev, Igor S. Aranson, Erwin Frey in Nature Communications
LMU researchers have developed a new model to describe how biological or technical systems form complex structures without external guidance.
Amoebae are single-cell organisms. By means of self-organization, they can form complex structures — and do this purely through local interactions: If they have a lot of food, they disperse evenly through a culture medium. But if food becomes scarce, they emit a messenger molecule known as cyclic adenosine monophosphate (cAMP). This chemical signal induces the amoebae to gather in one place and form a multicellular aggregation. The result is a fruiting body.
“The phenomenon is well known,” says Prof. Erwin Frey from LMU’s Faculty of Physics. “Before now, however, no research group has investigated how information processing, at a general level, affects the aggregation of systems of agents when individual agents — in our case, amoebae — are self-propelled.” More knowledge about these mechanisms would also be interesting, adds Frey, as regards translating them to artificial technical systems.
Together with other researchers, Frey describes how active systems that process information in their environment can be harnessed for technological or biological applications. The aim is not to understand every detail of the communication between individual agents, but to characterize the specific structures formed through self-organization. This applies to amoebae — and also to certain kinds of robots. The research was undertaken in collaboration with Prof. Igor Aronson during his stay at LMU as a Humboldt Research Award winner.
The term “active matter” refers to biological or technical systems from which larger structures are formed by means of self-organization. Such processes are based upon exclusively local interactions between identical, self-propelled units, such as amoebae or indeed robots.
Inspired by biological systems, Frey and his co-authors propose a new model in which self-propelled agents communicate with each other. These agents recognize chemical, biological, or physical signals at a local level and make individual decisions using their internal machinery that result in collective self-organization. This orientation gives rise to larger structures, which can span multiple length scales.
The new paradigm of communicating active matter forms the basis of the study. Local decisions in response to a signal and the transmission of information lead to collectively controlled self-organization.
Frey sees a possible application of the new model in soft robots — which is to say, robots that are made of soft materials. Such robots are suitable, for example, for performing tasks in human bodies. They can communicate with other soft robots via electromagnetic waves for purposes such as administering drugs at specific sites in the body. The new model can help nanotechnologists design such robot systems by describing the collective properties of robot swarms.
“It’s sufficient to roughly understand how individual agents communicate with each other; self-organization takes care of the rest,” says Frey. “This is a paradigm shift specifically in robotics, where researchers are attempting to do precisely the opposite — they want to obtain extremely high levels of control.” But that does not always succeed. “Our proposal, by contrast, is to exploit the capacity for self-organization.”
by Matthew J. Filipovich, Zhimu Guo, Mohammed Al-Qadasi, Bicky A. Marquez, Hugh D. Morison, Volker J. Sorger, Paul R. Prucnal, Sudip Shekhar, Bhavin J. Shastri in Optica
A multi-institution research team has developed an optical chip that can train machine learning hardware.
Machine learning applications have skyrocketed to $165 billion annually, according to a recent report from McKinsey. But before a machine can perform intelligent tasks, such as recognizing the details of an image, it must be trained. Training modern-day artificial intelligence (AI) systems like Tesla’s autopilot costs several million dollars in electric power consumption and requires supercomputer-like infrastructure. This surging AI “appetite” leaves an ever-widening gap between computer hardware and demand for AI. Photonic integrated circuits, or simply optical chips, have emerged as a possible solution to deliver higher computing performance, as measured by the number of operations performed per second per watt used, or TOPS/W. However, though they’ve demonstrated improved core operations in machine intelligence used for data classification, photonic chips have yet to improve the actual front-end learning and machine training process.
Machine learning is a two-step procedure. First, data is used to train the system, and then other data is used to test its performance. In a new paper, a team of researchers from the George Washington University, Queen’s University, the University of British Columbia and Princeton University set out to change that. After one training cycle, the team observed an error and reconfigured the hardware for a second training cycle, followed by additional cycles until sufficient AI performance was reached (e.g., the system is able to correctly label objects appearing in a movie). Thus far, photonic chips had only demonstrated an ability to classify and infer information from data. Now, the researchers have made it possible to speed up the training step itself.
This added AI capability is part of a larger effort around photonic tensor cores and other electronic-photonic application-specific integrated circuits (ASIC) that leverage photonic chip manufacturing for machine learning and AI applications.
“This novel hardware will speed up the training of machine learning systems and harness the best of what both photonics and electronic chips have to offer. It is a major leap forward for AI hardware acceleration. These are the kinds of advancements we need in the semiconductor industry as underscored by the recently passed CHIPS Act.” — Volker Sorger, Professor of Electrical and Computer Engineering at the George Washington University and founder of the start-up company Optelligence.
“The training of AI systems costs a significant amount of energy and carries a large carbon footprint. For example, training a single AI transformer model emits about five times as much CO2 as a gasoline car does over its lifetime. Our training on photonic chips will help to reduce this overhead.”
by Sandip Mondal, Zhen Zhang, A. N. M. Nafiul Islam, Robert Andrawis, et al in Advanced Intelligent Systems
Scientists used the Advanced Photon Source to watch a nonliving material mimic behavior associated with learning, paving the way for better artificial intelligence.
Scientists looking to create a new generation of supercomputers are looking for inspiration from the most complex and energy-efficient computer ever built: the human brain. In some of their initial forays into making brain-inspired computers, researchers are looking at different nonbiological materials whose properties could be tailored to show evidence of learning-like behaviors. These materials could form the basis for hardware that could be paired with new software algorithms to enable more potent, useful and energy-efficient artificial intelligence (AI).
In a new study led by scientists from Purdue University, researchers have exposed oxygen deficient nickel oxide to brief electrical pulses and elicited two different electrical responses that are similar to learning. The result is an all-electrically-driven system that shows these learning behaviors, said Rutgers University professor Shriram Ramanathan. (Ramanathan was a professor at Purdue University at the time of this work.) The research team used the resources of the Advanced Photon Source (APS), a U.S. Department of Energy (DOE) Office of Science user facility at DOE’s Argonne National Laboratory.
The first response, habituation, occurs when the material “gets used to” being slightly zapped. The scientists noticed that although the material’s resistance increases after an initial jolt, it soon becomes accustomed to the electric stimulus.
“Habituation is like what happens when you live near an airport,” said Fanny Rodolakis, a physicist and beamline scientist at the APS. “The day you move in, you think ‘what a racket,’ but eventually you hardly notice anymore.”
The other response shown by the material, sensitization, occurs when a larger dose of electricity is administered.
“With a larger stimulus, the material’s response grows instead of diminishing over time,” Rodolakis said. “It’s akin to watching a scary movie, and then having someone say ‘boo!’ from behind a corner — you see it really jump.”
“Pretty much all living organisms demonstrate these two characteristics,” Ramanathan said. “They really are a foundational aspect of intelligence.”
These two behaviors are controlled by quantum interactions between electrons that can’t be described by classical physics, and that help to form the basis for a phase transition in the material.
“An example of a phase transition is a liquid becoming a solid,” Rodolakis said. “The material we’re looking at is right on the border, and the competing interactions that are going on at the electronic level can easily be tipped one way or another by small stimuli.”
Having a system that can be completely controlled by electrical signals is essential for brain-inspired computing applications, Ramanathan said. “Being able to manipulate materials in this fashion will allow hardware to take on some of the responsibility for intelligence,” he explained. “Using quantum properties to get intelligence into hardware represents a key step towards energy-efficient computing.”
The difference between habituation and sensitization can help scientists overcome a challenge in the development of AI called the stability-plasticity dilemma. On the one hand, artificial intelligence algorithms can be too reluctant to adapt to new information; on the other, when they do adapt, they can forget some of what they have already learned. By creating a material that can habituate, scientists can teach it to ignore or forget unneeded information and thus achieve additional stability, while sensitization could train it to remember and incorporate new information, enabling plasticity.
“AI often has a hard time learning and storing new information without overwriting information that has already been stored,” Rodolakis said. “Too much stability prevents AI from learning, but too much plasticity can lead to catastrophic forgetting.”
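The two responses can be caricatured with a toy update rule (a cartoon, not the nickel-oxide physics): repeated weak pulses shrink the response toward baseline, while repeated strong pulses amplify it.

```python
def respond(stimuli, threshold=1.0, decay=0.8, gain=1.2):
    """Toy habituation/sensitization model: a weak pulse (below
    threshold) shrinks the response by `decay`; a strong pulse
    (above threshold) grows it by `gain`. Returns the response trace."""
    response, trace = 1.0, []
    for s in stimuli:
        response *= decay if s < threshold else gain
        trace.append(response)
    return trace

weak = respond([0.5] * 5)    # habituation: response fades with repetition
strong = respond([2.0] * 5)  # sensitization: response grows with repetition
print(weak[-1] < weak[0] < strong[0] < strong[-1])  # True
```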
One major advantage of the new study involved the small size of the nickel oxide device. “This type of learning had previously not been done in the current generation of electronics without a large number of transistors,” Rodolakis said. “This single junction system is the smallest system to date to show these properties, which has big implications for the possible development of neuromorphic circuitry.”
To detect the atomic-scale dynamics responsible for the habituation and sensitization behaviors, Rodolakis and Argonne’s Hua Zhou used X-ray absorption spectroscopy at beamlines 29-ID-D and 33-ID-D of the APS.
DeepFake knee osteoarthritis X-rays from generative adversarial neural networks deceive medical experts and offer augmentation potential to automatic classification
by Fabi Prezja, Juha Paloneva, Ilkka Pölönen, Esko Niinimäki, Sami Äyrämö in Scientific Reports
Sharing medical data between laboratories and medical experts is important for medical research. However, data sharing is often highly complex and sometimes even impossible due to strict data protection legislation in Europe. Researchers at the University of Jyväskylä Digital Health Intelligence Laboratory addressed the problem and developed an artificial neural network that creates synthetic x-ray images that can fool even medical experts.
A group of researchers from University of Jyväskylä’s AI Hub Central Finland project developed an AI based method to create synthetic knee x-ray images to replace or complement real x-ray images in knee osteoarthritis classification. Researchers used synthetically generated X-ray images to complement a data set of real X-ray images from the osteoarthritis study. The authenticity of the images was then assessed together with specialists from the central Finland healthcare district.
Medical experts were asked to rate osteoarthritis severity without knowing that the data set included synthetic images. In the second phase, experts tried to identify authentic and synthetic images. The results showed that, on average, even medical experts could not reliably distinguish between real and synthetic x-ray images.
“The use of synthetic data is not subject to the same data protection regulations as real data. Using synthetic data can facilitate collaboration between, for example, research groups, companies and educational institutions,” says Sami Äyrämö, Head of Digital Health Intelligence Laboratory at the University of Jyväskylä.
According to Äyrämö, the use of synthetic data also speeds up authorisation processes and thus, among other things, testing of new ideas.
Data-driven AI methods can be used to support doctors in making diagnoses. Even though the technical potential of AI is huge, the amount of medical data is often insufficient. This is a key challenge in developing medically effective methods.
“By mixing real and synthetic x-ray images, we improved AI-based osteoarthritis classification systems,” says Fabi Prezja, the Doctoral Researcher responsible for developing the artificial neural network.
In the future, synthetic data can lead to better results in developing medical methods and patient care, especially for medical conditions where real patient data is limited.
“In addition, the neural network is capable of modifying synthetic x-ray images to expert specifications. This capability is very powerful and opens up potential future uses in medical education and in stress testing other AI systems,” Prezja adds.
The research was carried out in collaboration with the central Finland healthcare district, whose director, professor of surgery Juha Paloneva, sees AI-based diagnostic methods as a valuable way of transferring the know-how of an experienced doctor to support the work of a younger doctor. The adjacent images show screenshots of an animation of how a synthetic x-ray can be modified to expert specifications.
“AI can be used to reveal, for example, hard-to-spot signs of early osteoarthritis. However, AI methods for osteoarthritis are still improving, so the work continues,” Paloneva says.
by Ananye Agarwal, Ashish Kumar, Jitendra Malik, Deepak Pathak in arXiv.org
This little robot can go almost anywhere. Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.
“Empowering small robots to climb stairs and handle a variety of environments is crucial to developing robots that will be useful in people’s homes as well as search-and-rescue operations,” said Deepak Pathak, an assistant professor in the Robotics Institute. “This system creates a robust and adaptable robot that could perform many everyday tasks.”
The team put the robot through its paces, testing it on uneven stairs and hillsides at public parks, challenging it to walk across stepping stones and over slippery surfaces, and asking it to climb stairs that, relative to its height, would be akin to a human leaping over a hurdle.
The researchers trained the robot with 4,000 clones of it in a simulator, where they practiced walking and climbing on challenging terrain. The simulator’s speed allowed the robot to gain six years of experience in a single day. The simulator also stored the motor skills it learned during training in a neural network that the researchers copied to the real robot. This approach did not require any hand-engineering of the robot’s movements — a departure from traditional methods.
Most robotic systems use cameras to create a map of the surrounding environment and use that map to plan movements before executing them. The process is slow and can often falter due to inherent fuzziness, inaccuracies, or misperceptions in the mapping stage that affect the subsequent planning and movements. Mapping and planning are useful in systems focused on high-level control but are not always suited for the dynamic requirements of low-level skills like walking or running over challenging terrains.
The new system bypasses the mapping and planning phases and directly routes the vision inputs to the control of the robot. What the robot sees determines how it moves. Not even the researchers specify how the legs should move. This technique allows the robot to react to oncoming terrain quickly and move through it effectively. Because there is no mapping or planning involved and movements are trained using machine learning, the robot itself can be low-cost. The robot the team used was at least 25 times cheaper than available alternatives. The team’s algorithm has the potential to make low-cost robots much more widely available.
“This system uses vision and feedback from the body directly as input to output commands to the robot’s motors,” said Ananye Agarwal, an SCS Ph.D. student in machine learning. “This technique allows the system to be very robust in the real world. If it slips on stairs, it can recover. It can go into unknown environments and adapt.”
This direct vision-to-control aspect is biologically inspired. Humans and animals use vision to move. Try running or balancing with your eyes closed. Previous research from the team had shown that blind robots — robots without cameras — can conquer challenging terrain, but adding vision and relying on that vision greatly improves the system. The team looked to nature for other elements of the system, as well. For a small robot — less than a foot tall, in this case — to scale stairs or obstacles nearly its height, it learned to adopt the movement that humans use to step over high obstacles. When a human has to lift a leg up high to scale a ledge or hurdle, they use their hips to move the leg out to the side, a motion called abduction and adduction, giving them more clearance. The robot system Pathak’s team designed does the same, using hip abduction to tackle obstacles that trip up some of the most advanced legged robotic systems on the market.
The movement of hind legs by four-legged animals also inspired the team. When a cat moves through obstacles, its hind legs avoid the same items as its front legs without the benefit of a nearby set of eyes. “Four-legged animals have a memory that enables their hind legs to track the front legs. Our system works in a similar fashion,” Pathak said. The system’s onboard memory enables the rear legs to remember what the camera at the front saw and maneuver to avoid obstacles.
by Jeroen C. J. Koelemeij, Han Dun, Cherif E. V. Diouf, Erik F. Dierikx, Gerard J. M. Janssen, Christian C. J. M. Tiberius in Nature
Researchers of Delft University of Technology, Vrije Universiteit Amsterdam and VSL have developed an alternative positioning system that is more robust and accurate than GPS, especially in urban settings. The working prototype that demonstrated this new mobile network infrastructure achieved an accuracy of 10 centimeters. This new technology is important for the implementation of a range of location-based applications, including automated vehicles, quantum communication and next-generation mobile communication systems.
A lot of our vital infrastructure relies on global navigation satellite systems such as the US GPS and EU Galileo. Yet these systems that rely on satellites have their limitations and vulnerabilities. Their radio signals are weak when received on Earth, and accurate positioning is no longer possible if the radio signals are reflected or blocked by buildings.
“This can make GPS unreliable in urban settings, for instance,” says Christiaan Tiberius of Delft University of Technology and coordinator of the project, “which is a problem if we ever want to use automated vehicles. Also, citizens and our authorities actually depend on GPS for many location-based applications and navigation devices. Furthermore, so far we had no back-up system.”
The aim of the project entitled SuperGPS was to develop an alternative positioning system that makes use of the mobile telecommunication network instead of satellites and that could be more robust and accurate than GPS.
“We realized that with a few cutting-edge innovations, the telecommunication network could be transformed into a very accurate alternative positioning system that is independent of GPS,” says Jeroen Koelemeij of Vrije Universiteit Amsterdam. “We have succeeded and have developed a system that can provide connectivity just like existing mobile and Wi-Fi networks do, as well as accurate positioning and time distribution like GPS.”
One of these innovations is to connect the mobile network to a very accurate atomic clock, so that it can broadcast perfectly timed messages for positioning, just like GPS satellites do with the help of the atomic clocks they carry on board. These connections are made through the existing fiber-optic network.
“We had already been investigating techniques to distribute the national time produced by our atomic clocks to users elsewhere through the telecommunication network,” says Erik Dierikx of VSL. “With these techniques we can turn the network into a nationwide distributed atomic clock — with many new applications such as very accurate positioning through mobile networks. With the hybrid optical-wireless system that we have demonstrated now, in principle anyone can have wireless access to the national time produced at VSL. It basically forms an extremely accurate radio clock that is good to one billionth of a second.”
Furthermore, the system employs radio signals with a bandwidth much larger than commonly used. “Buildings reflect radio signals, which can confuse navigation devices. The large bandwidth of our system helps sorting out these confusing signal reflections, and enables higher positioning accuracy,” Gerard Janssen of Delft University of Technology explains. “At the same time, bandwidth within the radio spectrum is scarce and therefore expensive. We circumvent this by using a number of related small bandwidth radio signals spread over a large virtual bandwidth. This has the advantage that only a small fraction of the virtual bandwidth is actually used and the signals can be very similar to those of mobile phones.”
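The numbers in this section follow from time-of-flight arithmetic. A back-of-envelope sketch (a rule-of-thumb illustration, not the SuperGPS signal design): radio waves cover 10 cm in roughly a third of a nanosecond, and delay resolution scales as roughly 1/bandwidth, so distance resolution scales as c/B.

```python
C = 299_792_458  # speed of light in m/s

# Timing precision needed for 10 cm positioning accuracy:
dt = 0.10 / C
print(f"{dt * 1e9:.2f} ns")  # 0.33 ns — consistent with a clock good
                             # to about one billionth of a second

# Rule of thumb: distance resolution ~ c / bandwidth, so resolving
# 10 cm multipath implies a (virtual) bandwidth on the order of:
B = C / 0.10
print(f"{B / 1e9:.0f} GHz")  # 3 GHz — far wider than one cellular channel,
                             # hence the spread of narrow-band signals
```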
by Yao Zhao, Yaoye Hong, Fangjie Qi, Yinding Chi, Hao Su, Jie Yin in Advanced Materials
Researchers at North Carolina State University have created a ring-shaped soft robot capable of crawling across surfaces when exposed to elevated temperatures or infrared light. The researchers have demonstrated that these “ringbots” are capable of pulling a small payload across the surface — in ambient air or under water, as well as passing through a gap that is narrower than their ring size.
The ringbots are made of liquid crystal elastomers in the shape of a looped ribbon, resembling a bracelet. When you place the ringbot on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion in the ribbon.
Similarly, when researchers shine infrared light on the ringbot, the portion of the ribbon exposed to the light contracts, while the portion shielded from the light does not. This also induces a rolling motion in the ribbon. In practical terms, this means that the crawling ringbot moves from the bottom up when placed on a hot surface. But when exposed to infrared light, the movement begins from the top down.
One of the things that drives this continuous motion is the fact that the ringbots are bistable, meaning they have two stable shapes at rest. If the ribbon begins to twist, it will either snap back to its original shape, or snap forward into the other bistable state. Picture a rubber bracelet shaped like a ribbon. If you fold two ends of the bracelet forward a little bit, then let go, it will snap back to its original shape. But if you fold the ends over far enough, it will snap over — essentially folding the bracelet inside out.
In the case of the ringbots, the “folding” is done by applying constant heat or infrared light, causing the elastomer to contract and rotate. If the ring robot is symmetrical, this will essentially make it dance in place.
“But by engineering the shape of the loop, so that one side of the loop is permanently twisted, the structure is asymmetrical,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “This means that the loop is exposed to the heat or infrared light unevenly, which causes the soft robot to move laterally across the surface.”
When placed on a hot surface, the end result is that the crawling ringbot pulls itself forward. But when exposed to infrared light, the crawling ringbot pushes itself forward. Think of it as front-wheel drive versus rear-wheel drive.
In demonstrations, the ringbots were capable of pulling a small payload, and worked both in ambient air and underwater. The researchers also demonstrated that a ringbot could adapt its body shape to squeeze through a confined space that is more than 30% narrower than the ringbot’s diameter. And when the gap is too narrow for the soft robot to pass through, it redirects itself to move away from the gap.
“This is a fundamental advance, not something designed with a specific application in mind,” says Yao Zhao, a postdoctoral researcher in Yin’s lab. “We are demonstrating what can be accomplished when ‘physical intelligence’ is engineered into the material and the design of the structure itself, allowing it to move and navigate space without computational input.”
by Sungmin Nam, Bo Ri Seo, Alexander J. Najibi, Stephanie L. McNamara, David J. Mooney in Nature Materials
Muscles waste as a result of not being exercised enough, as happens quickly with a broken limb that has been immobilized in a cast, and more slowly in people reaching an advanced age. Muscle atrophy, as clinicians refer to the phenomenon, is also a debilitating symptom in patients suffering from neurological disorders, such as amyotrophic lateral sclerosis (ALS) and multiple sclerosis (MS), and can be a systemic response to various other diseases, including cancer and diabetes.
Mechanotherapy, a form of therapy given by manual or mechanical means, is thought to have broad potential for tissue repair. The best-known example is massage, which applies compressive stimulation to muscles for their relaxation. However, it has been much less clear whether stretching and contracting muscles by external means can also serve as a treatment. So far, two major challenges have prevented such studies: the lack of mechanical systems capable of evenly generating stretching and contraction forces along the length of muscles, and the inefficient delivery of these mechanical stimuli to the surface and deeper layers of muscle tissue.
Now, bioengineers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a mechanically active adhesive named MAGENTA, which functions as a soft robotic device and solves this two-fold problem. In an animal model, MAGENTA successfully prevented muscle atrophy and supported recovery from it.
“With MAGENTA, we developed a new integrated multi-component system for the mechanostimulation of muscle that can be directly placed on muscle tissue to trigger key molecular pathways for growth,” said senior author and Wyss Founding Core Faculty member David Mooney, Ph.D. “While the study provides first proof-of-concept that externally provided stretching and contraction movements can prevent atrophy in an animal model, we think that the device’s core design can be broadly adapted to various disease settings where atrophy is a major issue.” Mooney leads the Wyss Institute’s Immuno-Materials Platform, and is also the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
One of MAGENTA’s major components is an engineered spring made from nitinol, a type of metal known as “shape memory alloy” (SMA) that enables MAGENTA’s rapid actuation when heated to a certain temperature. The researchers actuated the spring by electrically wiring it to a microprocessor unit that allows the frequency and duration of the stretching and contraction cycles to be programmed. The other components of MAGENTA are an elastomer matrix that forms the body of the device and insulates the heated SMA, and a “tough adhesive” that enables the device to be firmly adhered to muscle tissue. In this way, the device is aligned with the natural axis of muscle movement, transmitting the mechanical force generated by SMA deep into the muscle. Mooney’s group is advancing MAGENTA, which stands for “mechanically active gel-elastomer-nitinol tissue adhesive,” as one of several Tough Gel Adhesives with functionalities tailored to various regenerative applications across multiple tissues.
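The article describes programming the frequency and duration of the stretch-and-contraction cycles on a microprocessor. A hypothetical sketch of such a cycle scheduler, where all names and parameter values are illustrative rather than taken from the study:

```python
from dataclasses import dataclass

@dataclass
class StimulationProgram:
    """Illustrative parameters for cyclic SMA actuation (hypothetical values)."""
    frequency_hz: float  # stretch-contraction cycles per second
    duty_cycle: float    # fraction of each cycle the SMA heater is powered
    duration_s: float    # total stimulation time

    def heater_schedule(self):
        """Return (heater_on, heater_off) durations in seconds for each cycle."""
        period = 1.0 / self.frequency_hz
        on_time = period * self.duty_cycle
        n_cycles = int(self.duration_s * self.frequency_hz)
        return [(on_time, period - on_time)] * n_cycles

prog = StimulationProgram(frequency_hz=0.5, duty_cycle=0.4, duration_s=10)
sched = prog.heater_schedule()
print(len(sched), sched[0])  # 5 cycles, each 0.8 s heating and 1.2 s cooling
```

In a real device the off-time would also have to respect the SMA spring’s cooling rate, which limits how fast the actuator can cycle.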
After designing and assembling the MAGENTA device, the team tested its muscle-deforming potential, first in isolated muscles ex vivo and then by implanting it on one of the major calf muscles of mice. The device did not induce any serious signs of tissue inflammation and damage, and exhibited a mechanical strain of about 15% on muscles, which matches their natural deformation during exercise. Next, to evaluate its therapeutic efficacy, the researchers used an in vivo model of muscle atrophy by immobilizing a mouse’s hind limb in a tiny cast-like enclosure for up to two weeks after implanting the MAGENTA device on the calf muscle.
“While untreated muscles and muscles treated with the device but not stimulated significantly wasted away during this period, the actively stimulated muscles showed reduced muscle wasting,” said first author and Wyss Technology Development Fellow Sungmin Nam, Ph.D. “Our approach could also promote the recovery of muscle mass that already had been lost over a three-week period of immobilization, and induce the activation of the major biochemical mechanotransduction pathways known to elicit protein synthesis and muscle growth.”
In a previous study, Mooney’s group, in collaboration with Wyss Associate Faculty member Conor Walsh’s group, found that regulated cyclical compression (as opposed to stretching and contraction) of acutely injured muscles, applied with a different soft robotic device, reduced inflammation and enabled the repair of muscle fibers. In their new study, Mooney’s team asked whether those compressive forces could also protect against muscle atrophy. However, when they directly compared muscle compression via the previous device to muscle stretching and contraction via the MAGENTA device, only the latter had clear therapeutic effects in the mouse atrophy model.
“There is a good chance that distinct soft robotic approaches with their unique effects on muscle tissue could open up disease or injury-specific mechano-therapeutic avenues,” said Mooney.
To further expand the possibilities of MAGENTA, the team explored whether the SMA spring could also be actuated by laser light, which had not been shown before and would make the approach essentially wireless, broadening its therapeutic usefulness. Indeed, they demonstrated that an implanted MAGENTA device without any electric wires could function as a light-responsive actuator and deform muscle tissue when irradiated with laser light through the overlying skin layer. While laser actuation did not achieve the same frequencies as electrical actuation, and fat tissue in particular seemed to absorb some of the laser light, the researchers think that the demonstrated light sensitivity and performance of the device could be further improved.
“The general capabilities of MAGENTA and the fact that its assembly can be easily scaled from millimeters to several centimeters could make it interesting as a central piece of future mechanotherapy not only to treat atrophy, but perhaps also to accelerate regeneration in the skin, heart, and other places that might benefit from this form of mechanotransduction,” said Nam.
by Annika Junker et al in arXiv
Researchers at Paderborn University in Germany have built a robot that can knock a ball into a hole using a club on a putting green on most attempts. Annika Junker, Niklas Fittkau, Julia Timmermann and Ansgar Trächtler have published a paper describing their work.
Golf is a notoriously difficult sport — professionals and amateurs alike spend countless hours attempting to improve their game. One of the most difficult parts of the game is putting the ball into the hole. Much of the difficulty lies in the combination of factors at play — the height of the grass and its roughness, the amount of wind and degree of humidity, and worst of all, the terrain. Golf greens are not flat like practice mats on office floors; they have small hills and valleys that play havoc with the speed of the ball. In this new effort, the researchers built a robot to tackle the problem of putting on a lab-based putting green.
Most AI systems learn by studying the work of others, typically humans, looking for patterns in thousands of data samples that lead to desired outcomes. With golf, this approach is impractical because the results would only apply to one shot on one green. For a robot to golf on a variety of greens, it must be able to learn on the fly.
To build such a system, the researchers used a physics-based model that accepted factors such as ball speed and weight, and the ball’s response to changes in terrain, such as hills. The model also factored in the impact of wind. The researchers placed a 3D camera above their lab-based green and took a snapshot of it, capturing all of its wavy nuances. That snapshot was then sent to the physics model, which ran thousands of virtual attempts to knock the virtual ball into the virtual hole using a virtual club — all based on the current spot of the ball.
Once satisfied that it had worked out the correct approach, the system sent instructions to the robot, telling it where to position itself and how much speed to use when striking the ball. Testing showed that the robot was able to sink the ball approximately 60 to 70% of the time under ideal conditions.
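The simulate-then-strike pipeline can be sketched with a far simpler stand-in for the authors’ physics model: a flat green with constant rolling deceleration, and a random search over stroke speed and aim. Everything here (hole position, friction value, parameter ranges) is illustrative, not from the paper:

```python
import math, random

HOLE = (3.0, 0.5)      # hole position on the green, metres (illustrative)
MU_G = 0.6             # constant rolling deceleration, m/s^2 (illustrative)
HOLE_RADIUS = 0.054    # regulation golf hole radius, metres

def stop_point(speed, angle):
    """Stopping point of a ball struck at `speed` (m/s) toward `angle` (rad),
    assuming constant deceleration on a flat green (v^2 = 2*a*d)."""
    dist = speed ** 2 / (2 * MU_G)
    return (dist * math.cos(angle), dist * math.sin(angle))

def plan_putt(trials=50000, seed=0):
    """Random search over stroke parameters: simulate many virtual putts and
    keep the stroke whose ball stops nearest the hole."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        speed = rng.uniform(0.5, 4.0)   # candidate club-head speed
        angle = rng.uniform(-0.5, 0.5)  # candidate aim angle
        err = math.dist(stop_point(speed, angle), HOLE)
        if best is None or err < best[0]:
            best = (err, speed, angle)
    return best

err, speed, angle = plan_putt()
print(f"best stroke: {speed:.2f} m/s at {angle:.3f} rad, "
      f"stops {err * 100:.1f} cm from the hole")
```

The real system replaces the flat-green model with one built from the 3D camera’s snapshot of the terrain, so the simulated ball responds to the green’s hills and valleys before any instructions are sent to the robot.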
- Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jump, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-sized jumping robots.
- The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the Moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the Moon’s shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resources Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800 m² ‘lunar’ terrain. The winning team will have the opportunity to have their technology implemented on the Moon.
- The robotics research group Brubotics and the polymer science and physical chemistry group FYSC of the University of Brussels have jointly developed self-healing materials that can be scratched, punctured or completely cut through and then heal themselves back together, either with applied heat or even at room temperature.
- Researchers at MIT’s Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.
CoRL 2022: 14–18 December 2022, Auckland, New Zealand
Subscribe to Paradigm!