RT/ Biodegradable artificial muscles: Going green in the field of soft robotics

Published in Paradigm · 30 min read · Apr 4, 2023

Robotics biweekly vol.71, 17th March — 4th April

TL;DR

  • Scientists have developed fully biodegradable, high-performance artificial muscles. Their research project marks another step towards green technology becoming a lasting trend in the field of soft robotics.
  • Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you’ll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study shows that artificial intelligence can respond to complex survey questions just like a real human.
  • Researchers found that preschoolers prefer learning from what they perceive as a competent robot over an incompetent human. This study is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
  • Researchers have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.
  • The push toward truly autonomous vehicles has been hindered by the cost and time associated with safety testing, but a new system shows that artificial intelligence can reduce the testing miles required by 99.99%.
  • Researchers have developed biosensor technology that will allow you to operate devices, such as robots and machines, solely through thought control.
  • Engineers are harnessing artificial intelligence and wireless technology to unobtrusively monitor elderly people in their living spaces and provide early detection of emerging health problems.
  • Synecoculture, a new farming method, involves growing mixed plant species together in high density. However, it requires complex operation since varying species with different growing seasons and growing speeds are planted on the same land. To address this need, researchers have developed a robot that can sow, prune, and harvest plants in dense vegetation growth. Its small, flexible body will help large-scale Synecoculture. This is an important step towards achieving sustainable farming and carbon neutrality.
  • Researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage.
  • When multiple drones are working together in the same airspace, perhaps spraying pesticide over a field of corn, there’s a risk they might crash into each other. To help avoid these costly crashes, MIT researchers presented a system called MADER in 2020. This multiagent trajectory-planner enables a group of drones to formulate optimal, collision-free trajectories.
  • Upcoming robotics events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
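
As a quick sanity check on those figures (assuming the roughly 26 percent CAGR is measured from a 2018 base, the first year the chart below covers), the compound-growth formula implies a 2018 market size of about 42 billion U.S. dollars:

$$V_{2025} = V_{2018}\,(1+r)^{7}\quad\Rightarrow\quad V_{2018}\approx \frac{210}{1.26^{7}}\approx \frac{210}{5.04}\approx 42$$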

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Biodegradable electrohydraulic actuators for sustainable soft robots

by Ellen H. Rumley, David Preninger, Alona Shagan Shomron, et al. in Science Advances

Artificial muscles are an advancing technology that could one day enable robots to function like living organisms. Such muscles open up new possibilities for how robots can shape the world around us: from assistive wearable devices that can redefine our physical abilities in old age, to rescue robots that can navigate rubble in search of the missing. But just because artificial muscles can have a strong societal impact during use doesn’t mean they have to leave a strong environmental impact after use.

The topic of sustainability in soft robotics has now been brought into focus by an international team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart, Germany, the Johannes Kepler University (JKU) in Linz, Austria, and the University of Colorado Boulder (CU Boulder) in the USA. The scientists collaborated to design a fully biodegradable, high-performance artificial muscle based on gelatin, oil, and bioplastics. They show the potential of this biodegradable technology by using it to animate a robotic gripper, which could be especially useful in single-use deployments such as waste collection. At the end of life, these artificial muscles can be disposed of in municipal compost bins; under monitored conditions, they fully biodegrade within six months.

Biodegradable materials for sustainable electrohydraulic soft actuators.

“We see an urgent need for sustainable materials in the accelerating field of soft robotics. Biodegradable parts could offer a sustainable solution especially for single-use applications, like for medical operations, search-and-rescue missions, and manipulation of hazardous substances. Instead of accumulating in landfills at the end of product life, the robots of the future could become compost for future plant growth,” says Ellen Rumley, a visiting scientist from CU Boulder working in the Robotic Materials Department at MPI-IS. Rumley is co-first author of the paper.

Specifically, the team of researchers built an electrically driven artificial muscle called HASEL. In essence, HASELs are oil-filled plastic pouches that are partially covered by a pair of electrical conductors called electrodes. Applying a high voltage across the electrode pair causes opposing charges to build on them, generating a force between them that pushes oil to an electrode-free region of the pouch. This oil migration causes the pouch to contract, much like a real muscle. The key requirement for HASELs to deform is that the materials making up the plastic pouch and oil are electrical insulators, which can sustain the high electrical stresses generated by the charged electrodes.
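
The force driving this zipping motion can be estimated with textbook electrostatics. Treating the electrode-covered region as a parallel-plate capacitor, the attractive (Maxwell) pressure that squeezes the oil out grows with the square of the electric field; the numbers below are illustrative, not taken from the paper:

$$p=\tfrac{1}{2}\,\varepsilon_{0}\varepsilon_{r}E^{2}\approx\tfrac{1}{2}\,(8.85\times10^{-12})(3)\,(5\times10^{7}\ \mathrm{V/m})^{2}\approx 33\ \mathrm{kPa}$$

Pressures of tens of kilopascals are in the range of natural muscle stress, which is why the critical material requirement is sustaining fields of tens of volts per micrometre without electrical breakdown.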

One of the challenges for this project was to develop a conductive, soft, and fully biodegradable electrode. Researchers at Johannes Kepler University created a recipe based on a mixture of biopolymer gelatin and salts that can be directly cast onto HASEL actuators.

“It was important for us to make electrodes suitable for these high-performance applications, but with readily available components and an accessible fabrication strategy. Since our presented formulation can be easily integrated in various types of electrically driven systems, it serves as a building block for future biodegradable applications,” states David Preninger, co-first author for this project and a scientist at the Soft Matter Physics Division at JKU.

Evaluation of dielectric strength of BOPLA and biopolyester films.

The next step was finding suitable biodegradable plastics. Engineers working on such materials are mainly concerned with properties like degradation rate and mechanical strength, not with electrical insulation, which is a requirement for HASELs that operate at a few thousand volts. Nonetheless, some bioplastics showed good material compatibility with gelatin electrodes and sufficient electrical insulation. HASELs made from one specific material combination even withstood 100,000 actuation cycles at several thousand volts without signs of electrical failure or loss in performance. These biodegradable artificial muscles are electromechanically competitive with their non-biodegradable counterparts, an exciting result for promoting sustainability in artificial muscle technology.

“By showing the outstanding performance of this new materials system, we are giving an incentive for the robotics community to consider biodegradable materials as a viable material option for building robots,” Ellen Rumley continues. “The fact that we achieved such great results with bio-plastics hopefully also motivates other material scientists to create new materials with optimized electrical performance in mind.”

With green technology becoming ever more present, the team’s research project is an important step towards a paradigm shift in soft robotics. Using biodegradable materials to build artificial muscles is just one step towards a sustainable future for robotic technology.

Out of One, Many: Using Language Models to Simulate Human Samples

by Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, David Wingate in Political Analysis

Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you’ll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU shows that artificial intelligence can respond to complex survey questions just like a real human.

To determine whether artificial intelligence can substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model conditioned to mimic the complicated relationship between human ideas, attitudes, and the sociocultural contexts of subpopulations.

In one experiment, the researchers created artificial personas by assigning the AI certain characteristics like race, age, ideology, and religiosity, and then tested whether the artificial personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and humans voted.
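
In pseudocode, this "silicon sampling" amounts to conditioning the model on a demographic backstory and reading off its predicted vote. The sketch below is purely illustrative: `query_llm` is a hypothetical stand-in for a GPT-3 completion call, and the persona fields only mirror ANES-style covariates, not the paper's actual prompts.

```python
# Illustrative sketch of persona-conditioned "silicon sampling".
# query_llm() is a hypothetical placeholder, not the authors' code.

def query_llm(prompt: str, max_tokens: int = 3, temperature: float = 0.0) -> str:
    """Stand-in for a GPT-3-style completion API call."""
    raise NotImplementedError("wire this to a real LLM completion endpoint")

def build_persona_prompt(persona: dict, year: int) -> str:
    # First-person backstory built from ANES-style covariates.
    backstory = (
        f"I am a {persona['age']}-year-old {persona['race']} {persona['gender']}. "
        f"Ideologically, I am {persona['ideology']}. "
        f"I attend religious services {persona['religiosity']}. "
    )
    # The model completes the sentence with a candidate's name.
    return backstory + f"In the {year} U.S. presidential election, I voted for"

def simulate_vote(persona: dict, year: int) -> str:
    return query_llm(build_persona_prompt(persona, year))

persona = {"age": 54, "race": "white", "gender": "man",
           "ideology": "conservative", "religiosity": "every week"}
# simulate_vote(persona, 2016)  # aggregate many such calls, compare to ANES
```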

“I was absolutely surprised to see how accurately it matched up,” said David Wingate, BYU computer science professor, and co-author on the study. “It’s especially interesting because the model wasn’t trained to do political science — it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”

The original Pigeonholing Partisans dataset and the corresponding GPT-3-generated words. Bubble size represents relative frequency of word occurrence; columns represent the ideology of list writers. GPT-3 uses a similar set of words to humans.

In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.

This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. It could also be used to test surveys, slogans, and taglines as a precursor to focus groups.

“We’re learning that AI can help us understand people better,” said BYU political science professor Ethan Busby. “It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”

And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions — how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?

While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.

“We’re going to see positive benefits because it’s going to unlock new capabilities,” said Wingate, noting that AI can help people in many different jobs be more efficient. “We’re also going to see negative things happen because sometimes computer models are inaccurate and sometimes they’re biased. It will continue to churn society.”

Busby says surveying artificial personas shouldn’t replace the need to survey real people and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.

People Do Not Always Know Best: Preschoolers’ Trust in Social Robots

by Anna-Elisabeth Baumann, Elizabeth J. Goldman, Alexandra Meltzer, Diane Poulin-Dubois in Journal of Cognition and Development

Who do children prefer to learn from? Previous research has shown that even infants can identify the best informant. But would preschoolers prefer learning from a competent robot over an incompetent human?

According to a new paper by Concordia researchers, the answer largely depends on age. The study compared two groups of preschoolers: one of three-year-olds, the other of five-year-olds. The children participated in Zoom meetings featuring a video of a young woman and a small robot with humanoid characteristics (head, face, torso, arms and legs) called Nao sitting side by side. Between them were familiar objects that the robot would label correctly while the human would label them incorrectly, e.g., referring to a car as a book, a ball as a shoe and a cup as a dog.

Next, the two groups of children were presented with unfamiliar items: the top of a turkey baster, a roll of twine and a silicone muffin container. Both the robot and the human used different nonsense terms like “mido,” “toma,” “fep” and “dax” to label the objects. The children were then asked what the object was called, endorsing either the label offered by the robot or by the human. While the three-year-olds showed no preference for one word over another, the five-year-olds were much more likely to state the term provided by the robot than the human.

“We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot,” says the paper’s lead author, PhD candidate Anna-Elisabeth Baumann. Horizon Postdoctoral Fellow Elizabeth Goldman and undergraduate research assistant Alexandra Meltzer also contributed to the study. Professor and Concordia University Chair of Developmental Cybernetics Diane Poulin-Dubois in the Department of Psychology supervised the study.

The researchers repeated the experiments with new groups of three- and five-year-olds, replacing the humanoid Nao with a small truck-shaped robot called Cozmo. The results resembled those observed with the human-like robot, suggesting that the robot’s morphology does not affect the children’s selective trust strategies.

Baumann adds that, along with the labelling task, the researchers administered a naive biology task. The children were asked if biological organs or mechanical gears formed the internal parts of unfamiliar animals and robots. The three-year-olds appeared confused, assigning both biological and mechanical internal parts to the robots. However, the five-year-olds were much more likely to indicate that only mechanical parts belonged inside the robots.

“This data tells us that the children will choose to learn from a robot even though they know it is not like them. They know that the robot is mechanical,” says Baumann.

While there has been a substantial amount of literature on the benefits of using robots as teaching aides for children, the researchers note that most studies focus on a single robot informant or two robots pitted against each other. This study, they write, is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.

Poulin-Dubois points out that this study builds on a previous paper she co-wrote with Goldman and Baumann. That paper shows that by age five, children treat robots similarly to how adults do, i.e., as depictions of social agents.

“Older preschoolers know that robots have mechanical insides, but they still anthropomorphize them. Like adults, these children attribute certain human-like qualities to robots, such as the ability to talk, think and feel,” she says.

“It is important to emphasize that we see robots as tools to study how children can learn from both human and non-human agents,” concludes Goldman. “As technology use increases, and as children interact with technological devices more, it is important for us to understand how technology can be a tool to help facilitate their learning.”

Caterpillar-inspired soft crawling robot with distributed programmable thermal actuation

by Shuang Wu, Yaoye Hong, Yao Zhao, Jie Yin, Yong Zhu in Science Advances

Researchers at North Carolina State University have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.

“A caterpillar’s movement is controlled by local curvature of its body — its body curves differently when it pulls itself forward than it does when it pushes itself backward,” says Yong Zhu, corresponding author of a paper on the work and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State. “We’ve drawn inspiration from the caterpillar’s biomechanics to mimic that local curvature, and use nanowire heaters to control similar curvature and movement in the caterpillar-bot.

“Engineering soft robots that can move in two different directions is a significant challenge in soft robotics,” Zhu says. “The embedded nanowire heaters allow us to control the movement of the robot in two ways. We can control which sections of the robot bend by controlling the pattern of heating in the soft robot. And we can control the extent to which those sections bend by controlling the amount of heat being applied.”

Bioinspired crawling motions.

The caterpillar-bot consists of two layers of polymer, which respond differently when exposed to heat. The bottom layer shrinks, or contracts, when exposed to heat. The top layer expands when exposed to heat. A pattern of silver nanowires is embedded in the expanding layer of polymer. The pattern includes multiple lead points where researchers can apply an electric current. The researchers can control which sections of the nanowire pattern heat up by applying an electric current to different lead points, and can control the amount of heat by applying more or less current.

“We demonstrated that the caterpillar-bot is capable of pulling itself forward and pushing itself backward,” says Shuang Wu, first author of the paper and a postdoctoral researcher at NC State. “In general, the more current we applied, the faster it would move in either direction. However, we found that there was an optimal cycle, which gave the polymer time to cool — effectively allowing the ‘muscle’ to relax before contracting again. If we tried to cycle the caterpillar-bot too quickly, the body did not have time to ‘relax’ before contracting again, which impaired its movement.”
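
A minimal control sketch of that heat-and-relax gait, assuming each nanowire segment has its own current channel (segment names, currents, and timings below are made up for illustration, not values from the paper):

```python
import time

# Hypothetical drive interface: one current channel per nanowire segment.
# Joule-heating a segment curves the body locally; cooling lets it relax.

def set_current(segment: str, amps: float) -> None:
    """Placeholder for the actual current driver."""
    print(f"{segment}: {amps:.2f} A")

def crawl_cycle(direction: str, current: float, heat_s: float, cool_s: float) -> None:
    # Forward and backward gaits differ only in which end curves first,
    # mimicking the caterpillar's direction-dependent local curvature.
    segments = ["head", "middle", "tail"]
    order = segments if direction == "forward" else list(reversed(segments))
    for seg in order:
        set_current(seg, current)   # heat -> local bending
        time.sleep(heat_s)
        set_current(seg, 0.0)       # cool -> the "muscle" relaxes
        time.sleep(cool_s)

# Cycling too fast (cool_s too short) impairs movement, as the authors note.
crawl_cycle("forward", current=0.5, heat_s=1.0, cool_s=2.0)
```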

The researchers also demonstrated that the caterpillar-bot’s movement could be controlled to the point where users were able to steer it under a very low gap — similar to guiding the robot to slip under a door. In essence, the researchers could control both forward and backward motion as well as how high the robot bent upwards at any point in that process.

“This approach to driving motion in a soft robot is highly energy efficient, and we’re interested in exploring ways that we could make this process even more efficient,” Zhu says. “Additional next steps include integrating this approach to soft robot locomotion with sensors or other technologies for use in various applications — such as search-and-rescue devices.”

Dense reinforcement learning for safety validation of autonomous vehicles

by Shuo Feng, Haowei Sun, Xintao Yan, Haojie Zhu, Zhengxia Zou, Shengyin Shen, Henry X. Liu in Nature

The push toward truly autonomous vehicles has been hindered by the cost and time associated with safety testing, but a new system developed at the University of Michigan shows that artificial intelligence can reduce the testing miles required by 99.99%.

It could kick off a paradigm shift that enables manufacturers to more quickly verify whether their autonomous vehicle technology can save lives and reduce crashes. In a simulated environment, vehicles trained by artificial intelligence perform perilous maneuvers, forcing the AV to make decisions that confront drivers only rarely on the road but are needed to better train the vehicles. To encounter those kinds of situations often enough for data collection, real-world test vehicles would need to drive for hundreds of millions to hundreds of billions of miles.

“The safety critical events — the accidents, or the near misses — are very rare in the real world, and oftentimes AVs have difficulty handling them,” said Henry Liu, U-M professor of civil engineering and director of both Mcity and the Center for Connected and Automated Transportation, a regional transportation research center funded by the U.S. Department of Transportation.

Detail photograph of screens during the virtual reality test run of an autonomous vehicle at Mcity on North Campus of the University of Michigan in Ann Arbor on Wednesday, January 18, 2023. Image credit: Brenda Ahearn/University of Michigan, College of Engineering, Communications and Marketing

U-M researchers refer to the problem as the “curse of rarity,” and they’re tackling it by learning from real-world traffic data that contains rare safety-critical events. Testing conducted on test tracks mimicking urban as well as highway driving showed that the AI-trained virtual vehicles can accelerate the testing process by thousands of times. The study appears on the cover of Nature.

“The AV test vehicles we’re using are real, but we’ve created a mixed reality testing environment. The background vehicles are virtual, which allows us to train them to create challenging scenarios that only happen rarely on the road,” Liu said.

U-M’s team used an approach to train the background vehicles that strips away non-safety-critical information from the driving data used in the simulation. Basically, it gets rid of the long spans when other drivers and pedestrians behave in responsible, expected ways — but preserves dangerous moments that demand action, such as another driver running a red light. By using only safety-critical data to train the neural networks that make maneuver decisions, test vehicles can encounter more of those rare events in a shorter amount of time, making testing much cheaper.
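
Conceptually, that densification step works like the filter sketched below: keep only short windows around safety-critical moments and train on those. This is a simplified illustration of the idea (thresholds and field names are invented); the paper's dense deep reinforcement learning additionally edits the underlying Markov decision process so policy gradients are estimated only on critical states.

```python
# Simplified sketch: densify driving logs by keeping only windows around
# safety-critical events (near-misses, hard brakes, conflicts).
# Thresholds and field names are illustrative, not from the paper.

def is_safety_critical(frame: dict) -> bool:
    # e.g. time-to-collision under 2 s, or an emergency-braking flag
    return frame["time_to_collision"] < 2.0 or frame["hard_brake"]

def densify(log: list[dict], window: int = 50) -> list[dict]:
    """Keep +/- `window` frames around each critical event, drop the rest."""
    keep = set()
    for i, frame in enumerate(log):
        if is_safety_critical(frame):
            keep.update(range(max(0, i - window), min(len(log), i + window + 1)))
    return [log[i] for i in sorted(keep)]

# Training the background vehicles only on these dense, critical samples
# is what cuts the required testing miles so sharply.
```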

“Dense reinforcement learning will unlock the potential of AI for validating the intelligence of safety-critical autonomous systems such as AVs, medical robotics and aerospace systems,” said Shuo Feng, assistant professor in the Department of Automation at Tsinghua University and former assistant research scientist at the U-M Transportation Research Institute.

“It also opens the door for accelerated training of safety-critical autonomous systems by leveraging AI-based testing agents, which may create a symbiotic relationship between testing and training, accelerating both fields.”

And it’s clear that training, along with the time and expense involved, is an impediment. An October Bloomberg article stated that although robotaxi leader Waymo’s vehicles had driven 20 million miles over the previous decade, far more data was needed.

“That means,” the author wrote, “its cars would have to drive an additional 25 times their total before we’d be able to say, with even a vague sense of certainty, that they cause fewer deaths than bus drivers.”

Testing was conducted at Mcity’s urban environment in Ann Arbor, as well as the highway test track at the American Center for Mobility in Ypsilanti. Launched in 2015, Mcity was the world’s first purpose-built test environment for connected and autonomous vehicles. With new support from the National Science Foundation, outside researchers will soon be able to run remote, mixed-reality tests using both the simulation and physical test track, similar to those reported in this study.

Noninvasive Sensors for Brain–Machine Interfaces Based on Micropatterned Epitaxial Graphene

by Shaikh Nayeem Faisal, Tien-Thong Nguyen Do, Tasauf Torzo, Daniel Leong, Aiswarya Pradeepkumar, Chin-Teng Lin, Francesca Iacopi in ACS Applied Nano Materials

Researchers from the University of Technology Sydney (UTS) have developed biosensor technology that will allow you to operate devices, such as robots and machines, solely through thought control.

The advanced brain-computer interface was developed by Distinguished Professor Chin-Teng Lin and Professor Francesca Iacopi, from the UTS Faculty of Engineering and IT, in collaboration with the Australian Army and Defence Innovation Hub. As well as defence applications, the technology has significant potential in fields such as advanced manufacturing, aerospace and healthcare — for example allowing people with a disability to control a wheelchair or operate prosthetics.

“The hands-free, voice-free technology works outside laboratory settings, anytime, anywhere. It makes interfaces such as consoles, keyboards, touchscreens and hand-gesture recognition redundant,” said Professor Iacopi.

“By using cutting-edge graphene material, combined with silicon, we were able to overcome issues of corrosion, durability and skin contact resistance, to develop the wearable dry sensors,” she said.

A new study outlining the technology has just been published in the peer-reviewed journal ACS Applied Nano Materials. It shows that the graphene sensors developed at UTS are very conductive, easy to use and robust. The hexagon-patterned sensors are positioned over the back of the scalp to detect brainwaves from the visual cortex. The sensors are resilient to harsh conditions, so they can be used in extreme operating environments.

The user wears a head-mounted augmented reality lens which displays white flickering squares. By concentrating on a particular square, the brainwaves of the operator are picked up by the biosensor, and a decoder translates the signal into commands. The technology was recently demonstrated by the Australian Army, where soldiers operated a Ghost Robotics quadruped robot using the brain-machine interface. The device allowed hands-free command of the robotic dog with up to 94% accuracy.
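
The flickering squares are the hallmark of a steady-state visually evoked potential (SSVEP) interface: each square flickers at its own frequency, and attending to one imprints that frequency on the signal from the visual cortex. The decoder below is a generic frequency-tagging sketch, not the UTS pipeline; the sampling rate and flicker frequencies are illustrative.

```python
import numpy as np

def decode_ssvep(eeg: np.ndarray, fs: float, target_freqs: list[float]) -> int:
    """Pick the command whose flicker frequency dominates the spectrum.

    eeg: 1-D occipital-channel signal; fs: sampling rate in Hz;
    target_freqs: one flicker frequency per command (e.g. nine squares).
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Sum power in a narrow band around each candidate flicker frequency.
    scores = [
        spectrum[(freqs > f - 0.25) & (freqs < f + 0.25)].sum()
        for f in target_freqs
    ]
    return int(np.argmax(scores))  # index of the selected command

# Nine squares -> nine candidate frequencies; a 2-second window matches
# the "nine commands in two seconds" figure quoted below.
fs = 256.0
freqs9 = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0]
demo = np.sin(2 * np.pi * 12.0 * np.arange(2 * int(fs)) / fs)  # fake 12 Hz SSVEP
print(decode_ssvep(demo, fs, freqs9))  # -> 4 (the 12 Hz square)
```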

“Our technology can issue at least nine commands in two seconds. This means we have nine different kinds of commands and the operator can select one from those nine within that time period,” Professor Lin said.

“We have also explored how to minimise noise from the body and environment to get a clearer signal from an operator’s brain,” he said.

The researchers believe the technology will be of interest to the scientific community, industry and government, and hope to continue making advances in brain-computer interface systems.

AI-Powered Non-Contact In-Home Gait Monitoring and Activity Recognition System Based on mm-Wave FMCW Radar and Cloud Computing

by Hajar Abedi, Ahmad Ansariyan, Plinio P Morita, Alexander Wong, Jennifer Boger, George Shaker in IEEE Internet of Things Journal

Engineers are harnessing artificial intelligence (AI) and wireless technology to unobtrusively monitor elderly people in their living spaces and provide early detection of emerging health problems.

The new system, built by researchers at the University of Waterloo, accurately and continuously tracks an individual’s activities, gathering vital information without the need for a wearable device, and alerts medical experts when they need to step in and provide help.

“After more than five years of working on this technology, we’ve demonstrated that very low-power, millimetre-wave radio systems enabled by machine learning and artificial intelligence can be reliably used in homes, hospitals and long-term care facilities,” said Dr. George Shaker, an adjunct associate professor of electrical and computer engineering.

“An added bonus is that the system can alert healthcare workers to sudden falls, without the need for privacy-intrusive devices such as cameras.”

The work by Shaker and his colleagues comes as overburdened public healthcare systems struggle to meet the urgent needs of rapidly growing elderly populations. While a senior’s physical or mental condition can change rapidly, it’s almost impossible to track their movements and discover problems 24/7 — even if they live in long-term care. In addition, existing systems for monitoring gait — how a person walks — are expensive, difficult to operate, impractical for clinics and unsuitable for homes.

The new system represents a major step forward and works this way: first, a wireless transmitter sends low-power waveforms across an interior space, such as a long-term care room, apartment or home. As the waveforms bounce off different objects and the people being monitored, they’re captured and processed by a receiver. That information goes into an AI engine which deciphers the processed waves for detection and monitoring applications.
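
For readers unfamiliar with millimetre-wave sensing, the standard first step in processing such reflections is a range FFT on the frequency-modulated continuous-wave (FMCW) beat signal: mixing the transmitted chirp with its echo yields a beat frequency proportional to target distance. The sketch below is generic FMCW processing with illustrative parameters, not the Waterloo system:

```python
import numpy as np

# Generic FMCW range estimation: the beat frequency between the
# transmitted chirp and its echo is proportional to target range.
# All parameters are illustrative, not the Waterloo system's.

c = 3e8          # speed of light, m/s
B = 4e9          # chirp bandwidth, Hz (typical for 60-77 GHz radar)
T = 100e-6       # chirp duration, s
S = B / T        # chirp slope, Hz/s
fs = 10e6        # ADC sampling rate, Hz
N = 1024         # samples per chirp

R_true = 3.0                           # simulated person 3 m away
f_beat = 2 * R_true * S / c            # beat frequency for that range
t = np.arange(N) / fs
beat = np.cos(2 * np.pi * f_beat * t)  # simulated mixer output

spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
R_est = c * freqs[np.argmax(spectrum)] / (2 * S)
print(f"estimated range: {R_est:.2f} m")  # ~3 m

# Tracking the phase of this peak across chirps reveals tiny motions
# like gait, breathing, or a fall: the inputs the AI engine deciphers.
```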

The system, which employs extremely low-power radar technology, can be mounted simply on a ceiling or by a wall and doesn’t suffer the drawbacks of wearable monitoring devices, which can be uncomfortable and require frequent battery charging.

“Using our wireless technology in homes and long-term care homes can effectively monitor various activities such as sleeping, watching TV, eating and the frequency of bathroom use,” Shaker said.

“Currently, the system can alert care workers to a general decline in mobility, increased likelihood of falls, possibility of a urinary tract infection, and the onset of several other medical conditions.”

Agricultural Robot under Solar Panels for Sowing, Pruning, and Harvesting in a Synecoculture Environment

by Takuya Otani, Akira Itoh, Hideki Mizukami, Masatsugu Murakami, Shunya Yoshida, Kota Terae, Taiga Tanaka, Koki Masaya, Shuntaro Aotake, Masatoshi Funabashi, Atsuo Takanishi in Agriculture

Synecoculture is a new agricultural method advocated by Dr. Masatoshi Funabashi, senior researcher at Sony Computer Science Laboratories, Inc. (Sony CSL), in which various kinds of plants are mixed and grown in high density, establishing rich biodiversity while benefiting from the self-organizing ability of the ecosystem. However, such dense vegetation requires frequent upkeep — seeds need to be sown, weeds need to be pruned, and crops need to be harvested. Synecoculture thus requires a high level of ecological literacy and complex decision-making.

While an agricultural robot could address these operational issues, most existing robots can only automate one of the three tasks above, and only in simple farmland environments, falling far short of what Synecoculture demands. Moreover, such robots may make unnecessary contact with the plants and damage them, affecting their growth and the harvest.

This gap between human performance and that of conventional robots, combined with rising awareness of environmental issues, has spurred innovation. A group of researchers led by Takuya Otani, an Assistant Professor at Waseda University, in collaboration with Sustainergy Company and Sony CSL, has designed a new robot that can perform Synecoculture effectively. The robot is called SynRobo, with “syn” conveying the meaning of “together with” humans. It manages a variety of mixed plants grown in the shade of solar panels, an otherwise unutilized space. The article was co-authored by Professor Atsuo Takanishi, also from Waseda University, other researchers at Sony CSL, and students from Waseda University.

Otani briefly explains the novel robot’s design. “It has a four-wheel mechanism that enables movement on uneven land and a robotic arm that expands and contracts to help overcome obstacles. The robot can move on slopes and avoid small steps. The system also utilizes a 360° camera to recognize its surroundings and maneuver through them. In addition, it is loaded with various farming tools — anchors (for punching holes), pruning scissors, and harvesting setups. The robot adjusts its position using the robotic arm and an orthogonal axes table that can move horizontally.”

Developed robot in the field.

Besides these inherent features, the researchers also invented techniques for efficient seeding. They coated seeds from different plants with soil to make equally sized balls. This made their shape and size consistent, so that the robot could easily sow seeds from multiple plants. Furthermore, an easy-to-use, human-controlled maneuvering system was developed to facilitate the robot’s functionality. The system helps it operate tools, implement automatic sowing, and switch tasks.

The new robot could successfully sow, prune, and harvest in dense vegetation, making minimal contact with the environment during the tasks because of its small and flexible body. In addition, the new maneuvering system enabled the robot to avoid obstacles 50% better while reducing its operating time by 49%, compared to a simple controller.

“This research has developed an agricultural robot that works in environments where multiple species of plants grow in dense mixtures,” Otani tells us. “It can be widely used in general agriculture as well as Synecoculture — only the tools need to be changed when working with different plants. This robot will contribute to improving the yield per unit area and increase farming efficiency. Moreover, its agricultural operation data will help automate the maneuvering system. As a result, robots could assist agriculture in a plethora of environments. In fact, Sustainergy Company is currently preparing to commercialize this innovation in abandoned fields in Japan and desertified areas in Kenya, among other places.”

Such advancements will promote Synecoculture farming in combination with renewable energy, and help solve various pressing problems, including climate change and the energy crisis. The present research is a crucial step toward achieving sustainable agriculture and carbon neutrality. Here’s hoping for a smart and skillful robot that efficiently supports large-scale Synecoculture.

Laser-assisted failure recovery for dielectric elastomer actuators in aerial robots

by Suhan Kim, Yi-Hsuan Hsiao, Younghoon Lee, Weikun Zhu, Zhijian Ren, Farnaz Niroui, Yufeng Chen in Science Robotics

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.

Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded. Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings — but to still fly effectively. They optimized these artificial muscles so the robot can better isolate defects and overcome minor damage, like tiny holes in the actuator. In addition, they demonstrated a novel laser repair method that can help the robot recover from severe damage, such as a fire that scorches the device.

Using their techniques, a damaged robot could maintain flight-level performance after one of its artificial muscles was jabbed by 10 needles, and the actuator was still able to operate after a large hole was burnt into it. Their repair methods enabled a robot to keep flying even after the researchers cut off 20 percent of its wing tip. This could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest.

“We spent a lot of time understanding the dynamics of soft, artificial muscles and, through both a new fabrication method and a new understanding, we can show a level of resilience to damage that is comparable to insects. We’re very excited about this. But the insects are still superior to us, in the sense that they can lose up to 40 percent of their wing and still fly. We still have some catch-up work to do,” says Kevin Chen, the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper on these latest advances.

The tiny, rectangular robots being developed in Chen’s lab are about the same size and shape as a microcassette tape, though one robot weighs barely more than a paper clip. Wings on each corner are powered by dielectric elastomer actuators (DEAs), which are soft artificial muscles that use mechanical forces to rapidly flap the wings. These artificial muscles are made from layers of elastomer that are sandwiched between two razor-thin electrodes and then rolled into a squishy tube. When voltage is applied to the DEA, the electrodes squeeze the elastomer, which flaps the wing. But microscopic imperfections can cause sparks that burn the elastomer and cause the device to fail.

About 15 years ago, researchers found they could prevent DEA failures from one tiny defect using a physical phenomenon known as self-clearing. In this process, applying high voltage to the DEA disconnects the local electrode around a small defect, isolating that failure from the rest of the electrode so the artificial muscle still works.

Chen and his collaborators employed this self-clearing process in their robot repair techniques. First, they optimized the concentration of the carbon nanotubes that make up the electrodes in the DEA. Carbon nanotubes are super-strong but extremely tiny rolls of carbon. Having fewer carbon nanotubes in the electrode improves self-clearing, since it reaches higher temperatures and burns away more easily. But this also reduces the actuator’s power density.

“At a certain point, you will not be able to get enough energy out of the system, but we need a lot of energy and power to fly the robot. We had to find the optimal point between these two constraints — optimize the self-clearing property under the constraint that we still want the robot to fly,” Chen says.

However, even an optimized DEA will fail if it suffers from severe damage, like a large hole that lets too much air into the device. Chen and his team used a laser to overcome major defects. They carefully cut along the outer contours of a large defect with a laser, which causes minor damage around the perimeter. Then, they can use self-clearing to burn off the slightly damaged electrode, isolating the larger defect.

“In a way, we are trying to do surgery on muscles. But if we don’t use enough power, then we can’t do enough damage to isolate the defect. On the other hand, if we use too much power, the laser will cause severe damage to the actuator that won’t be clearable,” Chen says.

The team soon realized that, when “operating” on such tiny devices, it is very difficult to observe the electrode to see if they had successfully isolated a defect. Drawing on previous work, they incorporated electroluminescent particles into the actuator. Now, if they see light shining, they know that part of the actuator is operational, but dark patches mean they successfully isolated those areas.

Once they had perfected their techniques, the researchers conducted tests with damaged actuators — some had been jabbed by many needles while others had holes burned into them. They measured how well the robot performed in flapping-wing, take-off, and hovering experiments. Even with damaged DEAs, the repair techniques enabled the robot to maintain its flight performance, with altitude, position, and attitude errors that deviated only very slightly from those of an undamaged robot. With laser surgery, a DEA that would have been broken beyond repair was able to recover 87 percent of its performance.

“I have to hand it to my two students, who did a lot of hard work when they were flying the robot. Flying the robot by itself is very hard, not to mention now that we are intentionally damaging it,” Chen says.

These repair techniques make the tiny robots much more robust, so Chen and his team are now working on teaching them new functions, like landing on flowers or flying in a swarm. They are also developing new control algorithms so the robots can fly better, teaching the robots to control their yaw angle so they can keep a constant heading, and enabling each robot to carry a tiny circuit, with the longer-term goal of carrying its own power source.

Robust MADER: Decentralized Multiagent Trajectory Planner Robust to Communication Delay in Dynamic Environments

by Kota Kondo et al. in arXiv

When multiple drones are working together in the same airspace, perhaps spraying pesticide over a field of corn, there’s a risk they might crash into each other. To help avoid these costly crashes, MIT researchers presented a system called MADER in 2020. This multiagent trajectory-planner enables a group of drones to formulate optimal, collision-free trajectories. Each agent broadcasts its trajectory so fellow drones know where it is planning to go. Agents then consider each other’s trajectories when optimizing their own to ensure they don’t collide.

But when the team tested the system on real drones, they found that if a drone doesn’t have up-to-date information on the trajectories of its partners, it might inadvertently select a path that results in a collision. The researchers revamped their system and are now rolling out Robust MADER, a multiagent trajectory planner that generates collision-free trajectories even when communications between agents are delayed.

“MADER worked great in simulations, but it hadn’t been tested in hardware. So, we built a bunch of drones and started flying them. The drones need to talk to each other to share trajectories, but once you start flying, you realize pretty quickly that there are always communication delays that introduce some failures,” says Kota Kondo, an aeronautics and astronautics graduate student.

The algorithm incorporates a delay-check step during which a drone waits a specific amount of time before it commits to a new, optimized trajectory. If it receives additional trajectory information from fellow drones during the delay period, it might abandon its new trajectory and start the optimization process over again. When Kondo and his collaborators tested Robust MADER, both in simulations and flight experiments with real drones, it achieved a 100 percent success rate at generating collision-free trajectories. While the drones’ travel time was a bit slower than it would be with some other approaches, no other baselines could guarantee safety.

“If you want to fly safer, you have to be careful, so it is reasonable that if you don’t want to collide with an obstacle, it will take you more time to get to your destination. If you collide with something, no matter how fast you go, it doesn’t really matter because you won’t reach your destination,” Kondo says.

MADER is an asynchronous, decentralized, multiagent trajectory-planner. This means that each drone formulates its own trajectory and that, while all agents must agree on each new trajectory, they don’t need to agree at the same time. This makes MADER more scalable than other approaches, since it would be very difficult for thousands of drones to agree on a trajectory simultaneously. Due to its decentralized nature, the system would also work better in real-world environments where drones may fly far from a central computer.

With MADER, each drone optimizes a new trajectory using an algorithm that incorporates the trajectories it has received from other agents. By continually optimizing and broadcasting their new trajectories, the drones avoid collisions. But perhaps one agent shared its new trajectory several seconds ago, and a fellow agent didn’t receive it right away because the communication was delayed. In real-world environments, signals are often delayed by interference from other devices or environmental factors like stormy weather. Due to this unavoidable delay, a drone might inadvertently commit to a new trajectory that sets it on a collision course.

Robust MADER prevents such collisions because each agent has two trajectories available. It keeps one trajectory that it knows is safe, which it has already checked for potential collisions. While following that original trajectory, the drone optimizes a new trajectory but does not commit to the new trajectory until it completes a delay-check step. During the delay-check period, the drone spends a fixed amount of time repeatedly checking for communications from other agents to see if its new trajectory is safe. If it detects a potential collision, it abandons the new trajectory and starts the optimization process over again.
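
In outline, the delay-check step behaves like the loop below. This is a schematic sketch: `poll_peers` and `collide` are placeholders for the real communication and trajectory-intersection checks, not the authors' implementation.

```python
import time

# Schematic of Robust MADER's delay-check: an agent keeps flying its
# committed, known-safe trajectory and adopts a newly optimized one only
# if no conflicting peer trajectory arrives during a fixed check window.

def delay_check(new_traj, poll_peers, collide, delay_s: float) -> bool:
    """True if no peer trajectory received within delay_s conflicts."""
    deadline = time.monotonic() + delay_s
    while time.monotonic() < deadline:
        for peer_traj in poll_peers():      # trajectories broadcast by peers
            if collide(new_traj, peer_traj):
                return False                # abandon and re-optimize
        time.sleep(0.01)
    return True

# Toy usage: 1-D "trajectories", no peer messages, 0.1 s check window.
ok = delay_check(
    new_traj=[0.0, 1.0, 2.0],
    poll_peers=lambda: [],                  # nothing arrives during the check
    collide=lambda a, b: min(abs(x - y) for x, y in zip(a, b)) < 0.5,
    delay_s=0.1,
)
print("commit new trajectory" if ok else "re-optimize")
```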

The length of the delay-check period depends on the distance between agents and environmental factors that could hamper communications, Kondo says. If the agents are many miles apart, for instance, then the delay-check period would need to be longer.

The researchers tested their new approach by running hundreds of simulations in which they artificially introduced communication delays. In each simulation, Robust MADER was 100 percent successful at generating collision-free trajectories, while all the baselines caused crashes. The researchers also built six drones and two aerial obstacles and tested Robust MADER in a multiagent flight environment. They found that, while using the original version of MADER in this environment would have resulted in seven collisions, Robust MADER did not cause a single crash in any of the hardware experiments.

“Until you actually fly the hardware, you don’t know what might cause a problem. Because we know that there is a difference between simulations and hardware, we made the algorithm robust, so it worked in the actual drones, and seeing that in practice was very rewarding,” Kondo says.

Drones were able to fly at 3.4 meters per second with Robust MADER, although they had a slightly longer average travel time than some baselines. But no other method was perfectly collision-free in every experiment.

In the future, Kondo and his collaborators want to put Robust MADER to the test outdoors, where many obstacles and types of noise can affect communications. They also want to outfit drones with visual sensors so they can detect other agents or obstacles, predict their movements, and include that information in trajectory optimizations.

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
