RT/ Robotic hand can identify objects with just one grasp

Published in Paradigm · 34 min read · Apr 14, 2023

Robotics biweekly vol.72, 4th April — 14th April

TL;DR

  • Newly created soft-rigid robotic fingers incorporate powerful sensors along their entire length, enabling them to produce a robotic hand that could accurately identify objects after only one grasp.
  • In a new paper, computer science researchers ‘teach’ robots how to predict human preferences in assembly tasks.
  • Investigators found AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers.
  • A multidisciplinary team has created a new fabrication technique for fully foldable robots that can perform a variety of complex tasks without relying on semiconductors.
  • With a growing interest in generative artificial intelligence systems worldwide, researchers have created software that is able to verify how much information an AI farmed from an organization’s digital database.
  • Researchers investigate how intentional robot deception affects trust, examining the effectiveness of apologies after robots lie.
  • A research team is working to pave the way for designing a software system with a feedback loop — a system that quickly tests how controls operate on the damaged vessel and makes adjustments on the fly to give it the best chance of landing safely. The basic research the team is doing could someday extend to aircraft controls and many other applications, including controlling disease epidemics or making more accurate predictions about climate change or species survival.
  • Researchers recently introduced two new approaches that could help to improve the ability of legged robots to move on rocky or extreme terrains. These two approaches are inspired by the innate proprioception abilities and tail mechanics of animals.
  • Researchers have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.
  • Researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista

Latest News & Research

GelSight EndoFlex: A Soft Endoskeleton Hand with Continuous High-Resolution Tactile Sensing

by Sandra Q. Liu, Leonardo Zamora Yañez, Edward H. Adelson, submitted to arXiv

Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time.

Many robotic hands pack all their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take multiple grasps. Other designs use lower-resolution sensors spread along the entire finger, but these don’t capture as much detail, so multiple regrasps are often required. Instead, the MIT team built a robotic finger with a rigid skeleton encased in a soft outer layer that has multiple high-resolution sensors incorporated under its transparent “skin.” The sensors, which use a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of an object simultaneously.

Using this design, the researchers built a three-fingered robotic hand that could identify objects after only one grasp, with about 85 percent accuracy. The rigid skeleton makes the fingers strong enough to pick up a heavy item, such as a drill, while the soft skin enables them to securely grasp a pliable item, like an empty plastic water bottle, without crushing it. These soft-rigid fingers could be especially useful in an at-home-care robot designed to interact with an elderly individual. The robot could lift a heavy item off a shelf with the same hand it uses to help the individual take a bath.

“Having both soft and rigid elements is very important in any hand, but so is being able to perform great sensing over a really large area, especially if we want to consider doing very complicated manipulation tasks like what our own hands can do. Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do,” says mechanical engineering graduate student Sandra Liu, co-lead author of a research paper on the robotic finger.

Liu wrote the paper with co-lead author and mechanical engineering undergraduate student Leonardo Zamora Yañez and her advisor, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the RoboSoft Conference.

A close-up view of an EndoFlex finger with an exploded view. Each finger operates independently with one degree of freedom and can be quickly replaced if damaged.

The robotic finger consists of a rigid, 3D-printed endoskeleton that is placed in a mold and encased in a transparent silicone “skin.” Making the finger in a mold removes the need for fasteners or adhesives to hold the silicone in place. The researchers designed the mold with a curved shape so the robotic fingers are slightly curved when at rest, just like human fingers.

“Silicone will wrinkle when it bends, so we thought that if we have the finger molded in this curved position, when you curve it more to grasp an object, you won’t induce as many wrinkles. Wrinkles are good in some ways — they can help the finger slide along surfaces very smoothly and easily — but we didn’t want wrinkles that we couldn’t control,” Liu says.

The endoskeleton of each finger contains a pair of detailed touch sensors, known as GelSight sensors, embedded into the top and middle sections, underneath the transparent skin. The sensors are placed so the range of the cameras overlaps slightly, giving the finger continuous sensing along its entire length. The GelSight sensor, based on technology pioneered in the Adelson group, is composed of a camera and three colored LEDs. When the finger grasps an object, the camera captures images as the colored LEDs illuminate the skin from the inside.

Using the illuminated contours that appear in the soft skin, an algorithm performs backward calculations to map the contours on the grasped object’s surface. The researchers trained a machine-learning model to identify objects using raw camera image data. As they fine-tuned the finger fabrication process, the researchers ran into several obstacles. First, silicone has a tendency to peel off surfaces over time. Liu and her collaborators found they could limit this peeling by adding small curves along the hinges between the joints in the endoskeleton. When the finger bends, the bending of the silicone is distributed along the tiny curves, which reduces stress and prevents peeling. They also added creases to the joints so the silicone is not squashed as much when the finger bends. While troubleshooting their design, the researchers realized wrinkles in the silicone prevent the skin from ripping.
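
The write-up above does not spell out the reconstruction math, but a classic way to turn LED-lit images into surface geometry is photometric stereo. The sketch below is a minimal, illustrative version of that idea, assuming a Lambertian gel surface and three hypothetical LED directions; it is not the GelSight group's actual algorithm.

```python
# Minimal photometric-stereo-style sketch of the "backward calculation" idea:
# three known LED directions light the gel from inside, and per-pixel RGB
# intensities are inverted (Lambertian assumption) to recover surface normals.
import numpy as np

# Hypothetical LED directions (one per color channel).
LEDS = np.array([
    [ 0.8,  0.0, 0.6],   # red LED
    [-0.4,  0.7, 0.6],   # green LED
    [-0.4, -0.7, 0.6],   # blue LED
])
LEDS = LEDS / np.linalg.norm(LEDS, axis=1, keepdims=True)

def normals_from_rgb(img):
    """img: HxWx3 array of LED-lit intensities -> HxWx3 unit surface normals."""
    h, w, _ = img.shape
    intensities = img.reshape(-1, 3).T            # 3 x N
    g = np.linalg.solve(LEDS, intensities)        # albedo-scaled normals, 3 x N
    n = g / (np.linalg.norm(g, axis=0, keepdims=True) + 1e-8)
    return n.T.reshape(h, w, 3)

# Example: a synthetic flat patch lit equally by the three LEDs.
flat = np.full((4, 4, 3), 0.6)
print(normals_from_rgb(flat)[0, 0])               # roughly [0, 0, 1]: facing the camera
```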

“The usefulness of the wrinkles was an accidental discovery on our part. When we synthesized them on the surface, we found that they actually made the finger more durable than we expected,” she says.

A close-up image of the reticulated wrinkle surface of the GelSight EndoFlex sensor. Each wrinkle is approximately 0.4 mm wide; the wrinkles formed only because paint was first sprayed on the mold surface before the silicone was cast inside.

Once they had perfected the design, the researchers built a robotic hand using two fingers arranged in a Y pattern with a third finger as an opposing thumb. The hand captures six images when it grasps an object (two from each finger) and sends those images to a machine-learning algorithm which uses them as inputs to identify the object. Because the hand has tactile sensing covering all of its fingers, it can gather rich tactile data from a single grasp.
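
As a rough illustration of that classification step, the sketch below stacks the six tactile images from one grasp as input channels to a small convolutional network. The architecture, image size, and object count are assumptions made for illustration, not the model described in the paper.

```python
# Toy single-grasp classifier: six RGB tactile images (two per finger) are
# stacked into an 18-channel tensor and mapped to an object label.
import torch
import torch.nn as nn

class GraspClassifier(nn.Module):
    def __init__(self, num_objects: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6 * 3, 32, 3, stride=2, padding=1), nn.ReLU(),  # 6 RGB images
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_objects)

    def forward(self, x):                      # x: (batch, 18, H, W)
        return self.head(self.features(x).flatten(1))

model = GraspClassifier(num_objects=10)        # 10 is a placeholder object count
grasp = torch.randn(1, 18, 64, 64)             # one grasp: six stacked tactile images
print(model(grasp).argmax(dim=1))              # predicted object index
```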

“Although we have a lot of sensing in the fingers, maybe adding a palm with sensing would help it make tactile distinctions even better,” Liu says.

In the future, the researchers also want to improve the hardware to reduce the amount of wear and tear in the silicone over time and add more actuation to the thumb so it can perform a wider variety of tasks.

Transfer Learning of Human Preferences for Proactive Robot Assistance in Assembly Tasks

by Heramb Nemlekar, Neel Dhanaraj, Angelos Guan, Satyandra K. Gupta, Stefanos Nikolaidis in Conference: HRI ’23: ACM/IEEE International Conference on Human-Robot Interaction

Humans have a way of understanding others’ goals, desires and beliefs, a crucial skill that allows us to anticipate people’s actions. Taking bread out of the toaster? You’ll need a plate. Sweeping up leaves? I’ll grab the green trash can.

This skill, often referred to as “theory of mind,” comes easily to us as humans, but is still challenging for robots. But, if robots are to become truly collaborative helpers in manufacturing and in everyday life, they need to learn the same abilities. In a new paper, a best paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), USC Viterbi computer science researchers aim to teach robots how to predict human preferences in assembly tasks, so they can one day help out on everything from building a satellite to setting a table.

“When working with people, a robot needs to constantly guess what the person will do next,” said lead author Heramb Nemlekar, a USC computer science PhD student working under the supervision of Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, it can get the screwdriver ahead of time so that the person does not have to wait. This way the robot can help people finish the assembly much faster.”

But, as anyone who has co-built furniture with a partner can attest, predicting what a person will do next is difficult: different people prefer to build the same product in different ways. While some people want to start with the most difficult parts to get them over with, others may want to start with the easiest parts to save energy.

Most of the current techniques require people to show the robot how they would like to perform the assembly, but this takes time and effort and can defeat the purpose, said Nemlekar. “Imagine having to assemble an entire airplane just to teach the robot your preferences,” he said. In this new study, however, the researchers found similarities in how an individual will assemble different products. For instance, if you start with the hardest part when building an Ikea sofa, you are likely to use the same tactic when putting together a baby’s crib. So, instead of “showing” the robot their preferences in a complex task, they created a small assembly task (called a “canonical” task) that people can easily and quickly perform — in this case, putting together parts of a simple model airplane, such as the wings, tail and propeller.

The robot “watched” the human complete the task using a camera placed directly above the assembly area, looking down. To detect the parts operated by the human, the system used AprilTags, similar to QR codes, attached to the parts. Then, the system used machine learning to learn a person’s preference based on their sequence of actions in the canonical task.

“Based on how a person performs the small assembly, the robot predicts what that person will do in the larger assembly,” said Nemlekar. “For example, if the robot sees that a person likes to start the small assembly with the easiest part, it will predict that they will start with the easiest part in the large assembly as well.”

In the researchers’ user study, their system was able to predict the actions that humans will take with around 82% accuracy.
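
One simple way to picture this kind of transfer is to infer, from the short canonical sequence, which part attribute the person orders by, and then rank the parts of the larger assembly by the same rule. The toy sketch below does exactly that; the attribute values, candidate weights, and scoring rule are invented for illustration and are not the paper's model.

```python
# Toy preference transfer: pick the attribute weighting that best explains the
# order observed on a small canonical assembly, then reuse it on a larger one.
import numpy as np

# Part attributes [difficulty, size] — hypothetical values.
canonical_parts = {"wing": [0.9, 0.5], "tail": [0.4, 0.3], "propeller": [0.2, 0.2]}
observed_order = ["wing", "tail", "propeller"]          # a "hardest-first" person

def fit_weights(parts, order, candidate_ws):
    """Return the weight vector whose scores best reproduce the observed order."""
    best_w, best_agreement = None, -1
    for w in candidate_ws:
        scores = {name: np.dot(w, feats) for name, feats in parts.items()}
        predicted = sorted(parts, key=lambda n: -scores[n])
        agreement = sum(a == b for a, b in zip(predicted, order))
        if agreement > best_agreement:
            best_w, best_agreement = np.array(w), agreement
    return best_w

w = fit_weights(canonical_parts, observed_order,
                candidate_ws=[[1, 0], [0, 1], [-1, 0], [0, -1]])

# Predict the same person's ordering on a larger assembly.
sofa_parts = {"frame": [0.95, 0.9], "cushion": [0.1, 0.6], "leg": [0.3, 0.1]}
scores = {n: np.dot(w, f) for n, f in sofa_parts.items()}
print(sorted(sofa_parts, key=lambda n: -scores[n]))     # ['frame', 'leg', 'cushion']
```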

“We hope that our research can make it easier for people to show robots what they prefer,” said Nemlekar. “By helping each person in their preferred way, robots can reduce their work, save time and even build trust with them.”

For instance, imagine you’re assembling a piece of furniture at home, but you’re not particularly handy and struggle with the task. A robot that has been trained to predict your preferences could provide you with the necessary tools and parts ahead of time, making the assembly process easier. This technology could also be useful in industrial settings where workers are tasked with assembling products on a mass scale, saving time and reducing the risk of injury or accidents. Additionally, it could help persons with disabilities or limited mobility to more easily assemble products and maintain independence.

The goal is not to replace humans on the factory floor, say the researchers. Instead, they hope this research will lead to significant improvements in the safety and productivity of assembly workers in human-robot hybrid factories. “Robots can perform the non-value-added or ergonomically challenging tasks that are currently being performed by workers.” As for the next steps, the researchers plan to develop a method to automatically design canonical tasks for different types of assembly tasks. They also aim to evaluate the benefit of learning human preferences from short tasks and predicting their actions in a complex task in different contexts, for instance, personal assistance in homes.

“While we observed that human preferences transfer from canonical to actual tasks in assembly manufacturing, I expect similar findings in other applications as well,” said Nikolaidis. “A robot that can quickly learn our preferences can help us prepare a meal, rearrange furniture or do house repairs, having a significant impact in our daily lives.”

Blinded, randomized trial of sonographer versus AI cardiac function assessment

by Bryan He, Alan C. Kwan, Jae Hyung Cho, Neal Yuan, et al. in Nature

Who can assess and diagnose cardiac function best after reading an echocardiogram: artificial intelligence (AI) or a sonographer?

According to Cedars-Sinai investigators and their research, AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers. The findings are based on a first-of-its-kind, blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai.

“The results have immediate implications for patients undergoing cardiac function imaging as well as broader implications for the field of cardiac imaging,” said cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. “This trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients.”

Consort diagram.

Investigators are confident that this technology will be found beneficial when deployed across the clinical system at Cedars-Sinai and health systems nationwide.

“This successful clinical trial sets a superb precedent for how novel clinical AI algorithms can be discovered and tested within health systems, increasing the likelihood of seamless deployment for improved patient care,” said Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine and the Pauline and Harold Price Chair in Cardiac Electrophysiology Research.

In 2020, researchers at the Smidt Heart Institute and Stanford University developed one of the first AI technologies to assess cardiac function, specifically, left ventricular ejection fraction — the key heart measurement used in diagnosing cardiac function. Building on those findings, the new study assessed whether AI was more accurate in evaluating 3,495 transthoracic echocardiogram studies by comparing initial assessment by AI or by a sonographer — also known as an ultrasound technician. Among the findings:

  • Cardiologists more frequently agreed with the AI initial assessment and made corrections to only 16.8% of the initial assessments made by AI.
  • Cardiologists made corrections to 27.2% of the initial assessments made by the sonographers.
  • The physicians were unable to tell which assessments were made by AI and which were made by sonographers.
  • The AI assistance saved cardiologists and sonographers time.
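
As a back-of-the-envelope check on the headline comparison above, the snippet below runs a pooled two-proportion z-test on the reported correction rates, assuming the 3,495 studies were split roughly evenly between the two arms. The even split and the choice of test are assumptions for illustration; the trial's actual statistical analysis is more involved.

```python
# Rough comparison of correction rates: 16.8% (AI-first) vs 27.2% (sonographer-first),
# under an assumed ~50/50 randomization of the 3,495 studies.
from math import sqrt

n_ai, n_sono = 1748, 1747          # assumed split, not the trial's exact arm sizes
p_ai, p_sono = 0.168, 0.272

p_pool = (p_ai * n_ai + p_sono * n_sono) / (n_ai + n_sono)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_sono))
z = (p_sono - p_ai) / se
print(f"difference = {p_sono - p_ai:.3f}, z ≈ {z:.1f}")   # a large z: the gap is not noise
```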

“We asked our cardiologists to guess if the preliminary interpretation was performed by AI or by a sonographer, and it turns out that they couldn’t tell the difference,” said Ouyang. “This speaks to the strong performance of the AI algorithm as well as the seamless integration into clinical software. We believe these are all good signs for future AI trial research in the field.”

The hope, Ouyang says, is to save clinicians time and minimize the more tedious parts of the cardiac imaging workflow. The cardiologist, however, remains the final expert adjudicator of the AI model output. The clinical trial and subsequent published research also shed light on the opportunity for regulatory approvals.

“This work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the Food and Drug Administration has previously approved artificial intelligence tools without data from prospective clinical trials,” said Susan Cheng, MD, MPH, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute and co-senior author of the study. “We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall.”

Origami-based integration of robots that sense, decide, and respond

by Wenzhong Yan, Shuguang Li, Mauricio Deguchi, Zhaoliang Zheng, Daniela Rus, Ankur Mehta in Nature Communications

Roboticists have been using a technique similar to the ancient art of paper folding to develop autonomous machines out of thin, flexible sheets. These lightweight robots are simpler and cheaper to make and more compact for easier storage and transport.

However, the rigid computer chips traditionally needed to enable advanced robot capabilities — sensing, analyzing and responding to the environment — add extra weight to the thin sheet materials and make them harder to fold. The semiconductor-based components therefore have to be added after a robot has taken its final shape. Now, a multidisciplinary team led by researchers at the UCLA Samueli School of Engineering has created a new fabrication technique for fully foldable robots that can perform a variety of complex tasks without relying on semiconductors.

By embedding flexible and electrically conductive materials into a pre-cut, thin polyester film sheet, the researchers created a system of information-processing units, or transistors, which can be integrated with sensors and actuators. They then programmed the sheet with simple analog computing functions that emulate those of semiconductors. Once cut, folded and assembled, the sheet transformed into an autonomous robot that can sense, analyze and act in response to its environment with precision. The researchers named their robots “OrigaMechs,” short for Origami MechanoBots.

Autonomous robots with sensing, computing, and actuating tightly integrated in compliant origami materials.

“This work leads to a new class of origami robots with expanded capabilities and levels of autonomy while maintaining the favorable attributes associated with origami folding-based fabrication,” said study lead author Wenzhong Yan, a UCLA mechanical engineering doctoral student.

OrigaMechs derived their computing capabilities from a combination of mechanical origami multiplexed switches created by the folds and programmed Boolean logic commands, such as “AND,” “OR” and “NOT.” The switches enabled a mechanism that selectively output electrical signals based on the variable pressure and heat input into the system. Using the new approach, the team built three robots to demonstrate the system’s potential:

  • an insect-like walking robot that reverses direction when either of its antennae senses an obstacle
  • a Venus flytrap-like robot that envelops a “prey” when both of its jaw sensors detect an object
  • a reprogrammable two-wheeled robot that can move along pre-designed paths of different geometric patterns
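
The three demonstrations above map onto simple Boolean functions, which the folded switch layers implement mechanically rather than in silicon. The toy snippet below writes those truth tables in plain software purely to make the logic explicit; it says nothing about how the OrigaMechs are actually fabricated.

```python
# Truth-table sketch of the demonstrated behaviors.
def walker_reverses(left_antenna: bool, right_antenna: bool) -> bool:
    # Insect-like walker: reverse if EITHER antenna senses an obstacle (OR).
    return left_antenna or right_antenna

def flytrap_closes(jaw_sensor_a: bool, jaw_sensor_b: bool) -> bool:
    # Flytrap: envelop the "prey" only if BOTH jaw sensors detect it (AND).
    return jaw_sensor_a and jaw_sensor_b

def wheeler_keeps_path(reprogram_switch: bool) -> bool:
    # Two-wheeled robot: stays on its pre-designed path unless reprogrammed (NOT).
    return not reprogram_switch

for a in (False, True):
    for b in (False, True):
        print(a, b, "reverse:", walker_reverses(a, b), "close:", flytrap_closes(a, b))
print("keep path:", wheeler_keeps_path(False), wheeler_keeps_path(True))
```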

While the robots were tethered to a power source for the demonstration, the researchers said the long-term goal would be to outfit the autonomous origami robots with an embedded energy storage system powered by thin-film lithium batteries. The chip-free design may lead to robots capable of working in extreme environments — strong radiative or magnetic fields, and places with intense radio frequency signals or high electrostatic discharges — where traditional semiconductor-based electronics might fail to function.

“These types of dangerous or unpredictable scenarios, such as during a natural or humanmade disaster, could be where origami robots proved to be especially useful,” said study principal investigator Ankur Mehta, an assistant professor of electrical and computer engineering and director of UCLA’s Laboratory for Embedded Machines and Ubiquitous Robots.

“The robots could be designed for specialty functions and manufactured on demand very quickly,” Mehta added. “Also, while it’s a very long way away, there could be environments on other planets where explorer robots that are impervious to those scenarios would be very desirable.”

Pre-assembled robots built by this flexible cut-and-fold technique could be transported in flat packaging for massive space savings. This is important in scenarios such as space missions, where every cubic centimeter counts. The low-cost, lightweight and simple-to-fabricate robots could also lead to innovative educational tools or new types of toys and games.

Program Semantics and Verification Technique for AI-centred Programs

by Fortunat Rajaona, Ioana Cristina Boureanu, Vadim Malvone and Francesco Belardinelli in 25th International Symposium on Formal Methods

With a growing interest in generative artificial intelligence systems worldwide, researchers at the University of Surrey have created software that is able to verify how much information an AI farmed from an organisation’s digital database.

Surrey’s verification software can be used as part of a company’s online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data. The software is also capable of identifying whether AI has identified and is capable of exploiting flaws in software code. For example, in an online gaming context, it could identify whether an AI has learned to always win in online poker by exploiting a coding fault.

Dr Solofomampionona Fortunat Rajaona is Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:

“In many applications, AI systems interact with each other or with humans, such as self-driving cars in a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which we have taken years to find a working solution for.

“Our verification software can deduce how much AI can learn from their interaction, whether they have enough knowledge that enable successful cooperation, and whether they have too much knowledge that will break privacy. Through the ability to verify what AI has learned, we can give organisations the confidence to safely unleash the power of AI into secure settings.”

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:

“Over the past few months there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT. Creation of tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”

Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario

by Kantwon Rogers, Reiden John Allen Webber, Ayanna Howard in In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction

Imagine a scenario. A young child asks a chatbot or a voice assistant if Santa Claus is real. How should the AI respond, given that some families would prefer a lie over the truth?

The field of robot deception is understudied, and for now, there are more questions than answers. For one, how might humans learn to trust robotic systems again after they know the system lied to them? Two student researchers at Georgia Tech are finding answers. Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, designed a driving simulation to investigate how intentional robot deception affects trust. Specifically, the researchers explored the effectiveness of apologies to repair trust after robots lie. Their work contributes crucial knowledge to the field of AI deception and could inform technology designers and policymakers who create and regulate AI technology that could be designed to deceive, or could potentially learn to deceive on its own.

“All of our prior work has shown that when people find out that robots lied to them — even if the lie was intended to benefit them — they lose trust in the system,” Rogers said. “Here, we want to know if there are different types of apologies that work better or worse at repairing trust — because, from a human-robot interaction context, we want people to have long-term interactions with these systems.”

The researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants. Before the start of the simulation, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave. After the survey, participants were presented with the text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”

Just as the participant starts to drive, the simulation gives another message: “As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.’”

Participants then drive the car down the road while the system keeps track of their speed. Upon reaching the end, they are given another message: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”

Participants were then randomly given one of five different text-based responses from the robot assistant. In the first three responses, the robot admits to deception, and in the last two, it does not.

  • Basic: “I am sorry that I deceived you.”
  • Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
  • Explanatory: “I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
  • Basic No Admit: “I am sorry.”
  • Baseline No Admit, No Apology: “You have arrived at your destination.”

After the robot’s response, participants were asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant’s response. For an additional 100 of the online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.
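
For readers who want to picture the between-subjects design, the sketch below randomly assigns each simulated participant one of the five responses listed above and records a pre/post trust difference. The survey scores and trust drops are placeholder random numbers, not data from the study; only the condition names and the online sample size come from the description above.

```python
# Protocol sketch: random condition assignment plus pre/post trust measurement.
import random
import statistics

CONDITIONS = ["basic", "emotional", "explanatory", "basic_no_admit", "baseline_no_apology"]

def run_participant(rng: random.Random) -> tuple[str, float]:
    condition = rng.choice(CONDITIONS)
    trust_before = rng.uniform(3.0, 5.0)                  # placeholder survey score
    trust_after = trust_before - rng.uniform(0.5, 2.0)    # placeholder drop after the lie
    return condition, trust_after - trust_before

rng = random.Random(0)
deltas = {c: [] for c in CONDITIONS}
for _ in range(341):                                      # online sample size from the study
    condition, delta = run_participant(rng)
    deltas[condition].append(delta)

for condition, values in deltas.items():
    print(condition, round(statistics.mean(values), 2))   # mean trust change per condition
```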

Kantwon Rogers (right), a Ph.D. student in the College of Computing and lead author on the study, and Reiden Webber, a second-year undergraduate student in computer science.

For the in-person experiment, 45% of the participants did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely to not speed when advised by a robotic assistant — revealing an overly trusting attitude toward AI. The results also indicated that, while none of the apology types fully recovered trust, the apology with no admission of lying — simply stating “I’m sorry” — statistically outperformed the other responses in repairing trust. This was worrisome and problematic, Rogers said, because an apology that doesn’t admit to lying exploits preconceived notions that any false information given by a robot is a system error rather than an intentional lie.

“One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so,” Webber said. “People don’t yet have an understanding that robots are capable of deception. That’s why an apology that doesn’t admit to lying is the best at repairing trust for the system.”

Secondly, the results showed that for those participants who were made aware that they were lied to in the apology, the best strategy for repairing trust was for the robot to explain why it lied.

Rogers’ and Webber’s research has immediate implications. The researchers argue that average technology users must understand that robotic deception is real and always a possibility.

“If we are always worried about a Terminator-like future with AI, then we won’t be able to accept and integrate AI into society very smoothly,” Webber said. “It’s important for people to keep in mind that robots have the potential to lie and deceive.”

According to Rogers, designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception and should understand the ramifications of their design choices. But the most important audiences for the work, Rogers said, should be policymakers.

“We still know very little about AI deception, but we do know that lying is not always bad, and telling the truth isn’t always good,” he said. “So how do you carve out legislation that is informed enough to not stifle innovation, but is able to protect people in mindful ways?”

Rogers’ objective is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance.

“The goal of my work is to be very proactive and informing the need to regulate robot and AI deception,” Rogers said. “But we can’t do that if we don’t understand the problem.”

Learning Dynamical Systems with Side Information

by Amir Ali Ahmadi et al in SIAM Review

Commercial airplanes can be controlled by autopilot. But what happens if a wing gets damaged or an engine malfunctions? Is it possible to design a software system with a feedback loop — a system that quickly tests how controls operate on the damaged vessel and makes adjustments on the fly to give it the best chance of landing safely?

A research team from Princeton, the University of Texas, and Northeastern University is working to pave the way for creating such a system. The basic research the team is doing could someday extend to aircraft controls and many other applications, including controlling disease epidemics or making more accurate predictions about climate change or species survival, said Amir Ali Ahmadi, a professor of operations research and financial engineering at Princeton and a member of the research team.

The goal is to exert measures of control over a “dynamical system” — one that changes as it moves. Most dynamical systems are notoriously difficult to predict and manage. Ahmadi, along with colleagues Charles Fefferman, the Herbert E. Jones, Jr. ’43 University Professor of Mathematics, and Clarence Rowley, the Sin-I Cheng Professor in Engineering Science, are trying to design algorithms that can learn the behavior of dynamical systems from data.

“A dynamical system is any entity in some space that evolves through time,” Ahmadi said. “So, an airplane is a dynamical system; a robot is a dynamical system; the spread of a virus is a dynamical system.”

Gaining control is particularly tough when data is limited, said Ahmadi. In the case of a damaged aircraft, “the plane has changed, and you have less than a minute to come up with a new model of control,” he said. Predicting future performance based on extremely sparse data is a common problem. It is hard to recommend the best response to a disease outbreak, for example, when very little is known about the spread of illness.

In a recent article, Ahmadi’s research team presented an approach that uses additional information to rapidly respond to changing conditions in which little data is available for decision-making. This additional information, which mathematicians call side information, acts in the same way that experience or professional expertise does for a human. For example, a doctor might never have seen a particular disease before, but years of experience will help her make a good judgment on how to treat the patient.
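
A toy version of the idea can be written in a few lines: fit a linear model x_{t+1} = A x_t to a single short trajectory by least squares, then use a piece of side information (here, an assumed stability prior) to rein in the data-poor estimate. The system, noise level, and projection step below are all illustrative assumptions, not the methods of the SIAM Review paper.

```python
# Toy "learning with side information": least-squares system identification
# from very little data, corrected by a known qualitative property (stability).
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])

# Very little data: one short trajectory of six states.
x = np.zeros((6, 2))
x[0] = [1.0, -0.5]
for t in range(5):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(2)

X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # plain least-squares fit

# Side information: the system is known to be stable, so shrink the estimate
# if the sparse data alone pushes its spectral radius above 1.
radius = np.abs(np.linalg.eigvals(A_hat)).max()
if radius > 1.0:
    A_hat = A_hat / radius

print("true A:\n", A_true, "\nestimated A:\n", np.round(A_hat, 2))
```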

“That is what this entire project is about,” Ahmadi said. “It is about learning a system from very little data and eventually controlling it in a way that we desire.”

Long-term goals, such as aircraft controls, are beyond the scope of the immediate project. Rather, the work under the Air Force grant is focusing on much simpler examples in the hopes of learning more about controlling a system riddled with unknowns.

“In standard control theory, you understand what the controls do. We’re trying to make a more powerful version of that theory in which you don’t know what the controls do, but you learn by applying them,” Fefferman said. He is working with Rowley on relatively simple sub-problems of dynamical systems — for instance, trying to temporarily halt an object as it moves along a straight line at a constant speed. In addition, the researchers want to use as little energy as possible to exert control — just as a pilot would want to do in a plane with limited fuel.

Another problem they may tackle is an advanced version of a problem commonly assigned to undergraduate mechanical engineering majors: controlling an inverted pendulum — similar to trying to balance a broomstick in the palm of your hand. The controller would learn the behaviors of the system almost instantly and without knowing where its mass is centered. To do that, they would create equations for controls based on a few seconds of observation, then modify the controls after recording what they do. The model would be designed to rapidly go through several learn-and-control iterations.
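
That learn-and-control loop can be caricatured with a scalar unstable system standing in for the pendulum: probe briefly, identify the dynamics by least squares, then apply the control the estimate implies. The numbers and the simple cancellation law below are illustrative choices, not the team's algorithm.

```python
# Learn-then-control sketch: the controller does not know (a, b) in
# x_{t+1} = a*x + b*u, so it probes, fits, and then stabilizes.
import numpy as np

a_true, b_true = 1.3, 0.5            # hidden from the controller; unstable since |a| > 1
x, transitions = 1.0, []
rng = np.random.default_rng(1)

# 1) Explore: a few small probing inputs, recording (x, u, x_next).
for _ in range(4):
    u = 0.1 * rng.standard_normal()
    x_next = a_true * x + b_true * u
    transitions.append((x, u, x_next))
    x = x_next

# 2) Identify: least-squares estimate of (a, b) from the recorded transitions.
Phi = np.array([[xi, ui] for xi, ui, _ in transitions])
y = np.array([xn for _, _, xn in transitions])
a_hat, b_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]

# 3) Exploit: cancel the estimated dynamics with u = -(a_hat / b_hat) * x.
for _ in range(5):
    u = -(a_hat / b_hat) * x
    x = a_true * x + b_true * u
    print(round(x, 4))               # shrinks toward 0 when the fit is good
```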

The problems the team explores involve tradeoffs between exploring functionality and exploiting the knowledge gained, Rowley said. “If you exploit your knowledge too soon, the model may not be good enough to land a plane. But if you spend too much time learning its behavior, the plane may crash.” There is no single technique for controlling a system with unknown dynamics, said Ufuk Topcu, a team member and associate professor at the University of Texas. But one of the keys is to select the most valuable data to work on. “You have to tackle it from multiple angles and chop the big problem into more manageable pieces to identify what’s worth learning,” he said.

The researchers expect to have algorithms for controlling at least some aspects of a dynamical system. Though their model may not be fast enough to operate in real time, it should be able to show which controls are possible in a changing system and with what degree of certainty they can succeed, Ahmadi said.

Proprioception and Tail Control Enable Extreme Terrain Traversal by Quadruped Robots

by Yanhao Yang et al in arXiv

Researchers at Carnegie Mellon University (CMU)’s Robomechanics Lab recently introduced two new approaches that could help to improve the ability of legged robots to move on rocky or extreme terrains. These two approaches, outlined in a paper, are inspired by the innate proprioception abilities and tail mechanics of animals.

“Our paper aims to bring legged robots from the ideal lab environments into real-world environments, where they may encounter challenging terrains such as rocky hills and curbs,” said Yanhao Yang, one of the researchers who carried out the study. “To achieve this, we drew inspiration from both animals and engineering principles.”

Many animals, including cats and other felines, are known to walk along their own footprints, as this allows them to ground themselves and maintain their stability on different terrains. Yang and his colleagues tried to replicate this behavior in robots, merging proprioception and motion planning techniques.

The proposed control and planning system helps robots safely navigate unexpected cliffs. When the robot’s proprioception senses that it has lost contact with the ground, the system quickly adjusts its steps to ensure a safe landing and lifts its leg to avoid getting stuck.

The techniques they used allow robots to “sense” the environment and move more reliably by gathering information about their own body’s position, actions and location. This capability, known as “proprioception,” overcomes the limitations of computer vision systems, which are known to be adversely impacted by sensor noise, obstacles in the environment, light reflections on nearby objects, and poor lighting conditions.

Animals and humans are born with proprioception, yet most existing robots make sense of their surrounding environment using the data provided by vision systems. Instead of using vision systems, which rely on cameras, lidar technology and other external sensors, Yang and his colleagues propose the use of data collected by sensors integrated inside the robot, such as motors, encoders and inertial measurement devices.
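
A minimal sketch of this kind of proprioceptive contact check appears below: estimate the foot force of a planar two-link leg from its measured joint torques, and flag a missed foothold when the expected stance force is absent. The leg geometry, torque values, threshold, and the textual "replan" reaction are assumptions for illustration, not the controller in the paper.

```python
# Contact checking from joint torques: tau = J^T F, so F = (J^T)^{-1} tau.
import numpy as np

L1, L2 = 0.2, 0.2                         # hypothetical link lengths (m)

def foot_force(q, tau):
    """Planar 2-link leg: joint angles q, joint torques tau -> foot force (N)."""
    J = np.array([
        [-L1 * np.sin(q[0]) - L2 * np.sin(q[0] + q[1]), -L2 * np.sin(q[0] + q[1])],
        [ L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),  L2 * np.cos(q[0] + q[1])],
    ])
    return np.linalg.solve(J.T, tau)

CONTACT_THRESHOLD = 20.0                  # N of expected vertical support force

def check_step(q, tau, stance_expected: bool) -> str:
    vertical_force = foot_force(q, tau)[1]
    if stance_expected and abs(vertical_force) < CONTACT_THRESHOLD:
        return "missed contact: lift leg and replan foothold"
    return "contact nominal"

q = np.array([0.6, -1.2])
print(check_step(q, tau=np.array([8.0, 3.0]), stance_expected=True))   # contact nominal
print(check_step(q, tau=np.array([0.5, 0.2]), stance_expected=True))   # missed contact
```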

“This helps the robot detect when it slips or falls, and adjust its movements to avoid tipping over,” Yang said. “The main advantage of this system is that it’s more robust to environmental noise like obstacles, reflections, or lighting conditions. The challenge is to make correct control and planning decisions under uncertainty when the proprioception senses an accident.”

In addition to their proposed proprioception system, the researchers created a computational model that allows robots to control an artificial tail, similarly to how animals move their tail when navigating environments. Many animals, including squirrels and cats, use their tail to keep their balance when jumping or hopping onto surfaces.

“We noticed that animals use their tails to assist their agile locomotion, but most robots do not have tails,” Yang said. “For example, cheetahs use their tails to achieve rapid acceleration, deceleration, and quick turns, while squirrels use their furry tails to balance when jumping between branches. We adapted this idea by adding a tail to our quadruped robots, which helps balance when the robot misses a foothold or falls off.”

Yang and his colleagues also created a control system that allows a legged robot’s artificial tail to work in coordination with its legs, helping it to retain its balance even when one or more of its legs are lifted off the ground. This can significantly improve the robot’s navigation in rough or uneven terrains, while also maximizing its efficiency in narrow or small spaces.

Yang and his colleagues evaluated their motion planning approaches in a series of simulations. Their findings are highly promising, as their bio-inspired proprioception and tail control methods allowed simulated legged robots to reduce unexpected slips and falls, while also improving their ability to reliably move in extreme and changing terrains.

The proposed approach further improves the robot’s ability to navigate extreme terrain by adding a tail that helps balance the body when the legs are off the ground. The controller produces a conic motion for the tail to make it as effective as possible within the limited rotation angles.
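
The balancing role of the tail can be illustrated with a toy angular-momentum exchange: while the legs are off the ground, accelerating the tail in one direction slows the body's unwanted rotation in the other. The inertias, gain, and single-axis model below are made-up stand-ins, not the conic tail controller described in the paper.

```python
# Single-axis tail sketch: with no ground contact, angular momentum about the
# hip is conserved, so tail acceleration trades against body pitch rate.
I_BODY, I_TAIL = 0.5, 0.05      # kg*m^2, hypothetical
DT = 0.01                        # s

body_rate = 1.0                  # rad/s of unwanted pitch after a missed foothold
tail_rate = 0.0

for _ in range(100):
    tail_accel = 20.0 * body_rate                     # simple proportional command
    tail_rate += tail_accel * DT
    body_rate -= (I_TAIL / I_BODY) * tail_accel * DT  # momentum transferred to the tail

print(round(body_rate, 3), round(tail_rate, 2))       # body pitch decays as the tail spins up
```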

These new motion planning methods could be applied and tested on real legged robots, potentially allowing them to navigate challenging environments more reliably, reducing collisions and falls. This could make these robots better equipped to successfully complete search & rescue missions, environmental monitoring operations and other real-world tasks that entail moving on uneven or challenging terrains.

“One of our main goals for future research is to test our proposed method on actual hardware,” Yang said. “This will be a challenge because we need to accurately estimate the state and contact information, which are crucial for the proprioception and control of the robot.”

In their next works, Yang and his colleagues also plan to improve how their framework models and controls the tails of robots. This could further reduce collisions, including those between the tail and other parts of the robot’s body or the environment.

“Another area of improvement is to extend the method to more complex terrains, such as narrow ravines or stepping stones,” Yang added. “Currently, our approach assumes relatively simple terrain variations, but on more challenging terrains, the robot’s legs may trip or hang. In these cases, our controller will still try to lower the robot’s body to maintain stability, but we can further improve this by adding more events to the gait planning process.”

Caterpillar-inspired soft crawling robot with distributed programmable thermal actuation

by Shuang Wu, Yaoye Hong, Yao Zhao, Jie Yin, Yong Zhu in Science Advances

Researchers at North Carolina State University have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.

“A caterpillar’s movement is controlled by local curvature of its body — its body curves differently when it pulls itself forward than it does when it pushes itself backward,” says Yong Zhu, corresponding author of a paper on the work and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State. “We’ve drawn inspiration from the caterpillar’s biomechanics to mimic that local curvature, and use nanowire heaters to control similar curvature and movement in the caterpillar-bot.

“Engineering soft robots that can move in two different directions is a significant challenge in soft robotics,” Zhu says. “The embedded nanowire heaters allow us to control the movement of the robot in two ways. We can control which sections of the robot bend by controlling the pattern of heating in the soft robot. And we can control the extent to which those sections bend by controlling the amount of heat being applied.”

Bioinspired crawling motions.

The caterpillar-bot consists of two layers of polymer, which respond differently when exposed to heat. The bottom layer shrinks, or contracts, when exposed to heat. The top layer expands when exposed to heat. A pattern of silver nanowires is embedded in the expanding layer of polymer. The pattern includes multiple lead points where researchers can apply an electric current. The researchers can control which sections of the nanowire pattern heat up by applying an electric current to different lead points, and can control the amount of heat by applying more or less current.

“We demonstrated that the caterpillar-bot is capable of pulling itself forward and pushing itself backward,” says Shuang Wu, first author of the paper and a postdoctoral researcher at NC State. “In general, the more current we applied, the faster it would move in either direction. However, we found that there was an optimal cycle, which gave the polymer time to cool — effectively allowing the ‘muscle’ to relax before contracting again. If we tried to cycle the caterpillar-bot too quickly, the body did not have time to ‘relax’ before contracting again, which impaired its movement.”

The researchers also demonstrated that the caterpillar-bot’s movement could be controlled to the point where users were able to steer it under a very low gap — similar to guiding the robot to slip under a door. In essence, the researchers could control both forward and backward motion as well as how high the robot bent upwards at any point in that process.
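
A minimal sketch of that drive pattern might look like the snippet below: choose which heater segments to energize (setting where the body curves, and hence the direction of travel) and cycle the current with a cooling pause so the polymer can relax between contractions. The segment names, current, and timings are invented for illustration, not values from the study.

```python
# Heating-pattern sketch for a two-layer thermal crawler.
import time

PATTERNS = {
    "forward":  ["head_heater", "mid_heater"],   # hypothetical lead-point groups
    "backward": ["mid_heater", "rear_heater"],
}

def crawl(direction: str, cycles: int, current_ma: float,
          heat_s: float = 1.5, cool_s: float = 2.0, dry_run: bool = True):
    """Print (or, with dry_run=False, pace in real time) one heating schedule."""
    for i in range(cycles):
        for segment in PATTERNS[direction]:
            print(f"cycle {i}: drive {segment} at {current_ma} mA for {heat_s}s")
            if not dry_run:
                time.sleep(heat_s)
        print(f"cycle {i}: all heaters off, cooling for {cool_s}s")
        if not dry_run:
            time.sleep(cool_s)

crawl("forward", cycles=2, current_ma=40)   # more current moves faster, up to the cooling limit
```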

“This approach to driving motion in a soft robot is highly energy efficient, and we’re interested in exploring ways that we could make this process even more efficient,” Zhu says. “Additional next steps include integrating this approach to soft robot locomotion with sensors or other technologies for use in various applications — such as search-and-rescue devices.”

Laser-assisted failure recovery for dielectric elastomer actuators in aerial robots

by Suhan Kim, Yi-Hsuan Hsiao, Younghoon Lee, Weikun Zhu, Zhijian Ren, Farnaz Niroui, Yufeng Chen in Science Robotics

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.

Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded. Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings — but to still fly effectively. They optimized these artificial muscles so the robot can better isolate defects and overcome minor damage, like tiny holes in the actuator. In addition, they demonstrated a novel laser repair method that can help the robot recover from severe damage, such as a fire that scorches the device.

With these repair techniques, a damaged robot could maintain flight-level performance after one of its artificial muscles was jabbed by 10 needles, and the actuator was still able to operate after a large hole was burnt into it. The repair methods also enabled a robot to keep flying even after the researchers cut off 20 percent of its wing tip. This could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest.

“We spent a lot of time understanding the dynamics of soft, artificial muscles and, through both a new fabrication method and a new understanding, we can show a level of resilience to damage that is comparable to insects. We’re very excited about this. But the insects are still superior to us, in the sense that they can lose up to 40 percent of their wing and still fly. We still have some catch-up work to do,” says Kevin Chen, the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper on these latest advances.

The tiny, rectangular robots being developed in Chen’s lab are about the same size and shape as a microcassette tape, though one robot weighs barely more than a paper clip. Wings on each corner are powered by dielectric elastomer actuators (DEAs), which are soft artificial muscles that use mechanical forces to rapidly flap the wings. These artificial muscles are made from layers of elastomer that are sandwiched between two razor-thin electrodes and then rolled into a squishy tube. When voltage is applied to the DEA, the electrodes squeeze the elastomer, which flaps the wing. But microscopic imperfections can cause sparks that burn the elastomer and cause the device to fail. About 15 years ago, researchers found they could prevent DEA failures from one tiny defect using a physical phenomenon known as self-clearing. In this process, applying high voltage to the DEA disconnects the local electrode around a small defect, isolating that failure from the rest of the electrode so the artificial muscle still works.

Chen and his collaborators employed this self-clearing process in their robot repair techniques. First, they optimized the concentration of carbon nanotubes that comprise the electrodes in the DEA. Carbon nanotubes are super-strong but extremely tiny rolls of carbon. Having fewer carbon nanotubes in the electrode improves self-clearing, since it reaches higher temperatures and burns away more easily. But this also reduces the actuator’s power density.

“At a certain point, you will not be able to get enough energy out of the system, but we need a lot of energy and power to fly the robot. We had to find the optimal point between these two constraints — optimize the self-clearing property under the constraint that we still want the robot to fly,” Chen says.
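
The tradeoff Chen describes can be pictured as a tiny constrained optimization: self-clearing improves as nanotube loading drops while deliverable power rises with loading, so pick the lowest loading that still meets the power needed to fly. The two curves in the sketch below are made-up monotone stand-ins, not measured data from the paper.

```python
# Toy CNT-loading tradeoff: maximize a self-clearing score subject to a power floor.
import numpy as np

loading = np.linspace(0.2, 2.0, 200)              # arbitrary units of CNT concentration
power_density = 1.0 - np.exp(-2.0 * loading)      # rises with loading (toy model)
self_clearing = np.exp(-1.5 * loading)            # falls with loading (toy model)

POWER_NEEDED_TO_FLY = 0.8
feasible = power_density >= POWER_NEEDED_TO_FLY
best = np.argmax(np.where(feasible, self_clearing, -np.inf))

print(f"chosen loading ≈ {loading[best]:.2f}, "
      f"power {power_density[best]:.2f}, clearing score {self_clearing[best]:.2f}")
```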

However, even an optimized DEA will fail if it suffers from severe damage, like a large hole that lets too much air into the device. Chen and his team used a laser to overcome major defects. They carefully cut along the outer contours of a large defect with a laser, which causes minor damage around the perimeter. Then, they can use self-clearing to burn off the slightly damaged electrode, isolating the larger defect.

“In a way, we are trying to do surgery on muscles. But if we don’t use enough power, then we can’t do enough damage to isolate the defect. On the other hand, if we use too much power, the laser will cause severe damage to the actuator that won’t be clearable,” Chen says.

The team soon realized that, when “operating” on such tiny devices, it is very difficult to observe the electrode to see if they had successfully isolated a defect. Drawing on previous work, they incorporated electroluminescent particles into the actuator. Now, if they see light shining, they know that part of the actuator is operational, but dark patches mean they successfully isolated those areas.

Once they had perfected their techniques, the researchers conducted tests with damaged actuators — some had been jabbed by many needles while others had holes burned into them. They measured how well the robot performed in flapping-wing, take-off, and hovering experiments. Even with damaged DEAs, the repair techniques enabled the robot to maintain its flight performance, with altitude, position, and attitude errors that deviated only very slightly from those of an undamaged robot. With laser surgery, a DEA that would have been broken beyond repair was able to recover 87 percent of its performance.

“I have to hand it to my two students, who did a lot of hard work when they were flying the robot. Flying the robot by itself is very hard, not to mention now that we are intentionally damaging it,” Chen says.

These repair techniques make the tiny robots much more robust, so Chen and his team are now working on teaching them new functions, like landing on flowers or flying in a swarm. They are also developing new control algorithms so the robots can fly better, teaching the robots to control their yaw angle so they can keep a constant heading, and enabling the robots to carry a tiny circuit, with the longer-term goal of each robot carrying its own power source.

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
