RT/ ChatGPT designs a robot

Paradigm
Published in Paradigm
26 min read · Jun 29, 2023

Robotics biweekly vol.77, 15th June — 29th June

TL;DR

  • Poems, essays, and even books — is there anything the OpenAI platform ChatGPT can’t handle? These new AI developments have inspired researchers to dig a little deeper: For instance, can ChatGPT also design a robot? And is this a good thing for the design process, or are there risks?
  • A new, AI-based technique for measuring fluid flow in the brain could lead to treatments for diseases such as Alzheimer’s.
  • Researchers have discovered how to design materials that necessarily have a point or line where the material doesn’t deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be used in robotics and mechanical computers, while similar design principles could be used in quantum computers.
  • Scientists have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques. The study offered an overview of a hybrid methodology designed to improve how AI-based machinery senses, interacts with and responds to its environment in real time — as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.
  • By combining inspiration from the digital world of polygon meshing and the biological world of swarm behavior, the Mori3 robot can morph from 2D triangles into almost any 3D object. The research shows the promise of modular robotics for space travel.
  • A team of researchers has developed a new method for controlling lower limb exoskeletons using deep reinforcement learning. The method enables more robust and natural walking control for users of lower limb exoskeletons.
  • Engineers have developed a new model that trains four-legged robots to see more clearly in 3D. The advance enabled a robot to autonomously cross challenging terrain with ease — including stairs, rocky ground and gap-filled paths — while clearing obstacles in its way.
  • Scientists have developed a new AI framework that is better than previous technologies at analyzing and categorizing dialogue between individuals, with the goal of improving team training technologies. The framework will enable training technologies to better understand how well individuals are coordinating with one another and working as part of a team.
  • Researchers recently developed a worm-inspired robot with a body structure based on the oriental paper-folding art of origami. This robotic system is based on actuators that respond to magnetic forces, compressing and bending its body to replicate the movements of worms.
  • MIT engineers envision ways that autonomous vehicles could be deployed with their current shortcomings, without experiencing a dip in safety. Researchers have introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

How can LLMs transform the robotic design process?

by Francesco Stella, Cosimo Della Santina, Josie Hughes in Nature Machine Intelligence

Poems, essays and even books — is there anything the OpenAI platform ChatGPT can’t handle? These new AI developments have inspired researchers at TU Delft and the Swiss technical university EPFL to dig a little deeper: for instance, can ChatGPT also design a robot? And is this a good thing for the design process, or are there risks?

What are the greatest future challenges for humanity? This was the first question that Cosimo Della Santina, assistant professor, and PhD student Francesco Stella, both from TU Delft, and Josie Hughes from EPFL, asked ChatGPT. “We wanted ChatGPT to design not just a robot, but one that is actually useful,” says Della Santina. In the end, they chose food supply as their challenge, and as they chatted with ChatGPT, they came up with the idea of creating a tomato-harvesting robot.

The researchers followed all of ChatGPT’s design decisions. The input proved particularly valuable in the conceptual phase, according to Stella. “ChatGPT extends the designer’s knowledge to other areas of expertise. For example, the chat robot taught us which crop would be most economically valuable to automate.” But ChatGPT also came up with useful suggestions during the implementation phase: “Make the gripper out of silicone or rubber to avoid crushing tomatoes” and “a Dynamixel motor is the best way to drive the robot.” The result of this partnership between humans and AI is a robotic arm that can harvest tomatoes.

The robot tested in the research setup at EPFL.

The researchers found the collaborative design process to be positive and enriching. “However, we did find that our role as engineers shifted towards performing more technical tasks,” says Stella. The researchers explore the varying degrees of cooperation between humans and large language models (LLMs), of which ChatGPT is one. In the most extreme scenario, AI provides all the input to the robot design, and the human blindly follows it. In this case, the LLM acts as the researcher and engineer, while the human acts as the manager, in charge of specifying the design objectives.

Such an extreme scenario is not yet possible with today’s LLMs. And the question is whether it is desirable. “In fact, LLM output can be misleading if it is not verified or validated. AI bots are designed to generate the ‘most probable’ answer to a question, so there is a risk of misinformation and bias in the robotic field,” Della Santina says. Working with LLMs also raises other important issues, such as plagiarism, traceability and intellectual property.

Della Santina, Stella and Hughes will continue to use the tomato-harvesting robot in their research on robotics. They are also continuing their study of LLMs to design new robots. Specifically, they are looking at the autonomy of AIs in designing their own bodies.

“Ultimately an open question for the future of our field is how LLMs can be used to assist robot developers without limiting the creativity and innovation needed for robotics to rise to the challenges of the 21st century,” Stella concludes.

Artificial intelligence velocimetry reveals in vivo flow rates, pressure gradients, and shear stresses in murine perivascular flows

by Kimberly A. S. Boster, Shengze Cai, Antonio Ladrón-de-Guevara, Jiatong Sun, Xiaoning Zheng, Ting Du, John H. Thomas, Maiken Nedergaard, George Em Karniadakis, Douglas H. Kelley in Proceedings of the National Academy of Sciences

A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.

The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries but are difficult to measure in vivo. A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow.

“In this study, we combined some measurements from inside the animal models with a novel AI technique that allowed us to effectively measure things that nobody’s ever been able to measure before,” says Kelley, a faculty member in Rochester’s Department of Mechanical Engineering.

Overview of two-photon imaging experiments and resulting data.

The work builds upon years of experiments led by study coauthor Maiken Nedergaard, the codirector of Rochester’s Center for Translational Neuromedicine. The group has previously been able to conduct two-dimensional studies on the fluid flow in perivascular spaces by injecting tiny particles into the fluid and measuring their position and velocity over time. But scientists needed more complex measurements to understand the full intricacy of the system — and exploring such a vital, fluid system is a challenge.

To address that challenge, the team collaborated with George Karniadakis from Brown University to leverage artificial intelligence. They integrated the existing 2D data with physics-informed neural networks to create unprecedented high-resolution looks at the system.
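The team’s AI velocimetry builds on physics-informed neural networks, which fit a network to sparse measurements while penalizing violations of the governing flow equations. The sketch below is a minimal illustration of that idea in PyTorch, assuming a hypothetical 2D incompressible flow and made-up particle-tracking tensors; it is not the pipeline used in the paper.

```python
# Minimal physics-informed neural network (PINN) sketch: fit sparse 2D particle-
# tracking data while softly enforcing incompressibility (du/dx + dv/dy = 0).
# Illustrative toy only, not the velocimetry pipeline from the paper.
import torch
import torch.nn as nn

class FlowNet(nn.Module):
    """Maps space-time coordinates (x, y, t) to velocity (u, v) and pressure p."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 3),  # outputs: u, v, p
        )

    def forward(self, xyt):
        return self.net(xyt)

def physics_residual(model, xyt):
    """Continuity-equation residual du/dx + dv/dy at collocation points."""
    xyt = xyt.clone().requires_grad_(True)
    u, v, _ = model(xyt).unbind(dim=1)
    du = torch.autograd.grad(u.sum(), xyt, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), xyt, create_graph=True)[0]
    return du[:, 0] + dv[:, 1]  # du/dx + dv/dy

# Hypothetical data: coordinates of tracked particles and their measured velocities.
obs_xyt = torch.rand(256, 3)
obs_uv = torch.rand(256, 2)
colloc_xyt = torch.rand(1024, 3)  # points where only the physics term is enforced

model = FlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    pred_uv = model(obs_xyt)[:, :2]
    data_loss = ((pred_uv - obs_uv) ** 2).mean()
    phys_loss = (physics_residual(model, colloc_xyt) ** 2).mean()
    loss = data_loss + 0.1 * phys_loss  # the weighting is a tunable assumption
    loss.backward()
    opt.step()
```

In the actual study, a richer physics term (such as the Navier-Stokes equations) and real tracer-particle measurements from two-photon imaging would take the place of the toy pieces here.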

“This is a way to reveal pressures, forces, and the three-dimensional flow rate with much more accuracy than we can otherwise do,” says Kelley. “The pressure is important because nobody knows for sure quite what pumping mechanism drives all these flows around the brain yet. This is a new field.”

Non-orientable order and non-commutative response in frustrated metamaterials

by Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo, Corentin Coulais in Nature

Researchers from the UvA Institute of Physics and ENS de Lyon have discovered how to design materials that necessarily have a point or line where the material doesn’t deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be used in robotics and mechanical computers, while similar design principles could be used in quantum computers.

The outcome is a breakthrough in the field of metamaterials: designer materials whose responses are determined by their structure rather than their chemical composition. To construct a metamaterial with mechanical memory, physicists Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo and Corentin Coulais realised that its design needs to be ‘frustrated’, and that this frustration corresponds to a new type of order, which they call non-orientable order.

A simple example of a non-orientable object is a Möbius strip, made by taking a strip of material, adding half a twist to it and then gluing its ends together. You can try this at home with a strip of paper. Following the surface of a Möbius strip with your finger, you’ll find that when you get back to your starting point, your finger will be on the other side of the paper. A Möbius strip is non-orientable because there is no way to label the two sides of the strip in a consistent manner; the twist makes the entire surface one and the same. This is in contrast to a simple cylinder (a strip without any twists whose ends are glued together), which has a distinct inner and outer surface.

Guo and her colleagues realised that this non-orientability strongly affects how an object or metamaterial responds to being pushed or squeezed. If you place a simple cylinder and a Möbius strip on a flat surface and press down on them from above, you’ll find that the sides of the cylinder will all bulge out (or in), while the sides of the Möbius strip cannot do the same. Instead, the non-orientability of the latter ensures that there is always a point along the strip where it does not deform under pressure. Excitingly, this behaviour extends far beyond Möbius strips.

Orientable vs non-orientable order parameter bundles.

‘We discovered that the behaviour of non-orientable objects such as Möbius strips allows us to describe any material that is globally frustrated. These materials naturally want to be ordered, but something in their structure forbids the order to span the whole system and forces the ordered pattern to vanish at one point or line in space. There is no way to get rid of that vanishing point without cutting the structure, so it has to be there no matter what,’ explains Coulais, who leads the Machine Materials Laboratory at the University of Amsterdam.

The research team designed and 3D-printed their own mechanical metamaterial structures which exhibit the same frustrated and non-orientable behaviour as Möbius strips. Their designs are based on rings of squares connected by hinges at their corners. When these rings are squeezed, neighbouring squares will rotate in opposite directions so that their edges move closer together. The opposite rotation of neighbours makes the system’s response analogous to the anti-ferromagnetic ordering that occurs in certain magnetic materials.

Rings composed of an odd number of squares are frustrated, because there is no way for all neighbouring squares to rotate in opposite directions. Squeezed odd-numbered rings therefore exhibit non-orientable order, in which the rotation angle at one point along the ring must go to zero. Being a feature of the overall shape of the material makes this a robust topological property. By connecting multiple metarings together, it is even possible to emulate the mechanics of higher-dimensional topological structures such as the Klein bottle.
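A quick way to see the parity argument is to try to assign alternating rotation directions around a ring of hinged squares. The toy script below is a deliberate abstraction, not the authors’ mechanical model: it counts hinges where neighbouring squares are forced to agree. Even rings admit a consistent pattern, while odd rings always leave at least one frustrated hinge, which is where the deformation must vanish.

```python
# Toy illustration of geometric frustration on a ring of hinged squares:
# each square should rotate opposite to its neighbours (+1 vs -1).
# On an odd ring no consistent assignment exists, mirroring the point of
# vanishing rotation described above. Purely illustrative abstraction.

def frustrated_hinges(n_squares):
    """Assign alternating rotations around a ring and count violated hinges."""
    rotations = [(-1) ** i for i in range(n_squares)]
    return sum(
        1 for i in range(n_squares)
        if rotations[i] == rotations[(i + 1) % n_squares]  # neighbours agree -> frustrated hinge
    )

for n in (4, 5, 6, 7):
    print(f"ring of {n} squares: {frustrated_hinges(n)} frustrated hinge(s)")
# Even rings report 0; odd rings always report at least 1, so somewhere the
# rotation amplitude has to go to zero: the non-orientable 'vanishing point'.
```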

Having an enforced point or line of zero deformation is key to endowing materials with mechanical memory. Instead of squeezing a metamaterial ring from all sides, you can press the ring at distinct points. Doing so, the order in which you press different points determines where the zero deformation point or line ends up. This is a form of storing information. It can even be used to execute certain types of logic gates, the basis of any computer algorithm. A simple metamaterial ring can thus function as a mechanical computer. Beyond mechanics, the results of the study suggest that non-orientability could be a robust design principle for metamaterials that can effectively store information across scales, in fields as diverse as colloidal science, photonics, magnetism, and atomic physics. It could even be useful for new types of quantum computers.

Coulais concludes: ‘Next, we want to exploit the robustness of the vanishing deformations for robotics. We believe the vanishing deformations could be used to create robotic arms and wheels with predictable bending and locomotion mechanisms.’

Incorporating physics into data-driven computer vision

by Achuta Kadambi, Celso de Melo, Cho-Jui Hsieh, Mani Srivastava, Stefano Soatto in Nature Machine Intelligence

Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.

The study offered an overview of a hybrid methodology designed to improve how AI-based machinery senses, interacts with and responds to its environment in real time — as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.

Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly focused on data-based machine learning to drive performance. Physics-based research has, on a separate track, been developed to explore the various physical principles behind many computer vision challenges.

Achuta Kadambi/UCLA. Graphic showing two techniques to incorporate physics into machine learning pipelines — residual physics (top) and physical fusion (bottom)

It has been a challenge to incorporate an understanding of physics — the laws that govern mass, motion and more — into the development of neural networks, in which AIs modeled after the human brain use billions of nodes to crunch massive image data sets until they gain an understanding of what they “see.” But there are now a few promising lines of research that seek to add elements of physics awareness to already robust data-driven networks. The UCLA study aims to harness both the deep knowledge drawn from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.

“Visual machines — cars, robots, or health instruments that use images to perceive the world — are ultimately doing tasks in our physical world,” said the study’s corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise.”

The research team outlined three ways in which physics and data are starting to be combined into computer vision artificial intelligence:

  • Incorporating physics into AI data sets: tag objects with additional information, such as how fast they can move or how much they weigh, similar to characters in video games.
  • Incorporating physics into network architectures: run data through a network filter that codes physical properties into what cameras pick up (a hedged sketch of one such physics-plus-learning combination follows this list).
  • Incorporating physics into network loss functions: leverage knowledge built on physics to help the AI interpret training data in terms of what it observes.
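The paper surveys several ways of wiring physics into such pipelines, including the “residual physics” and “physical fusion” patterns shown in the figure above. The sketch below is a hedged PyTorch illustration of the residual-physics idea only, using a made-up constant-acceleration example: an analytical model supplies the baseline prediction and a small network learns whatever the physics leaves unexplained.

```python
# Sketch of a 'residual physics' pipeline: a known physical model gives a
# baseline prediction and a neural network learns only the leftover error.
# Toy example (constant-acceleration motion), not an architecture from the paper.
import torch
import torch.nn as nn

def physics_baseline(state, dt=0.1, g=9.81):
    """Ballistic prediction of the next (x, y, vx, vy) state."""
    x, y, vx, vy = state.unbind(dim=1)
    return torch.stack([x + vx * dt, y + vy * dt - 0.5 * g * dt**2,
                        vx, vy - g * dt], dim=1)

residual_net = nn.Sequential(  # learns what the physics model misses (e.g. drag)
    nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 4)
)

opt = torch.optim.Adam(residual_net.parameters(), lr=1e-3)
states = torch.rand(512, 4)                              # hypothetical observed states
next_states = physics_baseline(states) - 0.01 * states   # pretend ground truth with an unmodelled effect

for step in range(500):
    opt.zero_grad()
    pred = physics_baseline(states) + residual_net(states)  # physics prior + learned residual
    loss = ((pred - next_states) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the network only has to model the residual, it typically needs less data than learning the full mapping from scratch, which is one motivation behind such hybrid approaches.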

These three lines of investigation have already yielded encouraging results in improved computer vision. For example, the hybrid approach allows AI to track and predict an object’s motion more precisely and can produce accurate, high-resolution images from scenes obscured by inclement weather. With continued progress in this dual modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.

Morphological flexibility in robotic systems through physical polygon meshing

by Christoph H. Belke, Kevin Holdcroft, Alexander Sigrist, Jamie Paik in Nature Machine Intelligence

Jamie Paik and her team of researchers at EPFL’s School of Engineering have created an origami-like robot that can change shape, move around and interact with objects and people.

By combining inspiration from the digital world of polygon meshing and the biological world of swarm behavior, the Mori3 robot can morph from 2D triangles into almost any 3D object. The EPFL research shows the promise of modular robotics for space travel.

“Our aim with Mori3 is to create a modular, origami-like robot that can be assembled and disassembled at will depending on the environment and task at hand,” says Jamie Paik, director of the Reconfigurable Robotics Lab. “Mori3 can change its size, shape and function.”

The individual modules of the Mori3 robot are triangular in shape. The modules easily join together to create polygons of different sizes and configurations in a process known as polygon meshing.
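Polygon meshing, borrowed from computer graphics, describes a surface as flat faces that share vertices and edges. As a loose, purely conceptual illustration (not Mori3’s assembly or control software), the snippet below joins four triangular “modules” into a closed tetrahedron and uses the Euler characteristic to confirm that the triangles enclose a 3D volume rather than lying flat.

```python
# Loose illustration of polygon meshing: four triangular "modules" share
# vertices to form a closed tetrahedron. Conceptual only; this is not the
# Mori3 assembly or control software.
vertices = {
    0: (0.0, 0.0, 0.0),
    1: (1.0, 0.0, 0.0),
    2: (0.5, 0.866, 0.0),
    3: (0.5, 0.289, 0.816),
}
faces = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]  # each tuple is one triangular module

# Unique edges are the vertex pairs shared between neighbouring modules.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}

# Euler characteristic V - E + F = 2 confirms the triangles close into a
# sphere-like 3D shape rather than an open or flat sheet.
V, E, F = len(vertices), len(edges), len(faces)
print(f"V={V}, E={E}, F={F}, Euler characteristic = {V - E + F}")
```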

“We have shown that polygon meshing is a viable robotic strategy,” says Christoph Belke, a postdoctoral researcher in robotics. To achieve this, the team had to push the boundaries of various aspects of robotics, including mechanical and electronic design, computer systems and engineering.

“We had to rethink the way we understand robotics,” explains Belke. “These robots can change their own shape, attach to each other, communicate and reconfigure to form functional and articulated structures.”

This proof of concept is a success as Mori3 robots are good at doing the three things that robots should be able to do: moving around, handling and transporting objects, and interacting with users.

What is the advantage in creating modular and multi-functional robots? Paik explains that, to perform a wide range of tasks, robots need to be able to change their shape or configuration.

“Polygonal and polymorphic robots that connect to one another to create articulated structures can be used effectively for a variety of applications,” she says. “Of course, a general-purpose robot like Mori3 will be less effective than specialized robots in certain areas. That said, Mori3’s biggest selling point is its versatility.”

Mori3 robots were designed in part to be used in spacecraft, which don’t have the room to store different robots for each individual task that needs to be carried out. The researchers hope that Mori3 robots will be used for communication purposes and external repairs.

Robust walking control of a lower limb rehabilitation exoskeleton coupled with a musculoskeletal model via deep reinforcement learning

by Shuzhen Luo, Ghaith Androwis, Sergei Adamovich, Erick Nunez, Hao Su, Xianlian Zhou in Journal of NeuroEngineering and Rehabilitation

A team of researchers has developed a new method for controlling lower limb exoskeletons using deep reinforcement learning. The method enables more robust and natural walking control for users of lower limb exoskeletons. “Robust walking control of a lower limb rehabilitation exoskeleton coupled with a musculoskeletal model via deep reinforcement learning” is available open access.

While advances in wearable robotics have helped restore mobility for people with lower limb impairments, current control methods for exoskeletons are limited in their ability to provide natural and intuitive movements for users. This can compromise balance and contribute to user fatigue and discomfort. Few studies have focused on the development of robust controllers that can optimize the user’s experience in terms of safety and independence.

Existing exoskeletons for lower limb rehabilitation employ a variety of technologies to help the user maintain balance, including special crutches and sensors, according to co-author Ghaith Androwis, PhD, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and director of the Center’s Rehabilitation Robotics and Research Laboratory. Exoskeletons that operate without such helpers allow more independent walking, but at the cost of added weight and slow walking speed.

Overview of the modular, decoupled RL-based walking control framework of the LLRE with human-in-the-loop.

“Advanced control systems are essential to developing a lower limb exoskeleton that enables autonomous, independent walking under a range of conditions,” said Dr. Androwis. The novel method developed by the research team uses deep reinforcement learning to improve exoskeleton control. Reinforcement learning is a type of artificial intelligence that enables machines to learn from their own experiences through trial and error.

“Using a musculoskeletal model coupled with an exoskeleton, we simulated the movements of the lower limb and trained the exoskeleton control system to achieve natural walking patterns using reinforcement learning,” explained corresponding author Xianlian Zhou, PhD, associate professor and director of the BioDynamics Lab in the Department of Biomedical Engineering at New Jersey Institute of Technology (NJIT). “We are testing the system in real-world conditions with a lower limb exoskeleton being developed by our team and the results show the potential for improved walking stability and reduced user fatigue.”
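The published controller trains a deep neural-network policy against a detailed musculoskeletal simulation. As a much smaller stand-in that still shows the trial-and-error structure of reinforcement learning, the sketch below optimizes a linear policy on an invented one-joint gait-tracking task by random-search hill climbing; every quantity in it is an assumption for illustration.

```python
# Toy reinforcement-learning loop for tracking a reference gait trajectory.
# A linear policy is improved by random-search hill climbing; this is a
# conceptual stand-in for the paper's deep-RL controller, not its implementation.
import numpy as np

rng = np.random.default_rng(0)
T = 100
reference = np.sin(np.linspace(0, 2 * np.pi, T))  # hypothetical desired joint angle

def rollout(policy_weights):
    """Simulate one episode: reward is negative tracking error of a 1-DoF joint."""
    angle, velocity, total_reward = 0.0, 0.0, 0.0
    for t in range(T):
        state = np.array([angle, velocity, reference[t]])
        torque = float(np.clip(policy_weights @ state, -1.0, 1.0))
        velocity += 0.1 * torque          # crude toy dynamics
        angle += 0.1 * velocity
        total_reward -= (angle - reference[t]) ** 2
    return total_reward

weights = np.zeros(3)
best = rollout(weights)
for step in range(500):                    # trial-and-error "training loop"
    candidate = weights + 0.1 * rng.standard_normal(3)
    score = rollout(candidate)
    if score > best:                       # keep the candidate if it tracks the gait better
        weights, best = candidate, score
print("best episode return:", best)
```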

The team determined that their proposed model generated a universal robust walking controller capable of handling various levels of human-exoskeleton interactions without the need for tuning parameters. The new system has the potential to benefit a wide range of users, including those with spinal cord injuries, multiple sclerosis, stroke, and other neurological conditions. The researchers plan to continue testing the system with users and further refine the control algorithms to improve walking performance.

“We are excited about the potential of this new system to improve the quality of life for people with lower limb impairments,” said Dr. Androwis. “By enabling more natural and intuitive walking patterns, we hope to help users of exoskeletons to move with greater ease and confidence.”

Neural Volumetric Memory for Visual Locomotion Control

by Ruihan Yang, Ge Yang, Xiaolong Wang in arXiv

Researchers led by the University of California San Diego have developed a new model that trains four-legged robots to see more clearly in 3D. The advance enabled a robot to autonomously cross challenging terrain with ease — including stairs, rocky ground and gap-filled paths — while clearing obstacles in its way.

“By providing the robot with a better understanding of its surroundings in 3D, it can be deployed in more complex environments in the real world,” said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

The robot is equipped with a forward-facing depth camera on its head. The camera is tilted downwards at an angle that gives it a good view of both the scene in front of it and the terrain beneath it. To improve the robot’s 3D perception, the researchers developed a model that first takes 2D images from the camera and translates them into 3D space. It does this by looking at a short video sequence that consists of the current frame and a few previous frames, then extracting pieces of 3D information from each 2D frame. That includes information about the robot’s leg movements such as joint angle, joint velocity and distance from the ground. The model compares the information from the previous frames with information from the current frame to estimate the 3D transformation between the past and the present.

The model fuses all that information together so that it can use the current frame to synthesize the previous frames. As the robot moves, the model checks the synthesized frames against the frames that the camera has already captured. If they are a good match, then the model knows that it has learned the correct representation of the 3D scene. Otherwise, it makes corrections until it gets it right.
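The paper’s model builds a volumetric 3D memory using the estimated camera transforms between frames; the simplified PyTorch sketch below keeps only the high-level encode, fuse, and synthesize-and-compare pattern described above, with made-up depth-frame shapes and a single illustrative training step.

```python
# Conceptual sketch of a short-term visual memory with a self-supervised
# synthesis check: encode the current and past depth frames, fuse them into a
# single memory feature, and train by reconstructing a past frame from it.
# This mirrors the high-level idea described above, not the paper's architecture.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, frame):                # frame: (B, 1, 64, 64)
        return self.conv(frame)              # -> (B, 32, 16, 16)

class MemoryFusion(nn.Module):
    """Fuses features of the current frame and K past frames into one memory."""
    def __init__(self, k_past=3):
        super().__init__()
        self.fuse = nn.Conv2d(32 * (k_past + 1), 32, 1)
        self.decode = nn.Sequential(          # synthesizes a past frame from memory
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, feats):                 # feats: list of (B, 32, 16, 16)
        memory = self.fuse(torch.cat(feats, dim=1))
        return memory, self.decode(memory)

encoder, fusion = FrameEncoder(), MemoryFusion(k_past=3)
opt = torch.optim.Adam(list(encoder.parameters()) + list(fusion.parameters()), lr=1e-3)

frames = torch.rand(8, 4, 1, 64, 64)          # hypothetical batch: current + 3 past depth frames
feats = [encoder(frames[:, i]) for i in range(4)]
memory, synthesized_past = fusion(feats)

# One illustrative training step: the synthesized frame is checked against a
# frame the camera actually captured, as described in the text above.
opt.zero_grad()
loss = ((synthesized_past - frames[:, 1]) ** 2).mean()
loss.backward()
opt.step()
```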

The 3D representation is used to control the robot’s movement. By synthesizing visual information from the past, the robot is able to remember what it has seen, as well as the actions its legs have taken before, and use that memory to inform its next moves.

“Our approach allows the robot to build a short-term memory of its 3D surroundings so that it can act better,” said Wang.

The new study builds on the team’s previous work, where researchers developed algorithms that combine computer vision with proprioception — which involves the sense of movement, direction, speed, location and touch — to enable a four-legged robot to walk and run on uneven ground while avoiding obstacles. The advance here is that by improving the robot’s 3D perception (and combining it with proprioception), the researchers show that the robot can traverse more challenging terrain than before.

“What’s exciting is that we have developed a single model that can handle different kinds of challenging environments,” said Wang. “That’s because we have created a better understanding of the 3D surroundings that makes the robot more versatile across different scenarios.”

The approach has its limitations, however. Wang notes that their current model does not guide the robot to a specific goal or destination. When deployed, the robot simply takes a straight path and if it sees an obstacle, it avoids it by walking away via another straight path.

“The robot does not control exactly where it goes,” he said. “In future work, we would like to include more planning techniques and complete the navigation pipeline.”

Robust Team Communication Analytics with Transformer-Based Dialogue Modeling

by Jason Saville, James Lester, Randall Spain in Artificial Intelligence in Education

Researchers have developed a new artificial intelligence (AI) framework that is better than previous technologies at analyzing and categorizing dialogue between individuals, with the goal of improving team training technologies. The framework will enable training technologies to better understand how well individuals are coordinating with one another and working as part of a team.

“There is a great deal of interest in developing AI-powered training technologies that can understand teamwork dynamics and modify their training to foster improved collaboration among team members,” says Wookhee Min, co-author of a paper on the work and a research scientist at North Carolina State University. “However, previous AI architectures have struggled to accurately assess the content of what team members are sharing with each other when they communicate.”

“We’ve developed a new framework that significantly improves the ability of AI to analyze communication between team members,” says Jay Pande, first author of the paper and a Ph.D. student at NC State. “This is a significant step forward for the development of adaptive training technologies that aim to facilitate effective team communication and collaboration.”

The new AI framework builds on a powerful deep learning model that was trained on a large, text-based language dataset. This model, called the Text-to-Text Transfer Transformer (T5), was then customized using data collected during squad-level training exercises conducted by the U.S. Army.

“We modified the T5 model to use contextual features of the team — such as the speaker’s role — to more accurately analyze team communication,” Min says. “That context can be important. For example, something a team leader says may need to be viewed differently than something another team member says.”
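The customized model and the Army training data are not public, but the general recipe of folding context such as speaker role into a text-to-text model is easy to sketch. The snippet below uses the open t5-small checkpoint from Hugging Face Transformers with a hypothetical prompt format and label set; without fine-tuning on labeled team dialogue it will not produce meaningful labels.

```python
# Sketch of framing dialogue-act classification as text-to-text generation with
# T5, prepending the speaker's role as context. The prefix format and label set
# here are hypothetical, not the paper's.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

utterance = "Move to the north side of the building and hold position."
speaker_role = "squad leader"

# Contextual features (here, the speaker's role) are folded into the input text.
prompt = f"classify dialogue act | role: {speaker_role} | utterance: {utterance}"
inputs = tokenizer(prompt, return_tensors="pt")

# After fine-tuning on labeled team communication, the model would generate a
# label string such as "command", "request info", or "provide info".
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```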

To test the performance of the new framework, the researchers compared it to two previous AI technologies. Specifically, the researchers tested the ability of all three AI technologies to understand the dialogue within a squad of six soldiers during a training exercise.

The AI framework was tasked with two things: classify what sort of dialogue was taking place, and follow the flow of information within the squad. Classifying the dialogue refers to determining the purpose of what was being said. For example, was someone requesting information, providing information, or issuing a command? Following the flow of information refers to how information was being shared within the team. For example, was information being passed up or down the chain of command?

“We found that the new framework performed substantially better than the previous AI technologies,” Pande says.

“One of the things that was particularly promising was that we trained our framework using data from one training mission, but tested the model’s performance using data from a different training mission,” Min says. “And the boost in performance over the previous AI models was notable — even though we were testing the model in a new set of circumstances.”

The researchers also note that they were able to achieve these results using a relatively small version of the T5 model. That’s important, because it means that they can get analysis in fractions of a second without a supercomputer.

A worm-inspired robot based on origami structures driven by the magnetic field

by Yuchen Jin et al. in Bioinspiration & Biomimetics

Bio-inspired robots, robotic systems that emulate the appearance, movements, and/or functions of specific biological systems, could help to tackle real-world problems more efficiently and reliably. Over the past two decades, roboticists have introduced a growing number of these robots, some of which draw inspiration from fruit flies, worms, and other small organisms.

Researchers at China University of Petroleum (East China) recently developed a worm-inspired robot with a body structure that is based on the oriental paper-folding art of origami. This robotic system is based on actuators that respond to magnetic forces, compressing and bending its body to replicate the movements of worms.

“Soft robotics is a promising field that our research group has been paying a lot of attention to,” Jianlin Liu, one of the researchers who developed the robot, told Tech Xplore. “While reviewing the existing research literature in the field, we found that bionic robots, such as worm-inspired robots, were a topic worth exploring. We thus set out to fabricate a worm-like origami robot based on the existing literature. After designing and reviewing several different structures, we chose to focus on a specific knitting pattern for our robot.”

The worm-inspired robot created by Liu and his colleagues consists of an origami-based backbone, 24 magnetic sheets inside its body, and two NdFeB magnets in the external part of its body. The robot’s backbone was created following a paper-knitting origami pattern.

Schematic diagram of the worm-inspired robot. Credit: Jin et al

When exposed to magnetic forces, the robot’s body deforms and compresses, resulting in locomotion patterns that resemble those of earthworms and other crawling, worm-like organisms. As it is primarily based on paper and magnets, the system is low-cost, easy to fabricate and weighs very little.

“The origami structure proposed in this article reproduces both the appearance and the structure of worms and other worm-like creatures,” Liu explained. “Several robots introduced in recent years are based on magnetic actuation, and these robots can be valuable for different applications, for instance for cleaning pipelines and other constricted environments. Our work could greatly enrich the origami robot field and inspire the development of new advanced equipment.”

In initial simulations, the researchers used their robot to produce three different types of motion, which they dubbed the inchworm, Omega and hybrid motions. These different locomotion styles could allow a more advanced version of their robot to effectively tackle different types of tasks, for instance avoiding obstacles, climbing walls, crawling inside pipes, or delivering small parcels.

Compression and tensile tests conducted by the researchers. Credit: Jin et al

In the future, Liu and his colleagues plan to further improve their robot’s design and create more advanced bio-inspired robotic systems. In addition, they hope that their work will inspire other research teams to create similar worm-inspired robots that could help to solve a wide range of real-world problems more efficiently.

“The development of origami robots based on bionic worms is a promising research topic and I plan to explore it further in my future work,” Liu added. “For example, I would like to design more origami prototype robots with features that are specifically inspired by earthworms or other worms, which could be valuable for specific applications. Finally, focusing on other actuation methods could also potentially enrich our research.”

Cooperation for Scalable Supervision of Autonomy in Mixed Traffic

by Cameron Hickert et al. in IEEE Transactions on Robotics

When we think of getting on the road in our cars, our first thoughts may not be that fellow drivers are particularly safe or careful — but human drivers are more reliable than one may expect. For each fatal car crash in the United States, motor vehicles log a whopping hundred million miles on the road.

Human reliability also plays a role in how autonomous vehicles are integrated in the traffic system, especially around safety considerations. Human drivers continue to surpass autonomous vehicles in their ability to make quick decisions and perceive complex environments: Autonomous vehicles are known to struggle with seemingly common tasks, such as taking on- or off-ramps, or turning left in the face of oncoming traffic. Despite these enormous challenges, embracing autonomous vehicles in the future could yield great benefits, like clearing congested highways; enhancing freedom and mobility for non-drivers; and boosting driving efficiency, an important piece in fighting climate change.

MIT engineer Cathy Wu envisions ways that autonomous vehicles could be deployed with their current shortcomings, without experiencing a dip in safety. “I started thinking more about the bottlenecks. It’s very clear that the main barrier to deployment of autonomous vehicles is safety and reliability,” Wu says.

One path forward may be to introduce a hybrid system, in which autonomous vehicles handle easier scenarios on their own, like cruising on the highway, while transferring more complicated maneuvers to remote human operators. Wu, who is a member of the Laboratory for Information and Decision Systems (LIDS), a Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering (CEE) and a member of the MIT Institute for Data, Systems, and Society (IDSS), likens this approach to air traffic controllers on the ground directing commercial aircraft. In a paper, Wu and co-authors Cameron Hickert and Sirui Li (both graduate students at LIDS) introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety. They noted that if autonomous vehicles were able to coordinate with each other on the road, they could reduce the number of moments in which humans needed to intervene.

For the project, Wu, Hickert, and Li sought to tackle a maneuver that autonomous vehicles often struggle to complete. They decided to focus on merging, specifically when vehicles use an on-ramp to enter a highway. In real life, merging cars must accelerate or slow down in order to avoid crashing into cars already on the road. In this scenario, if an autonomous vehicle was about to merge into traffic, remote human supervisors could momentarily take control of the vehicle to ensure a safe merge.

In order to evaluate the efficiency of such a system, particularly while guaranteeing safety, the team specified the maximum amount of time each human supervisor would be expected to spend on a single merge. They were interested in understanding whether a small number of remote human supervisors could successfully manage a larger group of autonomous vehicles, and the extent to which this human-to-car ratio could be improved while still safely covering every merge. With more autonomous vehicles in use, one might assume a need for more remote supervisors. But in scenarios where autonomous vehicles coordinated with each other, the team found that cars could significantly reduce the number of times humans needed to step in. For example, a coordinating autonomous vehicle already on a highway could adjust its speed to make room for a merging car, eliminating a risky merging situation altogether.

The team substantiated the potential to safely scale remote supervision in two theorems. First, using a mathematical framework known as queuing theory, the researchers formulated an expression to capture the probability of a given number of supervisors failing to handle all merges pooled together from multiple cars. This way, the researchers were able to assess how many remote supervisors would be needed in order to cover every potential merge conflict, depending on the number of autonomous vehicles in use. The researchers derived a second theorem to quantify the influence of cooperative autonomous vehicles on surrounding traffic for boosting reliability, to assist cars attempting to merge.
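The paper’s theorems are not reproduced here, but the flavour of the calculation can be shown with a classical queueing formula. The sketch below uses the Erlang-B recursion to estimate the probability that every remote supervisor is busy at the instant a merge request arrives, under assumed arrival rates, handling times and fleet size.

```python
# Back-of-the-envelope queueing sketch: probability that all remote supervisors
# are busy when a merge request arrives, via the classical Erlang-B recursion.
# The arrival rate, handling time, and fleet size are illustrative assumptions,
# not numbers from the paper.

def erlang_b(servers, offered_load):
    """Blocking probability for an M/M/c/c system using the standard recursion."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

n_vehicles = 470            # hypothetical fleet supervised by the pool
merges_per_hour = 6         # assumed merge requests per vehicle per hour
handling_seconds = 10       # assumed supervisor time per merge

arrival_rate = n_vehicles * merges_per_hour / 3600.0    # requests per second
offered_load = arrival_rate * handling_seconds           # Erlang load (dimensionless)

for supervisors in (5, 10, 20):
    p_all_busy = erlang_b(supervisors, offered_load)
    print(f"{supervisors} supervisors: P(all busy) ~ {p_all_busy:.6f}")
```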

When the team modeled a scenario in which 30% of cars on the road were cooperative autonomous vehicles, they estimated that a ratio of one human supervisor to every 47 autonomous vehicles could cover 99.9999% of merging cases. But this level of coverage drops below 99%, an unacceptable range, in scenarios where the autonomous vehicles do not cooperate with each other.

“If vehicles were to coordinate and basically prevent the need for supervision, that’s actually the best way to improve reliability,” Wu says.

The team decided to focus on merging not only because it’s a challenge for autonomous vehicles, but also because it’s a well-defined task associated with a less-daunting scenario: driving on the highway. About half of the total miles traveled in the United States occur on interstates and other freeways. Since highways allow higher speeds than city roads, Wu says, “If you can fully automate highway driving … you give people back about a third of their driving time.”

If it became feasible for autonomous vehicles to cruise unsupervised for most highway driving, the challenge of safely navigating complex or unexpected moments would remain. For instance, “you [would] need to be able to handle the start and end of the highway driving,” Wu says. You would also need to be able to manage times when passengers zone out or fall asleep, making them unable to quickly take over controls should it be needed. But if remote human supervisors could guide autonomous vehicles at key moments, passengers may never have to touch the wheel. Besides merging, other challenging situations on the highway include changing lanes and overtaking slower cars on the road.

Although remote supervision and coordinated autonomous vehicles are hypotheticals for high-speed operations, and not currently in use, Wu hopes that thinking about these topics can encourage growth in the field.

“This gives us some more confidence that the autonomous driving experience can happen,” Wu says. “I think we need to be more creative about what we mean by ‘autonomous vehicles.’ We want to give people back their time — safely. We want the benefits, we don’t strictly want something that drives autonomously.”

Upcoming events

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
