RT/ A worm-inspired robot based on an origami structure and magnetic actuators


Robotics biweekly vol.76, 26th May — 15th June

TL;DR

  • Researchers recently developed a worm-inspired robot with a body structure that is based on the oriental paper-folding art of origami. This robotic system is based on actuators that respond to magnetic forces, compressing and bending its body to replicate the movements of worms.
  • As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.
  • A robotic bee that can fly fully in all directions has been developed. With four wings made out of carbon fiber and mylar as well as four lightweight actuators to control each wing, the Bee++ prototype is the first to fly stably in all directions. That includes the tricky twisting motion known as yaw, with the Bee++ fully achieving the six degrees of freedom of movement that a typical flying insect displays.
  • FluidLab, a new simulation tool from researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), enhances robot learning for complex fluid manipulation tasks like making latte art, ice cream, and even manipulating air. The virtual environment offers a versatile collection of intricate fluid handling challenges, involving both solids and liquids, and multiple fluids simultaneously.
  • MIT engineers envision ways that autonomous vehicles could be deployed with their current shortcomings, without experiencing a dip in safety. Researchers have introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety.
  • A research team at Stanford University recently introduced NerfBridge, a new open-source software package for training NeRF algorithms that could ultimately enable their use in online robotics experiments.
  • The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team, is a machine-knitted textile “skin” that can sense contact and pressure.
  • Researchers have established a new approach for additively manufacturing soft robotics, using a 3D knitting method that can holistically “print” entire soft robots.
  • Researchers at Stanford University have developed digital skin that can convert sensations such as heat and pressure to electrical signals that can be read by electrodes implanted in the human brain.
  • Scientists set out to understand whether robots using a voice designed to sound charismatic would be more successful as team creativity facilitators.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
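
As a quick sanity check on the arithmetic, a 26 percent CAGR compounds as size_t = size_0 * (1 + CAGR)^t. The sketch below back-calculates an implied 2018 base value from the 2025 forecast purely for illustration; the base year and base value are assumptions, not figures read from the chart.

```python
# Worked compounding arithmetic for the quoted forecast. The 2018 base value is
# back-calculated for illustration only, not taken from the Statista chart.
cagr = 0.26
target_2025 = 210.0                      # billion USD, from the forecast
years = 2025 - 2018
implied_2018 = target_2025 / (1 + cagr) ** years
print(f"implied 2018 market size: ~{implied_2018:.0f}B USD")
for year in range(2018, 2026):
    size = implied_2018 * (1 + cagr) ** (year - 2018)
    print(year, f"{size:.0f}B")
```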

Latest News & Research

A worm-inspired robot based on origami structures driven by the magnetic field

by Yuchen Jin et al in Bioinspiration & Biomimetics

Bio-inspired robots, robotic systems that emulate the appearance, movements, and/or functions of specific biological systems, could help to tackle real-world problems more efficiently and reliably. Over the past two decades, roboticists have introduced a growing number of these robots, some of which draw inspiration from fruit flies, worms, and other small organisms.

Researchers at China University of Petroleum (East China) recently developed a worm-inspired robot with a body structure that is based on the oriental paper-folding art of origami. This robotic system is based on actuators that respond to magnetic forces, compressing and bending its body to replicate the movements of worms.

“Soft robotics is a promising field that our research group has been paying a lot of attention to,” Jianlin Liu, one of the researchers who developed the robot, told Tech Xplore. “While reviewing the existing research literature in the field, we found that bionic robots, such as worm-inspired robots, were a topic worth exploring. We thus set out to fabricate a worm-like origami robot based on the existing literature. After designing and reviewing several different structures, we chose to focus on a specific knitting pattern for our robot.”

The worm-inspired robot created by Liu and his colleagues consists of an origami-based backbone, 24 magnetic sheets inside its body, and two NdFeB magnets in the external part of its body. The robot’s backbone was created following a paper-knitting origami pattern.

Schematic diagram of the worm-inspired robot. Credit: Jin et al

When exposed to magnetic forces, the robot’s body deforms and compresses, resulting in locomotion patterns that resemble those of earthworms and other crawling, worm-like organisms. As it is primarily based on paper and magnets, the system is low-cost, easy to fabricate and weighs very little.

“The origami structure proposed in this article reproduces both the appearance and the structure of worms and other worm-like creatures,” Liu explained. “Several robots introduced in recent years are based on magnetic actuation, and these robots can be valuable for different applications, for instance for cleaning pipelines and other constricted environments. Our work could greatly enrich the origami robot field and inspire the development of new advanced equipment.”

In initial simulations, the researchers used their robot to produce three different types of motion, which they dubbed the inchworm, Omega and hybrid motions. These different locomotion styles could allow a more advanced version of their robot to effectively tackle different types of tasks, for instance avoiding obstacles, climbing walls, crawling inside pipes, or delivering small parcels.
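
To make the inchworm-style motion concrete, here is a toy kinematic sketch, an illustration under simplifying assumptions rather than the authors' model: the body alternately anchors one end and then extends or contracts, so each compress-and-extend cycle advances the robot by one stroke length.

```python
# Toy model of an inchworm gait cycle (illustrative only, not the paper's
# simulation): anchor the tail and extend forward, then anchor the head and
# pull the tail up, gaining one stroke length per full cycle.
def inchworm(n_cycles: int, body_len: float = 1.0, stroke: float = 0.3):
    tail, head = 0.0, body_len
    trace = [(tail, head)]
    for _ in range(n_cycles):
        head += stroke   # tail anchored, body extends: head advances
        trace.append((tail, head))
        tail += stroke   # head anchored, body contracts: tail catches up
        trace.append((tail, head))
    return trace

for tail, head in inchworm(3):
    print(f"tail={tail:.1f}  head={head:.1f}")
```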

Compression and tensile tests conducted by the researchers. Credit: Jin et al

In the future, Liu and his colleagues plan to further improve their robot’s design and create more advanced bio-inspired robotic systems. In addition, they hope that their work will inspire other research teams to create similar worm-inspired robots that could help to solve a wide range of real-world problems more efficiently.

“The development of origami robots based on bionic worms is a promising research topic and I plan to explore it further in my future work,” Liu added. “For example, I would like to design more origami prototype robots with features that are specifically inspired by earthworms or other worms, which could be valuable for specific applications. Finally, focusing on other actuation methods could also potentially enrich our research.”

Should Robots Have Rights or Rites?

by Tae Wan Kim, Alan Strudler in Communications of the ACM

Philosophers and legal scholars have explored significant aspects of the moral and legal status of robots, with some advocating for giving robots rights. As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.

“People are worried about the risks of granting rights to robots,” notes Tae Wan Kim, Associate Professor of Business Ethics at CMU’s Tepper School of Business, who conducted the analysis. “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers — not rights bearers — could work better.”

Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self. This, in turn, requires a unique perspective on rites, with people enhancing themselves morally by participating in proper rituals.

When considering robots, Kim suggests that the Confucian alternative of assigning rites — or what he calls role obligations — to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning.

“Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously,” explains Kim. “Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans’ capacity to recognize and execute team activities — and a machine can learn that ability in various ways.”

Kim acknowledges that some will question why robots should be treated respectfully in the first place. “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves,” he suggests.

Various non-natural entities — such as corporations — are treated as legal persons and even hold some constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments.

High-Performance Six-DOF Flight Control of the Bee++: An Inclined-Stroke-Plane Approach

by Ryan M. Bena, Xiufeng Yang, Ariel A. Calderón, Néstor O. Pérez-Arancibia in IEEE Transactions on Robotics

A robotic bee that can fly fully in all directions has been developed by Washington State University researchers.

With four wings made out of carbon fiber and mylar as well as four lightweight actuators to control each wing, the Bee++ prototype is the first to fly stably in all directions. That includes the tricky twisting motion known as yaw, with the Bee++ fully achieving the six degrees of freedom of movement that a typical flying insect displays. The researchers, led by Néstor O. Pérez-Arancibia, Flaherty associate professor in WSU’s School of Mechanical and Materials Engineering, report on their work in IEEE Transactions on Robotics, and Pérez-Arancibia will present the results at the IEEE International Conference on Robotics and Automation at the end of this month.

Researchers have been trying to develop artificial flying insects for more than 30 years, said Pérez-Arancibia. They could someday be used for many applications, including artificial pollination, search and rescue efforts in tight spaces, biological research, and environmental monitoring, including in hostile environments. But just getting the tiny robots to take off and land required developing controllers that act the way an insect brain does.

“It’s a mixture of robotic design and control,” he said. “Control is highly mathematical, and you design a sort of artificial brain. Some people call it the hidden technology, but without those simple brains, nothing would work.”

Researchers initially developed a two-winged robotic bee, but it was limited in its movement. In 2019, Pérez-Arancibia and two of his PhD students built, for the first time, a four-winged robot light enough to take off. To pitch, the researchers make the front wings flap differently from the back wings; to roll, they make the right wings flap differently from the left wings. This creates torque that rotates the robot about its two main horizontal axes. But being able to control the complex yaw motion is tremendously important, he said. Without it, robots spin out of control, unable to focus on a point, and then they crash.

“If you can’t control yaw, you’re super limited,” he said. “If you’re a bee, here is the flower, but if you can’t control the yaw, you are spinning all the time as you try to get there.”

Having all degrees of movement is also critically important for evasive maneuvers or tracking objects.

“The system is highly unstable, and the problem is super hard,” he said. “For many years, people had theoretical ideas about how to control yaw, but nobody could achieve it due to actuation limitations.”

To allow their robot to twist in a controlled manner, the researchers took a cue from insects and moved the wings so that they flap in an angled plane. They also increased the amount of times per second their robot can flap its wings — from 100 to 160 times per second.
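
As a rough illustration of why the inclined stroke plane matters, the back-of-the-envelope sketch below assumes a simplified geometry (the tilt angle, force values, lever arm, and sign convention are illustrative, not from the paper): tilting each wing's stroke plane gives its mean aerodynamic force a component in the yaw plane, and flapping opposite wings asymmetrically turns those components into a net, controllable yaw torque.

```python
# Back-of-the-envelope yaw-torque sketch (illustrative numbers, not the
# paper's controller or aerodynamic model).
import math

def yaw_torque(wing_forces_n, tilt_rad, arm_m):
    """Net yaw torque (N*m) from four wings whose stroke planes are tilted in
    alternating senses, so symmetric flapping cancels and asymmetric flapping
    leaves a controllable residual about the vertical axis."""
    torque = 0.0
    for i, f in enumerate(wing_forces_n):
        tangential = f * math.sin(tilt_rad)      # force component in the yaw plane
        sense = 1.0 if i % 2 == 0 else -1.0      # alternating tilt direction
        torque += sense * tangential * arm_m
    return torque

# Equal wing forces: contributions cancel, no net yaw.
print(yaw_torque([1.0e-3] * 4, math.radians(10), 0.015))
# Slightly stronger flapping on two wings: a small, controllable yaw torque.
print(yaw_torque([1.1e-3, 1.0e-3, 1.1e-3, 1.0e-3], math.radians(10), 0.015))
```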

“Part of the solution was the physical design of the robot, and we also invented a new design for the controller — the brain that tells the robot what to do,” he said.

Weighing in at 95 mg with a 33-millimeter wingspan, the Bee++ is still bigger than real bees, which weigh around 10 milligrams. Unlike real insects, it can only fly autonomously for about five minutes at a time, so it is mostly tethered to a power source through a cable. The researchers are also working to develop other types of insect robots, including crawlers and water striders.

FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation

by Zhou Xian, Bo Zhu, Zhenjia Xu, Hsiao-Yu Tung, Antonio Torralba, Katerina Fragkiadaki, Chuang Gan in ICLR 2023 Conference

Imagine you’re enjoying a picnic by a riverbank on a windy day. A gust of wind catches your paper napkin, and it lands on the water’s surface, quickly drifting away from you. You grab a nearby stick and carefully agitate the water to retrieve it, creating a series of small waves. These waves eventually push the napkin back toward the shore, so you grab it. In this scenario, the water acts as a medium for transmitting forces, enabling you to manipulate the position of the napkin without direct contact.

Humans regularly engage with various types of fluids in their daily lives, but doing so has been a formidable and elusive goal for current robotic systems. Hand you a latte? A robot can do that. Make it? That’s going to require a bit more nuance.

FluidLab, a new simulation tool from researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), enhances robot learning for complex fluid manipulation tasks like making latte art, ice cream, and even manipulating air. The virtual environment offers a versatile collection of intricate fluid handling challenges, involving both solids and liquids, and multiple fluids simultaneously. FluidLab supports modeling solids, liquids, and gases, including elastic, plastic, and rigid objects, Newtonian and non-Newtonian liquids, and smoke and air.

At the heart of FluidLab lies FluidEngine, an easy-to-use physics simulator capable of seamlessly calculating and simulating various materials and their interactions, all while harnessing the power of graphics processing units (GPUs) for faster processing. The engine is differentiable, meaning the simulator can incorporate physics knowledge for a more realistic physical world model, leading to more efficient learning and planning for robotic tasks.

In contrast, most existing reinforcement learning methods lack such a world model and depend instead on trial and error. This enhanced capability, say the researchers, lets users experiment with robot learning algorithms and probe the boundaries of current robotic manipulation abilities.

Fluid manipulation tasks proposed in FluidLab.

To set the stage, the researchers tested robot learning algorithms in FluidLab, discovering and overcoming unique challenges in fluid systems. By developing clever optimization methods, they were able to transfer what was learned in simulation to real-world scenarios effectively.

“Imagine a future where a household robot effortlessly assists you with daily tasks, like making coffee, preparing breakfast, or cooking dinner. These tasks involve numerous fluid manipulation challenges. Our benchmark is a first step towards enabling robots to master these skills, benefiting households and workplaces alike,” says Chuang Gan, a visiting researcher at MIT CSAIL and research scientist at the MIT-IBM Watson AI Lab, and the senior author on a new paper about the research.

“For instance, these robots could reduce wait times and enhance customer experiences in busy coffee shops. FluidEngine is, to our knowledge, the first-of-its-kind physics engine that supports a wide range of materials and couplings while being fully differentiable. With our standardized fluid manipulation tasks, researchers can evaluate robot learning algorithms and push the boundaries of today’s robotic manipulation capabilities.”

Over the past few decades, scientists in the robotic manipulation domain have mainly focused on manipulating rigid objects, or on very simplistic fluid manipulation tasks like pouring water. Studying these manipulation tasks involving fluids in the real world can also be an unsafe and costly endeavor. With fluid manipulation, it’s not always just about fluids, though. In many tasks, such as creating the perfect ice cream swirl, mixing solids into liquids, or paddling through the water to move objects, it’s a dance of interactions between fluids and various other materials.

Simulation environments must support “coupling,” or how two different material properties interact. Fluid manipulation tasks usually require pretty fine-grained precision, with delicate interactions and handling of materials, setting them apart from straightforward tasks like pushing a block or opening a bottle. FluidLab’s simulator can quickly calculate how different materials interact with each other.

Helping out the GPUs is “Taichi,” a domain-specific language embedded in Python. The system can compute gradients (rates of change in environment configurations with respect to the robot’s actions) for different material types and their interactions (couplings) with one another. This precise information can be used to fine-tune the robot’s movements for better performance. As a result, the simulator allows for faster and more efficient solutions, setting it apart from its counterparts.
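
To make "computing gradients through the simulation" concrete, here is a minimal sketch using Taichi's autodiff on a toy one-particle rollout. It illustrates the mechanism only and is not FluidLab's actual code: the tape records the rollout, and the gradient of a task loss with respect to the per-step actions falls out, ready for gradient-based planning.

```python
# Minimal differentiable-simulation sketch with Taichi autodiff (a toy
# stand-in for FluidEngine, not its real implementation).
import taichi as ti

ti.init(arch=ti.cpu)  # switch to ti.gpu when a GPU is available

n_steps = 32
pos = ti.field(dtype=ti.f32, shape=n_steps, needs_grad=True)   # particle position per step
act = ti.field(dtype=ti.f32, shape=n_steps, needs_grad=True)   # action (velocity) per step
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)
target = 1.0

@ti.kernel
def step(t: ti.i32):
    # explicit Euler update: the position integrates the chosen action
    pos[t] = pos[t - 1] + act[t] * 0.1

@ti.kernel
def compute_loss():
    # squared distance of the final state from the target
    d = pos[n_steps - 1] - target
    loss[None] = d * d

# Record the whole rollout on a tape so gradients flow back through every step.
with ti.ad.Tape(loss=loss):
    for t in range(1, n_steps):
        step(t)
    compute_loss()

# act.grad now holds d(loss)/d(action), usable for gradient-based planning.
print(act.grad.to_numpy())
```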

The 10 tasks the team put forth fell into two categories: using fluids to manipulate hard-to-reach objects, and directly manipulating fluids for specific goals. Examples included separating liquids, guiding floating objects, transporting items with water jets, mixing liquids, creating latte art, shaping ice cream, and controlling air circulation.

“The simulator works similarly to how humans use their mental models to predict the consequences of their actions and make informed decisions when manipulating fluids. This is a significant advantage of our simulator compared to others,” says Carnegie Mellon University Ph.D. student Zhou Xian, another author on the paper.

“While other simulators primarily support reinforcement learning, ours supports reinforcement learning and allows for more efficient optimization techniques. Utilizing the gradients provided by the simulator supports highly efficient policy search, making it a more versatile and effective tool.”

FluidLab’s future looks bright. The current work attempted to transfer trajectories optimized in simulation to real-world tasks directly in an open-loop manner. For next steps, the team is working to develop a closed-loop policy in simulation that takes as input the state or the visual observations of the environments and performs fluid manipulation tasks in real time, and then transfers the learned policies in real-world scenes. The platform is publicly available, and researchers hope it will benefit future studies in developing better methods for solving complex fluid manipulation tasks.

Cooperation for Scalable Supervision of Autonomy in Mixed Traffic

by Cameron Hickert et al in IEEE Transactions on Robotics

When we think of getting on the road in our cars, our first thoughts may not be that fellow drivers are particularly safe or careful — but human drivers are more reliable than one may expect. For each fatal car crash in the United States, motor vehicles log a whopping hundred million miles on the road.

Human reliability also plays a role in how autonomous vehicles are integrated in the traffic system, especially around safety considerations. Human drivers continue to surpass autonomous vehicles in their ability to make quick decisions and perceive complex environments: Autonomous vehicles are known to struggle with seemingly common tasks, such as taking on- or off-ramps, or turning left in the face of oncoming traffic. Despite these enormous challenges, embracing autonomous vehicles in the future could yield great benefits, like clearing congested highways; enhancing freedom and mobility for non-drivers; and boosting driving efficiency, an important piece in fighting climate change.

MIT engineer Cathy Wu envisions ways that autonomous vehicles could be deployed with their current shortcomings, without experiencing a dip in safety. “I started thinking more about the bottlenecks. It’s very clear that the main barrier to deployment of autonomous vehicles is safety and reliability,” Wu says.

One path forward may be to introduce a hybrid system, in which autonomous vehicles handle easier scenarios on their own, like cruising on the highway, while transferring more complicated maneuvers to remote human operators. Wu, who is a member of the Laboratory for Information and Decision Systems (LIDS), the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering (CEE), and a member of the MIT Institute for Data, Systems, and Society (IDSS), likens this approach to air traffic controllers on the ground directing commercial aircraft. In a paper, Wu and co-authors Cameron Hickert and Sirui Li (both graduate students at LIDS) introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety. They noted that if autonomous vehicles were able to coordinate with each other on the road, they could reduce the number of moments in which humans needed to intervene.

For the project, Wu, Hickert, and Li sought to tackle a maneuver that autonomous vehicles often struggle to complete. They decided to focus on merging, specifically when vehicles use an on-ramp to enter a highway. In real life, merging cars must accelerate or slow down in order to avoid crashing into cars already on the road. In this scenario, if an autonomous vehicle was about to merge into traffic, remote human supervisors could momentarily take control of the vehicle to ensure a safe merge.

In order to evaluate the efficiency of such a system, particularly while guaranteeing safety, the team specified the maximum amount of time each human supervisor would be expected to spend on a single merge. They were interested in understanding whether a small number of remote human supervisors could successfully manage a larger group of autonomous vehicles, and the extent to which this human-to-car ratio could be improved while still safely covering every merge. With more autonomous vehicles in use, one might assume a need for more remote supervisors. But in scenarios where autonomous vehicles coordinated with each other, the team found that cars could significantly reduce the number of times humans needed to step in. For example, a coordinating autonomous vehicle already on a highway could adjust its speed to make room for a merging car, eliminating a risky merging situation altogether.

The team substantiated the potential to safely scale remote supervision in two theorems. First, using a mathematical framework known as queuing theory, the researchers formulated an expression for the probability that a given number of supervisors fails to handle all of the merges pooled together from multiple cars. This allowed them to assess how many remote supervisors would be needed to cover every potential merge conflict, depending on the number of autonomous vehicles in use. The second theorem quantifies how cooperative autonomous vehicles, by adjusting the surrounding traffic to assist cars attempting to merge, boost the system’s reliability.
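
The paper's analysis is more detailed, but the flavor of the calculation can be sketched with a standard Erlang-B loss formula: treat merge requests as random arrivals, supervisors as servers, and ask how often an arriving merge finds every supervisor busy. All of the numbers below are illustrative assumptions, not values from the paper.

```python
# Hedged queuing sketch: Erlang-B blocking as a stand-in for the probability
# that every remote supervisor is busy when a new merge request arrives.
def erlang_b(servers: int, offered_load: float) -> float:
    """Probability an arriving request finds all `servers` busy (Erlang B)."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Illustrative numbers (assumptions, not from the paper):
vehicles = 470            # autonomous vehicles under supervision
merges_per_hour = 2.0     # merge requests per vehicle per hour
seconds_per_merge = 15.0  # supervisor time spent on one merge
supervisors = 10          # roughly 1 supervisor per 47 vehicles

offered_load = vehicles * merges_per_hour * (seconds_per_merge / 3600.0)
p_uncovered = erlang_b(supervisors, offered_load)
print(f"offered load: {offered_load:.2f} Erlangs")
print(f"probability a merge finds no free supervisor: {p_uncovered:.2e}")
```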

When the team modeled a scenario in which 30 percent of cars on the road were cooperative autonomous vehicles, they estimated that a ratio of one human supervisor to every 47 autonomous vehicles could cover 99.9999 percent of merging cases. But this level of coverage dropped below 99 percent, an unacceptable range, in scenarios where the autonomous vehicles did not cooperate with each other.

“If vehicles were to coordinate and basically prevent the need for supervision, that’s actually the best way to improve reliability,” Wu says.

The team decided to focus on merging not only because it’s a challenge for autonomous vehicles, but also because it’s a well-defined task associated with a less-daunting scenario: driving on the highway. About half of the total miles traveled in the United States occur on interstates and other freeways. Since highways allow higher speeds than city roads, Wu says, “If you can fully automate highway driving … you give people back about a third of their driving time.”

If it became feasible for autonomous vehicles to cruise unsupervised for most highway driving, the challenge of safely navigating complex or unexpected moments would remain. For instance, “you [would] need to be able to handle the start and end of the highway driving,” Wu says. You would also need to be able to manage times when passengers zone out or fall asleep, making them unable to quickly take over controls should it be needed. But if remote human supervisors could guide autonomous vehicles at key moments, passengers may never have to touch the wheel. Besides merging, other challenging situations on the highway include changing lanes and overtaking slower cars on the road.

Although remote supervision and coordinated autonomous vehicles are hypotheticals for high-speed operations, and not currently in use, Wu hopes that thinking about these topics can encourage growth in the field.

“This gives us some more confidence that the autonomous driving experience can happen,” Wu says. “I think we need to be more creative about what we mean by ‘autonomous vehicles.’ We want to give people back their time — safely. We want the benefits, we don’t strictly want something that drives autonomously.”

NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to Robotics

by Javier Yu et al in arXiv

Neural radiance fields (NeRFs) are advanced machine learning techniques that can generate three-dimensional (3D) representations of objects or environments from two-dimensional (2D) images. As these techniques can model complex real-world environments realistically and in detail, they could greatly support robotics research.

Most existing datasets and platforms for training NeRFs, however, are designed to be used offline, as they require the completion of a pose optimization step that significantly delays the creation of photorealistic representations. This has so far prevented most roboticists from using these techniques to test their algorithms on physical robots in real time.

A research team at Stanford University recently introduced NerfBridge, a new open-source software package for training NeRF algorithms that could ultimately enable their use in online robotics experiments. The package is designed to bridge ROS (the Robot Operating System), a widely used software framework for robotics applications, and Nerfstudio, an open-source library designed to train NeRFs in real time.

“Recently members of my lab, the Stanford Multi-robot Systems Lab, have been excited about exploring applications of Neural Radiance Fields (NeRFs) in robotics, but we found that right now there isn’t an easy way to use these methods with an actual robot, so it’s impossible to do any real experiments with them,” Javier Yu, the first author of the paper, told Tech Xplore. “Since the tools didn’t exist, we decided to build them ourselves, and out of that engineering push to see how NeRFs work on robots we got a nice tool that we think will be useful to a lot of folks in the robotics community.”

NeRFs are sophisticated techniques based on artificial neural networks that were first introduced by the computer graphics research community. They essentially create detailed maps of the world by training a neural network to reconstruct the 3D geometry and color of the scene captured in a photograph or 2D image.

“The problem of mapping from images is one that we in the robotics community have been working on for a long time and NeRFs offer a new perspective on how to approach it,” Yu explained. “Typically, NeRFs are trained in an offline fashion where all of the images are gathered ahead of time, and then the NeRF of the scene is trained all at once. In robotics, however, we want to use the NeRF directly for tasks like navigation and so the NeRF is not useful if we only get it when we arrive at our destination. Instead, we want to build the NeRF incrementally (online) as the robot explores its environment. This is exactly the problem that NerfBridge solves.”

NerfBridge, the package introduced by Yu and his colleagues, utilizes images captured by the sensors and cameras integrated in physical robots. These images are continuously streamed into Nerfstudio’s powerful NeRF training library, enabling the creation of NeRFs that are constantly updating themselves and improving as the robot captures new images of its surroundings.

A visualization of how NerfBridge integrates with robot systems and NerfStudio. Images are streamed from the robot, and camera poses are estimated in real time. Posed images are then passed to NerfBridge which in turn inserts them into the training data set for an instance of NerfStudio. Credit: Yu et al
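
A minimal sketch of the bridging idea follows; the topic names and the `training_buffer` hook are hypothetical, and this is not NerfBridge's actual API. The point is simply that a ROS node can synchronize incoming images with pose estimates and hand each posed image to an online NeRF trainer as it arrives.

```python
# Sketch of streaming posed images from ROS into an online NeRF trainer.
# Assumptions: topic names and the buffer are placeholders, not NerfBridge code.
import message_filters
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import Image

bridge = CvBridge()
training_buffer = []  # stand-in for a continuously updated NeRF training set


def add_posed_image(img_msg: Image, pose_msg: PoseStamped) -> None:
    """Convert a synchronized ROS image and pose into a training sample."""
    rgb = bridge.imgmsg_to_cv2(img_msg, desired_encoding="rgb8")
    training_buffer.append((rgb, pose_msg.pose))
    # In the real system, each new sample would be inserted into the NeRF
    # trainer's dataset so optimization keeps running as data streams in.


def main() -> None:
    rospy.init_node("nerf_bridge_sketch")
    image_sub = message_filters.Subscriber("/camera/image_raw", Image)
    pose_sub = message_filters.Subscriber("/vio/pose", PoseStamped)
    # Loosely synchronize images with the latest pose estimate.
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, pose_sub], queue_size=30, slop=0.05)
    sync.registerCallback(add_posed_image)
    rospy.spin()


if __name__ == "__main__":
    main()
```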

To demonstrate the potential of their method, Yu and his colleagues used it to train a NeRF based on images captured by a camera mounted on a quadrotor, a drone with four rotors, as it flew around in both indoor and outdoor environments. Their results were remarkable, highlighting the value of NerfBridge for facilitating the use of NeRFs in robotics research.

This promising method could thus soon be used by other researchers to train NeRFs and test their algorithms on physical robots as they navigate their surrounding environment. Meanwhile, Yu and his colleagues plan to explore additional strategies that could broaden the use of NeRFs in robotics.

“Ultimately, we hope that NerfBridge will lower the barrier of entry for other researchers to start looking at applications of NeRFs in robotics, and to test their new algorithms on robots in the real world,” Yu added. “Moving forward from NerfBridge, we are going to be looking into methods for improving NeRF training when images come streamed from a robot and demonstrating the concrete advantages of using NeRF-based maps for other tasks in robotics like localization and navigation.”

RobotSweater: Scalable, Generalizable, and Customizable Machine-Knitted Tactile Skins for Robots

by Zilin Si et al in arXiv

The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure.

“We can use that to make the robot smarter during its interaction with humans,” said Changliu Liu, an assistant professor of robotics in the School of Computer Science.

Just as knitters can take any kind of yarn and turn it into a sock, hat or sweater of any size or shape, the knitted RobotSweater fabric can be customized to fit uneven three-dimensional surfaces.

“Knitting machines can pattern yarn into shapes that are non-flat, that can be curved or lumpy,” said James McCann, an SCS assistant professor whose research has focused on textile fabrication in recent years. “That made us think maybe we could make sensors that fit over curved or lumpy robots.”

Once knitted, the fabric can be used to help the robot “feel” when a human touches it, particularly in an industrial setting where safety is paramount. Current solutions for detecting human-robot interaction in industry look like shields and use very rigid materials that Liu notes can’t cover the robot’s entire body because some parts need to deform.

“With RobotSweater, the robot’s whole body can be covered, so it can detect any possible collisions,” said Liu, whose research focuses on industrial applications of robotics.

RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric — say, from someone touching it — the conductive layers make contact, closing a circuit that is read out by the sensing electronics.

“The force pushes together the rows and columns to close the connection,” said Wenzhen Yuan, an SCS assistant professor and director of the RoboTouch lab. “If there’s a force through the conductive stripes, the layers would contact each other through the holes.”
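
Below is a hedged sketch of that row-and-column readout; the hardware-interface callables are hypothetical placeholders. The idea is to drive one conductive stripe at a time and read every crossing stripe: wherever pressure has pushed the two layers together through the spacer's holes, the circuit closes and that crossing registers as a contact.

```python
# Row/column scanning sketch for a textile tactile matrix (illustrative only;
# drive_row and read_column stand in for real ADC/GPIO hardware calls).
from typing import Callable, List, Tuple

def scan_taxels(
    n_rows: int,
    n_cols: int,
    drive_row: Callable[[int], None],
    read_column: Callable[[int], float],
    threshold: float = 0.5,
) -> List[Tuple[int, int]]:
    """Return (row, col) indices where contact closes the circuit."""
    contacts = []
    for r in range(n_rows):
        drive_row(r)                        # energize one conductive stripe
        for c in range(n_cols):
            if read_column(c) > threshold:  # current flows only where layers touch
                contacts.append((r, c))
    return contacts

if __name__ == "__main__":
    pressed = {(2, 3)}                      # simulated contact location
    active_row = {"r": -1}
    drive = lambda r: active_row.update(r=r)
    read = lambda c: 1.0 if (active_row["r"], c) in pressed else 0.0
    print(scan_taxels(8, 8, drive, read))   # -> [(2, 3)]
```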

Beyond designing the knitted layers, which took dozens if not hundreds of samples and tests, the team faced another challenge in connecting the wiring and electronic components to the soft textile.

“There was a lot of fiddly physical prototyping and adjustment,” McCann said. “The students working on this managed to go from something that seemed promising to something that actually worked.”

What worked was wrapping the wires around snaps attached to the ends of each stripe in the knitted fabric. Snaps are a cost-effective and efficient solution, such that even hobbyists creating textiles with electronic elements, known as e-textiles, could use them, McCann said.

“You need a way of attaching these things together that is strong, so it can deal with stretching, but isn’t going to destroy the yarn,” he said, adding that the team also discussed using flexible circuit boards.

Once fitted to the robot’s body, RobotSweater can sense the distribution, shape and force of the contact. It’s also more accurate and effective than the visual sensors most robots rely on now.

“The robot will move in the way that the human pushes it, or can respond to human social gestures,” Yuan said.

In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper. In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen.

3D Knitting for Pneumatic Soft Robotics

by Vanessa Sanchez et al, in Advanced Functional Materials

Soft robotics have several key advantages over rigid counterparts, including their inherent safety features — soft materials with motions powered by inflating and deflating air chambers can safely be used in fragile environments or in proximity with humans — as well as their flexibility that enables them to fit into tight spaces. Textiles have become a choice material for constructing many types of soft robots, especially wearables, but the traditional “cut and sew” methods of manufacturing have left much to be desired.

Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have established a new approach for additively manufacturing soft robotics, using a 3D knitting method that can holistically “print” entire soft robots.

“The soft robotics community is still in the phase of seeking alternative materials approaches that will enable us to go beyond more classical rigid robot shapes and functions,” says Robert Wood, senior corresponding author on the paper, who is the Harry Lewis and Marlyn McGrath Professor of Engineering and Applied Sciences at SEAS.

“Textiles are appealing since we can radically tune their structural properties by choice of their constituent fibers and how those fibers interact with each other,” Wood says.

“Using ‘cut and sew’ methods, you need to manufacture large sheets of textile material that you then cut into patterns that are assembled by stitching or bonding — and this typically involves a high level of human labor,” says Vanessa Sanchez, first author on the paper and a former Ph.D. student in Wood’s lab. “Every seam adds costs, and potential points of failure. For manufacturing complex robotic devices, this can be a big challenge.”

Credit: Harvard John A. Paulson School of Engineering and Applied Sciences

Sanchez was intrigued by the concept of 3D knitting, which can produce seamless articles of clothing with little material waste. She wondered if the method could be adapted to create textile-based soft robots. The team acquired a vintage punch-card knitting machine, and Sanchez connected with knitting experts from the Rhode Island School of Design, the Parsons School of Design, and the Fashion Institute of Technology.

To automate the knitting process, Sanchez and the team also needed to develop software that could direct the knitting equipment, machines often several decades old, to make complex structures out of various types of yarns. “In one instance, I had to trick the machinery — using a software program — into thinking that my computer was a floppy disk,” Sanchez says. After the initial experiments proved promising, the team moved to a more modern, automated machine. James McCann, an assistant professor at the Carnegie Mellon Robotics Institute, collaborated on the software.

“The team wanted to develop and characterize a wide range of soft actuators — they weren’t just building one pattern, they were building a whole set of parametric patterns,” McCann says. “This is hard to do with traditional knitting design software, which is generally focused on developing single outputs by hand instead of easily-adjustable parametric families of outputs.”

To create a workaround, the team described the 3D patterns in the “knitout” file format, a machine-readable description of knitting operations that can be generated from general-purpose programming languages, and then developed code to translate those knitout descriptions to run on their desired knitting machine.

“The cool thing about developing parametric patterns in a generic knitting format like knitout is that other groups with different types of knitting machines can use and build on the same patterns, without extensive translation effort,” McCann says.
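
As a small illustration of what a parametric family of patterns means in code, the sketch below is a generic generator whose width, height, and stitch type are parameters; the instruction strings it emits are simplified placeholders, not actual knitout syntax or the team's software.

```python
# Illustrative parametric pattern generator (placeholder instruction strings,
# not real knitout operations): changing one argument regenerates the panel.
from typing import List

def rectangle_panel(width: int, height: int, stitch: str = "knit") -> List[str]:
    """Generate machine instructions for a width x height knitted panel."""
    program = ["; parametric panel", f"; width={width} height={height}"]
    for row in range(height):
        # alternate carriage direction each row, as flat knitting machines do
        needles = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        direction = "+" if row % 2 == 0 else "-"
        for n in needles:
            program.append(f"{stitch} {direction} f{n}")
    return program

# One family, many outputs: rectangle_panel(20, 40) vs rectangle_panel(8, 12, "tuck").
print("\n".join(rectangle_panel(4, 2)))
```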

After setting up their 3D knitting process, Sanchez and collaborators conducted a series of experiments to, for the first time, create an extensive library of knowledge about the way various knitting parameters impact mechanical properties of the resulting material. Testing 20 different combinations of yarn, structure, and more, the team characterized how varied knit architectures impact folding and unfolding, structural geometry, and tensile properties. Using combinations of these structures, they demonstrated many different knit robot prototypes, including various types of gripper devices with bending and grasping appendages, a multi-chamber claw, an inchworm-like robot, and a snake-like actuator capable of picking up objects much heavier than the device itself.

“We wanted to create a library for engineers to draw from to develop a variety of soft robots, so we characterized the mechanical properties of many different knits,” Sanchez says. “3D knitting is a new way of thinking about additive manufacturing, about how to make things that could be reconfigured or redeployed. There are already industrial machines to support this type of manufacturing — with this initial step, we think our approach can scale and translate out of the lab.”

“I envision that programmable textiles will have a similar impact on how soft robots are made as fiber-reinforced composites have had on the construction of high-performance aircraft and automobiles,” Wood says.

Neuromorphic sensorimotor loop embodied by monolithically integrated, low-voltage, soft e-skin

by Weichen Wang et al in Science

Researchers at Stanford University have developed digital skin that can convert sensations such as heat and pressure to electrical signals that can be read by electrodes implanted in the human brain.

Although such capability was developed years earlier, the components required at that time to convert digital signals were rigid and unwieldy. This new e-skin is as soft as, well, skin. The conversion elements are seamlessly incorporated within the skin, which measures a few tens of nanometers thick.
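
As a generic illustration of how a pressure reading can become an electrical signal a brain interface can use, the sketch below encodes stimulus intensity as the rate of a pulse train, a common neuromorphic scheme. It is not the Stanford circuit, and the mapping constants are arbitrary.

```python
# Generic rate-coding sketch (not the e-skin's actual circuitry): stronger
# pressure yields more pulses per second for downstream electrodes to read.
def pressure_to_pulse_times(pressure_kpa: float, duration_s: float = 1.0,
                            base_hz: float = 5.0, gain_hz_per_kpa: float = 2.0):
    """Return pulse timestamps (seconds) for a constant pressure stimulus."""
    rate_hz = base_hz + gain_hz_per_kpa * max(pressure_kpa, 0.0)
    period = 1.0 / rate_hz
    return [i * period for i in range(int(duration_s * rate_hz))]

for p in (0.0, 10.0, 50.0):
    pulses = pressure_to_pulse_times(p)
    print(f"{p:5.1f} kPa -> {len(pulses)} pulses/s")
```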

The development holds promise for more natural interaction between AI-based prosthetic limbs and the brain. It also is a step forward in efforts to construct robots that can “feel” human sensations such as pain, pressure and temperature. This would allow robots working with accident victims, for instance, to better relate to signs of comfort or distress.

“Our dream is to make a whole hand where we have multiple sensors that can sense pressure, strain, temperature and vibration,” says Zhenan Bao, a chemical engineering professor at Stanford University, who worked on the project. “Then we will be able to provide a true kind of sensation.”

The researchers said a key reason people forgo the use of prosthetics is that a lack of sensory feedback feels unnatural and makes them uncomfortable. The e-skin was first tested in the brain cells of rats. The animals twitched their legs when their cortexes were stimulated. The extent of twitching corresponded to varying levels of pressure.

“Electronic skin would eliminate the boundary between the living body and machine components,” the researchers said.

An early challenge was to create a flexible e-skin that could run on low voltage. Early efforts required 30 volts. By creating stretchable field-effect transistors and solid-state synaptic transistors, the team was able to reduce the required voltage and gain greater efficiency.

“This new e-skin runs on just 5 volts and can detect stimuli similar to real skin,” said Weichen Wang, an author of the paper who has worked on the project for three years. “It provides electrical performance — such as low voltage drive, low power consumption, and moderate circuit integration — comparable to that of poly-silicon transistors.”

A related development was announced by scientists at the University of Edinburgh last March. They created an e-skin composed of a thin layer of silicone embedded with wires and sensitivity detectors “to give soft robots the ability to sense things only millimeters away, in all directions, very quickly,” according to Yunjie Yang, who led the university team’s study. A university press release stated the development “gives robots for the first time a level of physical self-awareness similar to that of people and animals.”

Charismatic speech features in robot instructions enhance team creativity

by Karen Fucinato et al in Frontiers in Communication

Increasingly, social robots are being used for support in educational contexts. But does the sound of a social robot affect how well it performs, especially when dealing with teams of humans? Teamwork is a key factor in human creativity, boosting collaboration and new ideas. Danish scientists set out to understand whether robots using a voice designed to sound charismatic would be more successful as team creativity facilitators.

“We had a robot instruct teams of students in a creativity task. The robot either used a confident, passionate — i.e., charismatic — tone of voice or a normal, matter-of-fact tone of voice,” said Dr. Kerstin Fischer of the University of Southern Denmark, corresponding author of the study. “We found that when the robot spoke in a charismatic speaking style, students’ ideas were more original and more elaborate.”

We know that social robots acting as facilitators can boost creativity, and that the success of facilitators is at least partly dependent on charisma: people respond to charismatic speech by becoming more confident and engaged. Fischer and her colleagues aimed to see if this effect could be reproduced with the voices of social robots by using a text-to-speech function engineered for characteristics associated with charismatic speaking, such as a specific pitch range and way of stressing words. Two voices were developed, one charismatic and one less expressive, based on a range of parameters which correlate with perceived speaker charisma.
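
For a sense of how a synthetic voice can be nudged toward a wider pitch range and stronger emphasis, here is a generic SSML sketch built in Python. It is an illustration only: the study's actual voice parameters are not given in this summary, and the values below are arbitrary.

```python
# Generic SSML prosody sketch (illustrative values, not the study's voices):
# a wider pitch range, slightly faster rate, and added emphasis approximate a
# more "charismatic" delivery versus a flatter, matter-of-fact baseline.
charismatic = """
<speak>
  <prosody pitch="+15%" range="x-high" rate="105%">
    There are <emphasis level="strong">no</emphasis> bad ideas.
    Let's make something great together!
  </prosody>
</speak>
""".strip()

neutral = """
<speak>
  <prosody range="low" rate="95%">
    There are no bad ideas. Please begin the task.
  </prosody>
</speak>
""".strip()

print(charismatic)
print(neutral)
```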

The scientists recruited five classes of university students, all taking courses that included an element of team creativity. The students were told that they were testing a creativity workshop, which involved brainstorming ideas based on images and then using those ideas to come up with a new chocolate product. The workshop was led by videos of a robot speaking: introducing the task, reassuring the teams of students that there were no bad ideas, and then congratulating them for completing the task and asking them to fill out a self-evaluation questionnaire. The questionnaire evaluated the robot’s performance, the students’ own views on how their teamwork went, and the success of the session. The researchers also assessed the creativity of each session, measured by the number of original ideas produced and how elaborate those ideas were.

The group that heard the charismatic voice rated the robot more positively, finding it more charismatic and interactive. Their perception of their teamwork was more positive, they rated their teamwork more highly, and they produced more original and elaborate ideas. However, the group that heard the non-charismatic voice perceived themselves as more resilient and efficient, possibly because a less charismatic leader led to better organization by the team members themselves, even though they produced fewer ideas.

“I had suspected that charismatic speech has very important effects, but our study provides clear evidence for the effect of charismatic speech on listener creativity,” said Dr. Oliver Niebuhr of the University of Southern Denmark, co-author of the study. “This is the first time that such a link between charismatic voices, artificial speakers, and creativity outputs has been found.”

The scientists pointed out that although the sessions with the charismatic voice were generally more successful, not all the teams responded identically to the different voices: previous experiences in their different classes may have affected their response. Larger studies will be needed to understand how these external factors affected team performance.

“The robot was present only in videos, but one could suspect that more exposure or repeated exposure to the charismatic speaking style would have even stronger effects,” said Fischer. “Moreover, we have only varied a few features between the two robot conditions. We don’t know how the effect size would change if other or more features were varied. Finally, since charismatic speaking patterns differ between cultures, we would expect that the same stimuli will not yield the same results in all languages and cultures.”

Upcoming events

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
