RT/ Living robots that can reproduce

Paradigm · Published Dec 14, 2021 · 26 min read

Robotics biweekly vol.42, 30th November — 14th December

TL;DR

  • Scientists have discovered a new form of biological reproduction — and created self-replicating living robots. Made from frog cells, these computer-designed organisms gather single cells inside a Pac-Man-shaped ‘mouth’ — and release Xenobot ‘babies’ that look and move like themselves. Then the offspring go and do the same — over and over.
  • Engineers at Caltech, ETH Zurich, and Harvard are developing artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
  • Scientists at Berkeley Lab and the University of Massachusetts Amherst have demonstrated the first self-powered, aqueous robot that runs continuously without electricity. The technology has the potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.
  • A floating, robotic film designed at UC Riverside could be trained to hoover up oil spills at sea or remove contaminants from drinking water.
  • Like snowflakes, no two branches are alike. They can differ in size, shape and texture; some might be wet or moss-covered or bursting with offshoots. And yet birds can land on just about any of them. This ability was of great interest to the labs of Stanford University engineers Mark Cutkosky and David Lentink — now at the University of Groningen in the Netherlands — which have both developed technologies inspired by animal abilities.
  • Researchers have found that the physical texture of robots influenced perceptions of robot personality. Furthermore, first impressions of robots, based on physical appearance alone, could influence the relationship between physical texture and robot personality formation. This work could facilitate the development of robots with perceived personalities that match user expectations.
  • A new strategy to reduce the spread of COVID-19 employs a mobile robot that detects people in crowds who are not observing social-distancing rules, navigates to them, and encourages them to move apart.
  • Scientists from the Division of Mechanical Science and Engineering at Kanazawa University developed a prototype pipe maintenance robot that can unclog and repair pipes with a wide range of diameters. Using a cutting tool with multiple degrees of freedom, the machine is capable of manipulating and dissecting objects for removal. This work may be a significant step forward for the field of sewerage maintenance robots.
  • Engineered Arts, a robot maker based in the U.K., is showing off its latest creation ahead of CES 2022. Called Ameca, the robot is able to display what appear to be the most human-like facial expressions of any robot to date.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
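As a quick back-of-the-envelope check on what a 26 percent CAGR implies, the snippet below derives the 2018 base value that figure would correspond to; the 2018 number is inferred here, not quoted from Statista.

```python
# Back-of-the-envelope check of the ~26% CAGR claim. The 2025 figure is
# from the text above; the implied 2018 base is derived, not a quoted
# Statista value.
cagr = 0.26
market_2025 = 210                 # billion USD, "just under" this value
years = 2025 - 2018               # seven years of compounding
implied_2018 = market_2025 / (1 + cagr) ** years
print(f"Implied 2018 market: ~{implied_2018:.0f} billion USD")  # ~42
```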

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista.

Latest News & Researches

Kinematic self-replication in reconfigurable organisms

by Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard in Proceedings of the National Academy of Sciences

To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.

Spontaneous kinematic self-replication. (A) Stem cells are removed from an early-stage frog blastula, dissociated, and placed in a saline solution, where they cohere into spheres containing ∼3,000 cells. The spheres develop cilia on their outer surfaces after 3 days. When the resulting mature swarm is placed amid ∼60,000 dissociated stem cells in a 60-mm-diameter circular dish (B), their collective motion pushes some cells together into piles (C and D), which, if sufficiently large (at least 50 cells), develop into ciliated offspring (E) themselves capable of swimming and, if provided additional dissociated stem cells (F), of building further offspring. In short, progenitors (p) build offspring (o), which then become progenitors. The process can be disrupted by withholding additional dissociated cells. Under the best environmental conditions currently known, the system naturally self-replicates for a maximum of two rounds before halting. The probability of halting (α) or replicating (1 − α) depends on the temperature range suitable for frog embryos, the concentration of dissociated cells, the number and stochastic behavior of the mature organisms, the viscosity of the solution, the geometry of the dish’s surface, and the possibility of contamination. (Scale bars, 500 μm.)
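To make the halting and replication probabilities in the caption concrete, here is a minimal simulation sketch of the process as a simple stochastic chain; the value of α used below is a hypothetical stand-in, not a measured quantity from the paper.

```python
import random

def replication_rounds(alpha: float, max_rounds: int = 10) -> int:
    """Count replication rounds: each round the swarm either halts
    (probability alpha) or builds another generation (1 - alpha)."""
    rounds = 0
    for _ in range(max_rounds):
        if random.random() < alpha:
            break  # piles too small to mature; the lineage halts
        rounds += 1
    return rounds

# With a high halting probability, lineages rarely get past the two
# rounds reported under the best-known conditions. alpha is assumed.
trials = [replication_rounds(alpha=0.6) for _ in range(10_000)]
print(sum(trials) / len(trials))  # mean rounds per lineage, ~0.67
```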

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction — and applied their discovery to create the first-ever, self-replicating living robots.

The same team that built the first living robots (“Xenobots,” assembled from frog cells — reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth” — that, a few days later, become new Xenobots that look and move just like themselves.

And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.

“With the right design — they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.

In a Xenopus laevis frog, these embryonic cells would develop into skin.

“They would be sitting on the outside of a tadpole, keeping out pathogens and redistributing mucus,” says Michael Levin, Ph.D., a professor of biology and director of the Allen Discovery Center at Tufts University and co-leader of the new research. “But we’re putting them into a novel context. We’re giving them a chance to reimagine their multicellularity.” Levin is also an Associate Faculty member at the Wyss Institute.

And what they imagine is something far different from skin.

“People have thought for quite a long time that we’ve worked out all the ways that life can reproduce or replicate. But this is something that’s never been observed before,” says co-author Douglas Blackiston, Ph.D., the senior scientist at Tufts University and the Wyss Institute who assembled the Xenobot “parents” and developed the biological portion of the new study.

“This is profound,” says Levin. “These cells have the genome of a frog, but, freed from becoming tadpoles, they use their collective intelligence, a plasticity, to do something astounding.” In earlier experiments, the scientists were amazed that Xenobots could be designed to achieve simple tasks. Now they are stunned that these biological objects — a computer-designed collection of cells — will spontaneously replicate. “We have the full, unaltered frog genome,” says Levin, “but it gave no hint that these cells can work together on this new task,” of gathering and then compressing separated cells into working self-copies.

“These are frog cells replicating in a way that is very different from how frogs do it. No animal or plant known to science replicates in this way,” says Sam Kriegman, Ph.D., the lead author on the new study, who completed his Ph.D. in Bongard’s lab at UVM and is now a post-doctoral researcher at Tufts’ Allen Center and Harvard University’s Wyss Institute for Biologically Inspired Engineering.

On its own, the Xenobot parent, made of some 3,000 cells, forms a sphere.

“These can make children but then the system normally dies out after that. It’s very hard, actually, to get the system to keep reproducing,” says Kriegman.

But with an artificial intelligence program working on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, an evolutionary algorithm was able to test billions of body shapes in simulation — triangles, squares, pyramids, starfish — to find ones that allowed the cells to be more effective at the motion-based “kinematic” replication reported in the new research.

“We asked the supercomputer at UVM to figure out how to adjust the shape of the initial parents, and the AI came up with some strange designs after months of chugging away, including one that resembled Pac-Man,” says Kriegman. “It’s very non-intuitive. It looks very simple, but it’s not something a human engineer would come up with. Why one tiny mouth? Why not five? We sent the results to Doug and he built these Pac-Man-shaped parent Xenobots. Then those parents built children, who built grandchildren, who built great-grandchildren, who built great-great-grandchildren.” In other words, the right design greatly extended the number of generations.
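For readers unfamiliar with how such a search works, below is a minimal evolutionary-algorithm sketch. The binary shape encoding and the toy fitness function are hypothetical stand-ins; in the actual study, fitness came from a physics simulation of kinematic replication.

```python
import random

def fitness(shape):
    """Toy stand-in for the physics simulation: reward the longest
    contiguous empty region, a crude proxy for a 'mouth' cavity."""
    best = run = 0
    for voxel in shape:
        run = run + 1 if voxel == 0 else 0
        best = max(best, run)
    return best

def mutate(shape):
    """Flip one voxel in a binary body-shape mask."""
    i = random.randrange(len(shape))
    return shape[:i] + [1 - shape[i]] + shape[i + 1:]

def evolve(pop_size=50, generations=200, shape_len=64):
    population = [[random.randint(0, 1) for _ in range(shape_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # keep the best half, refill the population with mutated copies
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```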

Kinematic replication is well-known at the level of molecules — but it has never been observed before at the scale of whole cells or organisms.

“We’ve discovered that there is this previously unknown space within organisms, or living systems, and it’s a vast space,” says Bongard. “How do we then go about exploring that space? We found Xenobots that walk. We found Xenobots that swim. And now, in this study, we’ve found Xenobots that kinematically replicate. What else is out there?”

Or, as the scientists write in the Proceedings of the National Academy of Sciences study: “life harbors surprising behaviors just below the surface, waiting to be uncovered.”

Some people may find this exhilarating. Others may react with concern, or even terror, to the notion of a self-replicating biotechnology. For the team of scientists, the goal is deeper understanding.

“We are working to understand this property: replication. The world and technologies are rapidly changing. It’s important, for society as a whole, that we study and understand how this works,” says Bongard. These millimeter-sized living machines, entirely contained in a laboratory, easily extinguished, and vetted by federal, state and institutional ethics experts, “are not what keep me awake at night. What presents risk is the next pandemic; accelerating ecosystem damage from pollution; intensifying threats from climate change,” says UVM’s Bongard. “This is an ideal system in which to study self-replicating systems. We have a moral imperative to understand the conditions under which we can control it, direct it, douse it, exaggerate it.”

Bongard points to the COVID epidemic and the hunt for a vaccine. “The speed at which we can produce solutions matters deeply. If we can develop technologies, learning from Xenobots, where we can quickly tell the AI: ‘We need a biological tool that does X and Y and suppresses Z,’ — that could be very beneficial. Today, that takes an exceedingly long time.” The team aims to accelerate how quickly people can go from identifying a problem to generating solutions — “like deploying living machines to pull microplastics out of waterways or build new medicines,” Bongard says.

“We need to create technological solutions that grow at the same rate as the challenges we face,” Bongard says.

And the team sees promise in the research for advancements toward regenerative medicine.

“If we knew how to tell collections of cells to do what we wanted them to do, ultimately, that’s regenerative medicine — that’s the solution to traumatic injury, birth defects, cancer, and aging,” says Levin. “All of these different problems are here because we don’t know how to predict and control what groups of cells are going to build. Xenobots are a new platform for teaching us.”

Learning efficient navigation in vortical flow fields

by Peter Gunnarson, Ioannis Mandralis, Guido Novati, Petros Koumoutsakos, John O. Dabiri in Nature Communications

Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.

Test navigation problem: navigating through unsteady flow behind a cylinder. Swimmers are initialized randomly inside the red disk and are assigned a random target location inside the green disk. These start and target regions are 4D in diameter (where D is the cylinder diameter) and are located 5D downstream, centered 2.05D above and below the cylinder. Additionally, each swimmer is initialized at a random time step in the vortex-shedding cycle. An episode is successful when a swimmer reaches within a radius of D/6 around the target location.

“When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ‘05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research.

The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a small palm-sized robot that runs the algorithm on a tiny computer chip that could power seaborne drones both on Earth and other planets. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans, for example using the algorithm in combination with prosthetics they previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.

In either scenario, drones would need to be able to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely only have data that they can gather themselves — information about the water currents they are currently experiencing.

To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; they train as quickly as they can collect experience. This scheme allows them to run on much smaller computers — for the purposes of this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.

Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer only had access to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, which are both relatively small and low-cost sensors for a robotic platform.
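As an illustration of the reinforcement-learning setup described above, here is a minimal tabular Q-learning sketch in which the agent observes only its locally sensed flow velocity and picks a swim heading. The actual study trained a deep RL network; the states, actions, and hyperparameters below are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]     # candidate swim headings
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
lr, gamma, epsilon = 0.1, 0.99, 0.1           # illustrative hyperparameters

def discretize(local_flow):
    """Bin the locally sensed flow velocity (vx, vy) into a coarse state."""
    return tuple(round(v, 1) for v in local_flow)

def choose_action(state):
    if random.random() < epsilon:                        # explore
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)   # exploit

def update(state, action, reward, next_state):
    """Standard Q-learning update from one step of collected experience."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += lr * (
        reward + gamma * best_next - q_table[state][action]
    )
```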

This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.

“We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.

The technology is still in its infancy: currently, the team would like to test the AI on each different type of flow disturbance it would possibly encounter on a mission in the ocean — for example, swirling vortices versus streaming tidal currents — to assess its effectiveness in the wild. However, by incorporating their knowledge of ocean-flow physics within the reinforcement learning strategy, the researchers aim to overcome this limitation. The current research proves the potential effectiveness of RL networks in addressing this challenge — particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.

“Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, graduate student at Caltech and lead author of the Nature Communications paper.

Continuous, autonomous subsurface cargo shuttling by nature-inspired meniscus-climbing systems

by Ganhua Xie, Pei Li, Paul Y. Kim, Pei-Yang Gu, Brett A. Helms, Paul D. Ashby, Lei Jiang, Thomas P. Russell in Nature Chemistry

When you think of a robot, images of R2-D2 or C-3PO might come to mind. But robots can serve up more than just entertainment on the big screen. In a lab, for example, robotic systems can improve safety and efficiency by performing repetitive tasks and handling harsh chemicals.

But before a robot can get to work, it needs energy — typically from electricity or a battery. Yet even the most sophisticated robot can run out of juice. For many years, scientists have wanted to make a robot that can work autonomously and continuously, without electrical input.

Now, as reported in the journal Nature Chemistry, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of Massachusetts Amherst have demonstrated just that — through “water-walking” liquid robots that, like tiny submarines, dive below water to retrieve precious chemicals, and then surface to deliver chemicals “ashore” again and again.

The technology is the first self-powered, aqueous robot that runs continuously without electricity. It has potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.

“We have broken a barrier in designing a liquid robotic system that can operate autonomously by using chemistry to control an object’s buoyancy,” said senior author Tom Russell, a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts Amherst who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab’s Materials Sciences Division.

Russell said that the technology significantly advances a family of robotic devices called “liquibots.” In previous studies, other researchers demonstrated liquibots that autonomously perform a task, but just once; and some liquibots can perform a task continuously, but need electricity to keep on running. In contrast, “we don’t have to provide electrical energy because our liquibots get their power or ‘food’ chemically from the surrounding media,” Russell explained.

Through a series of experiments in Berkeley Lab’s Materials Sciences Division, Russell and first author Ganhua Xie, a former postdoctoral researcher at Berkeley Lab who is now a professor at Hunan University in China, learned that “feeding” the liquibots salt makes the liquibots heavier or denser than the liquid solution surrounding them.

Additional experiments by co-investigators Paul Ashby and Brett Helms at Berkeley Lab’s Molecular Foundry revealed how the liquibots transport chemicals back and forth.

Because they are denser than the solution, the liquibots — which look like little open sacks, and are just 2 millimeters in diameter — cluster in the middle of the solution where they fill up with select chemicals. This triggers a reaction that generates oxygen bubbles, which like little balloons lift the liquibot up to the surface.

Another reaction pulls the liquibots to the rim of a container, where they “land” and offload their cargo.

The liquibots go back and forth, like the pendulum of a clock, and can run continuously as long as there is “food” in the system.
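A rough way to see why “feeding” controls the cycle is the buoyancy arithmetic below: the loaded bot sinks while its average density exceeds the solution’s, and rises once the attached oxygen bubbles lower it below that threshold. All numbers are illustrative assumptions, not measured values from the study.

```python
# Buoyancy sketch for the cycle above. The liquibot sinks when its
# average density exceeds the solution's and rises once O2 bubbles
# lower it below that. Numbers are illustrative assumptions.
SOLUTION_DENSITY = 1.10   # g/mL, hypothetical salt solution
BOT_MASS = 5.0e-3         # g, loaded liquibot
BOT_VOLUME = 4.2e-3       # mL, ~2-mm-diameter sack

def average_density(bubble_volume_ml: float) -> float:
    return BOT_MASS / (BOT_VOLUME + bubble_volume_ml)

for bubbles in (0.0, 2e-4, 6e-4):   # mL of attached oxygen gas
    state = "sinks" if average_density(bubbles) > SOLUTION_DENSITY else "rises"
    print(f"bubble volume {bubbles:.0e} mL -> {state}")
```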

Depending on their formulation, an array of liquibots could carry out different tasks simultaneously. For example, some liquibots could detect different types of gas in the environment, while others react to specific types of chemicals. The technology may also enable autonomous, continuous robotic systems that screen small chemical samples for clinical applications, or drug discovery and drug synthesis applications.

Russell and Xie next plan to investigate how to scale up the technology for larger systems, and explore how it would work on solid surfaces.

The Molecular Foundry is a nanoscience user facility at Berkeley Lab.

Light-powered soft steam engines for self-adaptive oscillation and biomimetic swimming

by Zhiwei Li, Nosang Vincent Myung, Yadong Yin in Science Robotics

A floating, robotic film designed at UC Riverside could be trained to hoover up oil spills at sea or remove contaminants from drinking water.

Powered by light and fueled by water, the film could be deployed indefinitely to clean remote areas where recharging by other means would prove difficult.

“Our motivation was to make soft robots sustainable and able to adapt on their own to changes in the environment. If sunlight is used for power, this machine is sustainable, and won’t require additional energy sources,” said UCR chemist Zhiwei Li. “The film is also re-usable.”

Researchers dubbed the film Neusbot after neustons, a category of animals that includes water striders. These insects traverse the surface of lakes and slow-moving streams with a pulsing motion, much like scientists have been able to achieve with the Neusbot, which can move on any body of water.

While other scientists have created films that bend in response to light, they have not been able to generate the adjustable, mechanical oscillation of which Neusbot is capable. This type of motion is key to controlling the robot and getting it to function where and when you want.

Technical details of this achievement are described in a new Science Robotics paper.

“There aren’t many methods to achieve this controllable movement using light. We solved the problem with a tri-layer film that behaves like a steam engine,” Li explained.

The steam from boiling water powered the motion of early trains. It is a similar principle that powers Neusbot, except with light as the power source. The middle layer of the film is porous, holding water as well as iron oxide and copper nanorods. The nanorods convert light energy into heat, vaporizing the water and powering pulsed motion across the water’s surface.

Neusbot’s bottom layer is hydrophobic, so even if an ocean wave overpowered the film, it would float back to the surface. Additionally, the nanomaterials can withstand high salt concentrations without damage.

“I’m confident about their stability in high salt situations,” Li said.

Li and UCR chemistry professor Yadong Yin specialize in making robots from nanomaterials. They controlled Neusbot’s direction by changing the angle of its light source. Powered only by the sun, the robot would simply move forward. With an additional light source, they could control where Neusbot swims and cleans.

The current version of Neusbot only features three layers. The research team wants to test future versions with a fourth layer that could absorb oil, or one that absorbs other chemicals.

“Normally, people send ships to the scene of an oil spill to clean by hand. Neusbot could do this work like a robot vacuum, but on the water’s surface,” Li said.

They would also like to try to control its oscillation mode more precisely and give it the capability for even more complex motion.

“We want to demonstrate these robots can do many things that previous versions have not achieved,” he said.

Bird-inspired dynamic grasping and perching in arboreal environments

by W. R. T. Roderick, M. R. Cutkosky, D. Lentink in Science Robotics

Like snowflakes, no two branches are alike. They can differ in size, shape and texture; some might be wet or moss-covered or bursting with offshoots. And yet birds can land on just about any of them. This ability was of great interest to the labs of Stanford University engineers Mark Cutkosky and David Lentink — now at the University of Groningen in the Netherlands — which have both developed technologies inspired by animal abilities.

SNAG is a bird-inspired robotic leg and end effector that enables aerial robots to take off and land on complex surfaces as well as catch objects in the air. (A) Birds use a stereotyped approach when landing. Upon touchdown, the bird’s legs must absorb the energy of a controlled collision, which, in Tau Theory, refers to when the rate of change in τ (estimated time to collision) is greater than 0.5 (1, 6). Meanwhile, their feet adapt to the surface variability of the perch to grasp it securely and to anchor the body. Last, birds adjust their footing and balance. [Bird snapshots in (1) have been flipped to match robot posture.] (B) SNAG’s bipedal foot and leg system enables aerial robots to take off and land on complex natural surfaces in a controlled fashion. (Snapshots from trial #28; data file S3.) (C) Inspired by peregrine falcons, we demonstrate that SNAG can also grasp a dynamic prey-like object in flight and carry it along (peregrine photo courtesy of George Roderick). (D) To illustrate its application potential in natural environments, we tested SNAG in a forest. The photo shows SNAG posed on a branch (photo edited in Apple’s Photos application).

“It’s not easy to mimic how birds fly and perch,” said William Roderick, PhD ’20, who was a graduate student in both labs. “After millions of years of evolution, they make takeoff and landing look so easy, even among all of the complexity and variability of the tree branches you would find in a forest.”

Years of study on animal-inspired robots in the Cutkosky Lab and on bird-inspired aerial robots in the Lentink Lab enabled the researchers to build their own perching robot. When attached to a quadcopter drone, their “stereotyped nature-inspired aerial grasper,” or SNAG, forms a robot that can fly around, catch and carry objects and perch on various surfaces. Showing the potential versatility of this work, the researchers used it to compare different types of bird toe arrangements and to measure microclimates in a remote Oregon forest.

In the researchers’ previous studies of parrotlets — the second smallest parrot species — the diminutive birds flew back and forth between special perches while being recorded by five high-speed cameras. The perches — representing a variety of sizes and materials, including wood, foam, sandpaper and Teflon — also contained sensors that captured the physical forces associated with the birds’ landings, perching and takeoff.

“What surprised us was that they did the same aerial maneuvers, no matter what surfaces they were landing on,” said Roderick, who is lead author of the paper. “They let the feet handle the variability and complexity of the surface texture itself.” This formulaic behavior seen in every bird landing is why the “S” in SNAG stands for “stereotyped.”

Just like the parrotlets, SNAG approaches every landing in the same way. But, in order to account for the size of the quadcopter, SNAG is based on the legs of a peregrine falcon. In place of bones, it has a 3D-printed structure — which took 20 iterations to perfect — with motors and fishing line standing in for muscles and tendons.

Each leg has its own motor for moving back and forth and another to handle grasping. Inspired by the way tendons route around the ankle in birds, a similar mechanism in the robot’s leg absorbs landing impact energy and passively converts it into grasping force. The result is that the robot has an especially strong and high-speed clutch that can be triggered to close in 20 milliseconds. Once wrapped around a branch, SNAG’s ankles lock and an accelerometer on the right foot reports that the robot has landed and triggers a balancing algorithm to stabilize it.
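The trigger logic is simple enough to sketch. The fragment below is a hypothetical reconstruction of the sense-then-stabilize sequence, not the authors’ firmware; the impact threshold is an assumed value, and in hardware the clutch itself closes passively within about 20 milliseconds.

```python
# Hypothetical sketch of SNAG's landing sequence: an accelerometer
# spike marks touchdown, the grasp engages (passively, ~20 ms in
# hardware), the ankles lock, and the balance controller takes over.
IMPACT_THRESHOLD_G = 3.0   # assumed value, not from the paper

def on_accelerometer_sample(accel_g, lock_ankles, start_balancing):
    """Called per accelerometer sample on the right foot."""
    if accel_g > IMPACT_THRESHOLD_G:   # touchdown detected
        lock_ankles()                  # toes already wrapped by the clutch
        start_balancing()              # stabilize the body about the perch
        return True                    # report: landed
    return False
```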

During COVID-19, Roderick moved equipment, including a 3D printer, from Lentink’s lab at Stanford to rural Oregon where he set up a basement lab for controlled testing. There, he sent SNAG along a rail system that launched the robot at different surfaces, at predefined speeds and orientations, to see how it performed in various scenarios. With SNAG held in place, Roderick also confirmed the robot’s ability to catch objects thrown by hand, including a prey dummy, a corn hole bean bag and a tennis ball. Lastly, Roderick and SNAG ventured into the nearby forest for some trial runs in the real world.

Overall, SNAG performed so well that next steps in development would likely focus on what happens before landing, such as improving the robot’s situational awareness and flight control.

There are countless possible applications for this robot, including search and rescue and wildfire monitoring; it can also be attached to technologies other than drones. SNAG’s proximity to birds also allows for unique insights into avian biology. For example, the researchers ran the robot with two different toe arrangements — anisodactyl, which has three toes in front and one in back, like a peregrine falcon, and zygodactyl, which has two toes in front and two in back, like a parrotlet. They found, to their surprise, that there was very little performance difference between the two.

For Roderick, whose parents are both biologists, one of the most exciting possible applications for SNAG is in environmental research. To that end, the researchers also attached a temperature and humidity sensor to the robot, which Roderick used to record the microclimate in Oregon.

“Part of the underlying motivation of this work was to create tools that we can use to study the natural world,” said Roderick. “If we could have a robot that could act like a bird, that could unlock completely new ways of studying the environment.”

Lentink, who is senior author of the paper, commended Roderick’s persistence in what proved to be a years-long project.

“It was really Will talking with several ecologists at Berkeley six years ago and then writing his NSF Fellowship on perching aerial robots for environmental monitoring that launched this research,” Lentink said. “Will’s research has proven to be timely because there now is a 10 million dollar XPRIZE for this challenge to monitor biodiversity in rainforests.”

The first impressions of small humanoid robots modulate how touch affects impressions of their personality

by Naoki Umeda, Hisashi Ishihara, Takashi Ikeda, Minoru Asada in Advanced Robotics

Researchers have found that the physical texture of robots influenced perceptions of robot personality. Furthermore, first impressions of robots, based on physical appearance alone, could influence the relationship between physical texture and robot personality formation. This work could facilitate the development of robots with perceived personalities that match user expectations.

Impressions of a robot’s personality can be influenced by the way it looks, sounds, and feels. But now, researchers from Japan have found specific causal relationships between impressions of robot personality and body texture.

In a study, researchers from Osaka University and Kanazawa University have revealed that a robot’s physical texture interacts with elements of its appearance in a way that influences impressions of its personality.

Body texture, such as softness or elasticity, is an important consideration in the design of robots meant for interactive functions. In addition, appearance can modulate whether a person anticipates a robot to be friendly, likable, or capable, among other characteristics.

However, the ways in which people perceive the physical texture and the personality of robots have only been examined independently. As a result, the relationship between these two factors is unclear, a gap the researchers aimed to address.

“The mechanisms of impression formation should be quantitatively and systematically investigated,” says lead author of the study Naoki Umeda. “Because various factors contribute to personality impression, we wanted to investigate how specific robot body properties promote or deteriorate specific kinds of impressions.”

To do this, the researchers asked adult participants to view, touch, and evaluate six different inactive robots that were humanoid to varying degrees. The participants were asked to touch the arm of the robots. For each robot, four fake arms had been constructed; these were made of silicone rubber and prepared in such a way that their elasticity varied, thus providing differing touch sensations. The causal relationships between the physical textures of the robot arms and the participant perceptions were then evaluated.

“The results confirmed our expectations,” explains Hisashi Ishihara, senior author. “We found that the impressions of the personalities of the robots varied according to the texture of the robot arms, and that there were specific relationships among certain parameters.”

The researchers also found that the first impressions of the robots, made before the participants touched them, could modulate one of the effects.

“We found that the impression of likability was strengthened when the participant anticipated that the robot would engage in peaceful emotional verbal communication. This suggests that both first impressions and touch sensations are important considerations for social robot designers focused on perceived robot personality,” says Ishihara.

Given that many robots are designed for physical interaction with humans — for instance those used in therapy or clinical settings — the texture of the robot body is an important consideration. A thorough understanding of the physical factors that influence user impressions of robots will enable researchers to design robots that optimize user comfort. This is especially important for robots employed for advanced communication, because user comfort will influence the quality of communication, and thus the utility of the robotic system.

COVID surveillance robot: Monitoring social distancing constraints in indoor scenarios

by Adarsh Jagan Sathyamoorthy, Utsav Patel, Moumita Paul, Yash Savle, Dinesh Manocha in PLOS ONE

A new strategy to reduce the spread of COVID-19 employs a mobile robot that detects people in crowds who are not observing social-distancing rules, navigates to them, and encourages them to move apart.

Previous research has shown that staying at least two meters apart from others can reduce the spread of COVID-19. Technology-based methods — such as strategies using WiFi and Bluetooth — hold promise to help detect and discourage lapses in social distancing. However, many such approaches require participation from individuals or existing infrastructure, so robots have emerged as a potential tool for addressing social distancing in crowds.

Now, Sathyamoorthy and colleagues have developed a novel way to use an autonomous mobile robot for this purpose. The robot can detect breaches and navigate to them using its own RGB-D (Red Green Blue-Depth) camera and 2-D LiDAR (Light Detection and Ranging) sensor, and can tap into an existing CCTV system, if available. Once it reaches the breach, the robot encourages people to move apart via text that appears on a mounted display.

The robot uses a novel system to sort people who have breached social distancing rules into different groups, prioritize them according to whether they are standing still or moving, and then navigate to them. This system employs a machine-learning method known as Deep Reinforcement Learning and Frozone, an algorithm previously developed by several of the same researchers to help robots navigate crowds.
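For a concrete sense of the detection step, here is a minimal sketch that flags pairs of people standing closer than two meters; positions would come from the robot’s RGB-D camera, LiDAR, or CCTV, and the coordinates below are hypothetical. The paper’s full pipeline additionally groups and prioritizes breaches before navigating to them.

```python
import itertools
import math

SOCIAL_DISTANCE_M = 2.0

def find_breaches(positions):
    """Return index pairs of detected people closer than the threshold."""
    return [
        (i, j)
        for (i, pi), (j, pj) in itertools.combinations(enumerate(positions), 2)
        if math.dist(pi, pj) < SOCIAL_DISTANCE_M
    ]

people = [(0.0, 0.0), (1.2, 0.5), (6.0, 6.0)]  # meters, robot frame (hypothetical)
print(find_breaches(people))                   # -> [(0, 1)]
```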

The researchers tested their method by having volunteers act out social-distancing breach scenarios while standing still, walking, or moving erratically. Their robot was able to detect and address most of the breaches that occurred, and CCTV enhanced its performance.

The robot also uses a thermal camera that can detect people with potential fevers, aiding contact-tracing efforts, while also incorporating measures to ensure privacy protection and de-identification.

Further research is needed to validate and refine this method, such as by exploring how the presence of robots impacts people’s behavior in crowds.

The authors add: “A lot of healthcare workers and security personnel had to put their health at risk to serve the public during the COVID-19 pandemic. Our work’s core objective is to provide them with tools to safely and efficiently serve their communities.”

Development of a compact sewerage robot with multi-DOF cutting tool

by Thaelasutt Tugeumwolachot et al. in Artificial Life and Robotics

Scientists from the Division of Mechanical Science and Engineering at Kanazawa University developed a prototype pipe maintenance robot that can unclog and repair pipes with a wide range of diameters. Using a cutting tool with multiple degrees of freedom, the machine is capable of manipulating and dissecting objects for removal. This work may be a significant step forward for the field of sewerage maintenance robots.

Various sewer pipes that are essential to the services of buildings require regular inspection, repair, and maintenance. Current robots that move inside pipes are primarily designed only for visual surveying or inspection. Some robots have been developed for maintenance, but they could not execute complicated tasks. In-pipe robots that can also clear blockages or perform complex maintenance tasks are highly desirable, especially for pipes that are too narrow for humans to traverse. Now, a team of researchers at Kanazawa University has developed and tested a prototype with these capabilities.

“Our robot can help civic and industrial workers by making their job much safer. It can operate in small pipes that humans either cannot access or are dangerous,” explains first author Thaelasutt Tugeumwolachot.

One of the main challenges in designing a robot of this kind is achieving a snug fit inside pipes of different sizes. Previous models could expand or contract their width by only about 60 percent. Here, the researchers used six foldable “crawler” arms around the body of the robot. This adjustable locomotion mechanism allowed it to work in pipes with diameters between 15 and 31 cm, a range of over 100 percent. Another design challenge was fitting a complex, rugged arm mechanism into a small space. The robot carries a compact arm that performs complicated cutting movements, driven via a gear train by several motors inside the robot body.
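The quoted range follows directly from the two diameters; a one-line check, using the values from the text:

```python
d_min, d_max = 15, 31                 # cm, pipe diameters from the text
expansion = (d_max - d_min) / d_min   # growth relative to the folded size
print(f"{expansion:.0%}")             # -> 107%, i.e. over 100 percent
```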

Municipal sewer systems today are often blocked by large, hard deposits colloquially called “fatbergs.” These are amalgams of materials such as flushable wipes, oil, and fat that have undergone a chemical process of saponification that makes them very difficult to remove. The team tested the robot’s ability to deal with such challenges by simulating blockages caused by fatbergs or other common blockages, such as tree branches.

“We found that a cutting tool with a ball-shaped burr was capable of accomplishing all of the pushing, cutting, drilling, and grinding tasks,” team leader Hiroaki Seki says.

The team expects that this research will lead to an increase in the use of robots for many difficult and dangerous jobs in narrow pipes.

MISC

  • Engineered Arts, a robot maker based in the U.K., is showing off its latest creation ahead of CES 2022. Called Ameca, the robot is able to display what appear to be the most human-like facial expressions of any robot to date.
  • Jet-Powered Robot Prepares for Liftoff.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
