RT/ New programmable smart fabric responds to temperature and electricity

Published in Paradigm · 29 min read · Apr 29, 2023

Robotics biweekly vol. 73, 14th April — 29th April

TL;DR

  • A new smart material is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
  • Mechanically responsive molecular crystals are extremely useful in soft robotics, which requires a versatile actuation technology. Crystals driven by the photothermal effect are particularly promising for achieving high-speed actuation. However, the response (bending) observed in these crystals is usually small. Now, scientists address this issue by inducing large resonated natural vibrations in anisole crystals with UV light illumination at the natural vibration frequency of the crystal.
  • Like the formation of complex living organisms, molecular robots derive their form and functionality from assembled molecules stored in a single unit, i.e., a body. Yet manufacturing this body at the microscopic level is an engineering nightmare. Now, a team has created a simple workaround.
  • The new smart sensor uses embedded information to detect motion in a single video frame.
  • To make human-robot interactions safer and more fruitful, robots should be capable of sensing their environment. In a recent study, researchers developed a novel robotic link with tactile and proximity sensing capabilities. Additionally, they created a simulation and learning framework that can be employed to train the robotic link to sense its environment. Their findings will pave the way to a future where humans and robots can operate harmoniously in close proximity.
  • Researchers have designed a low-cost, energy-efficient robotic hand that can grasp a range of objects — and not drop them — using just the movement of its wrist and the feeling in its ‘skin’.
  • Brain scans taken during table tennis reveal differences in how we respond to human versus machine opponents.
  • A new study asked kids how smart and sensitive they thought the virtual assistant Alexa was compared to a robotic vacuum. Four- to eleven-year-olds rated Alexa as more intelligent than the Roomba but felt that neither deserved to be yelled at or otherwise harmed.
  • Roboticists have developed a jellyfish-inspired underwater robot with which they hope one day to collect waste from the bottom of the ocean. The almost noise-free prototype can trap objects underneath its body without physical contact, thereby enabling safe interactions in delicate environments such as coral reefs. Jellyfish-Bot could become an important tool for environmental remediation.
  • Researchers develop an early warning system that combines acoustic technology with AI to immediately classify earthquakes and determine potential tsunami risk. They propose using underwater microphones, called hydrophones, to measure the acoustic radiation produced by the earthquake, which carries information about the tectonic event and travels significantly faster than tsunami waves. The computational model triangulates the source of the earthquake and AI algorithms classify its slip type and magnitude. It then calculates important properties like effective length and width, uplift speed, and duration, which dictate the size of the tsunami.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
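
As a rough sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python; the exact-compounding assumption and the choice of 2018 as the base year are ours, not Statista's.

```python
# Minimal sketch (not from the article): check the stated ~26% CAGR against the
# ~210 billion USD figure projected for 2025, assuming 2018 as the base year.

CAGR = 0.26          # compound annual growth rate quoted above
target_2025 = 210.0  # billion USD ("just under" this value)
years = 2025 - 2018  # span covered by the chart

# Implied 2018 market size if growth were exactly 26% per year
implied_2018 = target_2025 / (1 + CAGR) ** years
print(f"Implied 2018 market size: ~{implied_2018:.1f} billion USD")

# Year-by-year projection from that implied base
for year in range(2018, 2026):
    value = implied_2018 * (1 + CAGR) ** (year - 2018)
    print(year, f"{value:.1f}")
```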

Latest News & Research

Multi‐Stimuli Dually‐Responsive Intelligent Woven Structures with Local Programmability for Biomimetic Applications

by Runxin Xu, Guanzheng Wu, Mengmeng Jiang, Shaojie Cao, Mahyar Panahi‐Sarmad, Milad Kamkar, Xueliang Xiao in Small

A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.

The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision. Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.

a) Fabric preparation process: uniform PET/TMC composite fibers were formed using a micro twin-screw extruder, then woven as the weft with PET/SSF yarns as the warp according to a plain-weave looming draft; the fabric was cut from the sample loom and conductive adhesive tapes were applied to both sides of the warp yarns. b) Photograph of the fabric. c) The color-changing principle of TMC and the shape-memory principle of PET. d) The shape-memory and reversible color-changing dual response of the fabric under thermal and electrical stimulation, respectively.

“As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”

The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.

Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.

a) Schematic of the fabric, b) digital photo of the fabric. c) The figure on the left represents the porosity of the fabric (the white parts are the holes), and the figure on the right represents the twisting (above) and stretching (below) state of the fabric. d,e) Electrical resistance of a conductive tape and the fabric, and f) dynamic resistance of the fabric during the shape-memory recovery process.

The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.

“The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.

“Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”

The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

Photothermally induced natural vibration for versatile and high-speed actuation of crystals

by Yuki Hagiwara, Shodai Hasebe, Hiroki Fujisawa, Junko Morikawa, Toru Asahi, Hideko Koshima in Nature Communications

Mechanically responsive molecular crystals are extremely useful in soft robotics, which requires a versatile actuation technology. Crystals driven by the photothermal effect are particularly promising for achieving high-speed actuation. However, the response (bending) observed in these crystals is usually small. Now, scientists from Japan address this issue by inducing large resonated natural vibrations in anisole crystals with UV light illumination at the natural vibration frequency of the crystal.

Every material possesses a unique natural vibration frequency; when an external periodic force is applied at a frequency close to this natural frequency, the vibrations are greatly amplified. In the parlance of physics, this phenomenon is known as “resonance.” Resonance is ubiquitous in our daily life and, depending on the context, can be desirable or undesirable. For instance, musical instruments like the guitar rely on resonance for sound amplification. On the other hand, buildings and bridges are more likely to collapse in an earthquake if the ground vibration frequency matches their natural frequency.

Interestingly, natural vibration has not received much attention in material actuation, which relies on the action of mechanically responsive crystals. Versatile actuation technologies are highly desirable in the field of soft robotics. Although crystal actuation based on processes like photoisomerisation and phase transitions has been widely studied, these processes lack versatility since they require specific crystals to work. One way to improve versatility is to employ photothermal crystals, which bend due to light-induced heating. While promising for high-speed actuation, the bending angle is usually small (<0.5°), making the actuation inefficient.

Bending behaviour of 1β crystal III by the photothermal effect and the natural vibration (390 Hz) by irradiation with UV light (375 nm, 1456 mW cm⁻²).

Now, a team of scientists from Waseda University and Tokyo Institute of Technology in Japan has managed to overcome this drawback with nothing more than the age-old phenomenon of resonated natural vibration. The team, led by Dr. Hideko Koshima from Waseda University, used 2,4-dinitroanisole β-phase crystals (1β) to demonstrate large-angle, photothermally resonated, high-speed bending induced by pulsed UV irradiation.

“Initially, the goal of this research was to create crystals that bend largely due to the photothermal effect. Therefore, we chose the 2,4-dinitroanisole (1) β-phase crystal (1β), which has a large thermal expansion coefficient,” explains Koshima, speaking of the team’s motivation behind the study. “We serendipitously discovered fast and small natural vibration induced by the photothermal effect. Furthermore, we achieved high-speed and large bending by photothermally resonating the natural vibration.”

In their work, the team first cooled a methanol solution of commercially available anisole 1 to obtain hexagonal, rod-shaped single crystals. To irradiate them with UV light, they used a pulsed UV laser with a wavelength of 375 nm and observed the bending response of the crystals using a digital high-speed microscope. They found that, under UV irradiation, the rod-shaped crystals showed a fast natural vibration at 390 Hz with a large photothermal bending of nearly 1°, larger than the value of 0.2° previously reported for other crystals. Further, the bending angle due to the natural vibration increased to nearly 4° when the crystals were irradiated with pulsed UV light at 390 Hz (the crystal’s natural frequency). In addition to this large bending, the team observed a high response frequency of 700 Hz along with the highest energy conversion efficiency recorded to date.
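
To illustrate why driving at the natural frequency amplifies the response so strongly, here is a minimal sketch of a driven, damped harmonic oscillator; the damping ratio and static bending value are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal sketch (toy model, not the paper's analysis): steady-state response of
# a damped, driven harmonic oscillator. Driving the crystal with pulsed light at
# its natural frequency (390 Hz here) amplifies the bending far beyond the
# quasi-static photothermal response, which is the resonance effect described above.

f0 = 390.0          # natural frequency of the crystal, Hz (from the paper)
zeta = 0.05         # damping ratio (assumed, illustrative value)
static_bend = 1.0   # quasi-static photothermal bending, degrees (order of magnitude)

def steady_state_amplitude(f_drive):
    """Amplitude of a driven damped oscillator, normalized to the static response."""
    r = f_drive / f0
    gain = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
    return static_bend * gain

for f in (100.0, 300.0, 390.0, 500.0):
    print(f"drive {f:5.0f} Hz -> bending ~{steady_state_amplitude(f):.1f} deg")

# At 390 Hz the gain is 1/(2*zeta) = 10x the static bend in this toy model;
# the measured enhancement (about 1 deg -> 4 deg) implies a different effective damping.
```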

These findings were further confirmed through simulations performed by the team. To their excitement, the simulation results showed excellent agreement with experimental data. “Our findings show that any light-absorbing crystal can exhibit high-speed, versatile actuation through resonated natural vibrations. This can open doors to the applications of photothermal crystals, leading eventually to real-life soft robots with high-speed actuation capability and perhaps a society with humans and robots living in harmony,” concludes Koshima.

Scalable Synthesis of Planar Macroscopic Lipid-Based Multi-Compartment Structures

by Richard J. Archer, Shogo Hamada, Ryo Shimizu, Shin-Ichiro M. Nomura in Langmuir

The typical image of a robot is one composed of motors and circuits, encased in metal. Yet the field of molecular robotics, which is being spearheaded in Japan, is beginning to change that.

Much like how complex living organisms are formed, molecular robots derive form and functionality from assembled molecules. Such robots could have important applications, such as being used to treat and diagnose diseases in vivo. The first challenge in building a molecular robot is the same as the most basic need of any organism: the body, which holds everything together. But manufacturing complex structures, especially at the microscopic level, has proven to be an engineering nightmare, and many limitations on what is possible currently exist.

To address this problem, a research team at Tohoku University has developed a simple method for creating molecular robots from artificial, multicellular-like bodies by using molecules which can organize themselves into the desired shape. The team, including Associate Professor Shin-ichiro Nomura and postdoctoral researcher Richard Archer from the Department of Robotics at the Graduate School of Engineering, recently reported their breakthrough.

“Our work demonstrated a simple, self-assembly technique which utilizes phospholipids and synthetic surfactants coated onto a hydrophobic silicone sponge,” said Archer.

When Nomura and his colleagues introduced water into the lipid coated sponge, the hydrophilic and hydrophobic forces enabled the lipids and surfactants to assemble themselves, thereby allowing water to soak in. The sponge was then placed into oil, spontaneously forming micron sized, stabilized aqueous droplets as the water was expelled from the solid support. When pipetted on the surface of water, these droplets quickly assembled into larger planar macroscopic structures, like bricks coming together to form a wall.

“Our developed technique can easily build centimeter size structures from the assembly of micron sized compartments and is capable of being done with more than one droplet type,” adds Archer. “By using different sponges with water containing different solutes, and forming different droplet types, the droplets can combine to form heterogeneous structures. This modular approach to assembly unleashes near endless possibilities.”

The team could also turn these bodies into controllable devices with induced motion. To do so, they introduced magnetic nanoparticles into the hydrophobic walls of the multi-compartment structure. Archer says this multi-compartment approach to robot design will allow flexible modular designs with multiple functionalities and could redefine what we imagine robots to be. “Future work here will move us closer to a new generation of robots which are assembled by molecules rather than forged in steel and use functional chemicals rather than silicon chips and motors.”

Dynamic machine vision with retinomorphic photomemristor-reservoir computing

by Hongwei Tan, Sebastiaan van Dijken in Nature Communications

A new bio-inspired sensor can recognise moving objects in a single frame from a video and successfully predict where they will move to. This smart sensor will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.

Current motion detection systems need many components and complex algorithms doing frame-by-frame analyses, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.

At the core of their technology is an array of photomemristors, electrical devices that produce electric current in response to light. The current doesn’t immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively ‘remember’ whether they’ve been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn’t just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.
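
A toy model helps illustrate this fading-memory idea: if each pixel retains a decaying fraction of its past exposure, the state read out at the final frame still encodes earlier frames. The retention factor and on/off sequences below are purely illustrative, not the device physics reported in the paper.

```python
import numpy as np

# Minimal sketch (toy model): each photomemristor pixel integrates incoming light
# with an exponentially decaying "memory", so the state read out at the final
# frame is a weighted sum of all previous frames (the fading memory that
# reservoir computing needs).

decay = 0.7  # per-frame retention factor (assumed, illustrative value)

def pixel_state(light_sequence, decay=decay):
    """Return the pixel state after the last frame of a light sequence."""
    state = 0.0
    for intensity in light_sequence:
        state = decay * state + intensity   # decaying memory plus new exposure
    return state

# Two sequences that end identically (both finish with a bright frame, like the
# shared final letter 'E') but differ earlier remain distinguishable:
seq_a = [1, 0, 0, 1]   # one word, as on/off exposures
seq_b = [0, 1, 1, 1]   # another word ending the same way
print(pixel_state(seq_a), pixel_state(seq_b))  # different states despite the same last frame
```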

Retinomorphic photomemristor-reservoir computing (RP-RC) system.

‘The unique property of our technology is its ability to integrate a series of optical images in one frame,’ explains Hongwei Tan, the research fellow who led the study. ‘The information of each image is embedded in the following images as hidden information. In other words, the final frame in a video also has information about all the previous frames. That lets us detect motion earlier in the video by analysing only the final frame with a simple artificial neural network. The result is a compact and efficient sensing unit.’

To demonstrate the technology, the researchers used videos showing the letters of a word one at a time. Because all the words ended with the letter ‘E’, the final frame of all the videos looked similar. Conventional vision sensors couldn’t tell whether the ‘E’ on the screen had appeared after the other letters in ‘APPLE’ or ‘GRAPE’. But the photomemristor array could use hidden information in the final frame to infer which letters had preceded it and predict what the word was with nearly 100% accuracy.

In another test, the team showed the sensor videos of a simulated person moving at three different speeds. Not only was the system able to recognize motion by analysing a single frame, but it also correctly predicted the next frames.

Accurately detecting motion and predicting where an object will be are vital for self-driving technology and intelligent transport. Autonomous vehicles need accurate predictions of how cars, bikes, pedestrians, and other objects will move in order to guide their decisions. By adding a machine learning system to the photomemristor array, the researchers showed that their integrated system can predict future motion based on in-sensor processing of an all-informative frame.

‘Motion recognition and prediction by our compact in-sensor memory and computing solution provides new opportunities in autonomous robotics and human-machine interactions,’ says Professor Sebastiaan van Dijken. ‘The in-frame information that we attain in our system using photomemristors avoids redundant data flows, enabling energy-efficient decision-making in real time.’

Simulation, Learning, and Application of Vision-Based Tactile Sensing at Large Scale

by Quan Khanh Luu, Nhan Huu Nguyen, Van Anh Ho in IEEE Transactions on Robotics

In recent years, robots have become incredibly sophisticated machines capable of performing or assisting humans in all tasks. The days of robots functioning behind a security barrier are long gone, and today we may anticipate robots working alongside people in close contact. While working alongside robots may be very practical in some situations, they should be designed to be safe and pleasant for humans to interact with. For instance, in human-robot interactions (HRIs), robots should be able to react correctly to potential collisions with humans and also respond safely and predictably to intentional physical contact.

One of the best approaches to improve HRIs is to grant robots the ability to sense their environment in multiple ways, such as by touch, sound, and sight. Of these three, tactile sensation is particularly important for robots that are likely to come into physical contact with humans during operation. Although small-scale tactile sensors have seen tremendous progress over the past decade, the development of large-scale tactile sensors has been plagued with challenges. Moreover, most researchers have focused on systems that respond to physical touch and ignore touchless stimuli, such as when an object is in close proximity. To address these issues, a research team led by Associate Professor Van Anh Ho from Japan Advanced Institute of Science and Technology (JAIST) recently developed ProTac — an innovative soft robotic link with tactile and proximity sensing capabilities. As explained in their paper, the team not only engineered ProTac itself but also pioneered a new simulation and learning framework to effectively prepare the robotic link for use.

But what does a robotic link look like, and what is ProTac good for? In general, robotic links are rigid structural components of a robot that connect two or more joints. For example, robotic links can be seen as various ‘segments’ in a robotic limb. In this study, ProTac is designed as a soft, cylindrical segment for a robotic arm. What makes it remarkable is how the researchers incorporated the tactile and proximity sensing capabilities in a very convenient and space-efficient way.

A simulation and learning framework for tactile perception in robots.

ProTac has an outer ‘soft magic skin’ that can be slightly deformed by touch without damage. The inside of the skin is patterned with arrays of reflective markers, and fisheye cameras installed at both ends of the robotic link look towards these markers. The idea is that, upon physical contact and deformation of the skin, changes in the relative positions of the markers are captured by the cameras and processed to calculate the precise location and intensity of the contact. On top of this, the outer skin is made of a functional polymer that can be made entirely transparent by applying an external voltage. This allows the fisheye cameras to image the immediate surroundings of ProTac, providing footage for proximity calculations.
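
As a rough illustration of the sensing principle (not the actual ProTac pipeline), a contact location and intensity can be estimated from how far each tracked marker moves; the marker layout and weighting scheme below are hypothetical.

```python
import numpy as np

# Minimal sketch (hypothetical): given marker positions tracked in a rest frame
# and a deformed frame, estimate where the skin was touched (displacement-weighted
# centroid) and how hard (total marker displacement as a crude intensity proxy).

def estimate_contact(rest_xy, deformed_xy):
    rest_xy = np.asarray(rest_xy, dtype=float)
    deformed_xy = np.asarray(deformed_xy, dtype=float)
    disp = np.linalg.norm(deformed_xy - rest_xy, axis=1)      # per-marker displacement
    if disp.sum() < 1e-9:
        return None, 0.0                                       # no contact detected
    centroid = (rest_xy * disp[:, None]).sum(axis=0) / disp.sum()
    return centroid, disp.sum()

# Toy example: four markers on a patch of skin, one corner pressed inward.
rest = [(0, 0), (1, 0), (0, 1), (1, 1)]
deformed = [(0, 0), (1, 0), (0, 1), (0.9, 0.9)]               # marker near (1,1) moved
location, intensity = estimate_contact(rest, deformed)
print("contact near", location, "intensity", intensity)
```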

To more easily train ProTac to make proximity and tactile measurements, the team also developed SimTacLS, an open-source simulation and learning framework based on the SOFA and Gazebo physics engines. This machine learning framework is trained with simulated and experimental data, accounting for the physics of soft contact and the realistic rendering of sensor images.

“SimTacLS enabled us to effectively implement tactile perception in robotic links without the high costs of complex experimental setups,” remarks Prof. Ho, “Furthermore, with this framework, users can readily validate sensor designs and learning-based sensing performance before proceeding to actual fabrication and implementation.”

Overall, this work will help pave the way to a world where humans can harmoniously coexist and work alongside robots. Excited by the team’s contribution to this dream, Prof. Ho comments: “We expect the proposed sensing device and framework to bring in ultimate solutions for the design of robots with softness, whole-body and multimodal sensing, and safety control strategies.” It is worth noting that the proposed techniques can be extended to other types of robotic systems beyond the robotic manipulator demonstrated in the study, such as mobile and flying robots. Moreover, ProTac or similar robotic links could be used to enable robotic manipulation in cluttered environments or when operating in close vicinity with humans.

Predictive Learning of Error Recovery with a Sensorized Passivity‐Based Soft Anthropomorphic Hand

by Kieran Gilday, Thomas George-Thuruthel, Fumiya Iida in Advanced Intelligent Systems

Researchers have designed a low-cost, energy-efficient robotic hand that can grasp a range of objects — and not drop them — using just the movement of its wrist and the feeling in its ‘skin’.

Grasping objects of different sizes, shapes and textures is a problem that is easy for a human, but challenging for a robot. Researchers from the University of Cambridge designed a soft, 3D printed robotic hand that cannot independently move its fingers but can still carry out a range of complex movements. The robot hand was trained to grasp different objects and was able to predict whether it would drop them by using the information provided from sensors placed on its ‘skin’.

This type of passive movement makes the robot far easier to control and far more energy-efficient than robots with fully motorised fingers. The researchers say their adaptable design could be used in the development of low-cost robotics that are capable of more natural movement and can learn to grasp a wide range of objects. In the natural world, movement results from the interplay between the brain and the body: this enables people and animals to move in complex ways without expending unnecessary amounts of energy. Over the past several years, soft components have begun to be integrated into robotics design thanks to advances in 3D printing techniques, which have allowed researchers to add complexity to simple, energy-efficient systems.

Error detection and recovery from passive perception. Demonstrated with a wrist-driven soft hand — which achieves grasping through sequential hand–environment interactions rather than any internal actuation — prediction of future errors in an open loop grasp can be learned using exteroceptive and proprioceptive information from a barometric sensing skin.

The human hand is highly complex, and recreating all of its dexterity and adaptability in a robot is a massive research challenge. Most of today’s advanced robots are not capable of manipulation tasks that small children can perform with ease. For example, humans instinctively know how much force to use when picking up an egg, but for a robot this is a challenge: too much force, and the egg could shatter; too little, and the robot could drop it. In addition, a fully actuated robot hand, with motors for each joint in each finger, requires a significant amount of energy.

In Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory in Cambridge’s Department of Engineering, researchers have been developing potential solutions to both problems: a robot hand that can grasp a variety of objects with the correct amount of pressure while using a minimal amount of energy.

“In earlier experiments, our lab has shown that it’s possible to get a significant range of motion in a robot hand just by moving the wrist,” said co-author Dr Thomas George-Thuruthel, who is now based at University College London (UCL) East. “We wanted to see whether a robot hand based on passive movement could not only grasp objects, but would be able to predict whether it was going to drop the objects or not, and adapt accordingly.”

The researchers used a 3D-printed anthropomorphic hand implanted with tactile sensors, so that the hand could sense what it was touching. The hand was only capable of passive, wrist-based movement. The team carried out more than 1200 tests with the robot hand, observing its ability to grasp small objects without dropping them. The robot was initially trained using small 3D printed plastic balls, and grasped them using a pre-defined action obtained through human demonstrations.

“This kind of hand has a bit of springiness to it: it can pick things up by itself without any actuation of the fingers,” said first author Dr Kieran Gilday, who is now based at EPFL in Lausanne, Switzerland. “The tactile sensors give the robot a sense of how well the grip is going, so it knows when it’s starting to slip. This helps it to predict when things will fail.”
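
A minimal sketch of the underlying idea, using synthetic data rather than the Cambridge team’s barometric recordings: a falling pressure trace is treated as the signature of a slipping object, and a simple logistic model learns to flag grasps that are about to fail.

```python
import numpy as np

# Minimal sketch (synthetic data, not the published pipeline): predict an
# imminent grasp failure from a short window of tactile (barometric) pressure
# readings. The single feature is how fast the mean skin pressure is falling.

rng = np.random.default_rng(0)

def make_trial(slipping):
    """Simulate 20 pressure samples; slipping grasps show a steady pressure drop."""
    base = rng.uniform(0.6, 1.0)
    drift = -0.02 if slipping else 0.0
    return base + drift * np.arange(20) + rng.normal(0, 0.01, 20)

def feature(trace):
    """Slope of the pressure trace (negative when the grip is loosening)."""
    return np.polyfit(np.arange(len(trace)), trace, 1)[0]

# Small labelled dataset: 100 slipping and 100 stable grasps.
X = np.array([feature(make_trial(s)) for s in [True] * 100 + [False] * 100])
y = np.array([1] * 100 + [0] * 100)
mu, sigma = X.mean(), X.std()
Xs = (X - mu) / sigma                      # standardize the feature

# 1-D logistic regression fitted by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * Xs + b)))
    w -= 0.5 * np.mean((p - y) * Xs)
    b -= 0.5 * np.mean(p - y)

test = (feature(make_trial(slipping=True)) - mu) / sigma
p_fail = 1 / (1 + np.exp(-(w * test + b)))
print(f"predicted probability of dropping the object: {p_fail:.2f}")
```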

The robot used trial and error to learn what kind of grip would be successful. After finishing the training with the balls, it then attempted to grasp different objects including a peach, a computer mouse and a roll of bubble wrap. In these tests, the hand was able to successfully grasp 11 of 14 objects.

“The sensors, which are sort of like the robot’s skin, measure the pressure being applied to the object,” said George-Thuruthel. “We can’t say exactly what information the robot is getting, but it can theoretically estimate where the object has been grasped and with how much force.”

“The robot learns that a combination of a particular motion and a particular set of sensor data will lead to failure, which makes it a customisable solution,” said Gilday. “The hand is very simple, but it can pick up a lot of objects with the same strategy.”

“The big advantage of this design is the range of motion we can get without using any actuators,” said Iida. “We want to simplify the hand as much as possible. We can get lots of good information and a high degree of control without any actuators, so that when we do add them, we’ll get more complex behaviour in a more efficient package.”

A fully actuated robotic hand, in addition to the amount of energy it requires, is also a complex control problem. The passive design of the Cambridge-designed hand, using a small number of sensors, is easier to control, provides a wide range of motion, and streamlines the learning process. In future, the system could be expanded in several ways, such as by adding computer vision capabilities, or teaching the robot to exploit its environment, which would enable it to grasp a wider range of objects.

Parieto-Occipital Electrocortical Dynamics during Real-World Table Tennis

by Amanda Studnicki, Daniel P. Ferris in eneuro

Captain of her high school tennis team and a four-year veteran of varsity tennis in college, Amanda Studnicki had been training for this moment for years. All she had to do now was think small. Like ping pong small.

For weeks, Studnicki, a graduate student at the University of Florida, served and rallied against dozens of players on a table tennis court. Her opponents sported a science-fiction visage, a cap of electrodes streaming off their heads into a backpack as they played against either Studnicki or a ball-serving machine. That cyborg look was vital to Studnicki’s goal: to understand how our brains react to the intense demands of a high-speed sport like table tennis — and what difference a machine opponent makes.

Studnicki and her advisor, Daniel Ferris, discovered that the brains of table tennis players react very differently to human and machine opponents. Faced with the inscrutability of a ball machine, players’ brains scrambled themselves in anticipation of the next serve; given the obvious cues that a human opponent was about to serve, their neurons hummed in unison, seemingly confident of their next move. The findings have implications for sports training, suggesting that human opponents provide a realism that can’t be replaced with machine helpers. And as robots grow more common and sophisticated, understanding our brains’ response could help make our artificial companions more naturalistic.

“Robots are getting more ubiquitous. You have companies like Boston Dynamics that are building robots that can interact with humans and other companies that are building socially assistive robots that help the elderly,” said Ferris, a professor of biomedical engineering at UF. “Humans interacting with robots is going to be different than when they interact with other humans. Our long term goal is to try to understand how the brain reacts to these differences.”

Ferris’s lab has long studied the brain’s response to visual cues and motor tasks, like walking and running. He was looking to upgrade to studying complex, fast-paced action when Studnicki, with her tennis background, joined the research group, so the lab decided tennis was the perfect sport for addressing these questions. But the oversized movements — especially high overhand serves — proved an obstacle to the burgeoning tech.

“So we literally scaled things down to table tennis and asked all the same questions we had for tennis before,” Ferris said. The researchers still had to compensate for the smaller movements of table tennis. So Ferris and Studnicki doubled the 120 electrodes in a typical brain-scanning cap, each bonus electrode providing a control for the rapid head movements during a table tennis match.

With all these electrodes scanning the brain activity of players, Studnicki and Ferris were able to tune into the brain region that turns sensory information into movement. This area is known as the parieto-occipital cortex.

“It takes all your senses — visual, vestibular, auditory — and it gives information on creating your motor plan. It’s been studied a lot for simple tasks, like reaching and grasping, but all of them are stationary,” Studnicki said. “We wanted to understand how it worked for complex movements like tracking a ball in space and intercepting it, and table tennis was perfect for this.”

The researchers analyzed dozens of hours of play against both Studnicki and the ball machine. When playing against another human, players’ neurons worked in unison, like they were all speaking the same language. In contrast, when players faced a ball-serving machine, the neurons in their brains were not aligned with one another. In the neuroscience world, this lack of alignment is known as desynchronization.

“If we have 100,000 people in a football stadium and they’re all cheering together, that’s like synchronization in the brain, which is a sign the brain is relaxed,” Ferris said. “If we have those same 100,000 people but they’re all talking to their friends, they’re busy but they’re not in sync. In a lot of cases, that desynchronization is an indication that the brain is doing a lot of calculations as opposed to sitting and idling.”
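
In EEG work this is usually quantified as event-related desynchronization: the drop in oscillatory band power during a task relative to a resting baseline. The sketch below uses synthetic signals and an assumed alpha band; it is not the study’s analysis pipeline.

```python
import numpy as np

# Minimal sketch (synthetic EEG): event-related desynchronization measured as
# the percentage change in band power during a task window vs. a resting baseline.

fs = 250                      # sample rate, Hz (illustrative)
t = np.arange(0, 2, 1 / fs)   # 2 s of signal
rng = np.random.default_rng(1)

def band_power(x, fs, lo=8, hi=13):
    """Average power in a frequency band (here the alpha band, 8-13 Hz)."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

# Baseline: strong 10 Hz rhythm (neurons "in sync"); task: rhythm suppressed.
baseline = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
task     = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

erd = 100 * (band_power(task, fs) - band_power(baseline, fs)) / band_power(baseline, fs)
print(f"event-related desynchronization: {erd:.0f}% (negative = power drop)")
```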

The team suspects that the players’ brains were so active while waiting for robotic serves because the machine provides no cues about what it is going to do next. What’s clear is that our brains process these two experiences very differently, which suggests that training with a machine might not offer the same experience as playing against a real opponent.

“I still see a lot of value in practicing with a machine,” Studnicki said. “But I think machines are going to evolve in the next 10 or 20 years, and we could see more naturalistic behaviors for players to practice against.”

The minds of machines: Children’s beliefs about the experiences, thoughts, and morals of familiar interactive technologies

by Teresa Flanagan, Gavin Wong, Tamar Kushnir in Developmental Psychology

Most kids know it’s wrong to yell or hit someone, even if they don’t always keep their hands to themselves. But what about if that someone’s name is Alexa? A new study from Duke developmental psychologists asked kids just that, as well as how smart and sensitive they thought the smart speaker Alexa was compared to its floor-dwelling cousin Roomba, an autonomous vacuum.

Four- to eleven-year-olds judged Alexa to have more human-like thoughts and emotions than Roomba. But despite the perceived difference in intelligence, kids felt that neither the Roomba nor the Alexa deserved to be yelled at or harmed. That feeling dwindled as kids advanced towards adolescence, however. The research was inspired in part by lead author Teresa Flanagan seeing how Hollywood depicts human-robot interactions in shows like HBO’s “Westworld.”

“In Westworld and the movie Ex Machina, we see how adults might interact with robots in these very cruel and horrible ways,” said Flanagan, a visiting scholar in the department of psychology & neuroscience at Duke. “But how would kids interact with them?”

To find out, Flanagan recruited 127 children aged four to eleven who were visiting a science museum with their families. The kids watched a 20-second clip of each technology, and then were asked a few questions about each device. Working under the guidance of Tamar Kushnir, Ph.D., her graduate advisor and a Duke Institute for Brain Sciences faculty member, Flanagan analyzed the survey data and found some mostly reassuring results.

Overall, kids decided that both the Alexa and Roomba probably aren’t ticklish and wouldn’t feel pain if they got pinched, suggesting they can’t feel physical sensations like people do. However, they gave Alexa, but not the Roomba, high marks for mental and emotional capabilities, like being able to think or getting upset after someone is mean to it.

“Even without a body, young children think the Alexa has emotions and a mind,” Flanagan said. “And it’s not that they think every technology has emotions and minds — they don’t think the Roomba does — so it’s something special about the Alexa’s ability to communicate verbally.”

Regardless of the different perceived abilities of the two technologies, children across all ages agreed it was wrong to hit or yell at the machines.

“Kids don’t seem to think a Roomba has much mental abilities like thinking or feeling,” Flanagan said. “But kids still think we should treat it well. We shouldn’t hit or yell at it even if it can’t hear us yelling.”

As kids got older, however, they reported that attacking technology would be slightly more acceptable.

“Four- and five-year-olds seem to think you don’t have the freedom to make a moral violation, like attacking someone,” Flanagan said. “But as they get older, they seem to think it’s not great, but you do have the freedom to do it.”

The study’s findings offer insights into the evolving relationship between children and technology and raise important questions about the ethical treatment of AI and machines, both in general and for parents. Should adults, for example, model good behavior for their kids by thanking Siri or its more sophisticated counterpart ChatGPT for their help? For now, Flanagan and Kushnir are trying to understand why children think it is wrong to assault home technology. In their study, one 10-year-old said it was not okay to yell at the technology because “the microphone sensors might break if you yell too loudly,” whereas another 10-year-old said it was not okay because “the robot will actually feel really sad.”

“It’s interesting with these technologies because there’s another aspect: it’s a piece of property,” Flanagan said. “Do kids think you shouldn’t hit these things because it’s morally wrong, or because it’s somebody’s property and it might break?”

A versatile jellyfish-like robotic platform for effective underwater propulsion and manipulation

by Tianlu Wang, Hyeong-Joon Joo, Shanyuan Song, Wenqi Hu, Christoph Keplinger, Metin Sitti in Science Advances

Most of the world is covered in oceans, which are unfortunately highly polluted. One of the strategies to combat the mounds of waste found in these very sensitive ecosystems — especially around coral reefs — is to employ robots to master the cleanup. However, existing underwater robots are mostly bulky with rigid bodies, unable to explore and sample in complex and unstructured environments, and are noisy due to electrical motors or hydraulic pumps. For a more suitable design, scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart looked to nature for inspiration. They configured a jellyfish-inspired, versatile, energy-efficient and nearly noise-free robot the size of a hand. Jellyfish-Bot is a collaboration between the Physical Intelligence and Robotic Materials departments at MPI-IS.

To build the robot, the team used electrohydraulic actuators through which electricity flows. The actuators serve as artificial muscles which power the robot. Surrounding these muscles are air cushions as well as soft and rigid components which stabilize the robot and make it waterproof. This way, the high voltage running through the actuators cannot contact the surrounding water. A power supply periodically provides electricity through thin wires, causing the muscles to contract and expand. This allows the robot to swim gracefully and to create swirls underneath its body.

“When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body. In this way, it can also collect nutrients. Our robot, too, circulates the water around it. This function is useful in collecting objects such as waste particles. It can then transport the litter to the surface, where it can later be recycled. It is also able to collect fragile biological samples such as fish eggs. Meanwhile, there is no negative impact on the surrounding environment. The interaction with aquatic species is gentle and nearly noise-free,” Tianlu Wang explains. He is a postdoc in the Physical Intelligence Department at MPI-IS and first author of the publication.

Robot design and the principle of actuation.

His co-author Hyeong-Joon Joo from the Robotic Materials Department continues: “70% of marine litter is estimated to sink to the seabed. Plastics make up more than 60% of this litter, taking hundreds of years to degrade. Therefore, we saw an urgent need to develop a robot to manipulate objects such as litter and transport it upwards. We hope that underwater robots could one day assist in cleaning up our oceans.”

Jellyfish-Bots are capable of moving and trapping objects without physical contact, operating either alone or with several in combination. Each robot works faster than other comparable inventions, reaching a speed of up to 6.1 cm/s. Moreover, Jellyfish-Bot only requires a low input power of around 100 mW. And it is safe for humans and fish should the polymer material insulating the robot one day be torn apart. Meanwhile, the noise from the robot cannot be distinguished from background levels. In this way Jellyfish-Bot interacts gently with its environment without disturbing it — much like its natural counterpart.

The robot consists of several layers: some stiffen the robot, others serve to keep it afloat or insulate it. A further polymer layer functions as a floating skin. Electrically powered artificial muscles known as HASELs are embedded in the middle of the different layers. HASELs are liquid-dielectric-filled plastic pouches that are partially covered by electrodes. Applying a high voltage across an electrode charges it positively, while the surrounding water is charged negatively. This generates a force between the positively charged electrode and the negatively charged water that pushes the oil inside the pouches back and forth, causing the pouches to contract and relax — resembling a real muscle. HASELs can sustain the high electrical stresses generated by the charged electrodes and are protected against water by an insulating layer. This is important, as HASEL muscles had never before been used to build an underwater robot.
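
A rough order-of-magnitude sketch of the squeezing force (with assumed film thickness, permittivity, and drive voltage, not the paper’s values): the electrostatic Maxwell pressure across the insulating film scales with the square of the electric field.

```python
# Minimal sketch (order-of-magnitude estimate): the electrostatic (Maxwell)
# pressure that squeezes the dielectric liquid in a HASEL pouch scales as
# p = eps0 * eps_r * E^2 / 2, so kilovolt drives across thin insulating films
# can produce muscle-like pressures.

EPS0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 3.0          # relative permittivity of the insulating film (assumed)
voltage = 6e3        # applied voltage, V (illustrative value)
thickness = 50e-6    # film thickness, m (illustrative value)

E = voltage / thickness                  # electric field across the film, V/m
pressure = 0.5 * EPS0 * eps_r * E**2     # electrostatic pressure, Pa
print(f"field {E:.2e} V/m -> squeezing pressure ~{pressure/1e3:.0f} kPa")
```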

The first step was to develop a Jellyfish-Bot with one electrode and six fingers, or arms. In the second step, the team divided the single electrode into separate groups so that they could be actuated independently.

“We achieved grasping objects by making four of the arms function as a propeller, and the other two as a gripper. Or we actuated only a subset of the arms, in order to steer the robot in different directions. We also looked into how we can operate a collective of several robots. For instance, we took two robots and let them pick up a mask, which is very difficult for a single robot alone. Two robots can also cooperate in carrying heavy loads. However, at this point, our Jellyfish-Bot needs a wire. This is a drawback if we really want to use it one day in the ocean,” Hyeong-Joon Joo says.

Perhaps wires powering robots will soon be a thing of the past. “We aim to develop wireless robots. Luckily, we have achieved the first step towards this goal. We have incorporated all the functional modules like the battery and wireless communication parts so as to enable future wireless manipulation,” Tianlu Wang continues. The team attached a buoyancy unit at the top of the robot and a battery and microcontroller to the bottom. They then took their invention for a swim in the pond of the Max Planck Stuttgart campus, and could successfully steer it along. So far, however, they could not direct the wireless robot to change course and swim the other way.

Numerical validation of an effective slender fault source solution for past tsunami scenarios

by Bernabe Gomez and Usama Kadri in Physics of Fluids

Tsunamis are incredibly destructive waves that can destroy coastal infrastructure and cause loss of life. Early warnings for such natural disasters are difficult because the risk of a tsunami is highly dependent on the features of the underwater earthquake that triggers it.

Researchers from the University of California, Los Angeles and Cardiff University in the U.K. developed an early warning system that combines state-of-the-art acoustic technology with artificial intelligence to immediately classify earthquakes and determine potential tsunami risk. Underwater earthquakes can trigger tsunamis if a large amount of water is displaced, so determining the type of earthquake is critical to assessing the tsunami risk.

“Tectonic events with a strong vertical slip element are more likely to raise or lower the water column compared to horizontal slip elements,” said co-author Bernabe Gomez. “Thus, knowing the slip type at the early stages of the assessment can reduce false alarms and enhance the reliability of the warning systems through independent cross-validation.”

In these cases, time is of the essence, and relying on deep ocean wave buoys to measure water levels often leaves insufficient evacuation time. Instead, the researchers propose measuring the acoustic radiation (sound) produced by the earthquake, which carries information about the tectonic event and travels significantly faster than tsunami waves. Underwater microphones, called hydrophones, record the acoustic waves and monitor tectonic activity in real time.

Flowchart for the methodology utilized from the recording of the signal to the estimation of the potential wave heights at chosen locations.

“Acoustic radiation travels through the water column much faster than tsunami waves. It carries information about the originating source and its pressure field can be recorded at distant locations, even thousands of kilometers away from the source. The derivation of analytical solutions for the pressure field is a key factor in the real-time analysis,” co-author Usama Kadri said.

The computational model triangulates the source of the earthquake from the hydrophones and AI algorithms classify its slip type and magnitude. It then calculates important properties like effective length and width, uplift speed, and duration, which dictate the size of the tsunami.
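
A minimal toy example of the triangulation step: a 2-D grid search over arrival-time differences, with assumed hydrophone positions and a nominal sound speed. The actual system uses analytical pressure-field solutions rather than this brute-force search.

```python
import numpy as np

# Minimal sketch (toy 2-D example, not the paper's method): locate an acoustic
# source from arrival-time differences at several hydrophones by a grid search
# that minimizes the mismatch with the observed relative arrival times.

C = 1500.0  # approximate speed of sound in water, m/s

hydrophones = np.array([[0, 0], [100e3, 0], [0, 100e3], [100e3, 100e3]])  # positions, m
true_source = np.array([62e3, 35e3])                                     # hidden source, m

# "Observed" arrival times relative to the first hydrophone.
true_t = np.linalg.norm(hydrophones - true_source, axis=1) / C
observed_dt = true_t - true_t[0]

# Grid search over candidate source positions.
xs = np.linspace(0, 100e3, 201)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        t = np.linalg.norm(hydrophones - np.array([x, y]), axis=1) / C
        err = np.sum((t - t[0] - observed_dt) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated source (m):", best, "true source (m):", tuple(true_source))
```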

The authors tested their model with available hydrophone data and found it almost instantaneously and successfully described the earthquake parameters with low computational demand. They are improving the model by factoring in more information to increase the tsunami characterization’s accuracy. Their work predicting tsunami risk is part of a larger project to enhance hazard warning systems. The tsunami classification is a back-end aspect of a software that can improve the safety of offshore platforms and ships.

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
