RT/ Universal controller paves the way for practical use of robotic prostheses and exoskeletons

Paradigm
Published in Paradigm · 29 min read · Apr 5, 2024

Robotics & AI biweekly vol.92, 14th March — 5th April

TL;DR

  • Researchers create a user-friendly robotic exoskeleton control system, utilizing deep learning for seamless assistance in walking, standing, and stair climbing.
  • Scientists investigate the mechanisms behind a robot’s ability to initiate and reciprocate human-like interactions, delving into the realm of artificial intelligence.
  • McMaster University and Stanford University devise an AI model to design billions of cost-effective antibiotic molecules, facilitating easy laboratory synthesis.
  • Engineers propose a novel quantitative framework to predict the impact of temperature on platinum-catalyzed silicone elastomers’ curing speed, potentially optimizing soft robotics and wearables manufacturing.
  • Topological solitons’ peculiar behavior in a robotic metamaterial unveils possibilities for future applications in robot control, environmental sensing, and communication.
  • A mobile app utilizing AI achieves high precision in diagnosing melanoma through image analysis of suspected skin lesions, according to recent research.
  • Researchers develop an artificial neural network capable of learning tasks from verbal instructions and describing them linguistically for reproduction by other AI systems.
  • ANYmal, a quadrupedal robot, acquires new skills such as obstacle climbing and navigating pitfalls through machine learning techniques.
  • A haptic device is engineered to replicate the softness of various materials, presenting a breakthrough in robotics by accurately mimicking tactile sensations.
  • New research indicates that advanced killer robots are more likely to be blamed for civilian deaths than traditional military machines, highlighting how public perception of responsibility shifts for fatalities involving high-tech bots.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
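For context, the stated 26 percent CAGR and the roughly 210 billion U.S. dollar 2025 figure pin down the arithmetic of the projection. The sketch below simply works that compound growth out; the implied 2018 baseline it prints is an inference from those two numbers, not a figure quoted by Statista.

```python
# Compound-growth arithmetic behind the stated projection (illustrative only).
cagr = 0.26
years = 7                                  # 2018 -> 2025
multiplier = (1 + cagr) ** years           # ~5x growth over the period

# Working backwards from the ~210 billion USD 2025 figure gives the implied
# 2018 baseline, an inference rather than a number quoted in the source.
implied_2018 = 210 / multiplier
print(f"growth multiplier: {multiplier:.1f}x, implied 2018 market: ~{implied_2018:.0f}B USD")
```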

Latest News & Research

Estimating human joint moments unifies exoskeleton control, reducing user effort

by Dean D. Molinaro, Inseung Kang, Aaron J. Young in Science Robotics

Robotic exoskeletons designed to help humans with walking or physically demanding work have been the stuff of sci-fi lore for decades. Remember Ellen Ripley in that Power Loader in Alien? Or the crazy mobile platform George McFly wore in 2015 in Back to the Future, Part II because he threw his back out?

Researchers are working on real-life robotic assistance that could protect workers from painful injuries and help stroke patients regain their mobility. So far, though, such devices have required extensive calibration and context-specific tuning, which keeps them largely limited to research labs. Mechanical engineers at Georgia Tech may be on the verge of changing that, allowing exoskeleton technology to be deployed in homes, workplaces, and more.

A team of researchers in Aaron Young’s lab has developed a universal approach to controlling robotic exoskeletons that requires no training, no calibration, and no adjustments to complicated algorithms. Instead, users can don the “exo” and go. Their system uses a kind of artificial intelligence called deep learning to autonomously adjust how the exoskeleton provides assistance, and they’ve shown it works seamlessly to support walking, standing, and climbing stairs or ramps. They described their “unified control framework” in Science Robotics.

“The goal was not just to provide control across different activities, but to create a single unified system. You don’t have to press buttons to switch between modes or have some classifier algorithm that tries to predict that you’re climbing stairs or walking,” said Young, associate professor in the George W. Woodruff School of Mechanical Engineering.

A closeup view of the experimental exoskeleton used in the experiments that produced a universal controller for robotic assistance devices.

Most previous work in this area has focused on one activity at a time, like walking on level ground or up a set of stairs. The algorithms involved typically try to classify the environment to provide the right assistance to users. The Georgia Tech team threw that out the window. Instead of focusing on the environment, they focused on the human — what’s happening with muscles and joints — which meant the specific activity didn’t matter.

“We stopped trying to bucket human movement into what we call discretized modes — like level ground walking or climbing stairs — because real movement is a lot messier,” said Dean Molinaro, lead author on the study and a recently graduated Ph.D. student in Young’s lab. “Instead, we based our controller on the user’s underlying physiology. What the body is doing at any point in time will tell us everything we need to know about the environment. Then we used machine learning essentially as the translator between what the sensors are measuring on the exoskeleton and what torques the muscles are generating.”
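To make the idea of machine learning as a “translator” concrete, here is a minimal sketch assuming a hypothetical setup: a small network maps a short window of exoskeleton sensor signals to an estimated hip joint moment, and the controller commands a fraction of that estimate as assistance. The window length, channel count, layer sizes, and assistance gain are illustrative assumptions; the paper’s actual architecture and training details are not reproduced here.

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea described above: a learned mapping from a short
# window of wearable sensor signals (e.g., IMU and encoder channels) to an
# estimate of the user's hip joint moment, which the controller then scales
# into an assistance torque. Sizes and the gain below are illustrative
# assumptions, not the published architecture.

WINDOW, CHANNELS = 100, 12   # hypothetical: 100 samples x 12 sensor channels

class JointMomentEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(CHANNELS, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),        # estimated hip moment (normalized units)
        )

    def forward(self, sensor_window):     # (batch, CHANNELS, WINDOW)
        return self.net(sensor_window)

def assistance_torque(estimated_moment, gain=0.3):
    # Partial-assist: command a fraction of the estimated biological moment.
    return gain * estimated_moment

model = JointMomentEstimator()
dummy = torch.randn(1, CHANNELS, WINDOW)   # stand-in for real sensor data
print(assistance_torque(model(dummy)))
```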

With the controller delivering assistance through a hip exoskeleton developed by the team, they found they could reduce users’ metabolic and biomechanical effort: they expended less energy, and their joints didn’t have to work as hard compared to not wearing the device at all. In other words, wearing the exoskeleton was a benefit to users, even with the extra weight added by the device itself.

“What’s so cool about this is that it adjusts to each person’s internal dynamics without any tuning or heuristic adjustments, which is a huge difference from a lot of work in the field,” Young said. “There’s no subject-specific tuning or changing parameters to make it work.”

The control system in this study is designed for partial-assist devices. These exoskeletons support movement rather than completely replacing the effort. The team, which also included Molinaro and Inseung Kang, another former Ph.D. student now at Carnegie Mellon University, used an existing algorithm and trained it on mountains of force and motion-capture data they collected in Young’s lab. Subjects of different genders and body types wore the powered hip exoskeleton and walked at varying speeds on force plates, climbed height-adjustable stairs, walked up and down ramps, and transitioned between those movements. And as in the motion-capture studios used to make movies, every movement was recorded and cataloged to understand what the joints were doing for each activity.

The study is “application agnostic,” as Young put it, and their controller offers the first bridge to real-world viability for robotic exoskeleton devices.

Imagine how robotic assistance could benefit soldiers, airline baggage handlers, or any workers doing physically demanding jobs where musculoskeletal injury risk is high.

Human-robot facial coexpression

by Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson in Science Robotics

What would you do if you walked up to a robot with a human-like head and it smiled at you first? You’d likely smile back and perhaps feel the two of you were genuinely interacting. But how does a robot know how to do this? Or a better question, how does it know to get you to smile back?

While we’re getting accustomed to robots that are adept at verbal communication, thanks in part to advancements in large language models like ChatGPT, their nonverbal communication skills, especially facial expressions, have lagged far behind. Designing a robot that can not only make a wide range of facial expressions but also know when to use them has been a daunting task.

The Creative Machines Lab at Columbia Engineering has been working on this challenge for more than five years. In a new study, the group unveils Emo, a robot that anticipates facial expressions and executes them simultaneously with a human. It has even learned to predict a forthcoming smile about 840 milliseconds before the person smiles, and to co-express the smile simultaneously with the person.

The team, led by Hod Lipson, a leading researcher in the fields of artificial intelligence (AI) and robotics, faced two challenges: how to mechanically design an expressively versatile robotic face, which involves complex hardware and actuation mechanisms, and how to know which expressions to generate so that they appear natural, timely, and genuine. The team proposed training a robot to anticipate future facial expressions in humans and execute them simultaneously with a person. The timing of these expressions was critical: delayed facial mimicry looks disingenuous, while facial co-expression feels more genuine, since it requires correctly inferring the human’s emotional state for timely execution.

Yuhang Hu of Creative Machines Lab face-to-face with Emo. Image: Courtesy of Creative Machines Lab.

Emo is a human-like head with a face that is equipped with 26 actuators that enable a broad range of nuanced facial expressions. The head is covered with a soft silicone skin with a magnetic attachment system, allowing for easy customization and quick maintenance. For more lifelike interactions, the researchers integrated high-resolution cameras within the pupil of each eye, enabling Emo to make eye contact, crucial for nonverbal communication.

The team developed two AI models: one that predicts human facial expressions by analyzing subtle changes in the target face, and another that generates the motor commands needed to produce the corresponding expressions on the robot.

To train the robot to make facial expressions, the researchers put Emo in front of a camera and let it make random movements. After a few hours, the robot had learned the relationship between its facial expressions and the motor commands that produce them, much the way humans practice facial expressions by looking in a mirror. This is what the team calls “self modeling,” similar to our human ability to imagine what we look like when we make certain expressions.

Then the team ran videos of human facial expressions for Emo to observe frame by frame. After training, which lasted a few hours, Emo could predict people’s facial expressions by observing tiny changes in their faces as they begin to form an intent to smile.
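A toy sketch of how these two pieces could fit together is shown below, assuming hypothetical landmark and motor dimensions: a forward “self model” learned from random motor babbling maps motor commands to the face they produce, and a recurrent predictor anticipates the human’s upcoming expression so that a matching command can be chosen. This is an illustration of the concept, not Emo’s published models.

```python
import torch
import torch.nn as nn

# Toy sketch of the two components described above (sizes are hypothetical):
#  1) a "self model" learned from random motor babbling, mapping the robot's
#     motor commands to the facial landmarks they produce, and
#  2) a predictor that, from a short history of the human's facial landmarks,
#     anticipates the expression a fraction of a second ahead.

N_MOTORS, N_LANDMARKS, HISTORY = 26, 136, 10   # 26 actuators from the article; the rest is illustrative

self_model = nn.Sequential(              # motor commands -> predicted own landmarks
    nn.Linear(N_MOTORS, 128), nn.ReLU(), nn.Linear(128, N_LANDMARKS))

expression_predictor = nn.GRU(           # human landmark history -> anticipated landmarks
    input_size=N_LANDMARKS, hidden_size=128, batch_first=True)
readout = nn.Linear(128, N_LANDMARKS)

def coexpress(human_landmark_history, motor_candidates):
    """Pick the motor command whose predicted face best matches the
    anticipated human expression (nearest-neighbour search for illustration)."""
    _, h = expression_predictor(human_landmark_history)    # (1, batch, 128)
    target = readout(h[-1])                                # anticipated landmarks
    predicted_faces = self_model(motor_candidates)         # (K, N_LANDMARKS)
    best = torch.cdist(predicted_faces, target).argmin()
    return motor_candidates[best]

# Example with random stand-in data:
history = torch.randn(1, HISTORY, N_LANDMARKS)
candidates = torch.rand(64, N_MOTORS)
print(coexpress(history, candidates).shape)   # -> torch.Size([26])
```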

“I think predicting human facial expressions accurately is a revolution in HRI. Traditionally, robots have not been designed to consider humans’ expressions during interactions. Now, the robot can integrate human facial expressions as feedback,” said the study’s lead author Yuhang Hu, who is a PhD student at Columbia Engineering in Lipson’s lab. “When a robot makes co-expressions with people in real-time, it not only improves the interaction quality but also helps in building trust between humans and robots. In the future, when interacting with a robot, it will observe and interpret your facial expressions, just like a real person.”

The researchers are now working to integrate verbal communication, using a large language model like ChatGPT, into Emo. As robots become more capable of behaving like humans, Lipson is well aware of the ethical considerations associated with this new technology.

“Although this capability heralds a plethora of positive applications, ranging from home assistants to educational aids, it is incumbent upon developers and users to exercise prudence and ethical considerations,” says Lipson, James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering at Columbia Engineering, co-director of the Makerspace at Columbia, and a member of the Data Science Institute. “But it’s also very exciting — by advancing robots that can interpret and mimic human expressions accurately, we’re moving closer to a future where robots can seamlessly integrate into our daily lives, offering companionship, assistance, and even empathy. Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend.”

Generative AI for designing and validating easily synthesizable and structurally novel antibiotics

by Kyle Swanson, Gary Liu, Denise B. Catacutan, Autumn Arnold, James Zou, Jonathan M. Stokes in Nature Machine Intelligence

Researchers at McMaster University and Stanford University have invented a new generative artificial intelligence model which can design billions of new antibiotic molecules that are inexpensive and easy to build in the laboratory.

The worldwide spread of drug-resistant bacteria has created an urgent need for new antibiotics, but even modern AI methods are limited in their ability to isolate promising chemical compounds, especially when researchers must also find ways to manufacture these new AI-guided drugs and test them in the lab.

In a new study, researchers report they have developed a new generative AI model called SyntheMol, which can design new antibiotics to stop the spread of Acinetobacter baumannii, which the World Health Organization has identified as one of the world’s most dangerous antibiotic-resistant bacteria. Notoriously difficult to eradicate, A. baumannii can cause pneumonia, meningitis and infect wounds, all of which can lead to death. Researchers say few treatment options remain.

“Antibiotics are a unique medicine. As soon as we begin to employ them in the clinic, we’re starting a timer before the drugs become ineffective, because bacteria evolve quickly to resist them,” says Jonathan Stokes, lead author on the paper and an assistant professor in McMaster’s Department of Biochemistry and Biomedical Sciences, who conducted the work with James Zou, an associate professor of biomedical data science at Stanford University.

“We need a robust pipeline of antibiotics and we need to discover them quickly and inexpensively. That’s where the artificial intelligence plays a crucial role,” he says.

Additional Property Prediction Model Development.

Researchers developed the generative model to access tens of billions of promising molecules quickly and cheaply. They drew from a library of 132,000 molecular fragments, which fit together like Lego pieces but are all very different in nature. They then cross-referenced these molecular fragments with a set of 13 chemical reactions, enabling them to identify 30 billion two-way combinations of fragments to design new molecules with the most promising antibacterial properties.

Each of the molecules designed by this model was in turn fed through another AI model trained to predict toxicity. The process yielded six molecules which display potent antibacterial activity against A. baumannii and are also non-toxic.
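Schematically, the pipeline is a generate-then-filter loop: sample pairs of building blocks, combine them under one of the reaction templates, and keep only candidates whose predicted antibacterial activity is high and predicted toxicity is low. The sketch below illustrates that loop with placeholder fragments, reactions, and random stand-in scoring functions rather than the trained SyntheMol models.

```python
import random

# Schematic of the generate-then-filter loop described above. The fragment
# library, reaction templates, and both scoring functions are placeholders;
# the published pipeline uses real building blocks, reaction rules, and
# trained property-prediction models.

fragments = [f"frag_{i}" for i in range(1000)]      # stand-in for ~132,000 building blocks
reactions = [f"rxn_{i}" for i in range(13)]         # 13 reaction templates

def combine(frag_a, frag_b, reaction):
    """Placeholder for applying a reaction template to two fragments."""
    return f"{frag_a}+{frag_b} via {reaction}"

def predicted_activity(molecule):    # placeholder antibacterial-activity score in [0, 1]
    return random.random()

def predicted_toxicity(molecule):    # placeholder toxicity score in [0, 1]
    return random.random()

def sample_candidates(n_samples=10_000, activity_cutoff=0.99, toxicity_cutoff=0.1):
    hits = []
    for _ in range(n_samples):
        a, b = random.sample(fragments, 2)
        mol = combine(a, b, random.choice(reactions))
        if predicted_activity(mol) > activity_cutoff and predicted_toxicity(mol) < toxicity_cutoff:
            hits.append(mol)
    return hits

print(len(sample_candidates()))   # a handful of candidates survive both filters
```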

“SyntheMol not only designs novel molecules that are promising drug candidates, but it also generates the recipe for how to make each new molecule. Generating such recipes is a new approach and a game changer, because chemists do not know how to make AI-designed molecules,” says Zou, who co-authored the paper.

Thermally accelerated curing of platinum-catalyzed elastomers

by Te Faye Yap, Anoop Rajappan, Marquise D. Bell, Rawand M. Rasheed, Colter J. Decker, Daniel J. Preston in Cell Reports Physical Science

Soft robots use pliant materials such as elastomers to interact safely with the human body and other challenging, delicate objects and environments. A team of Rice University researchers has developed an analytical model that can predict the curing time of platinum-catalyzed silicone elastomers as a function of temperature. The model could help reduce energy waste and improve throughput for elastomer-based components manufacturing.

“In our study, we looked at elastomers as a class of materials that enables soft robotics, a field that has seen a huge surge in growth over the past decade,” said Daniel Preston, a Rice assistant professor of mechanical engineering and corresponding author on the study. “While there is some related research on materials like epoxies and even on several specific silicone elastomers, until now there was no detailed quantitative account of the curing reaction for many of the commercially available silicone elastomers that people are actually using to make soft robots. Our work fills that gap.”

The platinum-catalyzed silicone elastomers that Preston and his team studied typically start out as two viscoelastic liquids that, when mixed together, transform over time into a rubbery solid. As a liquid mixture, they can be poured into intricate molds and thus used for casting complex components. The curing process can occur at room temperature, but it can also be sped up using heat.

Manufacturing processes involving elastomers have typically relied on empirical estimates for temperature and duration to control the curing process. However, this ballpark approach makes it difficult to predict how elastomers will behave under varying curing conditions. Having a quantitative framework to determine exactly how temperature impacts curing speed will enable manufacturers to maximize efficiency and reduce waste.

“Previously, using existing models to predict elastomers’ curing behavior under varying temperature conditions was a much more challenging task,” said Te Faye Yap, a graduate student in the Preston lab who is lead author on the study. “There’s a huge need to make manufacturing processes more efficient and reduce waste, both in terms of energy consumption and materials.”

To understand how temperature impacts the curing process, the researchers used a rheometer — an instrument that measures the mechanical properties of liquids and soft solids — to analyze the curing behavior of six commercially available platinum-catalyzed elastomers.

“We were able to develop a model based on what is called the Arrhenius relationship that relates this curing reaction rate to the temperature at which the elastomer is being cured,” Preston said. “Now we have a really nice quantitative understanding of exactly how temperature impacts curing speed.”

The Arrhenius framework, a formula that relates the rate of chemical reactions to temperature, has been used in a variety of contexts such as semiconductor processing and virus inactivation. Preston and his group have used the framework in some of their prior work and found it also applies to curing reactions for materials like epoxies as described in previous studies. In this study, the researchers used the Arrhenius framework along with rheological data to develop an analytical model that could directly impact manufacturing practices.
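For readers who want the formula, the Arrhenius relationship says the reaction rate scales as exp(-Ea/RT), so the ratio of cure times at two temperatures depends only on the activation energy. The sketch below works out the implied speed-up from room temperature to 70 °C for an assumed activation energy of 60 kJ/mol, an illustrative value rather than one reported in the paper.

```python
import math

R = 8.314  # J/(mol·K), universal gas constant

def cure_time_ratio(T1_C, T2_C, Ea):
    """Arrhenius relationship: reaction rate k = A * exp(-Ea / (R*T)).
    Cure time scales as 1/k, so the ratio of cure times at two temperatures is
    t2 / t1 = exp( Ea/R * (1/T2 - 1/T1) ), independent of the prefactor A."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return math.exp(Ea / R * (1.0 / T2 - 1.0 / T1))

# Illustrative activation energy (assumed, not taken from the paper):
Ea = 60_000  # J/mol
speedup = 1.0 / cure_time_ratio(25, 70, Ea)
print(f"curing at 70 °C vs 25 °C is ~{speedup:.0f}x faster for Ea = 60 kJ/mol")
```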

“In this work, we really probed the curing reaction as a function of the temperature of the elastomer, but we also looked in depth at the mechanical properties of the elastomers when cured at elevated temperatures meant to achieve these higher throughputs and curing speeds,” Preston said.

The researchers conducted mechanical testing on elastomer samples that were cured at room temperature and at elevated temperatures to see whether heating treatments impact the materials’ mechanical properties.

“We found that exposing the elastomers to 70 degrees Celsius (158 Fahrenheit) does not alter the tensile and compressive properties of the material when compared to components that were cured at room temperature,” Yap said. “Moreover, to demonstrate the usage of accelerated curing when making a device, we fabricated soft, pneumatically actuated grippers at both elevated and room temperature conditions, and we observed no difference in the performance of the grippers upon pressurizing.”

While temperature did not seem to have an effect on the elastomers’ ability to withstand mechanical stress, the researchers found that it did impact adhesion between components.

“Say we’ve already cured a few different components that need to be assembled together into the complete, soft robotic system,” Preston said. “When we then try to adhere these components to each other, there’s an impact on the adhesion or the ability to stick them together. In this case, that is greatly affected by the extent of curing that has occurred before we tried to bond.”

The research advances scientific understanding of how temperature can be used to manipulate fabrication processes involving elastomers, which could open up the soft robotics design space for new or improved applications. One key area of interest is the biomedical industry.

“Surgical robots often benefit from being compliant or soft in nature, because operating inside the human body means you want to minimize the risk of puncture or bruising to tissue or organs,” Preston said. “So a lot of the robots that now operate inside the human body are moving to softer architectures and are benefiting from that. Some researchers have also started to look into using soft robotic systems to help reposition patients confined to a bed for long periods of time to try to avoid putting pressure on certain areas.”

Other areas of potential use for soft robotics are agriculture (for instance picking fruits or vegetables that are fragile or bruise easily), disaster relief (search-and-rescue operations in impacted areas with limited or difficult access) and research (collecting or handling samples).

“This study provides a framework that could expand the design space for manufacturing with thermally cured elastomers to create complex structures that exhibit high elasticity which can be used to develop medical devices, shock absorbers and soft robots,” Yap said.

Silicone elastomers’ unique properties — biocompatibility, flexibility, thermal resistance, shock absorption, insulation and more — will continue to be an asset in a range of industries, and the current research can help expand and improve their use beyond current capabilities.

Non-reciprocal topological solitons in active metamaterials

by Jonas Veenstra, Oleksandr Gamayun, Xiaofei Guo, Anahita Sarvi, Chris Ventura Meinersen, Corentin Coulais in Nature

If it walks like a particle, and talks like a particle… it may still not be a particle. A topological soliton is a special type of wave or dislocation which behaves like a particle: it can move around but cannot spread out and disappear like you would expect from, say, a ripple on the surface of a pond. In a new study, researchers from the University of Amsterdam demonstrate the atypical behaviour of topological solitons in a robotic metamaterial, something which in the future may be used to control how robots move, sense their surroundings and communicate.

Topological solitons can be found in many places and at many different length scales. For example, they take the form of kinks in coiled telephone cords and in large molecules such as proteins. At a very different scale, a black hole can be understood as a topological soliton in the fabric of spacetime. Solitons play an important role in biological systems, being relevant for protein folding and for morphogenesis, the development of cells or organs.

The unique features of topological solitons — that they can move around but always retain their shape and cannot suddenly disappear — are particularly interesting when combined with so-called non-reciprocal interactions. “In such an interaction, an agent A reacts to an agent B differently to the way agent B reacts to agent A,” explains Jonas Veenstra, a PhD student at the University of Amsterdam and first author of the new publication.

Veenstra continues: “Non-reciprocal interactions are commonplace in society and complex living systems but have long been overlooked by most physicists because they can only exist in a system out of equilibrium. By introducing non-reciprocal interactions in materials, we hope to blur the boundary between materials and machines and to create animate or lifelike materials.”

Dependence of the Peierls-Nabarro barrier on the nondimensional amplitude D and initial conditions in the Frenkel-Kontorova model.

The Machine Materials Laboratory where Veenstra does his research specialises in designing metamaterials: artificial materials and robotic systems that interact with their environment in a programmable fashion. The research team decided to study the interplay between non-reciprocal interactions and topological solitons almost two years ago, when then-students Anahita Sarvi and Chris Ventura Meinersen decided to follow up on their research project for the MSc course ‘Academic Skills for Research’.

The soliton-hosting metamaterial developed by the researchers consists of a chain of rotating rods that are linked to each other by elastic bands. Each rod is mounted on a little motor which applies a small force to the rod, depending on how it is oriented with respect to its neighbours. Importantly, the force applied depends on which side the neighbour is on, making the interactions between neighbouring rods non-reciprocal. Finally, magnets on the rods are attracted by magnets placed next to the chain in such a way that each rod has two preferred positions, rotated either to the left or the right.

Solitons in this metamaterial are the locations where left- and right-rotated sections of the chain meet. The complementary boundaries between right- and left-rotated chain sections are then so-called ‘anti-solitons’. This is analogous to kinks in an old-fashioned coiled telephone cord, where clockwise and anticlockwise-rotating sections of the cord meet.

When the motors in the chain are turned off, the solitons and anti-solitons can be manually pushed around in either direction. However, once the motors (and thereby the non-reciprocal interactions) are turned on, the solitons and anti-solitons automatically slide along the chain. They both move in the same direction, with a speed set by the non-reciprocity imposed by the motors.

Veenstra: “A lot of research has focussed on moving topological solitons by applying external forces. In systems studied so far, solitons and anti-solitons were found to naturally travel in opposite directions. However, if you want to control the behaviour of (anti-)solitons, you might want to drive them in the same direction. We discovered that non-reciprocal interactions achieve exactly this. The non-reciprocal forces are proportional to the rotation caused by the soliton, such that each soliton generates its own driving force.”

The movement of the solitons is similar to a chain of dominoes falling, each one toppling its neighbour. However, unlike dominoes, the non-reciprocal interactions ensure that the ‘toppling’ can only happen in one direction. And while dominoes can only fall down once, a soliton moving along the metamaterial simply sets up the chain for an anti-soliton to move through it in the same direction. In other words, any number of alternating solitons and anti-solitons can move through the chain without the need to ‘reset’.
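The mechanism can be caricatured with a toy overdamped simulation: a chain of bistable sites whose coupling to the right neighbour is stronger than to the left. With that asymmetry switched on, a kink between the two domains drifts along the chain on its own, much like the self-driving solitons described above. The model and parameters below are a rough illustration, not the metamaterial’s published equations of motion.

```python
import numpy as np

# Toy overdamped chain illustrating the mechanism described above. Each site
# has a bistable on-site potential (two preferred rotations, +1 and -1), and
# the coupling to the right neighbour is stronger than to the left, i.e. the
# interaction is non-reciprocal. A kink (soliton) between the two domains then
# drifts along the chain by itself. Parameters are illustrative.

N, steps, dt = 200, 4000, 0.01
k_right, k_left = 1.2, 0.8          # asymmetric coupling -> non-reciprocity

theta = np.ones(N)
theta[: N // 2] = -1.0              # kink in the middle of the chain

def kink_position(th):
    return int(np.argmin(np.abs(th)))   # site closest to the domain wall

print("initial kink position:", kink_position(theta))
for _ in range(steps):
    left = np.roll(theta, 1)
    right = np.roll(theta, -1)
    bistable = theta - theta**3                      # derivative of a double-well potential
    coupling = k_right * (right - theta) + k_left * (left - theta)
    theta = theta + dt * (bistable + coupling)
    theta[0], theta[-1] = -1.0, 1.0                  # pin the ends in opposite wells
print("final kink position:", kink_position(theta))  # the kink has drifted
```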

Understanding the role of non-reciprocal driving will not only help us better understand the behaviour of topological solitons in living systems, but could also lead to technological advances. The mechanism that generates the self-driving, one-directional solitons uncovered in this study can be used to control the motion of different types of waves (known as waveguiding), or to endow a metamaterial with basic information-processing capabilities such as filtering.

Future robots can also use topological solitons for basic robotic functionalities such as movement, sending out signals and sensing their surroundings. These functionalities would then not be controlled from a central point, but rather emerge from the sum of the robot’s active parts. All in all, the domino effect of solitons in metamaterials, now an interesting observation in the lab, may soon start to play a role in different branches of engineering and design.

Evaluation of an artificial intelligence-based decision support for the detection of cutaneous melanoma in primary care: a prospective real-life clinical trial

by Panagiotis Papachristou, My Söderholm, Jon Pallon, Marina Taloyan, Sam Polesie, John Paoli, Chris D Anderson, Magnus Falk in British Journal of Dermatology

A mobile app that uses artificial intelligence (AI) to analyse images of suspected skin lesions can diagnose melanoma with very high precision. This is shown in a study led by researchers at Linköping University in Sweden, where the app has been tested in primary care.

“Our study is the first in the world to test an AI-based mobile app for melanoma in primary care in this way. A great many studies have been done on previously collected images of skin lesions, and those studies largely agree that AI is good at distinguishing dangerous lesions from harmless ones. We were quite surprised by the fact that no one had done a study on primary care patients,” says Magnus Falk, senior associate professor at the Department of Health, Medicine and Caring Sciences at Linköping University and specialist in general practice at Region Östergötland, who led the current study.

Melanoma can be difficult to differentiate from other skin changes, even for experienced physicians. However, it is important to detect melanoma as early as possible, as it is a serious type of skin cancer. There is currently no established AI-based support for assessing skin lesions in Swedish healthcare.

“Primary care physicians encounter many skin lesions every day and with limited resources need to make decisions about treatment in cases of suspected skin melanoma. This often results in an abundance of referrals to specialists or the removal of skin lesions, which in the majority of cases turn out to be harmless. We wanted to see if the AI support tool in the app could perform better than primary care physicians when it comes to identifying pigmented skin lesions as dangerous or not, in comparison with the final diagnosis,” says Panos Papachristou, researcher affiliated with Karolinska Institutet and specialist in general practice, main author of the study and co-founder of the company that developed the app.

“First of all, the app missed no melanoma. This disease is so dangerous that it’s essential not to miss it. But it’s almost equally important that the AI decision support tool could clear many suspected skin lesions and determine that they were harmless,” says Magnus Falk.

In the study, primary care physicians followed the usual procedure for diagnosing suspected skin tumours. If the physicians suspected melanoma, they either referred the patient to a dermatologist for diagnosis, or the skin lesion was cut away for tissue analysis and diagnosis.

Only after the physician decided how to handle the suspected melanoma did they use the AI-based app. This involves the physician taking a picture of the skin lesion with a mobile phone equipped with a magnifying lens called a dermatoscope. The app analyses the image and provides guidance on whether or not the skin lesion appears to be melanoma.

To find out how well the AI-based app worked as a decision support tool, the researchers compared the app’s response to the diagnoses made by the regular diagnostic procedure. Of the more than 250 skin lesions examined, physicians found 11 melanomas and 10 precursors of cancer, known as in situ melanoma. The app found all the melanomas, and missed only one precursor. In cases where the app responded that a suspected lesion was not a melanoma, including in situ melanoma, there was a 99.5 percent probability that this was correct.
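As a quick sanity check on that last figure: a 99.5 percent negative predictive value with a single missed in-situ melanoma implies on the order of 200 lesions that the app judged benign. This back-calculation is an inference from the quoted numbers, not a count reported in the study.

```python
# Back-of-the-envelope check of the 99.5% figure (an inference, not reported counts).
# Negative predictive value = fraction of "not melanoma" calls that were truly benign.
missed = 1                     # the single missed in-situ melanoma
npv = 0.995
implied_negative_calls = missed / (1 - npv)
print(f"~{implied_negative_calls:.0f} lesions judged benign by the app")   # ~200
```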

“It seems that this method could be useful. But in this study, physicians weren’t allowed to let their decision be influenced by the app’s response, so we don’t know what happens in practice if you use an AI-based decision support tool. So even if this is a very positive result, there is uncertainty and we need to continue to evaluate the usefulness of this tool with scientific studies,” says Magnus Falk.

The researchers now plan to proceed with a large follow-up primary care study in several countries, where use of the app as an active decision support tool will be compared to not using it at all.

Natural language instructions induce compositional generalization in networks of neurons

by Reidar Riveland, Alexandre Pouget in Nature Neuroscience

Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence. A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, without being able to communicate the task to others of their kind.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.

“Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, much less of explaining it to another artificial intelligence so that it can reproduce it,” explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.

Tasks and models.

The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training. “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine and first author of the study.

In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke’s area, the part of our brain that enables us to perceive and interpret language. In the second stage, the network was trained to reproduce Broca’s area, which, under the influence of Wernicke’s area, is responsible for producing and articulating words. The entire process was carried out on conventional laptop computers. Written instructions in English were then transmitted to the AI.

For example: pointing to the location, left or right, where a stimulus is perceived; responding in the opposite direction of a stimulus; or, more complex still, showing the brighter of two visual stimuli with a slight difference in contrast. The scientists then evaluated the results of the model, which simulated the intention of moving, or in this case pointing.
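A minimal sketch of this kind of architecture is given below, assuming a placeholder instruction embedding standing in for the pretrained S-Bert encoder and a small recurrent network that maps a stimulus stream to a left/right response. The sizes and task encoding are illustrative, and the second, production half of the published model (describing a learned task in words for the “sister” network) is omitted.

```python
import torch
import torch.nn as nn

# Minimal sketch of the architecture described above: a fixed instruction
# embedding (standing in for the pretrained S-Bert language model) conditions
# a small recurrent sensorimotor network that maps a stream of stimuli to a
# response direction. All sizes are illustrative assumptions.

EMBED, HIDDEN, STIM = 384, 64, 2      # hypothetical dimensions

def embed_instruction(text: str) -> torch.Tensor:
    """Placeholder for a pretrained sentence encoder such as S-Bert."""
    torch.manual_seed(abs(hash(text)) % (2**31))
    return torch.randn(1, EMBED)

class SensorimotorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.instr_proj = nn.Linear(EMBED, HIDDEN)
        self.rnn = nn.GRU(STIM, HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, 2)   # e.g., point left vs. right

    def forward(self, instruction: str, stimuli: torch.Tensor):
        h0 = torch.tanh(self.instr_proj(embed_instruction(instruction))).unsqueeze(0)
        out, _ = self.rnn(stimuli, h0)        # the instruction sets the initial state
        return self.readout(out[:, -1])       # decision after the stimulus stream

net = SensorimotorNet()
stimuli = torch.randn(1, 20, STIM)            # stand-in for a trial's visual input
print(net("point to the brighter of the two stimuli", stimuli))
```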

“Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.

This model opens new horizons for understanding the interaction between language and behaviour. It is particularly promising for the robotics sector, where the development of technologies that enable machines to talk to each other is a key issue. “The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other,” conclude the two researchers.

ANYmal parkour: Learning agile navigation for quadrupedal robots

by David Hoeller, Nikita Rudin, Dhionis Sako, Marco Hutter in Science Robotics

ANYmal has for some time had no problem coping with the stony terrain of Swiss hiking trails. Now researchers at ETH Zurich have taught this quadrupedal robot some new skills: it is proving rather adept at parkour, a now-popular sport based on using athletic manoeuvres to smoothly negotiate obstacles in an urban environment. ANYmal is also proficient at dealing with the tricky terrain commonly found on building sites or in disaster areas. To teach ANYmal these new skills, two teams, both from the group led by ETH Professor Marco Hutter of the Department of Mechanical and Process Engineering, followed different approaches.

Working in one of the teams is ETH doctoral student Nikita Rudin, who does parkour in his free time. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential,” he says, “but I had a different opinion. In fact, I was sure that a lot more could be done with the mechanics of legged robots.”

With his own parkour experience in mind, Rudin set out to further push the boundaries of what ANYmal could do. And he succeeded, by using machine learning to teach the quadrupedal robot new skills. ANYmal can now scale obstacles and perform dynamic manoeuvres to jump back down from them.

In the process, ANYmal learned like a child would — through trial and error. Now, when presented with an obstacle, ANYmal uses its camera and artificial neural network to determine what kind of impediment it’s dealing with. It then performs movements that seem likely to succeed based on its previous training.
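In highly simplified form, that perception-to-action loop can be sketched as a small vision encoder feeding a policy head that proposes a manoeuvre, as below. The real controller is trained with reinforcement learning in simulation; the architecture, inputs, and skill set here are illustrative assumptions rather than the published system.

```python
import torch
import torch.nn as nn

# Highly simplified sketch of the perception-to-action idea described above:
# a small vision encoder summarizes what the onboard camera sees, and a policy
# head proposes a manoeuvre given the robot's joint states. The architecture
# and skill set are illustrative assumptions, not the published controller.

SKILLS = ["walk", "climb_up", "jump_down", "cross_gap"]

class ParkourPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                   # depth-image encoder
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.policy = nn.Linear(32 + 12, len(SKILLS))   # image features + joint states

    def forward(self, depth_image, joint_states):
        features = self.encoder(depth_image)
        return self.policy(torch.cat([features, joint_states], dim=-1))

policy = ParkourPolicy()
depth = torch.randn(1, 1, 64, 64)      # stand-in for a depth camera frame
joints = torch.randn(1, 12)            # stand-in for 12 joint positions
print(SKILLS[policy(depth, joints).argmax().item()])
```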

Is that the full extent of what’s technically possible? Rudin suggests that this is largely the case for each individual new skill. But he adds that this still leaves plenty of potential improvements. These include allowing the robot to move beyond solving predefined problems and instead asking it to negotiate difficult terrain like rubble-strewn disaster areas.

The quadrupedal robot ANYmal practises parkour in a hall at ETH Zurich. (Photograph: ETH Zurich / Nikita Rudin)

Getting ANYmal ready for precisely that kind of application was the goal of the other project, conducted by Rudin’s colleague and fellow ETH doctoral student Fabian Jenelten. But rather than relying on machine learning alone, Jenelten combined it with a tried-and-tested approach used in control engineering known as model-based control. This provides an easier way of teaching the robot accurate manoeuvres, such as how to recognise and get past gaps and recesses in piles of rubble. In turn, machine learning helps the robot master movement patterns that it can then flexibly apply in unexpected situations. “Combining both approaches lets us get the most out of ANYmal,” Jenelten says.

As a result, the quadrupedal robot is now better at gaining a sure footing on slippery surfaces or unstable boulders. ANYmal is soon also to be deployed on building sites or anywhere that is too dangerous for people — for instance to inspect a collapsed house in a disaster area.

SORI: A softness-rendering interface to unravel the nature of softness perception

by Mustafa Mete, Haewon Jeong, Wei Dawid Wang, Jamie Paik in Proceedings of the National Academy of Sciences

The perception of softness can be taken for granted, but it plays a crucial role in many actions and interactions — from judging the ripeness of an avocado to conducting a medical exam, or holding the hand of a loved one. But understanding and reproducing softness perception is challenging, because it involves so many sensory and cognitive processes.

Robotics researchers have tried to address this challenge with haptic devices, but previous attempts have not distinguished between two primary elements of softness perception: cutaneous cues (sensory feedback from the skin of the fingertip), and kinesthetic cues (feedback about the amount of force on the finger joint).

“If you press on a marshmallow with your fingertip, it’s easy to tell that it’s soft. But if you place a hard biscuit on top of that marshmallow and press again, you can still tell that the soft marshmallow is underneath, even though your fingertip is touching a hard surface,” explains Mustafa Mete, a PhD student in the Reconfigurable Robotics Lab in the School of Engineering. “We wanted to see if we could create a robotic platform that can do the same.”

With SORI (Softness Rendering Interface), the RRL, led by Jamie Paik, has achieved just that. By decoupling cutaneous and kinesthetic cues, SORI faithfully recreates the softness of a range of real materials, filling a gap in the robotics field and enabling many applications where softness sensation is critical, from deep-sea exploration to robot-assisted surgery.

Mete explains that neuroscientific and psychological studies show that cutaneous cues are largely based on how much skin is in contact with a surface, which is often related in part to the deformation of the object. In other words, a surface that envelopes a greater area of your fingertip will be perceived as softer. But because human fingertips vary widely in size and firmness, one finger may make greater contact with a given surface than another.

“We realized that the softness I feel may not be the same as the softness you feel, because of our different finger shapes. So, for our study, we first had to develop parameters for the geometries of a fingertip and its contact surface in order to estimate the softness cues for that fingertip,” Mete explains. Then, the researchers extracted the softness parameters from a range of different materials, and mapped both sets of parameters onto the SORI device.

Building on the RRL’s trademark origami robot research, which has fueled spinoffs for reconfigurable environments and a haptic joystick, SORI is equipped with motor-driven origami joints that can be modulated to become stiffer or more supple. Perched atop the joints is a dimpled silicone membrane. A flow of air inflates the membrane to varying degrees, to envelop a fingertip placed at its center.
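The decoupling can be pictured as two independent command channels, sketched below under assumed, simplified mappings: the origami joints render the kinesthetic cue by taking their stiffness from the material’s force-displacement behaviour, while the membrane renders the cutaneous cue by inflating until the estimated contact area for a given fingertip is matched. The parameter names and constants are placeholders, not SORI’s published calibration.

```python
from dataclasses import dataclass

# Sketch of the decoupling idea described above: the kinesthetic cue is
# rendered by setting the stiffness of the origami joints from the material's
# force-displacement behaviour, while the cutaneous cue is rendered separately
# by inflating the membrane to match the estimated skin contact area for a
# given fingertip. The mappings and constants are placeholders.

@dataclass
class MaterialParams:
    stiffness_n_per_mm: float      # kinesthetic: resistance felt at the finger joint
    contact_area_mm2: float        # cutaneous: skin contact area at a reference force

@dataclass
class DeviceCommand:
    joint_stiffness: float         # origami-joint stiffness setting (arbitrary units)
    membrane_pressure_kpa: float   # inflation of the dimpled silicone membrane

def render_softness(material: MaterialParams, fingertip_area_mm2: float) -> DeviceCommand:
    # Placeholder mappings: stiffer materials -> stiffer joints; a larger
    # contact area relative to the fingertip -> more membrane inflation.
    joint_stiffness = 0.5 * material.stiffness_n_per_mm
    coverage = min(material.contact_area_mm2 / fingertip_area_mm2, 1.0)
    membrane_pressure = 5.0 + 20.0 * coverage
    return DeviceCommand(joint_stiffness, membrane_pressure)

marshmallow = MaterialParams(stiffness_n_per_mm=0.05, contact_area_mm2=90.0)
print(render_softness(marshmallow, fingertip_area_mm2=120.0))
```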

With this novel decoupling of kinesthetic and cutaneous functionality, SORI succeeded in recreating the softness of a range of materials — including beef, salmon, and marshmallow — over the course of several experiments with two human volunteers. It also mimicked materials with both soft and firm attributes (such as a biscuit on top of a marshmallow, or a leather-bound book). In one virtual experiment, SORI even reproduced the sensation of a beating heart, to demonstrate its efficacy at rendering soft materials in motion.

Medicine is therefore a primary area of potential application for this technology; for example, to train medical students to detect cancerous tumors, or to provide crucial sensory feedback to surgeons using robots to perform operations.

Other applications include robot-assisted exploration of space or the deep ocean, where the device could enable scientists to feel the softness of a discovered object from a remote location. SORI is also a potential answer to one of the biggest challenges in robot-assisted agriculture: harvesting tender fruits and vegetables without crushing them.

“This is not intended to act as a softness sensor for robots, but to transfer the feeling of ‘touch’ digitally, just like sending photos or music,” Mete summarizes.

Hazardous machinery: The assignment of agency and blame to robots versus non-autonomous machines

by Rael J. Dawtry, Mitchell J. Callan in Journal of Experimental Social Psychology

Advanced killer robots are more likely to be blamed for civilian deaths than military machines, new research has revealed.

The University of Essex study shows that high-tech bots will be held more responsible for fatalities in identical incidents. Led by the Department of Psychology’s Dr Rael Dawtry, it highlights the impact of autonomy and agency, and shows that people perceive robots to be more culpable if they are described in more advanced terms. It is hoped the study will help influence lawmakers as technology advances.

Dr Dawtry said: “As robots are becoming more sophisticated, they are performing a wider range of tasks with less human involvement. Some tasks, such as autonomous driving or military uses of robots, pose a risk to people’s safety, which raises questions about how — and where — responsibility will be assigned when people are harmed by autonomous robots.

“This is an important, emerging issue for law and policy makers to grapple with, for example around the use of autonomous weapons and human rights. Our research contributes to these debates by examining how ordinary people explain robots’ harmful behaviour and showing that the same processes underlying how blame is assigned to humans also lead people to assign blame to robots.”

As part of the study, Dr Dawtry presented different scenarios to more than 400 people. One saw them judge whether an armed humanoid robot was responsible for the death of a teenage girl. During a raid on a terror compound, its machine guns “discharged” and fatally hit the civilian. When reviewing the incident, the participants blamed the robot more when it was described in more sophisticated terms, despite the outcomes being the same. Other studies showed that simply labelling a variety of devices ‘autonomous robots’ led people to hold them more accountable than when they were labelled ‘machines’.

Dr Dawtry added: “These findings show that how robots’ autonomy is perceived — and, in turn, how blameworthy robots are — is influenced, in a very subtle way, by how they are described. For example, we found that simply labelling relatively simple machines, such as those used in factories, as ‘autonomous robots’ led people to perceive them as agentic and blameworthy, compared to when they were labelled ‘machines’.

“One implication of our findings is that, as robots become more objectively sophisticated, or are simply made to appear so, they are more likely to be blamed.”

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
