RT/ Unlocking the future of robots: The hidden potential of liquid crystals

Published in Paradigm · 26 min read · Mar 19, 2024

Robotics & AI biweekly vol.91, 4th March — 19th March

TL;DR

  • Liquid crystals may revolutionize the construction of future robots and cameras, offering a cost-effective method to manipulate their molecular properties with light exposure.
  • Scientists advance carbon-based quantum material fabrication at the atomic scale using scanning probe microscopy and deep neural networks, showcasing the potential of sub-angstrom artificial intelligence for precise atomic manufacturing.
  • A study involving 151 participants compares human divergent thinking with ChatGPT-4 in three tests, providing insights into creative thought processes.
  • Physicists create a modular robot with both liquid and solid properties, expanding possibilities for versatile robotic applications.
  • Oscillating robots near boundaries can harness forces from water waves for self-propulsion, showcasing a unique approach to robotic mobility.
  • Researchers unveil a robot mimicking the two-handed movements of care-workers during dressing tasks, emphasizing advancements in humanoid robotics.
  • Robotic-assisted surgery for gallbladder cancer proves as effective as traditional methods, offering precision and quicker post-operative recovery.
  • Researchers introduce a dual-modal tactile e-skin, enhancing robot sensing capabilities and enabling bidirectional touch-based human–robot interactions.
  • Large language models, post anti-racism training, still exhibit racist stereotypes, as highlighted by AI researchers from the Allen Institute, Stanford University, and the University of Chicago.
  • A new deep learning model addresses human-robot correspondence issues, improving motion imitation capabilities in humanoid robotic systems.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Spatial Photo‐Patterning of Nematic Liquid Crystal Pretilt and its Application in Fabricating Flat Gradient‐Index Lenses

by Alvin Modin, Robert L. Leheny, Francesca Serra in Advanced Materials

Robots and cameras of the future could be made of liquid crystals, thanks to a new discovery that significantly expands the potential of these chemicals, which are already common in computer displays and digital watches. The findings describe a simple and inexpensive way to manipulate the molecular properties of liquid crystals with light exposure.

“Using our method, any lab with a microscope and a set of lenses can arrange the liquid crystal alignment in any pattern they’d want,” said author Alvin Modin, a doctoral researcher studying physics at Johns Hopkins. “Industrial labs and manufacturers could probably adopt the method in a day.”

Liquid crystal molecules flow like a liquid, but, as in solids, they share a common orientation, and this orientation can change in response to stimuli. They are useful in LCD screens, biomedical imaging instruments, and other devices that require precise control of light and subtle movements. But controlling their alignment in three dimensions requires costly and complicated techniques, Modin said.

Controlling nematic pretilt with unpolarized light.

The team, which includes Johns Hopkins physics professor Robert Leheny and assistant research professor Francesca Serra, discovered they could manipulate the three-dimensional orientation of liquid crystals by controlling light exposures of a photosensitive material deposited on glass.

They shined polarized and unpolarized light at the liquid crystals through a microscope. In polarized light, light waves oscillate in specific directions rather than randomly in all directions, as they would in unpolarized light. The team used the method to create a microscopic lens of liquid crystals able to focus light depending on the polarization of light shining through it.

First, the team beamed polarized light to align the liquid crystals on a surface. Then, they used regular light to reorient the liquid crystals upward from that plane. This allowed them to control the orientation of two types of common liquid crystals and create patterns with features the size of a few micrometers, a fraction of the thickness of a human hair.

The findings could lead to the creation of programmable tools that shapeshift in response to stimuli, like those needed in soft, rubberlike robots to handle complex objects and environments or camera lenses that automatically focus depending on lighting conditions, said Serra, who is also an associate professor at the University of Southern Denmark.

“If I wanted to make an arbitrary three-dimensional shape, like an arm or a gripper, I would have to align the liquid crystals so that when it is subject to a stimulus, this material restructures spontaneously into those shapes,” Serra said. “The missing information until now was how to control this three-dimensional axis of the alignment of liquid crystals, but now we have a way to make that possible.”

Optical characterization of microlenses.

The scientists are working to obtain a patent for their discovery and plan to further test it with different types of liquid crystal molecules and solidified polymers made of these molecules.

“Certain types of structures couldn’t be attempted before because we didn’t have the right control of the three-dimensional alignment of the liquid crystals,” Serra said. “But now we do, so it is just limited by one’s imagination in finding a clever structure to build with this method, using a three-dimensional varying alignment of liquid crystals.”

Intelligent synthesis of magnetic nanographenes via chemist-intuited atomic robotic probe

by Jie Su, Jiali Li, Na Guo, Xinnan Peng, Jun Yin, Jiahao Wang, Pin Lyu, Zhiyao Luo, Koen Mouthaan, Jishan Wu, Chun Zhang, Xiaonan Wang, Jiong Lu in Nature Synthesis

Scientists from the National University of Singapore (NUS) have pioneered a new methodology of fabricating carbon-based quantum materials at the atomic scale by integrating scanning probe microscopy techniques and deep neural networks. This breakthrough highlights the potential of implementing artificial intelligence (AI) at the sub-angstrom scale for enhanced control over atomic manufacturing, benefiting both fundamental research and future applications.

Open-shell magnetic nanographenes represent a technologically appealing class of new carbon-based quantum materials, which host robust π-spin centres and non-trivial collective quantum magnetism. These properties are crucial for developing high-speed electronic devices at the molecular level and creating quantum bits, the building blocks of quantum computers. Despite significant advancements in the synthesis of these materials through on-surface synthesis, a type of solid-phase chemical reaction, achieving precise fabrication and tailoring of the properties of these quantum materials at the atomic level has remained a challenge.

The research team, led by Associate Professor LU Jiong from the NUS Department of Chemistry and the Institute for Functional Intelligent Materials, together with Associate Professor ZHANG Chun from the NUS Department of Physics, has introduced the concept of the chemist-intuited atomic robotic probe (CARP), integrating probe chemistry knowledge and artificial intelligence to fabricate and characterise open-shell magnetic nanographenes at the single-molecule level. This allows for precise engineering of their π-electron topology and spin configurations in an automated manner, mirroring the capabilities of human chemists.

Additional nc-AFM characterization data for molecules 1 and 2.

The CARP concept utilises deep neural networks, trained on the experience and knowledge of surface-science chemists, to autonomously synthesise open-shell magnetic nanographenes. It can also extract chemical information from the experimental training database, offering conjectures about unknown mechanisms. This serves as an essential supplement to theoretical simulations, contributing to a more comprehensive understanding of probe chemistry reaction mechanisms. The research is a collaboration involving Associate Professor WANG Xiaonan from Tsinghua University in China.

The researchers tested the CARP concept on a complicated site-selective cyclodehydrogenation reaction used for producing chemical compounds with specific structural and electronic properties. Results show that the CARP framework can efficiently adopt the expert knowledge of the scientist and convert it into machine-understandable tasks, mimicking the workflow to perform single-molecule reactions that can manipulate the geometric shape and spin characteristic of the final chemical compound.

Schematic representation of the workflow and underlying mechanisms for interpreting the trained model.

In addition, the research team aims to harness the full potential of AI capabilities by extracting hidden insights from the database. They established a smart learning paradigm using a game theory-based approach to examine the framework’s learning outcomes. The analysis shows that CARP effectively captured important details that humans might miss, especially when it comes to making the cyclodehydrogenation reaction successful. This suggests that the CARP framework could be a valuable tool for gaining additional insights into the mechanisms of unexplored single-molecule reactions.
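The article does not detail the game-theory-based analysis, but a common approach in this family is Shapley-value attribution, which credits each input of a trained model with its average marginal contribution to a prediction. The sketch below is purely illustrative: the toy scoring function and parameter names are assumptions, not the CARP implementation.

```python
from itertools import combinations
from math import factorial

# Hypothetical probe-manipulation parameters (illustrative names only).
FEATURES = ["bias_voltage", "tip_height", "electron_dose"]

def model(present):
    """Toy scoring function standing in for a trained network.
    `present` is the set of features included; absent features default to 0."""
    x = {f: (1.0 if f in present else 0.0) for f in FEATURES}
    # Arbitrary nonlinear interaction, purely for illustration.
    return 0.6 * x["bias_voltage"] + 0.3 * x["tip_height"] \
        + 0.4 * x["bias_voltage"] * x["electron_dose"]

def shapley_values(features, value_fn):
    """Exact (brute-force) Shapley values over a small feature set."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value_fn(set(subset) | {f}) - value_fn(set(subset))
                phi[f] += weight * marginal
    return phi

print(shapley_values(FEATURES, model))
```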

Assoc Prof Lu said, “Our main goal is to work at the atomic level to create, study and control these quantum materials. We are striving to revolutionise the production of these materials on surfaces to enable more control over their outcomes, right down to the level of individual atoms and bonds.

“Our goal in the near future is to extend the CARP framework further to adopt versatile on-surface probe chemistry reactions with scale and efficiency. This has the potential to transform conventional laboratory-based on-surface synthesis process towards on-chip fabrication for practical applications. Such transformation could play a pivotal role in accelerating the fundamental research of quantum materials and usher in a new era of intelligent atomic fabrication,” added Assoc Prof Lu.

The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks

by Kent F. Hubert, Kim N. Awa, Darya L. Zabelina in Scientific Reports

Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.

Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as “What is the best way to avoid talking about politics with my parents?” In the study, GPT-4 provided more original and elaborate answers than the human participants. The study was authored by U of A Ph.D. students in psychological science Kent F. Hubert and Kim N. Awa, as well as Darya L. Zabelina, an assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.

The three tests utilized were the Alternative Use Task, which asks participants to come up with creative uses for everyday objects like a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, like “what if humans no longer needed sleep?”; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For instance, there is not much semantic distance between “dog” and “cat” while there is a great deal between words like “cat” and “ontology.”
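Divergent Associations Task responses are commonly scored by embedding each word in a vector space and averaging the pairwise semantic distances. The sketch below illustrates that idea; the three-dimensional toy vectors are placeholders for real pretrained word embeddings, and this is not the authors' scoring code.

```python
import numpy as np

# Placeholder embeddings; a real DAT scorer would use vectors from a
# pretrained word-embedding model (e.g., GloVe or word2vec).
toy_embeddings = {
    "dog":      np.array([0.90, 0.80, 0.10]),
    "cat":      np.array([0.85, 0.75, 0.15]),
    "ontology": np.array([-0.20, 0.10, 0.95]),
}

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def dat_score(words, embeddings):
    """Average pairwise semantic distance across a list of nouns."""
    vecs = [embeddings[w] for w in words]
    dists = [cosine_distance(vecs[i], vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return float(np.mean(dists))

print(dat_score(["dog", "cat"], toy_embeddings))        # small distance
print(dat_score(["cat", "ontology"], toy_embeddings))   # much larger distance
```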

Analysis of variance of originality on the alternative uses task.

Answers were evaluated for the number of responses, length of response and semantic difference between words. Ultimately, the authors found that “Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks.”

This finding does come with some caveats. The authors state, “It is important to note that the measures used in this study are all measures of creative potential, but the involvement in creative activities or achievements are another aspect of measuring a person’s creativity.” The purpose of the study was to examine human-level creative potential, not necessarily people who may have established creative credentials.

Hubert and Awa further note that “AI, unlike humans, does not have agency” and is “dependent on the assistance of a human user. Therefore, the creative potential of AI is in a constant state of stagnation unless prompted.”

Also, the researchers did not evaluate the appropriateness of GPT-4 responses. So while the AI may have provided more responses and more original responses, human participants may have felt they were constrained by their responses needing to be grounded in the real world.

Awa also acknowledged that the human motivation to write elaborate answers may not have been high, and said there are additional questions about “how do you operationalize creativity? Can we really say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking? So I think it has us critically examining what are the most popular measures of divergent thinking.”

Whether the tests are perfect measures of human creative potential is not really the point. The point is that large language models are rapidly progressing and outperforming humans in ways they have not before. Whether they are a threat to replace human creativity remains to be seen. For now, the authors note that, “Moving forward, future possibilities of AI acting as a tool of inspiration, as an aid in a person’s creative process or to overcome fixedness is promising.”

A self-organizing robotic aggregate using solid and liquid-like collective states

by Baudouin Saintyves et al in Science Robotics

Schools of fish, colonies of bees, and murmurations of starlings exhibit swarming behavior in nature, flowing like a liquid in synchronized, shape-shifting coordination. Through the lens of fluid mechanics, swarming is of particular interest to physicists like Heinrich Jaeger, the University of Chicago Sewell Avery Distinguished Service Professor in Physics and the James Franck Institute, and James Franck Institute research staff scientist Baudouin Saintyves, who apply physics principles to the development of modular, adaptive robotics.

A swarm’s ability to flow like liquid, act in concert without a leader, and react to its environment inspired Saintyves and Jaeger’s latest creation, which they call the “Granulobot.” It can split apart, reassemble, and reorganize to adapt to its environment. And depending on its configuration, it can act like either a rigid solid or a flowing liquid.

The aggregate system “blurs the distinction between soft, modular, and swarm robotics,” says the team, which developed it in collaboration with Matthew Spenko, professor in the Department of Mechanical and Aerospace Engineering at the Illinois Institute of Technology in Chicago.

The “granular robot” is a collection of simple, cylindrical, gear-like units, outfitted with two magnets that can rotate around the cylinder’s axis. One magnet rotates freely while a battery-powered motor drives the other. This design allows the individual units to connect magnetically and once coupled, push their neighbors and cause them to spin. The contact between each unit moves the aggregate as a whole, much like a swarm.

Red arrows represent the actuated magnets’ direction of rotation. Blue arrows represent Granulobots in the process of reconfiguration. (A) Individual Granulobot units can roll and attach magnetically into larger assemblies, which then can move using a subset of units as wheels. (B) Exerting torque onto their neighbors, individual units and groups of units can reposition themselves and thus rearrange the assembly’s shape. (C) By exerting torque larger than the magnetic binding between neighbors, units can split off and form autonomous robots on their own. Credit: Baudouin Saintyves

“The field of soft robotics is particularly interesting for applications where robots interface with humans,” says Jaeger. “You don’t want people to get hurt.” Yet the necessity for soft robotics extends beyond safety into suitability. A robot that can change shape can crawl into “nooks and crannies,” says Jaeger, or manage uncertain terrain — both useful for search and rescue, for instance.

For a robot to change shape and perform different functions, its ability to fluctuate between rigid and soft predictably and reversibly is key. Granular materials possess inherent properties that make this transformation possible. This class of materials can transition between liquid and solid behavior based on contact rather than temperature.

That transition is caused by a phenomenon called jamming, which happens when particles in a disordered, chaotic system are so close together that they push against each other, and their flow stops. Jaeger — a condensed matter physicist — describes driving on a highway: Sometimes you’re cruising along, but sometimes you hit bumper-to-bumper cars, and traffic grinds to a halt. When this happens in a granular material, says Jaeger, “it’s essentially a big traffic jam.”

Jamming can be seen in action with a brick of vacuum-sealed coffee: while the seal holds, the tightly packed grounds behave like a rigid solid, but break the seal and the coffee grounds pour out. Ground coffee works so well in this regard that Jaeger used it to create a soft robotic gripper that can grasp and hold objects regardless of their shape. A Granulobot cylinder is far bigger than a coffee ground, but the principle is the same.

“Jamming is the foundation for the Granulobot to be able to transition from a malleable, more liquid behavior,” says Jaeger, “to something much more like a solid.”

The Granulobot is designed to demonstrate the team’s modular, self-organizing approach, but in the future, perhaps the modules could be extremely small — thousands of units so tiny that the group appears to be a singular mass, notes Jaeger. “Another direction that could be really fun to think about is to make them much, much bigger.”

Physics often relies on specific conditions, says Jaeger — extremely small or hot or cold. “Many of my colleagues must work in certain environments, otherwise their whole physics won’t work. The same can be said for life.” Yet the physics principles underpinning the Granulobot are not tied to scale or temperature. “They could work underwater; they could work in outer space,” says Jaeger.

The Granulobot promises exciting advances in robotics, but Saintyves and Jaeger are physicists. They are using this research to also find new ways to think about matter.

“Depending on the self-coordination and the transfer of energy around the environment, your system will either be a programmable material or an autonomous robot. That’s a continuum,” says Saintyves. But “we’re blurring the frontier between matter and robotics.” Within a classical programmable matter approach, the material is a machine; “Here we are exploring the idea that the machine is a material.”

Probing Hydrodynamic Fluctuation-Induced Forces with an Oscillating Robot

by Steven W. Tarr et al in Physical Review Letters

Odd things can happen when a wave meets a boundary. In the ocean, tsunami waves that are hardly noticeable in deep water can become quite large at the continental shelf and shore, as the waves slow and their mass moves upward.

The Casimir effect is the attraction of two uncharged, parallel plates because virtual quantum mechanical waves with wavelengths greater than the plate separation are excluded between them, so virtual fields outside the plates push them inward. Parallel plates partially submersed in water attract one another as longer wavelength momentum-carrying water waves are excluded from the central region. (Speculation about a maritime Casimir effect between docked ships is still under debate.)

Now scientists have shown that a floating, symmetric oscillating robot will experience forces when it comes close to a boundary. These forces can be used for self-propulsion without the need for more typical mechanisms such as a propeller.

Wave-generating robot boat.

Led by Ph.D. student Steven W. Tarr at the Georgia Institute of Technology, the team built a 3D-printed circular float 12 cm in diameter with a mass of 368 g. Onboard, they attached battery-operated motors that vibrate the boat with a controllable frequency, producing a vibrating motion along the fore-aft (roll) axis. When powered on, the craft produced a series of symmetrical waves on the water surface, all the same wavelength, radiating away from it.

An acrylic sheet was placed nearby in the water to act as a boundary, sufficiently long to effectively create a one-dimensional system, so only the boat’s movement perpendicular to the wall needed to be monitored. Far from the wall (relative to the size of the boat and the wavelengths of the water waves), there was no net force on the boat. But close to the wall, the wave-generating boat was observed to experience either an attractive or repulsive behavior, depending on its initial distance from the wall and the frequency of water waves being generated.

Researchers used a webcam to record the boat’s movement and measured its lateral motion (perpendicular to the wall), along with its acceleration in that direction (which was less than 100 micrometers per second squared). Waves emanating from the oscillating boat were viewed and measured with a high-speed camera via Schlieren photography, which reveals disturbances in a fluid by capturing changes in its refractive index.

When it started close to the wall — about half its radius or less — the boat was increasingly attracted to the wall as its initial distance decreased and its frequency of oscillation increased (and hence so did the frequency of the water waves). In a mid-range, at an initial distance of about two-thirds of a radius and at lower frequencies, the force on the boat turned slightly repulsive, moving it away from the wall. At large distances (relative to the radius), there was no net force on the boat.

Because the acceleration was quite small, less than 10-millionths of Earth’s surface gravitational acceleration (“g”), steps were taken to isolate the forces from short-term effects from viscosity, drag on the boat due to the waves themselves, and the boat’s inertia. Still, the forces were small, below 100 micronewtons.
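As a quick consistency check on that figure (a back-of-the-envelope calculation, not from the paper):

\[
a < 100\ \mu\mathrm{m/s^2} = 1\times10^{-4}\ \mathrm{m/s^2},
\qquad
\frac{a}{g} < \frac{1\times10^{-4}\ \mathrm{m/s^2}}{9.8\ \mathrm{m/s^2}} \approx 1\times10^{-5},
\]

i.e., roughly ten millionths of g.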

The net force that propels the boat arose when waves reflected from the wall struck the hull with sufficient energy. On the wall side of the boat, the reflected waves arrived with a smaller wave height (amplitude) than when they left, owing to dispersion as they traveled across the water’s surface. These smaller returning waves interfered with the larger emitted waves, effectively decreasing the amplitude of the waves the boat emitted on the wall side.

In effect, the boat emitted asymmetrical waves, larger in the direction opposite the wall, and smaller towards the wall. This asymmetry between the two sides of the boat resulted in an attractive force towards the wall. Further from the wall, the reflected waves had too small a height to affect net wave generation, but still carried some momentum, resulting in a slight repulsive force. Far from the wall, the reflected waves had dissipated so they provided no meaningful force.

Frequency dependence arose because while the energy of the reflected wave increased with frequency, the contact of the emitted waves with the wall led to complicated dynamics at the contact line, dissipating substantial energy and modifying the amplitude of the reflected waves.

“Our study is a terrific example of the wealth of phenomena waiting to be discovered at the interface of physics and robotics,” said Daniel Goldman, a co-author and physics professor at the Georgia Institute of Technology, who calls this field “robophysics.”

“Making and using analogies from other branches of physics (in this case, the Casimir effect in quantum field theory) can be useful in developing new approaches to robot movement analogous to our previous work on ‘mechanical diffraction’ in undulatory limbless systems,” Goldman concluded.

Do You Need a Hand? — A Bimanual Robotic Dressing Assistance Scheme

by Jihong Zhu, Michael Gienger, Giovanni Franzese, Jens Kober in IEEE Transactions on Robotics

Scientists have developed a new robot that can ‘mimic’ the two-handed movements of care-workers as they dress an individual. Until now, assistive dressing robots, designed to help an elderly person or a person with a disability get dressed, have been created in the laboratory as a one-armed machine, but research has shown that this can be uncomfortable for the person in care or impractical.

To tackle this problem, Dr Jihong Zhu, a robotics researcher at the University of York’s Institute for Safe Autonomy, proposed a two-armed assistive dressing scheme, an approach not attempted in previous research but inspired by caregivers, who have demonstrated that specific actions are required to reduce discomfort and distress for the individual in their care. It is thought that this technology could be significant in the social care system, allowing care-workers to spend less time on practical tasks and more time on the health and mental well-being of individuals.

Dr Zhu gathered important information on how care-workers moved during a dressing exercise by allowing a robot to observe and learn from human movements and then, through AI, generating a model that mimics how human helpers do their task. This gave the researchers enough data to show that two hands are needed for dressing rather than one, along with information on the angles the arms make and on when a human needs to intervene to stop or alter certain movements.

Dr Zhu, from the University of York’s Institute for Safe Autonomy and the School of Physics, Engineering and Technology, said: “We know that practical tasks, such as getting dressed, can be done by a robot, freeing up a care-worker to concentrate more on providing companionship and observing the general well-being of the individual in their care. It has been tested in the laboratory, but for this to work outside of the lab we really needed to understand how care-workers did this task in real-time.

“We adopted a method called learning from demonstration, which means that you don’t need an expert to programme a robot, a human just needs to demonstrate the motion that is required of the robot and the robot learns that action. It was clear that for care workers two arms were needed to properly attend to the needs of individuals with different abilities.

“One hand holds the individual’s hand to guide them comfortably through the arm of a shirt, for example, whilst at the same time the other hand moves the garment up and around or over. With the current one-armed machine scheme a patient is required to do too much work in order for a robot to assist them, moving their arm up in the air or bending it in ways that they might not be able to do.”

The team was also able to build algorithms that made the robotic arm flexible enough to perform the pulling and lifting actions, yet able to be stopped mid-action by the gentle touch of a human hand, or guided out of an action by a hand moving the robot’s arm left or right, up or down, without the robot resisting.
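The article does not specify the control law, but one common way to obtain this kind of compliant, interruptible behavior is an admittance-style loop, in which measured contact force from a human hand offsets, or halts, the commanded trajectory. The following is a minimal simulated sketch under those assumptions, not the York team’s implementation.

```python
import numpy as np

def admittance_step(x_cmd, x_ref, f_ext, dt=0.01,
                    stiffness=200.0, damping=40.0, halt_force=15.0):
    """One step of a simple 1-D admittance law.

    x_cmd: current commanded position (m)
    x_ref: nominal position from a learned dressing trajectory (m)
    f_ext: external force measured at the wrist (N), e.g. a guiding hand
    Returns the next commanded position; motion pauses if the human
    pushes harder than `halt_force`.
    """
    if abs(f_ext) > halt_force:          # gentle-stop behavior
        return x_cmd
    # Spring-damper pull toward the reference, displaced by the human's force.
    velocity = (stiffness * (x_ref - x_cmd) + f_ext) / damping
    return x_cmd + velocity * dt

# Tiny simulation: the robot follows a reference ramp, a human nudges it around t = 0.5 s.
x = 0.0
for i in range(100):
    t = i * 0.01
    x_ref = 0.2 * t                      # nominal trajectory (m)
    f_hand = 5.0 if 0.5 < t < 0.7 else 0.0
    x = admittance_step(x, x_ref, f_hand)
print(f"final commanded position: {x:.3f} m")
```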

Dr Zhu said: “Human modelling can really help with efficient and safe human and robot interactions, but it is not only important to ensure it performs the task, but that it can be halted or changed mid-action should an individual desire it. Trust is a significant part of this process, and the next step in this research is testing the robot’s safety limitations and whether it will be accepted by those who need it most.”

Innovations in surgery for gallbladder cancer: A review of robotic surgery as a feasible and safe option

by Sebastian Mellado, Ariana M. Chirban, Emanuel Shapera, Belen Rivera, Elena Panettieri, Marcelo Vivanco, Claudius Conrad, Iswanto Sucandy, Eduardo A. Vega in The American Journal of Surgery

Approximately 2,000 people die of gallbladder cancer (GBC) each year in the U.S., with only one in five cases diagnosed at an early stage. With GBC being the most common biliary tract cancer and the 17th most deadly cancer worldwide, proper management of the disease demands urgent attention. For patients diagnosed, surgery is the most promising curative treatment. While minimally invasive surgical techniques, including laparoscopic and robotic surgery, have been increasingly adopted for gastrointestinal malignancies, there are reservations about using minimally invasive surgery for gallbladder cancer.

A new study by researchers at Boston University Chobanian & Avedisian School of Medicine has found that robotic-assisted surgery for GBC is as effective as traditional open and laparoscopic methods, with added benefits in precision and quicker post-operative recovery.

“Our study demonstrates the viability of robotic surgery for gallbladder cancer treatment, a field where minimally invasive approaches have been cautiously adopted due to concerns over oncologic efficacy and technical challenges,” says corresponding author Eduardo Vega, MD, assistant professor of surgery at the school.

The researchers conducted a systematic review of the literature focusing on comparing patient outcomes following robotic, open and laparoscopic surgeries. This involved analyzing studies that reported on oncological results and perioperative benefits, such as operation time, blood loss and recovery period.

According to the researchers, there has been reluctance to utilize robotic surgery for GBC due to fears of dissemination of the tumor via tumor manipulation, bile spillage and technical challenges, including liver resection and adequate removal of lymph nodes.

“Since its early use, robotic surgery has advanced in ways that provide surgeons technical advantages over laparoscopic surgery, improving dexterity and visualization of the surgical field. Additionally, robotic assistance has eased the process of detailed dissection around blood vessels as well as knot tying and suturing, and provides high-definition, three-dimensional vision, allowing the surgeon to perform under improved ergonomics,” said Vega.

The researchers believe these findings are significant because they suggest robotic surgery is a safer and potentially less painful option for gallbladder cancer treatment, with a faster recovery time. Clinically, it could lead to the adoption of robotic surgery as a standard care option for gallbladder cancer, improving patient outcomes and potentially reducing healthcare costs through shorter hospital stays, Vega added.

Dual-modal Tactile E-skin: Enabling Bidirectional Human-Robot Interaction via Integrated Tactile Perception and Feedback

by Shilong Mu et al in arXiv

In recent years, materials scientists and engineers have introduced increasingly sophisticated materials for robotic and prosthetic applications. This includes a wide range of electronic skins, or e-skins, designed to sense the surrounding environment and artificially reproduce the sense of touch.

Researchers at Tsinghua University recently introduced a new dual-modal tactile e-skin that could enhance the sensing capabilities of robots, while also allowing them to communicate information by leveraging a human user’s sense of touch. This e-skin can both sense tactile information and produce tactile feedback, thus enabling bidirectional touch-based human–robot interactions (HRIs).

“Our paper presents a dual-modal electronic skin (e-skin) designed to enhance human-robot interaction (HRI),” Dr. Wenbo Ding, co-author of the paper, said. “It addresses the limitations of current electronic skin technology, which can only provide tactile perception or tactile feedback, but not both. The working mechanisms of the sensing and feedback units cannot be seamlessly combined, resulting in larger devices and higher manufacturing costs.”

The primary objective of the recent study by Dr. Ding and his colleagues was to develop a dual-modal electronic skin that responds to contact forces and enables the bidirectional transmission of tactile information. To achieve this, the e-skin they introduced integrates multimodal magnetic tactile sensing with vibration feedback.

Programmable fine weighing and average resolution of flour. Credit: Mu et al

“The e-skin integrates a flexible magnetic film, silicon elastomer, Hall sensor array, actuator array, and microcontroller unit,” Dr. Ding explained. “The Hall sensor detects the deformation of the magnetic film caused by mechanical pressure, which leads to changes in the magnetic field, thereby achieving multi-dimensional tactile perception. Concurrently, the actuator array generates mechanical vibration to provide tactile feedback, enhancing the interactive experience between humans and robots.”
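To make the sensing-plus-feedback idea concrete, here is a minimal software sketch of such a loop: readings from a Hall-sensor array are converted to an estimated contact force, and a vibration amplitude is computed once the force crosses a threshold. The sensor model, calibration constant, and function names are hypothetical, not from the Tsinghua design.

```python
import numpy as np

CALIBRATION = 0.5   # hypothetical: newtons per unit change in field magnitude

def estimate_force(hall_readings, baseline):
    """Estimate normal contact force from a 3-axis Hall-sensor array.

    hall_readings, baseline: arrays of shape (n_sensors, 3) holding the
    measured and rest-state magnetic field vectors.
    """
    delta = np.linalg.norm(hall_readings - baseline, axis=1)  # field change per sensor
    return CALIBRATION * delta.sum()

def feedback_amplitude(force, threshold=0.2, max_force=5.0):
    """Map estimated force to a vibration amplitude in [0, 1] for the actuator array."""
    if force < threshold:
        return 0.0
    return min((force - threshold) / (max_force - threshold), 1.0)

# Example with made-up readings from a 2 x 2 sensor array pressed near one corner.
baseline = np.zeros((4, 3))
touch = np.array([[0.0, 0.0, 1.2],
                  [0.0, 0.0, 0.8],
                  [0.0, 0.0, 0.1],
                  [0.0, 0.0, 0.0]])
force = estimate_force(touch, baseline)
print(force, feedback_amplitude(force))
```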

Dr. Ding and his colleagues tested a prototype of their e-skin in a series of experiments, exploring its potential in three main applications: object recognition, precise weighing, and immersive HRI. They found that the e-skin was effective both in sensing tactile information and in producing tactile feedback.

“The weighing experiment is particularly innovative, as it employs tactile vibrations in unexpected and creative ways,” Dr. Ding said. “Additionally, the speed of the delicate weighing process can be controlled, and the control accuracy can be improved to roughly 0.0246 g, meeting daily cooking and industrial weighing requirements. The device costs less than $26 and weighs less than 29 grams.”

The dual-modal tactile e-skin introduced by Dr. Ding and his colleagues could soon be deployed and tested in a variety of settings. Among other things, it could advance robotic manipulation, enable more precise control in industrial robots, and open new routes for the development of sophisticated prosthetic limbs.

“Our future research and development will focus on miniaturizing e-skin components for a broader range of applications, incorporating new sensing modalities (e.g., temperature sensing), and adding auditory feedback,” Dr. Ding added. “These advancements aim to provide a more comprehensive sensory experience and improve human-machine collaboration.”

Dialect prejudice predicts AI decisions about people’s character, employability, and criminality

by Valentin Hofmann et al in arXiv

A small team of AI researchers from the Allen Institute for AI, Stanford University and the University of Chicago, all in the U.S., has found that dozens of popular large language models continue to use racist stereotypes even after they have been given anti-racism training. The group has published a paper describing their experiments with chatbots such as OpenAI’s GPT-4 and GPT-3.5.

Anecdotal evidence has suggested that many of the most popular LLMs today may offer racist replies in response to queries — sometimes overtly and other times covertly. In response, many makers of such models have given their LLMs anti-racism training. In this new effort, the research team tested dozens of popular LLMs to find out if the efforts have made a difference.

The researchers trained AI chatbots on text documents written in the style of African American English and prompted the chatbots to offer comments regarding the authors of the texts. They then did the same with text documents written in the style of Standard American English. They compared the replies given to the two types of documents.
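As a rough illustration of this matched-guise setup (not the authors’ code; the prompt wording and model name here are placeholders), one could query a chat model with the same question about two dialect variants of a text and compare the adjectives it returns:

```python
# Illustrative sketch of matched-guise-style probing with the OpenAI Python client.
# Requires OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def describe_author(text):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Here is something a person wrote:\n\n{text}\n\n"
                       "List five adjectives describing what this person is probably like.",
        }],
    )
    return response.choices[0].message.content

# Example guise pair: the same sentence in Standard American English and
# African American English (illustrative).
sae_text = "I am so happy when I wake up from a bad dream because it feels too real."
aae_text = "I be so happy when I wake up from a bad dream cus they be feelin too real."

print("SAE guise:", describe_author(sae_text))
print("AAE guise:", describe_author(aae_text))
```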

Virtually all the chatbots returned results that the researchers deemed as supporting negative stereotypes. As one example, GPT-4 suggested that the authors of the papers written in African American English were likely to be aggressive, rude, ignorant and suspicious. Authors of papers written in Standard American English, in contrast, received much more positive results.

Basic functioning of Matched Guise Probing.

The researchers also found that the same LLMs were much more positive when asked to comment on African Americans in general, offering terms such as intelligent, brilliant, and passionate. Unfortunately, they also found bias when asking the LLMs to describe what type of work the authors of the two types of papers might do for a living. For the authors of the African American English texts, the LLMs tended to suggest jobs that seldom require a degree or that were related to sports or entertainment. The models were also more likely to suggest that such authors be convicted of various crimes and to recommend the death penalty for them.

The research team concludes by noting that the larger LLMs tended to show more negative bias toward authors of African American English texts than did the smaller models, which, they suggest, indicates the problem runs very deep.

Unsupervised Motion Retargeting for Human-Robot Imitation

by Louis Annabi et al in arXiv

Robots that can closely imitate the actions and movements of humans in real-time could be incredibly useful, as they could learn to complete everyday tasks in specific ways without having to be extensively pre-programmed on these tasks. While techniques to enable imitation learning considerably improved over the past few years, their performance is often hampered by the lack of correspondence between a robot’s body and that of its human user.

Researchers at U2IS, ENSTA Paris recently introduced a new deep learning-based model that could improve the motion imitation capabilities of humanoid robotic systems. This model tackles motion imitation as three distinct steps, designed to reduce the human-robot correspondence issues reported in the past.

“This early-stage research work aims to improve online human-robot imitation by translating sequences of joint positions from the domain of human motions to a domain of motions achievable by a given robot, thus constrained by its embodiment,” Louis Annabi, Ziqi Ma, and Sao Mai Nguyen wrote in their paper. “Leveraging the generalization capabilities of deep learning methods, we address this problem by proposing an encoder-decoder neural network model performing domain-to-domain translation.”

Steps of the human-robot imitation process.

The model developed by Annabi, Ma, and Nguyen separates the human-robot imitation process into three key steps, namely pose estimation, motion retargeting and robot control. Firstly, it utilizes pose estimation algorithms to predict sequences of skeleton-joint positions that underpin the motions demonstrated by human agents. Subsequently, the model translates this predicted sequence of skeleton-joint positions into similar joint positions that can realistically be produced by the robot’s body. Finally, these translated sequences are used to plan the motions of the robot, theoretically resulting in dynamic movements that could help the robot perform the task at hand.
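The paper frames retargeting as domain-to-domain translation with an encoder-decoder network; a minimal sketch of such an architecture is shown below. The dimensions, layer choices, and the shape-check example are illustrative assumptions, not the authors’ configuration.

```python
import torch
import torch.nn as nn

class MotionRetargetingNet(nn.Module):
    """Encoder-decoder that maps a sequence of human joint positions
    to a sequence of robot joint targets (illustrative dimensions)."""

    def __init__(self, human_dim=17 * 3, robot_dim=26, latent_dim=64):
        super().__init__()
        self.encoder = nn.GRU(human_dim, latent_dim, batch_first=True)
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, robot_dim)

    def forward(self, human_seq):
        # human_seq: (batch, time, human_dim) flattened 3-D skeleton joints
        latent, _ = self.encoder(human_seq)
        decoded, _ = self.decoder(latent)
        return self.head(decoded)          # (batch, time, robot_dim) joint targets

# Quick shape check with a random 2-second clip at 30 Hz.
model = MotionRetargetingNet()
human_motion = torch.randn(1, 60, 17 * 3)
robot_motion = model(human_motion)
print(robot_motion.shape)  # torch.Size([1, 60, 26])
```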

“To train such a model, one could use pairs of associated robot and human motions, [yet] such paired data is extremely rare in practice, and tedious to collect,” the researchers wrote in their paper. “Therefore, we turn towards deep learning methods for unpaired domain-to-domain translation, that we adapt in order to perform human-robot imitation.”

Annabi, Ma, and Nguyen evaluated their model’s performance in a series of preliminary tests, comparing it to a simpler method to reproduce joint orientations that is not based on deep learning. Their model did not achieve the results they were hoping for, suggesting that current deep learning methods might not be able to successfully re-target motions in real-time.

The researchers now plan to conduct further experiments to identify potential issues with their approach, so that they can tackle them and adapt the model to improve its performance. The team’s findings so far suggest that while unsupervised deep learning techniques can be used to enable imitation learning in robots, their performance is still not good enough for them to be deployed on real robots.

“Future work will extend the current study in three directions: Further investigating the failure of the current method, as explained in the last section, creating a dataset of paired motion data from human-human imitation or robot-human imitation, and improving the model architecture in order to obtain more accurate retargeting predictions,” the researchers conclude in their paper.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
