RT/ AI finds a way to people’s hearts (literally!)

Paradigm · Jul 25, 2023

Robotics biweekly vol.78, 29th June — 25th July

TL;DR

  • Scientists have successfully developed a model that utilizes AI to accurately classify cardiac functions and valvular heart diseases from chest radiographs. The Area Under the Curve, or AUC, of the AI classification showed a high level of accuracy, exceeding 0.85 for almost all indicators and reaching 0.92 for detecting left ventricular ejection fraction — an important measure for monitoring cardiac function.
  • A new soft robotic glove is lending a ‘hand’ and providing hope to piano players who have suffered a disabling stroke or other neurotrauma. Combining flexible tactile sensors, soft actuators and AI, this robotic glove is the first to ‘feel’ the difference between correct and incorrect versions of the same song and to combine these features into a single hand exoskeleton. Unlike prior exoskeletons, this new technology provides precise force and guidance in recovering the fine finger movements required for piano playing and other complex tasks.
  • Researchers have presented important first steps in building underwater navigation robots.
  • A solution for temporal asymmetry — or entropy production — in thermodynamics has been developed to further our understanding of the behavior of biological systems, machine learning, and AI tools. The researchers worked on the time-irreversible Ising model dynamics caused by asymmetric connections between neurons.
  • AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system — both hardware and software combined — has been a decades-long challenge. Engineers are exploring how optical “memristors” may be a key to developing neuromorphic computing.
  • Researchers published an article that examines how the perceived meaning of manual labor can help predict the adoption of autonomous products.
  • Scientists show how their multilegged walking robot can be steered by inducing a dynamic instability. By making the couplings between segments more flexible, the robot changes from walking straight to moving in a curved path. This work can lead to more energy-efficient and reliable robotic navigation of terrain.
  • As we move into a world where human-machine interactions are becoming more prominent, pressure sensors that are able to analyze and simulate human touch are likely to grow in demand.
  • A new, AI-based technique for measuring fluid flow in the brain could lead to treatments for diseases such as Alzheimer’s.
  • Scientists have laid out a new approach to enhance AI-powered computer vision technologies by adding physics-based awareness to data-driven techniques. The study offered an overview of a hybrid methodology designed to improve how AI-based machines sense, interact with, and respond to their environment in real time.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista
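
Those two figures pin down the implied starting point; a back-of-the-envelope sketch (only the 26 percent CAGR and the 2025 endpoint come from the text):

```python
# Back-of-the-envelope check (assumptions: ~26% CAGR from 2018 to 2025 and
# a 2025 market of ~210 billion USD, the only inputs taken from the text).
cagr = 0.26
target_2025 = 210.0  # billion USD

# Compound backwards over seven years to get the implied 2018 market size.
implied_2018 = target_2025 / (1 + cagr) ** (2025 - 2018)
print(f"Implied 2018 market size: {implied_2018:.0f} B USD")  # ~42 B USD
```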

Latest News & Research

Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study

by Daiju Ueda et al. in The Lancet Digital Health

AI (artificial intelligence) may sound like a cold robotic system, but Osaka Metropolitan University scientists have shown that it can deliver heartwarming — or, more to the point, “heart-warning” — support. They unveiled an innovative use of AI that classifies cardiac functions and pinpoints valvular heart disease with unprecedented accuracy, demonstrating continued progress in merging the fields of medicine and technology to advance patient care.

Valvular heart disease, one cause of heart failure, is often diagnosed using echocardiography. This technique, however, requires specialized skills, so there is a corresponding shortage of qualified technicians. Meanwhile, chest radiography is one of the most common tests to identify diseases, primarily of the lungs. Even though the heart is also visible in chest radiographs, little was previously known about the ability of chest radiographs to detect cardiac function or disease. Chest radiographs, or chest X-rays, are performed in many hospitals and take very little time to conduct, making them highly accessible and reproducible. Accordingly, the research team led by Dr. Daiju Ueda, from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine of Osaka Metropolitan University, reckoned that if cardiac function and disease could be determined from chest radiographs, this test could serve as a supplement to echocardiography.

Representative saliency maps for the external test dataset.

Dr. Ueda’s team successfully developed a model that utilizes AI to accurately classify cardiac functions and valvular heart diseases from chest radiographs. Since AI trained on a single dataset faces potential bias, leading to low accuracy, the team aimed for multi-institutional data. Accordingly, a total of 22,551 chest radiographs associated with 22,551 echocardiograms were collected from 16,946 patients at four facilities between 2013 and 2021. With the chest radiographs set as input data and the echocardiograms set as output data, the AI model was trained to learn features connecting both datasets.

The AI model was able to precisely categorize six selected types of valvular heart disease, with the Area Under the Curve, or AUC, ranging from 0.83 to 0.92. (AUC is a rating index that indicates the capability of an AI model, with values ranging from 0 to 1; the closer to 1, the better.) The AUC was 0.92 at a 40% cut-off for detecting left ventricular ejection fraction — an important measure for monitoring cardiac function.
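
For readers unfamiliar with the metric, here is a minimal sketch (not the authors’ code) of how such an AUC is computed from model scores with scikit-learn; the labels and scores below are invented for illustration:

```python
# Minimal sketch (not the study's pipeline): computing an AUC for a binary
# label such as "LVEF below the 40% cut-off" from model output scores.
from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels (1 = reduced ejection fraction)
# and hypothetical probabilities predicted from chest radiographs.
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.35, 0.80, 0.65, 0.20, 0.90, 0.40, 0.55]

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # 1.0 is a perfect ranking, 0.5 is chance level
```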

“It took us a very long time to get to these results, but I believe this is significant research,” stated Dr. Ueda. “In addition to improving the efficiency of doctors’ diagnoses, the system might also be used in areas where there are no specialists, in night-time emergencies, and for patients who have difficulty undergoing echocardiography.”

Feeling the beat: a smart hand exoskeleton for learning to play musical instruments

by Maohua Lin, Rudy Paul, Moaed Abd, James Jones, Darryl Dieujuste, Harvey Chim, Erik D. Engeberg in Frontiers in Robotics and AI

For people who have suffered neurotrauma such as a stroke, everyday tasks can be extremely challenging because of decreased coordination and strength in one or both upper limbs. These problems have spurred the development of robotic devices to help enhance their abilities. However, the rigid nature of these assistive devices can be problematic, especially for more complex tasks like playing a musical instrument.

A first-of-its-kind robotic glove is lending a “hand” and providing hope to piano players who have suffered a disabling stroke. Developed by researchers from Florida Atlantic University’s College of Engineering and Computer Science, the soft robotic hand exoskeleton uses artificial intelligence to improve hand dexterity. Combining flexible tactile sensors, soft actuators and AI, this robotic glove is the first to “feel” the difference between correct and incorrect versions of the same song and to combine these features into a single hand exoskeleton.

“Playing the piano requires complex and highly skilled movements, and relearning tasks involves the restoration and retraining of specific movements or skills,” said Erik Engeberg, Ph.D., senior author, a professor in FAU’s Department of Ocean and Mechanical Engineering within the College of Engineering and Computer Science, and a member of the FAU Center for Complex Systems and Brain Sciences and the FAU Stiles-Nicholson Brain Institute. “Our robotic glove is composed of soft, flexible materials and sensors that provide gentle support and assistance to individuals to relearn and regain their motor abilities.”

Researchers integrated special sensor arrays into each fingertip of the robotic glove. Unlike prior exoskeletons, this new technology provides precise force and guidance in recovering the fine finger movements required for piano playing. By monitoring and responding to users’ movements, the robotic glove offers real-time feedback and adjustments, making it easier for them to grasp the correct movement techniques.

Soft actuator with sensor arrays.

To demonstrate the robotic glove’s capabilities, researchers programmed it to feel the difference between correct and incorrect versions of the well-known tune, “Mary Had a Little Lamb,” played on the piano. To introduce variations in the performance, they created a pool of 12 different types of errors that could occur at the beginning or end of a note, or as timing errors that were either premature or delayed, each persisting for 0.1, 0.2 or 0.3 seconds. The ten song variations consisted of three groups of three error variations each, plus the correct song played with no errors.
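
The pool’s arithmetic is easy to reproduce. A hedged sketch (the four category names below are our paraphrase of the error types, not the authors’ labels):

```python
# Illustrative enumeration of the 12-error pool described above
# (the exact taxonomy is the authors' design; names here are assumed).
from itertools import product

error_kinds = ["note-onset error", "note-offset error",
               "premature timing", "delayed timing"]
durations_s = [0.1, 0.2, 0.3]

error_pool = list(product(error_kinds, durations_s))
print(len(error_pool))  # 4 kinds x 3 durations = 12 error types
```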

To classify the song variations, Random Forest (RF), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN) algorithms were trained on data from the tactile sensors in the fingertips. The glove “felt” the differences between correct and incorrect versions of the song both on its own and while worn by a person, and the classification accuracy of the three algorithms was compared across the two conditions.
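
As an illustration of that comparison, a minimal sketch with scikit-learn (not the study’s pipeline; synthetic features stand in for the fingertip sensor data, and hyperparameters are defaults):

```python
# Hedged sketch: comparing RF, KNN, and a small neural network on
# tactile-sensor-like feature vectors. All data below is synthetic.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in data: rows = recorded performances, columns = fingertip
# sensor features, labels = which of the 10 song variations was played.
X, y = make_classification(n_samples=500, n_features=32,
                           n_informative=16, n_classes=10, random_state=0)

for name, clf in [("RF",  RandomForestClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier()),
                  ("ANN", MLPClassifier(max_iter=1000, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```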

Results of the study demonstrated that the ANN algorithm had the highest classification accuracy of 97.13 percent with the human subject and 94.60 percent without the human subject. The algorithm successfully determined the percentage error of a certain song as well as identified key presses that were out of time. These findings highlight the potential of the smart robotic glove to aid individuals who are disabled to relearn dexterous tasks like playing musical instruments.

Researchers designed the robotic glove using 3D printed polyvinyl acid stents and hydrogel casting to integrate five actuators into a single wearable device that conforms to the user’s hand. The fabrication process is new, and the form factor could be customized to the unique anatomy of individual patients with the use of 3D scanning technology or CT scans.

“Our design is significantly simpler than most designs as all the actuators and sensors are combined into a single molding process,” said Engeberg. “Importantly, although this study’s application was for playing a song, the approach could be applied to myriad tasks of daily life and the device could facilitate intricate rehabilitation programs customized for each patient.”

Clinicians could use the data to develop personalized action plans to pinpoint patient weaknesses, which may present themselves as sections of the song that are consistently played erroneously and can be used to determine which motor functions require improvement. As patients progress, more challenging songs could be prescribed by the rehabilitation team in a game-like progression to provide a customizable path to improvement.

“The technology developed by professor Engeberg and the research team is truly a gamechanger for individuals with neuromuscular disorders and reduced limb functionality,” said Stella Batalama, Ph.D., dean of the FAU College of Engineering and Computer Science. “Although other soft robotic actuators have been used to play the piano, our robotic glove is the only one that has demonstrated the capability to ‘feel’ the difference between correct and incorrect versions of the same song.”

Pleobot: a modular robotic solution for metachronal swimming

by Sara Oliveira Santos, Nils Tack, Yunxing Su, Francisco Cuenca-Jiménez, Oscar Morales-Lopez, P. Antonio Gomez-Valdez, Monica M. Wilhelmus in Scientific Reports

Picture a network of interconnected, autonomous robots working together in a coordinated dance to navigate the pitch-black surroundings of the ocean while carrying out scientific surveys or search-and-rescue missions.

In a new study, a team led by Brown University researchers has presented important first steps in building these types of underwater navigation robots. In the study, the researchers outline the design of a small robotic platform called Pleobot that can serve as both a tool to help researchers understand the krill-like swimming method and as a foundation for building small, highly maneuverable underwater robots.

Pleobot is currently made of three articulated sections that replicate the krill-like swimming technique called metachronal swimming. To design Pleobot, the researchers took inspiration from krill, which are remarkable aquatic athletes that display mastery in swimming, accelerating, braking and turning. In the study, they demonstrate Pleobot’s ability to emulate the legs of swimming krill and provide new insights into the fluid-structure interactions needed to sustain steady forward swimming in krill. According to the study, Pleobot has the potential to allow the scientific community to understand how to take advantage of 100 million years of evolution to engineer better robots for ocean navigation.

“Experiments with organisms are challenging and unpredictable,” said Sara Oliveira Santos, a Ph.D. candidate at Brown’s School of Engineering and lead author of the new study. “Pleobot allows us unparalleled resolution and control to investigate all the aspects of krill-like swimming that help it excel at maneuvering underwater. Our goal was to design a comprehensive tool to understand krill-like swimming, which meant including all the details that make krill such athletic swimmers.”

Morphology and kinematic parameters of the pleopod.

The effort is a collaboration between Brown researchers in the lab of Assistant Professor of Engineering Monica Martinez Wilhelmus and scientists in the lab of Francisco Cuenca-Jimenez at the Universidad Nacional Autónoma de México. A major aim of the project is to understand how metachronal swimmers, like krill, manage to function in complex marine environments and perform massive vertical migrations of over 1,000 meters — equivalent to stacking three Empire State Buildings — twice daily.

“We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said Nils Tack, a postdoctoral associate in the Wilhelmus lab. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”

The metachronal swimming technique can produce the remarkable maneuverability that krill frequently display through the sequential deployment of their swimming legs in a back-to-front wave-like motion. The researchers believe that in the future, deployable swarm systems could be used to map Earth’s oceans, participate in search-and-recovery missions by covering large areas, or be sent to moons in the solar system, such as Europa, to explore their oceans.
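
To make the timing idea concrete, here is a toy sketch (not the Pleobot controller) in which each leg lags its posterior neighbor by a fixed phase offset, producing the back-to-front wave described above; the frequency and phase-lag values are assumptions:

```python
# Toy sketch of metachronal coordination: power strokes sweep from the
# rearmost leg forward because each leg leads its anterior neighbor by a
# fixed fraction of the stroke cycle. Parameter values are assumed.
import numpy as np

n_legs = 5        # appendage pairs, numbered 0 (rearmost) to 4 (frontmost)
freq_hz = 2.0     # stroke frequency (assumed)
phase_lag = 0.2   # inter-leg offset, in fractions of a cycle (assumed)

t = np.linspace(0.0, 1.0, 6)  # one second, coarsely sampled
for i in range(n_legs):
    stroke = np.sin(2 * np.pi * (freq_hz * t - i * phase_lag))
    print(f"leg {i}: {np.round(stroke, 2)}")
```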

“Krill aggregations are an excellent example of swarms in nature: they are composed of organisms with a streamlined body, traveling up to one kilometer each way, with excellent underwater maneuverability,” Wilhelmus said. “This study is the starting point of our long-term research aim of developing the next generation of autonomous underwater sensing vehicles. Being able to understand fluid-structure interactions at the appendage level will allow us to make informed decisions about future designs.”

The researchers can actively control the two leg segments and have passive control of Pleobot’s biramous fins. This is believed to be the first platform that replicates the opening and closing motion of these fins. The construction of the robotic platform was a multi-year project, involving a multi-disciplinary team in fluid mechanics, biology and mechatronics. The researchers built their model at 10 times the scale of krill, which are usually about the size of a paperclip. The platform is primarily made of 3D printable parts and the design is open-access, allowing other teams to use Pleobot to continue answering questions on metachronal swimming not just for krill but for other organisms like lobsters.

In the published study, the group reveals the answer to one of the many unknown mechanisms of krill swimming: how they generate lift in order not to sink while swimming forward. If krill are not swimming constantly, they will start sinking because they are a little heavier than water. To avoid this, they still have to create some lift even while swimming forward to be able to remain at that same height in the water, said Oliveira Santos.

“We were able to uncover that mechanism by using the robot,” said Yunxing Su, a postdoctoral associate in the lab. “We identified an important effect of a low-pressure region at the back side of the swimming legs that contributes to the lift force enhancement during the power stroke of the moving legs.”

In the coming years, the researchers hope to build on this initial success and further build and test the designs presented in the article. The team is currently working to integrate morphological characteristics of shrimp into the robotic platform, such as flexibility and bristles around the appendages.

Nonequilibrium thermodynamics of the asymmetric Sherrington-Kirkpatrick model

by Miguel Aguilera, Masanao Igarashi, Hideaki Shimazaki in Nature Communications

Life, from the perspective of thermodynamics, is a system out of equilibrium, resisting the tendency toward increasing disorder. In such a state, the dynamics are irreversible over time. This link between the tendency toward disorder and irreversibility was expressed as the arrow of time by the English physicist Arthur Eddington in 1927.

Now, an international team including researchers from Kyoto University, Hokkaido University, and the Basque Center for Applied Mathematics, has developed a solution for temporal asymmetry, furthering our understanding of the behavior of biological systems, machine learning, and AI tools.

“The study offers, for the first time, an exact mathematical solution of the temporal asymmetry — also known as entropy production — of nonequilibrium disordered Ising networks,” says co-author Miguel Aguilera of the Basque Center for Applied Mathematics.

Asymmetric kinetic SK model.

The researchers focused on a prototype of large-scale complex networks called the Ising model, a tool used to study recurrently connected neurons. When connections between neurons are symmetric, the Ising model is in a state of equilibrium and presents complex disordered states called spin glasses. The mathematical solution of this state led to the award of the 2021 Nobel Prize in Physics to Giorgio Parisi. Unlike in living systems, however, spin glasses are in equilibrium and their dynamics are time-reversible. The researchers instead worked on the time-irreversible Ising dynamics caused by asymmetric connections between neurons.
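
For readers who want the formal backdrop, here is a minimal sketch in generic kinetic Ising notation (our notation, not copied from the paper):

```latex
% Parallel (Glauber) dynamics of a kinetic Ising network: each spin s_i
% is updated given the field produced by the previous state,
P\big(s_i^{t+1}\mid\mathbf{s}^{t}\big)
  = \frac{\exp\big(s_i^{t+1}\,h_i^{t}\big)}{2\cosh h_i^{t}},
\qquad
h_i^{t} = H_i + \sum_{j} J_{ij}\,s_j^{t}.
% Symmetric couplings (J_{ij} = J_{ji}) obey detailed balance and give the
% equilibrium spin glass; asymmetric couplings break it, and the entropy
% production
\sigma_t = \left\langle
  \ln\frac{P\big(\mathbf{s}^{t+1}\mid\mathbf{s}^{t}\big)}
          {P\big(\mathbf{s}^{t}\mid\mathbf{s}^{t+1}\big)}
\right\rangle \;\ge\; 0
% quantifies the temporal asymmetry discussed above, vanishing only in the
% time-reversible (equilibrium) case.
```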

The exact solutions obtained serve as benchmarks for developing approximate methods for learning artificial neural networks. The development of learning methods used in multiple phases may advance machine learning studies.

“The Ising model underpins recent advances in deep learning and generative artificial neural networks. So, understanding its behavior offers critical insights into both biological and artificial intelligence in general,” added Hideaki Shimazaki at KyotoU’s Graduate School of Informatics.

“Our findings are the result of an exciting collaboration involving insights from physics, neuroscience and mathematical modeling,” remarked Aguilera. “The multidisciplinary approach has opened the door to novel ways to understand the organization of large-scale complex networks and perhaps decipher the thermodynamic arrow of time.”

Integrated optical memristors

by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice, Harish Bhaskaran in Nature Photonics

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system — both hardware and software combined — has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article sheds light on the evolution of this technology — and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices that are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.”

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state-of-the-art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address.

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood.

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”

Integration approaches to phase-change optical memristors.

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing. Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence.

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor: something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

EXPRESS: Meaning of Manual Labor Impedes Consumer Adoption of Autonomous Products

by Emanuel de Bellis, Gita Venkataramani Johar, Nicola Poletti in Journal of Marketing

Researchers from University of St. Gallen and Columbia Business School published an article that examines how the perceived meaning of manual labor can help predict the adoption of autonomous products. The study is authored by Emanuel de Bellis, Gita Venkataramani Johar, and Nicola Poletti.

Whether it is cleaning homes or mowing lawns, consumers increasingly delegate manual tasks to autonomous products. These gadgets operate without human oversight and free consumers from mundane chores. However, anecdotal evidence suggests that people feel a sense of satisfaction when they complete household chores. Are autonomous products such as robot vacuums and cooking machines depriving consumers of meaningful experiences? This new research shows that, despite unquestionable benefits such as gains in efficiency and convenience, autonomous products strip away a source of meaning in life. As a result, consumers are hesitant to buy these products.

The researchers argue that manual labor is an important source of meaning in life. This is in line with research showing that everyday tasks have value — chores such as cleaning may not make us happy, but they add meaning to our lives. As de Bellis explains, “Our studies show that ‘meaning of manual labor’ causes consumers to reject autonomous products. For example, these consumers have a more negative attitude toward autonomous products and are also more prone to believe in the disadvantages of autonomous products relative to their advantages.”

On one hand, autonomous products take over tasks from consumers, typically leading to a reduction in manual labor and hence in the ability to derive meaning from manual tasks. On the other hand, by taking over manual tasks, autonomous products provide consumers with the opportunity to spend time on other, potentially more meaningful, tasks and activities.

“We suggest that companies highlight so-called alternative sources of meaning in life, which should reduce consumers’ need to derive meaning specifically from manual tasks. Highlighting other sources of meaning, such as through family or hobbies, at the time of the adoption decision should counteract the negative effect on autonomous product adoption,” says Johar.

In fact, a key value proposition for many of these technologies is that they free up time. iRobot claims that its robotic vacuum cleaner Roomba saves owners as much as 110 hours of cleaning a year. Some companies go even a step further by suggesting what consumers could do with their freed-up time. For example, German home appliance company Vorwerk promotes its cooking machine Thermomix with “more family time” and “Thermomix does the work so you can make time for what matters most.” Instead of promoting the quality of task completion (i.e., cooking a delicious meal), the company emphasizes that consumers can spend time on other, arguably more meaningful, activities.

This study demonstrates that the perceived meaning of manual labor (MML) — a novel concept introduced by the researchers — is key to predicting the adoption of autonomous products. Poletti says that “Consumers with a high MML tend to resist the delegation of manual tasks to autonomous products, irrespective of whether these tasks are central to one’s identity or not. Marketers can start by segmenting consumers into high and low MML consumers.”

Unlike other personality variables that can only be reliably measured using complex psychometric scales, the extent of consumers’ MML might be assessed simply by observing their behavioral characteristics, such as whether consumers tend to do the dishes by hand, whether they prefer a manual car transmission, or what type of activities and hobbies they pursue. Activities like woodworking, cookery, painting, and fishing are likely predictors of high MML. Similarly, companies can measure likes on social media for specific activities and hobbies that involve manual labor. Finally, practitioners can ask consumers to rate the degree to which manual versus cognitive tasks are meaningful to them. Having segmented consumers according to their MML, marketers can better target and focus their messages and efforts.

In promotions, firms can highlight the meaningful time consumers gain with the use of autonomous products (e.g., “this product allows you to spend time on more meaningful tasks and pursuits than cleaning”). Such an intervention can prevent the detrimental effects of meaning of manual labor on autonomous product adoption.

Maneuverable and Efficient Locomotion of a Myriapod Robot with Variable Body-Axis Flexibility via Instability and Bifurcation

by Shinya Aoi, Yuki Yabuuchi, Daiki Morozumi, Kota Okamoto, Mau Adachi, Kei Senda, Kazuo Tsuchiya in Soft Robotics

Researchers from the Department of Mechanical Science and Bioengineering at Osaka University have invented a new kind of walking robot that takes advantage of dynamic instability to navigate. By changing the flexibility of the couplings, the robot can be made to turn without the need for complex computational control systems. This work may assist the creation of rescue robots that are able to traverse uneven terrain.

Most animals on Earth have evolved a robust locomotion system using legs that provides them with a high degree of mobility over a wide range of environments. Somewhat disappointingly, engineers who have attempted to replicate this approach have often found that legged robots are surprisingly fragile. The breakdown of even one leg due to repeated stress can severely limit the ability of these robots to function. In addition, controlling a large number of joints so the robot can traverse complex environments requires a great deal of computing power. Improvements in this design would be extremely useful for building autonomous or semi-autonomous robots that could act as exploration or rescue vehicles and enter dangerous areas.

Now, investigators from Osaka University have developed a biomimetic “myriapod” robot that takes advantage of a natural instability that can convert straight walking into curved motion. In a study published recently, researchers from Osaka University describe their robot, which consists of six segments (with two legs connected to each segment) and flexible joints. Using an adjustable screw, the flexibility of the couplings can be modified with motors during the walking motion. The researchers showed that increasing the flexibility of the joints led to a situation called a “pitchfork bifurcation,” in which straight walking becomes unstable. Instead, the robot transitions to walking in a curved pattern, either to the right or to the left. Normally, engineers would try to avoid creating instabilities. However, making controlled use of them can enable efficient maneuverability.
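
A toy model makes the mechanism concrete. The canonical supercritical pitchfork below is not the robot’s actual dynamics, but it shows how raising a single parameter (playing the role of joint flexibility) destabilizes the straight-ahead state and selects a left or right branch:

```python
# Toy model (not the robot's dynamics): a supercritical pitchfork,
#   x' = mu * x - x**3,
# where x stands for a heading-curvature variable and mu plays the role of
# body-axis flexibility. For mu <= 0 the straight-walking state x = 0 is
# stable; for mu > 0 it destabilizes and the system settles on one of two
# branches, x = +sqrt(mu) or x = -sqrt(mu): a left or right curved path.

def settle(mu, x0=1e-3, dt=0.01, steps=20_000):
    """Integrate x' = mu*x - x^3 with forward Euler; return the endpoint."""
    x = x0
    for _ in range(steps):
        x += dt * (mu * x - x**3)
    return x

for mu in (-0.5, 0.0, 0.5):
    # At mu = 0 convergence is algebraically slow, so the printed value
    # merely drifts toward 0 rather than reaching it.
    print(f"mu = {mu:+.1f} -> x settles near {settle(mu):+.3f}")
```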

Myriapod robot.

“We were inspired by the ability of certain extremely agile insects to control the dynamic instability in their own motion to induce quick movement changes,” says Shinya Aoi, an author of the study. Because this approach does not directly steer the movement of the body axis, but rather controls its flexibility, it can greatly reduce both the computational complexity and the energy requirements.

The team tested the robot’s ability to reach specific locations and found that it could navigate by taking curved paths toward targets. “We can foresee applications in a wide variety of scenarios, such as search and rescue, working in hazardous environments or exploration on other planets,” says Mau Adachi, another study author. Future versions may include additional segments and control mechanisms.

Iontronic pressure sensor with high sensitivity over ultra-broad linear range enabled by laser-induced gradient micro-pyramids

by Ruoxi Yang, Ankan Dutta, Bowen Li, Naveen Tiwari, Wanqing Zhang, Zhenyuan Niu, Yuyan Gao, Daniel Erdely, Xin Xin, Tiejun Li, Huanyu Cheng in Nature Communications

As we move into a world where human-machine interactions are becoming more prominent, pressure sensors that are able to analyze and simulate human touch are likely to grow in demand. One challenge facing engineers is the difficulty in making the kind of cost-effective, highly sensitive sensor necessary for applications such as detecting subtle pulses, operating robotic limbs, and creating ultrahigh-resolution scales. However, a team of researchers has developed a sensor capable of performing all of those tasks.

The researchers, from Penn State and Hebei University of Technology in China, wanted to create a sensor that was extremely sensitive and reliably linear over a broad range of applications, had high pressure resolution, and was able to work under large pressure preloads.

“The sensor can detect a tiny pressure when large pressure is already applied,” said Huanyu “Larry” Cheng, James L. Henderson Jr. Memorial Associate Professor of Engineering Science and Mechanics at Penn State and co-author of a paper on the work published in Nature Communications. “An analogy I like to use is it’s like detecting a fly on top of an elephant. It can measure the slightest change in pressure, just like our skin does with touch.”

Cheng was inspired to develop these sensors by a very personal experience: the birth of his second daughter. Cheng’s daughter lost 10% of her body weight soon after birth, so the doctor asked him to weigh the baby every two days to monitor any additional loss or weight gain. Cheng tried to do this by weighing himself on a regular home weight scale and then weighing himself holding his daughter, taking the difference as the baby’s weight.

“I noticed that when I put down my daughter in her blanket, when I was no longer holding her, you didn’t see the change in weight,” Cheng said. “So, we learned that trying to use a commercial scale doesn’t work, it didn’t detect the change in pressure.”

After trying many different approaches, they found that using a pressure sensor consisting of gradient micro-pyramidal structures and an ultrathin ionic layer to give a capacitive response was the most promising. However, they faced a persistent issue: the high sensitivity of the microstructures would decrease as the pressure increased, and the random microstructures that were templated from natural objects resulted in uncontrollable deformation and a narrow linear range. In simple terms, when pressure was applied to the sensor, it would change the sensor’s shape, altering the contact area between the microstructures and throwing off the readings.

Overview of the iontronic pressure sensor.

To address these challenges, the scientists designed microstructure patterns that could increase the linear range without decreasing the sensitivity — they essentially made it flexible, so it could still function across the range of pressures that exists in the real world. Their study explored the use of a CO2 laser with a Gaussian beam to fabricate programmable structures such as gradient pyramidal microstructures (GPM) for iontronic sensors, which are soft electronics that can mimic the perception functions of human skin. This process reduces the cost and process complexity compared with photolithography, the method commonly used to prepare delicate microstructure patterns for sensors.
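
A toy contact-area model (our assumption, not the paper’s mechanics) shows the intuition: uniform pyramids all engage and saturate together, while a gradient of heights recruits fresh pyramids as the load grows, flattening the sensitivity curve over a wider range:

```python
# Toy model: in a capacitive sensor, response grows with the contacted
# area of the microstructures. Each pyramid engages at a pressure
# threshold set by its height, then saturates as its tip flattens out.
import numpy as np

pressure = np.linspace(0, 1, 11)  # normalized load

def contact_area(p, heights, k=8.0):
    # Average saturating contribution of pyramids with given onset heights.
    return sum(1 - np.exp(-k * np.clip(p - h, 0, None))
               for h in heights) / len(heights)

uniform  = contact_area(pressure, [0.0] * 5)              # equal heights
gradient = contact_area(pressure, [0.0, 0.2, 0.4, 0.6, 0.8])

print(np.round(uniform, 2))   # saturates early: sensitivity collapses
print(np.round(gradient, 2))  # more gradual, closer-to-linear growth
```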

“Yang is a very smart student who introduced the idea to solve this sensor issue, which is really something like a combination of many small pieces, smartly engineered together,” Cheng said. “We know the structure must be microscale and must have a delicate design. But it is challenging to design or optimize the structure, and she worked with the laser system we have in our lab to make this possible. She has been working very hard in the past few years and was able to explore all these different parameters and be able to quickly screen throughout this parameter space to find and improve the performance.”

This optimized sensor had rapid response and recovery times and excellent repeatability, which the team tested by detecting subtle pulses, operating interactive robotic hands, and creating ultrahigh-resolution smart weight scales and chairs. The scientists also found that the proposed fabrication approaches and design toolkit from this work could be leveraged to easily tune the pressure sensor performance for varying target applications and open opportunities to create other iontronic sensors, the family of sensors that use ionic materials such as an ultrathin ionic layer. Along with enabling a future scale that would make it easier for parents to weigh their baby, these sensors would have other uses as well.

“We were also able to detect not only the pulse from the wrist but also from the other distal vascular structures like the eyebrow and the fingertip,” Cheng said. “In addition, we combine that with the control system to show that this is possible to use for the future of human robotic interactional collaboration. Also, we envision other healthcare uses, such as someone who has lost a limb and this sensor could be part of a system to help them control a robotic limb.”

Cheng noted other potential uses, such as sensors to measure a person’s pulse during high-stress work situations such as search-and-rescue operations after an earthquake or difficult, dangerous tasks on a construction site. The research team used computer simulations and computer-aided design to help them explore ideas for these novel sensors, which Cheng notes is challenging work given all the possible sensor solutions. This electronic assistance will continue to push the research forward.

“I think in the future it is possible to further improve the model and be able to account for more complex systems and then we can certainly understand how to make even better sensors,” Cheng said.

Artificial intelligence velocimetry reveals in vivo flow rates, pressure gradients, and shear stresses in murine perivascular flows

by Kimberly A. S. Boster, Shengze Cai, Antonio Ladrón-de-Guevara, Jiatong Sun, Xiaoning Zheng, Ting Du, John H. Thomas, Maiken Nedergaard, George Em Karniadakis, Douglas H. Kelley in Proceedings of the National Academy of Sciences

A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.

The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries but are difficult to measure in vivo. A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow.

“In this study, we combined some measurements from inside the animal models with a novel AI technique that allowed us to effectively measure things that nobody’s ever been able to measure before,” says Kelley, a faculty member in Rochester’s Department of Mechanical Engineering.

Overview of two-photon imaging experiments and resulting data.

The work builds upon years of experiments led by study coauthor Maiken Nedergaard, the codirector of Rochester’s Center for Translational Neuromedicine. The group has previously been able to conduct two-dimensional studies on the fluid flow in perivascular spaces by injecting tiny particles into the fluid and measuring their position and velocity over time. But scientists needed more complex measurements to understand the full intricacy of the system — and exploring such a vital, fluid system is a challenge.

To address that challenge, the team collaborated with George Karniadakis from Brown University to leverage artificial intelligence. They integrated the existing 2D data with physics-informed neural networks to create unprecedented high-resolution looks at the system.
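
The physics-informed idea can be sketched in a few lines. The toy below is our illustration, not the study’s code; the study enforces the full governing equations, which are replaced here by a bare incompressibility constraint. A network mapping (x, y, t) to (vx, vy, p) is fit to sparse tracer measurements while the divergence of the predicted velocity field is penalized:

```python
# Minimal sketch of a physics-informed loss (assumed, not the study's code).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),  # outputs (vx, vy, p)
)

def loss_fn(xyt_data, v_data, xyt_phys):
    # Data term: match measured particle velocities at sparse points.
    pred = net(xyt_data)
    data_loss = ((pred[:, :2] - v_data) ** 2).mean()

    # Physics term: penalize divergence of the predicted velocity field
    # (a stand-in here for the full momentum equations).
    xyt = xyt_phys.clone().requires_grad_(True)
    v = net(xyt)[:, :2]
    grads = [torch.autograd.grad(v[:, i].sum(), xyt, create_graph=True)[0]
             for i in range(2)]
    div = grads[0][:, 0] + grads[1][:, 1]  # dvx/dx + dvy/dy
    return data_loss + (div ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xyt_data, v_data = torch.rand(128, 3), torch.rand(128, 2)  # placeholder data
xyt_phys = torch.rand(256, 3)  # collocation points for the physics term
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(xyt_data, v_data, xyt_phys)
    loss.backward()
    opt.step()
```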

“This is a way to reveal pressures, forces, and the three-dimensional flow rate with much more accuracy than we can otherwise do,” says Kelley. “The pressure is important because nobody knows for sure quite what pumping mechanism drives all these flows around the brain yet. This is a new field.”

Incorporating physics into data-driven computer vision

by Achuta Kadambi, Celso de Melo, Cho-Jui Hsieh, Mani Srivastava, Stefano Soatto in Nature Machine Intelligence

Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.

The study offered an overview of a hybrid methodology designed to improve how AI-based machines sense, interact with, and respond to their environment in real time — as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.

Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly focused on data-based machine learning to drive performance. Physics-based research has, on a separate track, been developed to explore the various physical principles behind many computer vision challenges.

Achuta Kadambi/UCLA. Graphic showing two techniques to incorporate physics into machine learning pipelines — residual physics (top) and physical fusion (bottom)

It has been a challenge to incorporate an understanding of physics — the laws that govern mass, motion and more — into the development of neural networks: AIs modeled after the human brain, with billions of nodes that crunch massive image data sets until they gain an understanding of what they “see.” But there are now a few promising lines of research that seek to add elements of physics-awareness into already robust data-driven networks. The UCLA study aims to harness the power of both the deep knowledge from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.

“Visual machines — cars, robots, or health instruments that use images to perceive the world — are ultimately doing tasks in our physical world,” said the study’s corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise.”

The research team outlined three ways in which physics and data are starting to be combined into computer vision artificial intelligence:

  • Incorporating physics into AI data sets: tag objects with additional information, such as how fast they can move or how much they weigh, similar to characters in video games.
  • Incorporating physics into network architectures: run data through a network filter that codes physical properties into what cameras pick up.
  • Incorporating physics into network loss functions: leverage knowledge built on physics to help AI interpret training data on what it observes.

These three lines of investigation have already yielded encouraging results in improved computer vision. For example, the hybrid approach allows AI to track and predict an object’s motion more precisely and can produce accurate, high-resolution images from scenes obscured by inclement weather. With continued progress in this dual modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.
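
As a concrete illustration of the “residual physics” pattern named in the figure caption above, here is a hedged sketch (not the paper’s implementation) in which a known physical model supplies a baseline prediction and a small network learns only the correction:

```python
# Hedged sketch of the "residual physics" pattern: physics does the heavy
# lifting, and a network corrects what the baseline misses (e.g., drag),
# so the data-driven part has far less to learn. All values are toy.
import torch

def physics_model(state):
    # Known-physics baseline: ballistic motion over one timestep, no drag.
    pos, vel = state[:, :2], state[:, 2:]
    g = torch.tensor([0.0, -9.81])
    dt = 0.1
    return pos + vel * dt + 0.5 * g * dt**2

residual_net = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))

def predict(state):
    # Final prediction = physics baseline + learned residual correction.
    return physics_model(state) + residual_net(state)

state = torch.rand(8, 4)       # placeholder rows of [x, y, vx, vy]
print(predict(state).shape)    # torch.Size([8, 2]): next positions
```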

Upcoming events

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
