RT/ Person-shaped robot can liquefy and escape jail, all with the power of magnets

Published in Paradigm · Feb 4, 2023

Robotics biweekly vol.67, 19th January — 4th February

TL;DR

  • Inspired by sea cucumbers, engineers have designed miniature robots that rapidly and reversibly shift between liquid and solid states. On top of being able to shape-shift, the robots are magnetic and can conduct electricity. The researchers put the robots through an obstacle course of mobility and shape-morphing tests.
  • Researchers have made a significant leap forward in developing insect-sized jumping robots capable of performing tasks in the small spaces often found in mechanical, agricultural and search-and-rescue settings. A new study demonstrates a series of click beetle-sized robots small enough to fit into tight spaces, powerful enough to maneuver over obstacles and fast enough to match an insect’s rapid escape time.
  • The new addition to the robo-dog family, ‘RaiBo’, can run along a sandy beach without losing balance, walk through grassy fields and move back onto hard-floored running tracks, all on its own — no further tinkering necessary.
  • Scientists have been studying a fish sensory organ to understand cues for collective behavior which could be employed on underwater robots.
  • Engineers are tapping into advances in artificial intelligence to develop a new kind of walking stick for people who are blind or visually impaired.
  • Miniature biological robots have gained a new trick: remote control. The hybrid ‘eBiobots’ are the first to combine soft materials, living muscle and microelectronics, said researchers.
  • Researchers attempted to identify early symptoms of Parkinson’s disease using voice data. In their study, the researchers used artificial intelligence to analyze and assess speech signals, where calculations are done and diagnoses made in seconds rather than hours.
  • Artificial intelligence may help improve care for patients who show up at the hospital with acute chest pain, according to a new study.
  • Researchers have developed a new continuum robot inspired by the trunks of elephants. This robot has a customizable design that allows it to be tailored for different applications.
  • Researchers have recently developed a new framework that could provide four-legged robots with leader-following abilities in both nighttime and daytime conditions. This framework is based on visual and LiDAR detection technology.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
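For a rough sense of how that projection compounds, here is a minimal back-calculation in Python. It simply rolls a constant 26 percent CAGR backwards from the quoted ~210 billion dollar 2025 figure; the implied earlier-year values are illustrative, not the Statista data behind the chart below.

```python
# Back-compute the market size implied in earlier years by a 26% CAGR
# ending at ~210 billion USD in 2025. Illustrative only; not Statista data.

CAGR = 0.26          # compound annual growth rate quoted in the text
VALUE_2025 = 210.0   # billion USD, the quoted 2025 figure

for year in range(2018, 2026):
    implied = VALUE_2025 / (1 + CAGR) ** (2025 - year)
    print(f"{year}: ~{implied:.0f} bn USD (implied by a constant 26% CAGR)")
```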

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista

Latest News & Research

Magnetoactive liquid-solid phase transitional matter

by Qingyuan Wang, Chengfeng Pan, Yuanxi Zhang, Lelun Peng, Zhipeng Chen, Carmel Majidi, Lelun Jiang in Matter

Inspired by sea cucumbers, engineers have designed miniature robots that rapidly and reversibly shift between liquid and solid states. On top of being able to shape-shift, the robots are magnetic and can conduct electricity. The researchers put the robots through an obstacle course of mobility and shape-morphing tests.

Where traditional robots are hard-bodied and stiff, “soft” robots have the opposite problem; they are flexible but weak, and their movements are difficult to control. “Giving robots the ability to switch between liquid and solid states endows them with more functionality,” says Chengfeng Pan, an engineer at The Chinese University of Hong Kong who led the study.

Schematic and applications of the liquid-solid phase transition of MPTM.

The team created the new phase-shifting material — dubbed a “magnetoactive solid-liquid phase transitional machine” — by embedding magnetic particles in gallium, a metal with a very low melting point (29.8 °C).

“The magnetic particles here have two roles,” says senior author and mechanical engineer Carmel Majidi of Carnegie Mellon University. “One is that they make the material responsive to an alternating magnetic field, so you can, through induction, heat up the material and cause the phase change. But the magnetic particles also give the robots mobility and the ability to move in response to the magnetic field.”
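To get an intuition for why induction heating works well here, below is a back-of-the-envelope estimate of the heat needed to melt a small gallium robot. The mass is hypothetical, and the specific heat and latent heat of fusion are approximate textbook values for pure gallium, not figures from the paper.

```python
# Rough estimate of the heat needed to take a small gallium robot from
# room temperature through its melting point. Material constants are
# approximate textbook values for pure gallium, not figures from the paper.

mass_g = 1.0            # hypothetical robot mass in grams
c_solid = 0.37          # J/(g*K), approximate specific heat of solid gallium
latent_fusion = 80.0    # J/g, approximate latent heat of fusion
t_room, t_melt = 20.0, 29.8   # deg C; 29.8 C is the melting point quoted above

sensible = mass_g * c_solid * (t_melt - t_room)   # heat to warm the solid
latent = mass_g * latent_fusion                   # heat to melt it
print(f"~{sensible + latent:.0f} J to melt {mass_g:.0f} g of gallium "
      f"(about {sensible:.1f} J of warming plus {latent:.0f} J of fusion)")
```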

Typical magnetically actuated behaviors of solid and liquid MPTMs.

This is in contrast to existing phase-shifting materials that rely on heat guns, electrical currents, or other external heat sources to induce solid-to-liquid transformation. The new material also boasts an extremely fluid liquid phase compared to other phase-changing materials, whose “liquid” phases are considerably more viscous.

Before exploring potential applications, the team tested the material’s mobility and strength in a variety of contexts. With the aid of a magnetic field, the robots jumped over moats, climbed walls, and even split in half to cooperatively move other objects around before coalescing back together. In one video, a robot shaped like a person liquefies to ooze through a grid, after which it is extracted and remolded back into its original shape.

“Now, we’re pushing this material system in more practical ways to solve some very specific medical and engineering problems,” says Pan.

MPTMs as magnetic solder using a liquid-solid transition.

On the biomedical side, the team used the robots to remove a foreign object from a model stomach and to deliver drugs on-demand into the same stomach. They also demonstrate how the material could work as smart soldering robots for wireless circuit assembly and repair (by oozing into hard-to-reach circuits and acting as both solder and conductor) and as a universal mechanical “screw” for assembling parts in hard-to-reach spaces (by melting into the threaded screw socket and then solidifying; no actual screwing required.)

“Future work should further explore how these robots could be used within a biomedical context,” says Majidi. “What we’re showing are just one-off demonstrations, proofs of concept, but much more study will be required to delve into how this could actually be used for drug delivery or for removing foreign objects.”

Insect-scale jumping robots enabled by a dynamic buckling cascade

by Yuzhe Wang, Qiong Wang, Mingchao Liu, Yimeng Qin, Liuyang Cheng, et al in Proceedings of the National Academy of Sciences

Researchers have made a significant leap forward in developing insect-sized jumping robots capable of performing tasks in the small spaces often found in mechanical, agricultural and search-and-rescue settings.

A new study led by mechanical sciences and engineering professor Sameh Tawfick demonstrates a series of click beetle-sized robots small enough to fit into tight spaces, powerful enough to maneuver over obstacles and fast enough to match an insect’s rapid escape time.

Researchers at the University of Illinois Urbana-Champaign and Princeton University have studied click beetle anatomy, mechanics and evolution over the past decade. A 2020 study found that snap buckling — the rapid release of elastic energy — of a coiled muscle within a click beetle’s thorax lets the insects propel themselves into the air many times their body length, a means of righting themselves when flipped onto their backs.

“One of the grand challenges of small-scale robotics is finding a design that is small, yet powerful enough to move around obstacles or quickly escape dangerous settings,” Tawfick said.

In the new study, Tawfick and his team used tiny coiled actuators — analogous to animal muscles — that pull on a beam-shaped mechanism, causing it to slowly buckle and store elastic energy until it is spontaneously released and amplified, propelling the robots upward.

“This process, called a dynamic buckling cascade, is simple compared to the anatomy of a click beetle,” Tawfick said. “However, simple is good in this case because it allows us to work and fabricate parts at this small scale.”
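The energetics of such a jump can be sketched with a toy calculation: elastic energy stored in the slowly buckled beam is released at once and, minus losses, becomes launch kinetic energy. All of the numbers below are hypothetical and only illustrate the scaling, not the robots’ measured performance.

```python
# Toy model of a buckling-based jump: elastic energy stored in a buckled
# beam is released at once and converted (with losses) into launch kinetic
# energy. All numbers are hypothetical, chosen only to illustrate the scaling.

G = 9.81  # m/s^2

def jump_height(stored_energy_j, mass_kg, efficiency=0.5):
    """Height reached if a fraction of the stored elastic energy becomes
    vertical kinetic energy (h = E_eff / (m * g))."""
    return efficiency * stored_energy_j / (mass_kg * G)

# A gram-scale robot storing a few millijoules can clear many body lengths.
for e_mj in (1.0, 5.0, 10.0):
    h = jump_height(e_mj * 1e-3, mass_kg=1e-3)
    print(f"{e_mj:>4.1f} mJ stored -> ~{h * 100:.0f} cm jump for a 1 g robot")
```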

Guided by biological evolution and mathematical models, the team built and tested four device variations, landing on two configurations that can successfully jump without manual intervention.

“Moving forward, we do not have a set approach on the exact design of the next generation of these robots, but this study plants a seed in the evolution of this technology — a process similar to biological evolution,” Tawfick said.

The team envisions these robots accessing tight spaces to help perform maintenance on large machines like turbines and jet engines, for example, by taking pictures to identify problems.

“We also imagine insect-scale robots being useful in modern agriculture,” Tawfick said. “Scientists and farmers currently use drones and rovers to monitor crops, but sometimes researchers need a sensor to touch a plant or to capture a photograph of a very small-scale feature. Insect-scale robots can do that.”

Learning quadrupedal locomotion on deformable terrain

by Suyoung Choi, Gwanghyeon Ji, Jeongsoo Park, Hyeongjun Kim, Juhyeok Mun, Jeong Hyun Lee, Jemin Hwangbo in Science Robotics

KAIST (President Kwang Hyung Lee) announced on the 25th that a research team led by Professor Jemin Hwangbo of the Department of Mechanical Engineering has developed a quadrupedal robot control technology that walks robustly and with agility even on deformable terrain such as a sandy beach.

Professor Hwangbo’s research team developed a technology to model the force a walking robot receives from ground made of granular materials such as sand and to simulate it with a quadrupedal robot. The team also designed an artificial neural network structure suited to making the real-time decisions needed to adapt to various types of ground without prior information while walking, and trained it with reinforcement learning. The trained neural network controller is expected to expand the scope of application of quadrupedal walking robots by proving its robustness on changing terrain, such as the ability to move at high speed even on a sandy beach and to walk and turn on soft ground like an air mattress without losing balance. The research, with Ph.D. student Suyoung Choi of the KAIST Department of Mechanical Engineering as first author, was published in Science Robotics.

Contact model definition for simulation of granular substrates.

Reinforcement learning is an AI learning method used to create a machine that collects data on the results of various actions in an arbitrary situation and utilizes that set of data to perform a task. Because the amount of data required for reinforcement learning is so vast, a method of collecting data through simulations that approximates physical phenomena in the real environment is widely used.

In particular, learning-based controllers in the field of walking robots have been applied to real environments after learning through data collected in simulations to successfully perform walking controls in various terrains.

However, since the performance of the learning-based controller rapidly decreases when the actual environment has any discrepancy from the learned simulation environment, it is important to implement an environment similar to the real one in the data collection stage. Therefore, in order to create a learning-based controller that can maintain balance in a deforming terrain, the simulator must provide a similar contact experience.

The research team defined a contact model that predicted the force generated upon contact from the motion dynamics of a walking body based on a ground reaction force model that considered the additional mass effect of granular media defined in previous studies. Furthermore, by calculating the force generated from one or several contacts at each time step, the deforming terrain was efficiently simulated.
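A heavily simplified sketch of what a per-contact granular reaction-force model can look like is below: a depth-dependent resistance, a damping term, and an added-mass term for the sand accelerated along with the foot. The functional form and coefficients are placeholders for illustration, not the model derived in the paper.

```python
# Minimal sketch of a per-contact granular reaction-force model of the kind
# described above: depth- and velocity-dependent resistance plus an
# "added mass" term for the granular material dragged along with the foot.
# The functional form and coefficients are placeholders, not the paper's model.

def granular_contact_force(depth, velocity, accel,
                           k=2000.0, c=50.0, added_mass=0.05):
    """Vertical ground reaction force (N) for one foot contact.
    depth: penetration into the sand (m), positive when in the ground
    velocity, accel: vertical foot velocity (m/s) and acceleration (m/s^2)
    """
    if depth <= 0.0:
        return 0.0                            # foot is not in contact
    static = k * depth                        # resistance grows with intrusion depth
    damping = c * max(-velocity, 0.0)         # resist downward motion only
    inertial = added_mass * max(-accel, 0.0)  # sand accelerated with the foot
    return static + damping + inertial

# One foot 2 cm deep, moving down at 0.5 m/s, decelerating at 3 m/s^2:
print(f"{granular_contact_force(0.02, -0.5, -3.0):.1f} N")
```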

Adaptability of the proposed controller to various ground environments. The controller learned from a wide range of randomized granular media simulations showed adaptability to various natural and artificial terrains, and demonstrated high-speed walking ability and energy efficiency.

The research team also introduced an artificial neural network structure that implicitly predicts ground characteristics using a recurrent neural network that analyzes time-series data from the robot’s sensors. The learned controller was mounted on ‘RaiBo’, a robot built in-house by the research team, and showed high-speed walking of up to 3.03 m/s on a sandy beach where the robot’s feet were completely submerged in the sand. Even when applied to harder ground, such as grassy fields and a running track, it was able to run stably by adapting to the characteristics of the ground without any additional programming or revision of the controlling algorithm.

In addition, it rotated with stability at 1.54 rad/s (approximately 90° per second) on an air mattress and demonstrated quick adaptability even when the terrain suddenly turned soft. The research team demonstrated the importance of providing a suitable contact experience during the learning process by comparing against a controller that assumed the ground to be rigid, and showed that the proposed recurrent neural network modifies the controller’s walking method according to the ground properties.
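The controller structure described above, a recurrent encoder that implicitly summarizes recent sensor history into a terrain estimate feeding a policy head, can be sketched roughly as follows. The layer sizes and input dimensions are assumptions for illustration, not the architecture used on RaiBo.

```python
# Minimal sketch (PyTorch) of a controller in the spirit described above:
# a recurrent encoder summarizes the recent history of proprioceptive sensor
# readings into a latent estimate of the ground, and an MLP head turns that
# latent plus the current observation into joint commands.
# Layer sizes and inputs are illustrative, not the architecture from the paper.
import torch
import torch.nn as nn

class TerrainAwarePolicy(nn.Module):
    def __init__(self, obs_dim=48, hidden_dim=128, act_dim=12):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + obs_dim, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs_history, obs_now):
        # obs_history: (batch, time, obs_dim); obs_now: (batch, obs_dim)
        _, h = self.encoder(obs_history)   # h: (1, batch, hidden_dim)
        latent = h.squeeze(0)              # implicit terrain estimate
        return self.head(torch.cat([latent, obs_now], dim=-1))

policy = TerrainAwarePolicy()
actions = policy(torch.randn(1, 50, 48), torch.randn(1, 48))
print(actions.shape)  # torch.Size([1, 12]) -> one command per joint
```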

The simulation and learning methodology developed by the research team is expected to contribute to robots performing practical tasks as it expands the range of terrains that various walking robots can operate on.

The first author, Suyoung Choi, said, “It has been shown that providing a learning-based controller with a close contact experience with real deforming ground is essential for application to deforming terrain.” He went on to add that “The proposed controller can be used without prior information on the terrain, so it can be applied to various robot walking studies.”

Lateral line morphology, sensory perception and collective behaviour in African cichlid fish

by Elliott Scott, Duncan E. Edgley, Alan Smith, Domino A. Joyce, Martin J. Genner, Christos C. Ioannou, Sabine Hauert in Royal Society Open Science

Scientists, led by University of Bristol, have been studying a fish sensory organ to understand cues for collective behaviour which could be employed on underwater robots.

This work centred on the lateral line sensing organ, studied here in African cichlid fish but found in almost all fish species, which enables fish to sense and interpret the water pressures around them with enough acuity to detect external influences such as neighbouring fish, changes in water flow, predators and obstacles. The lateral line system as a whole is distributed over the head, trunk and tail of the fish. It comprises mechanoreceptors (neuromasts) that sit either within subdermal channels or on the surface of the skin.

Morphology quantified in this study in an example from the hybrids group.

Lead author Elliott Scott of the University of Bristol’s Department of Engineering Mathematics explained: “We were attempting to find out if the different areas of the lateral line — the lateral line on the head versus the lateral line on the body, or the different types of lateral line sensory units such as those on the skin, versus those under it, play different roles in how the fish is able to sense its environment through environmental pressure readings.

“We did this in a novel way, by using hybrid fish, that allowed for the natural generation of variation.”

They discovered that the lateral line system around the head has the most important influence on how well fish are able to swim in a shoal. Meanwhile, a greater number of lateral line sensory units (neuromasts) found under the skin results in fish swimming closer together, while a greater presence of neuromasts on the skin tends to result in fish swimming further apart.

Generalized linear mixed models (GLMMs) of associations between lateral line morphology and behaviour.

In simulation, the researchers were able to show how the mechanisms behind the lateral line work are applicable at not just the tiny scales found in actual fish, but at larger scales too. This could inspire a novel type of easily-manufactured pressure sensor for underwater robotics, particularly swarm robotics, where cost is a large factor.
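As a toy illustration of how an array of lateral-line-style pressure sensors could feed a steering rule on a swarm robot, consider the sketch below. The sensor model and gain are hypothetical and are not the sensor design proposed in the study.

```python
# Minimal sketch of lateral-line-style sensing for a swarm robot: an array of
# pressure sensors along each side of the body, and a simple rule that steers
# away from the side with the larger pressure fluctuation (a nearby wall or
# neighbour). The sensor model and gain are hypothetical, not from the study.

def steering_command(left_pressures, right_pressures, gain=0.8):
    """Return a turn rate: positive = turn right, away from higher pressure on
    the left side. Inputs are recent pressure fluctuation magnitudes (Pa)."""
    left = sum(left_pressures) / len(left_pressures)
    right = sum(right_pressures) / len(right_pressures)
    return gain * (left - right)

# Stronger fluctuations on the left (e.g. a neighbour swimming close by):
print(steering_command([2.1, 1.8, 2.4], [0.6, 0.5, 0.7]))  # positive -> veer right
```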

Elliott said: “These findings provide a better understanding of how the lateral line informs shoaling behaviour in fish, while also contributing a novel design of inexpensive pressure sensor that could be useful on underwater robots that have to navigate in dark or murky environments.”

The team now plan to develop the sensor further and integrate it into a robotic platform to help a robot navigate underwater and demonstrate its effectiveness.

A Novel Perceptive Robotic Cane with Haptic Navigation for Enabling Vision-Independent Participation in the Social Dynamics of Seat Choice

by Shivendra Agrawal, Mary Etta West, Bradley Hayes in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Engineers at the University of Colorado Boulder are tapping into advances in artificial intelligence to develop a new kind of walking stick for people who are blind or visually impaired.

Think of it as assistive technology meets Silicon Valley. The researchers say that their “smart” walking stick could one day help blind people navigate tasks in a world designed for sighted people — from shopping for a box of cereal at the grocery store to picking a private place to sit in a crowded cafeteria.

“I really enjoy grocery shopping and spend a significant amount of time in the store,” said Shivendra Agrawal, a doctoral student in the Department of Computer Science. “A lot of people can’t do that, however, and it can be really restrictive. We think this is a solvable problem.”

In a study published in October, Agrawal and his colleagues in the Collaborative Artificial Intelligence and Robotics Lab got one step closer to solving it. The team’s walking stick resembles the white-and-red canes that you can buy at Walmart. But it also includes a few add-ons: Using a camera and computer vision technology, the walking stick maps and catalogs the world around it. It then guides users by using vibrations in the handle and with spoken directions, such as “reach a little bit to your right.”

The device isn’t supposed to be a substitute for designing places like grocery stores to be more accessible, Agrawal said. But he hopes his team’s prototype will show that, in some cases, AI can help millions of Americans become more independent.

“AI and computer vision are improving, and people are using them to build self-driving cars and similar inventions,” Agrawal said. “But these technologies also have the potential to improve quality of life for many people.”

Agrawal and his colleagues first explored that potential by tackling a familiar problem: Where do I sit?

“Imagine you’re in a café,” he said. “You don’t want to sit just anywhere. You usually take a seat close to the walls to preserve your privacy, and you usually don’t like to sit face-to-face with a stranger.”

Previous research has suggested that making these kinds of decisions is a priority for people who are blind or visually impaired. To see if their smart walking stick could help, the researchers set up a café of sorts in their lab — complete with several chairs, patrons and a few obstacles. Study subjects strapped on a backpack with a laptop in it and picked up the smart walking stick. They swiveled to survey the room with a camera attached near the cane handle. Like a self-driving car, algorithms running inside the laptop identified the various features in the room then calculated the route to an ideal seat. The team reported its findings this fall at the International Conference on Intelligent Robots and Systems in Kyoto, Japan. Researchers on the study included Bradley Hayes, assistant professor of computer science, and doctoral student Mary Etta West.
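The seat-selection step can be imagined as a simple scoring problem, sketched below: prefer free chairs near a wall and far from other patrons, then pick the best-scoring one. The weights and hand-typed coordinates are purely illustrative; the actual system builds its map from the cane’s camera rather than from fixed positions.

```python
# Illustrative sketch of the kind of seat-scoring logic described above:
# prefer chairs close to a wall and far from other patrons, then pick the
# best-scoring free chair. Weights and geometry are hypothetical.
import math

def seat_score(seat_xy, people, wall_dist, w_wall=2.0, w_people=1.0):
    nearest_person = min((math.dist(seat_xy, p) for p in people), default=10.0)
    return w_people * nearest_person - w_wall * wall_dist  # higher = more private

seats = {"corner": ((0.5, 0.5), 0.3), "middle": ((3.0, 2.5), 2.8)}  # (xy, wall dist)
people = [(3.5, 2.0), (4.0, 3.0)]

best = max(seats, key=lambda s: seat_score(seats[s][0], people, seats[s][1]))
print("suggested seat:", best)   # the corner seat wins on privacy
```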

The study showed promising results: Subjects were able to find the right chair in 10 out of 12 trials with varying levels of difficulty. So far, the subjects have all been sighted people wearing blindfolds. But the researchers plan to evaluate and improve their device by working with people who are blind or visually impaired once the technology is more dependable.

“Shivendra’s work is the perfect combination of technical innovation and impactful application, going beyond navigation to bring advancements in underexplored areas, such as assisting people with visual impairment with social convention adherence or finding and grasping objects,” Hayes said.

A computer vision algorithm scores boxes of cereal to identify a target product — in this case, a box of Kashi GO Coconut Almond Crunch. (Credit: Collaborative Artificial Intelligence and Robotics Lab)

Next up for the group: grocery shopping. In new research, which the team hasn’t yet published, Agrawal and his colleagues adapted their device for a task that can be daunting for anyone: finding and grasping products in aisles filled with dozens of similar-looking and similar-feeling choices.

Again, the team set up a makeshift environment in their lab: this time, a grocery shelf stocked with several different kinds of cereal. The researchers loaded a database of product photos, such as boxes of Honey Nut Cheerios or Apple Jacks, into their software. Study subjects then used the walking stick to scan the shelf, searching for the product they wanted.

“It assigns a score to the objects present, selecting what is the most likely product,” Agrawal said. “Then the system issues commands like ‘move a little bit to your left.’”
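A toy version of that scoring-and-guidance loop might look like the following, where each detection carries a similarity score and the user is steered toward the best match by its position in the camera frame. The labels, scores and frame geometry are invented for illustration.

```python
# Toy version of the scoring-and-guidance step described above: each detected
# box gets a similarity score against the requested product, and the user is
# guided toward the best match based on where it sits in the camera frame.
# The scores and frame geometry are made up for illustration.

def guide_to_product(detections, target, frame_width=640):
    """detections: list of (label, score, x_center_px) from a vision model."""
    candidates = [d for d in detections if d[0] == target]
    if not candidates:
        return "product not found, keep scanning"
    _, _, x = max(candidates, key=lambda d: d[1])  # highest-scoring match
    offset = x - frame_width / 2
    if abs(offset) < 50:
        return "reach straight ahead"
    return "move a little bit to your " + ("right" if offset > 0 else "left")

detections = [("Honey Nut Cheerios", 0.62, 180),
              ("Kashi GO Coconut Almond Crunch", 0.91, 430)]
print(guide_to_product(detections, "Kashi GO Coconut Almond Crunch"))
```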

He added that it will be a while before the team’s walking stick makes it into the hands of real shoppers. The group, for example, wants to make the system more compact, designing it so that it can run off a standard smartphone attached to a cane. But the human-robot interaction researchers also hope that their preliminary results will inspire other engineers to rethink what robotics and AI are capable of.

“Our aim is to make this technology mature but also attract other researchers into this field of assistive robotics,” Agrawal said. “We think assistive robotics has the potential to change the world.”

Remote control of muscle-driven miniature robots with battery-free wireless optoelectronics

by Yongdeok Kim, Yiyuan Yang, Xiaotian Zhang, et al in Science Robotics

First, they walked. Then, they saw the light. Now, miniature biological robots have gained a new trick: remote control.

The hybrid “eBiobots” are the first to combine soft materials, living muscle and microelectronics, said researchers at the University of Illinois Urbana-Champaign, Northwestern University and collaborating institutions.

“Integrating microelectronics allows the merger of the biological world and the electronics world, both with many advantages of their own, to now produce these electronic biobots and machines that could be useful for many medical, sensing and environmental applications in the future,” said study co-leader Rashid Bashir, an Illinois professor of bioengineering and dean of the Grainger College of Engineering.

Bashir’s group has pioneered the development of biobots, small biological robots powered by mouse muscle tissue grown on a soft 3D-printed polymer skeleton. They demonstrated walking biobots in 2012 and light-activated biobots in 2016. The light activation gave the researchers some control, but practical applications were limited by the question of how to deliver the light pulses to the biobots outside of a lab setting.

The answer to that question came from Northwestern University professor John A. Rogers, a pioneer in flexible bioelectronics, whose team helped integrate tiny wireless microelectronics and battery-free micro-LEDs. This allowed the researchers to remotely control the eBiobots.

“This unusual combination of technology and biology opens up vast opportunities in creating self-healing, learning, evolving, communicating and self-organizing engineered systems. We feel that it’s a very fertile ground for future research with specific potential applications in biomedicine and environmental monitoring,” said Rogers, a professor of materials science and engineering, biomedical engineering and neurological surgery at Northwestern University and director of the Querrey Simpson Institute for Bioelectronics.

To give the biobots the freedom of movement required for practical applications, the researchers set out to eliminate bulky batteries and tethering wires. The eBiobots use a receiver coil to harvest power and provide a regulated output voltage to power the micro-LEDs, said co-first author Zhengwei Li, an assistant professor of biomedical engineering at the University of Houston.

The eBiobots are the first wireless bio-hybrid machines, combining biological tissue, microelectronics and 3D-printed soft polymers.

The researchers can send a wireless signal to the eBiobots that prompts the LEDs to pulse. The LEDs stimulate the light-sensitive engineered muscle to contract, moving the polymer legs so that the machines “walk.” The micro-LEDs are so targeted that they can activate specific portions of muscle, making the eBiobot turn in a desired direction.
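The steering principle can be caricatured in a few lines: pulsing the LED over one side’s muscle turns the walker, pulsing both keeps it going straight. Step lengths and turn angles below are invented for illustration, not measured eBiobot behavior.

```python
# Toy sketch of the steering idea above: one-sided LED pulses turn the walker,
# pulsing both sides keeps it straight. Step sizes and turn angles are invented.
import math

def walk(pulse_plan, step=1.0, turn_deg=15.0):
    """pulse_plan: 'L'/'R' pulses one side (turn), 'B' pulses both (straight)."""
    x = y = heading = 0.0
    for pulse in pulse_plan:
        if pulse == "L":
            heading += turn_deg
        elif pulse == "R":
            heading -= turn_deg
        rad = math.radians(heading)
        x += step * math.cos(rad)
        y += step * math.sin(rad)
    return round(x, 2), round(y, 2), heading

print(walk("BBLLBB"))  # walk straight, turn left twice, continue
```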

The researchers used computational modeling to optimize the eBiobot design and component integration for robustness, speed and maneuverability. Illinois professor of mechanical sciences and engineering Mattia Gazzola led the simulation and design of the eBiobots. The iterative design and additive 3D printing of the scaffolds allowed for rapid cycles of experiments and performance improvement, said Gazzola and co-first author Xiaotian Zhang, a postdoctoral researcher in Gazzola’s lab.

The design allows for possible future integration of additional microelectronics, such as chemical and biological sensors, or 3D-printed scaffold parts for functions like pushing or transporting things that the biobots encounter, said co-first author Yongdeok Kim, who completed the work as a graduate student at Illinois.

The integration of electronic sensors or biological neurons would allow the eBiobots to sense and respond to toxins in the environment, biomarkers for disease and more possibilities, the researchers said.

“In developing a first-ever hybrid bioelectronic robot, we are opening the door for a new paradigm of applications for health care innovation, such as in-situ biopsies and analysis, minimally invasive surgery or even cancer detection within the human body,” Li said.

A Hybrid U-Lossian Deep Learning Network for Screening and Evaluating Parkinson’s Disease

by Rytis Maskeliūnas, Robertas Damaševičius, Audrius Kulikajevas, Evaldas Padervinskis, Kipras Pribuišis, Virgilijus Uloza in Applied Sciences

The diagnosis of Parkinson’s disease has shaken many lives. More than 10 million people worldwide are living with it. There is no cure, but if symptoms are noticed early, the disease can be controlled. As Parkinson’s disease progresses, speech changes along with the other symptoms.

A Lithuanian researcher from Kaunas University of Technology (KTU), Rytis Maskeliūnas, together with colleagues from the Lithuanian University of Health Sciences (LSMU), tried to identify early symptoms of Parkinson’s disease using voice data.

Parkinson’s disease is usually associated with loss of motor function — hand tremors, muscle stiffness, or balance problems. According to Maskeliūnas, a researcher at KTU’s Department of Multimedia Engineering, as motor activity decreases, so does the function of the vocal cords, diaphragm, and lungs: “Changes in speech often occur even earlier than motor function disorders, which is why the altered speech might be the first sign of the disease.”

Implementation flow.

According to Professor Virgilijus Uloza of the Department of Ear, Nose, and Throat at the LSMU Faculty of Medicine, patients with early-stage Parkinson’s disease might speak in a quieter manner, which can also be monotonous, less expressive, slower and more fragmented, and this is very difficult to notice by ear. As the disease progresses, hoarseness, stuttering, slurred pronunciation of words and loss of pauses between words can become more apparent. Taking these symptoms into account, a joint team of Lithuanian researchers has developed a system to detect the disease earlier.

“We are not creating a substitute for a routine examination of the patient — our method is designed to facilitate early diagnosis of the disease and to track the effectiveness of treatment,” says KTU researcher Maskeliūnas.

According to him, the link between Parkinson’s disease and speech abnormalities is not new to the world of digital signal analysis — it has been known and researched since the 1960s. However, as technology advances, it is becoming possible to extract more information from speech. In their study, the researchers used artificial intelligence (AI) to analyse and assess speech signals, where calculations are done and diagnoses made in seconds rather than hours. This study is also unique — the results are tailored to the specifics of the Lithuanian language, in this way expanding the AI language database.

Architecture of the U-lossian network classifier.

Speaking about the progress of the study, Kipras Pribuišis, lecturer at the Department of Ear, Nose, and Throat at the LSMU Faculty of Medicine, emphasises that it was only carried out on patients already diagnosed with Parkinson’s: “So far, our approach is able to distinguish Parkinson’s from healthy people using a speech sample. This algorithm is also more accurate than previously proposed.”

In a soundproof booth, a microphone was used to record the speech of healthy and Parkinson’s patients, and an artificial intelligence algorithm “learned” to perform signal processing by evaluating these recordings. The researchers highlight that the algorithm does not require powerful hardware and could be transferred to a mobile app in the future.
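In spirit, the screening pipeline reduces to extracting acoustic features from a recording and classifying them, as in the generic sketch below. It uses synthetic features and a plain logistic regression purely for illustration; it is not the U-lossian deep network from the paper.

```python
# Generic sketch of a speech-screening pipeline of the sort described above:
# acoustic features extracted from voice recordings are fed to a classifier
# that separates patients from healthy controls. Features and data here are
# synthetic placeholders, not the study's U-lossian deep network or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Pretend feature vectors (e.g. jitter, shimmer, MFCC statistics) per speaker:
X_healthy = rng.normal(0.0, 1.0, size=(100, 8))
X_patient = rng.normal(0.6, 1.0, size=(100, 8))   # shifted so classes separate
X = np.vstack([X_healthy, X_patient])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")   # seconds, not hours
```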

“Our results, which have already been published, have a very high scientific potential. Sure, there is still a long and challenging way to go before it can be applied in everyday clinical practice,” says Maskeliūnas.

According to the researcher, the next steps include increasing the number of patients to gather more data and determining whether the proposed algorithm is superior to alternative methods used for early diagnosis of Parkinson’s. In addition, it will be necessary to check whether the algorithm works well not only in laboratory-like environments but also in the doctor’s office or in the patient’s home.

Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome

by Márton Kolossváry, Vineet K. Raghu, John T. Nagurney, Udo Hoffmann, Michael T. Lu in Radiology

Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a new study.

“To the best of our knowledge, our deep learning AI model is the first to utilize chest X-rays to identify individuals among acute chest pain patients who need immediate medical attention,” said the study’s lead author, Márton Kolossváry, M.D., Ph.D., radiology research fellow at Massachusetts General Hospital (MGH) in Boston.

Acute chest pain syndrome may consist of tightness, burning or other discomfort in the chest or a severe pain that spreads to your back, neck, shoulders, arms, or jaw. It may be accompanied by shortness of breath. Acute chest pain syndrome accounts for over 7 million emergency department visits annually in the United States, making it one of the most common complaints.

Fewer than 8% of these patients are diagnosed with the three major cardiovascular causes of acute chest pain syndrome, which are acute coronary syndrome, pulmonary embolism or aortic dissection. However, the life-threatening nature of these conditions and low specificity of clinical tests, such as electrocardiograms and blood tests, lead to substantial use of cardiovascular and pulmonary diagnostic imaging, often yielding negative results. As emergency departments struggle with high patient numbers and shortage of hospital beds, effectively triaging patients at very low risk of these serious conditions is important.

Gradient-weighted class activation maps of representative chest radiographs in (A) an 85-year-old man with acute coronary syndrome (ACS), (B) a 77-year-old man with aortic dissection (AD), (C) a 39-year-old healthy man, and (D) a 27-year-old healthy woman.

Deep learning is an advanced type of artificial intelligence (AI) that can be trained to search X-ray images to find patterns associated with disease. For the study, Dr. Kolossváry and colleagues developed an open-source deep learning model to identify patients with acute chest pain syndrome who were at risk for 30-day acute coronary syndrome, pulmonary embolism, aortic dissection or all-cause mortality, based on a chest X-ray.

The study used electronic health records of patients presenting with acute chest pain syndrome who had a chest X-ray and additional cardiovascular or pulmonary imaging and/or stress tests at MGH or Brigham and Women’s Hospital in Boston between January 2005 and December 2015. The deep-learning model was trained on 23,005 patients from MGH to predict a 30-day composite endpoint of acute coronary syndrome, pulmonary embolism, aortic dissection and all-cause mortality based on chest X-ray images; 5,750 patients (mean age 59, including 3,329 men) were then evaluated with the model.

The deep-learning tool significantly improved prediction of these adverse outcomes beyond age, sex and conventional clinical markers, such as d-dimer blood tests. The model maintained its diagnostic accuracy across age, sex, ethnicity and race. Using a 99% sensitivity threshold, the model was able to defer additional testing in 14% of patients as compared to 2% when using a model only incorporating age, sex, and biomarker data.
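The deferral logic can be illustrated with a small thresholding example: choose the score cutoff that preserves 99% sensitivity on labelled data, then defer further testing for patients scoring below it. The scores and labels below are synthetic, not the study’s data.

```python
# Sketch of the thresholding idea described above: pick the model-score cutoff
# that keeps sensitivity at 99% on labelled data, then "defer" further testing
# for patients scoring below it. Scores and labels here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.binomial(1, 0.08, size=5000)              # ~8% have a serious cause
scores = np.clip(labels * 0.35 + rng.normal(0.3, 0.15, size=5000), 0, 1)

pos_scores = np.sort(scores[labels == 1])
threshold = pos_scores[int(0.01 * len(pos_scores))]    # miss at most 1% of positives

deferred = scores < threshold
sensitivity = (scores[labels == 1] >= threshold).mean()
print(f"threshold={threshold:.3f}, sensitivity={sensitivity:.3f}, "
      f"deferred={deferred.mean():.1%} of patients")
```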

“Analyzing the initial chest X-ray of these patients using our automated deep learning model, we were able to provide more accurate predictions regarding patient outcomes as compared to a model that uses age, sex, troponin or d-dimer information,” Dr. Kolossváry said. “Our results show that chest X-rays could be used to help triage chest pain patients in the emergency department.”

According to Dr. Kolossváry, in the future such an automated model could analyze chest X-rays in the background and help select those who would benefit most from immediate medical attention and may help identify patients who may be discharged safely from the emergency department.

A Preprogrammable Continuum Robot Inspired by Elephant Trunk for Dexterous Manipulation

by Jie Zhang et al in Soft Robotics

Conventional robots based on separate joints do not always perform well in complex real-world tasks, particularly those that involve the dexterous manipulation of objects. Some roboticists have thus been trying to devise continuum robots, robotic platforms characterized by infinite degrees of freedom and no fixed number of joints.

Continuum robots are typically based on cables or other deformable components that can move more freely and are not restricted by fixed joint structures. Despite these advantages, many of the continuum robot designs proposed so far still cannot efficiently navigate complex and unstructured environments.

Researchers at Sun Yat-Sen University, Dalian University of Technology and London South Bank University have recently developed a new continuum robot inspired by the trunks of elephants. This robot has a customizable design that allows it to be tailored for different applications.

Bio-inspired continuum robot constructed by tensegrity structure.

“We discovered that the existing cable-driven continuum robots always demonstrate circle-shaped profiles after deformation, which may hinder their interaction with varying-curvature environments,” Jianing Wu, one of the researchers who carried out the study, told Tech Xplore. “To overcome this limitation, we attempted to propose a continuum robotic paradigm for adapting to application scenarios with varying curvatures.”

Elephant trunks are naturally divided into finite segments connected by pseudo-joints. This allows elephants to interact with unstructured environments more efficiently, for instance by flexibly squeezing their trunk into narrow spaces or reaching higher branches.

Due to their unique conformation, the stiffness of different segments of an elephant’s trunk can be independently regulated and tuned to bend in different ways. This ultimately allows elephants to adapt the shape of their trunk to tackle different tasks and reach objects of various shapes.

Continuum robot equipped with an infrared camera to inspect the dimly lit interior of a pipeline. Credit: Zhang et al.

“Inspired by the movements of elephant trunks, we presented a continuum robot with pre-programmable stiffness distribution to solve the robotics problem we were trying to tackle,” explained Haijun Peng, another researcher involved in the study. “By regulating the stiffness distribution, our bio-inspired robot not only demonstrates various deformation patterns but also is able to move through pipelines with varying curvatures.”

The elephant trunk-inspired robot created by Wu and his colleagues is based on a class-3 tensegrity structure comprised of several elastic elements, which are evenly distributed throughout the structure. This allowed the researchers to program the local stiffness characteristics of the robot simply by replacing elastic elements with others that have different stiffness magnitudes.

“Benefiting from the difference in the stiffness distribution, the continuum robot exhibits varying robotic configurations under an identical actuation criterion,” Wu said. “Our bio-inspired continuum robot is able to not only conformally interact with varying-curvature environments but simplify the complexity of the required actuation and control systems by leveraging the inherent intelligence.”
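The idea of programming a bend profile through the stiffness distribution can be shown with a toy model in which each segment bends in proportion to the applied cable moment divided by its stiffness. Segment counts, moments and stiffness values below are arbitrary examples, not parameters of the prototype.

```python
# Toy illustration of "programming" a bend profile via stiffness distribution:
# with the same cable moment applied to every segment, softer segments bend
# more, so swapping spring stiffnesses changes the robot's final shape.
# Segment count, moment and stiffness values are arbitrary examples.

def bend_profile(stiffnesses, cable_moment=0.5):
    """Return the bend angle (rad) of each segment: theta = M / k."""
    return [cable_moment / k for k in stiffnesses]

uniform = bend_profile([2.0] * 6)                        # constant curvature, circle-like
graded = bend_profile([4.0, 3.0, 2.0, 1.5, 1.0, 0.8])    # curls more toward the tip

print([f"{a:.2f}" for a in uniform])
print([f"{a:.2f}" for a in graded])
```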

The researchers have so far used their design to create a prototype robot consisting of 12 elastic modules. They then showcased its deformation capabilities in a series of trials, focusing on different real-world scenarios.

“Long-term and never-ending evolution has led animals to exhibit amazing capabilities,” Wu said. “If we used our full caution to observe them, we might harvest a spectrum of bio-inspired design paradigms for future robotic systems. For example, inspired by the interaction behavior of the elephant trunks, we could present a more flexible continuum robot to satisfy the interaction requirements in varying-curvature environments.”

In the future, the continuum robot created by this team of researchers could help to automate more real-world tasks in unstructured environments, which are difficult or impossible to tackle using robots with rigid joint structures. Due to its unique design, the robot can also simultaneously support different functions through the installation of diverse end effectors, such as grippers and sensors.

“We would now like to develop smart strategies for stiffness regulation, by which the ability of continuum robots can be further enhanced in exploring unpredictable scenarios in real time,” Wu added. “For instance, some advanced materials will be employed for fabricating the spring elements, such as shape memory alloy (SMA), and dielectric elastomer (DE).”

Day/Night Leader-Following Method Based on Adaptive Federated Filter for Quadruped Robots

by Jialin Zhang et al in Biomimetics

Legged robots have significant advantages over wheeled and track-based robots, particularly when it comes to moving on different types of terrains. This makes them particularly favorable for missions that involve transporting goods or traveling from one place to another. One promising approach that allows legged robots to effectively tackle these missions, particularly those that involve long-distance traveling, entails teaching them to follow a “leader,” whether a specific vehicle or human agent. However, this can be difficult to achieve, particularly under all lighting and atmospheric conditions.

Researchers at Shandong University in China have recently developed a new framework that could provide four-legged robots with leader-following abilities in both nighttime and daytime conditions. This framework is based on visual and LiDAR detection technology.

“Leader-following can help quadruped robots accomplish long-distance transportation tasks,” Jialin Zhang, Jiamin Guo, Hui Chai, Qin Zhang, Yibin Li, Zhiying Wang and Qifan Zhang wrote in their paper. “However, long-term following has to face the change of day and night as well as the presence of interference. To solve this problem, we present a day/night leader–following method for quadruped robots toward robustness and fault-tolerant person following in complex environments.”

Quadruped robot SDU-150 (Shandong University, Jinan, China) is composed of perception and motion control platforms.

To be effective, leader-following frameworks should allow robots to accurately detect and identify specific people under different lighting conditions, so that they can then follow them to a desired location. The method proposed by Zhang, Guo and their colleagues achieves this using three different modules: a person detection, a communication and a motion control module.

“We construct an Adaptive Federated Filter algorithm framework, which fuses the visual leader-following method and the LiDAR detection algorithm based on reflective intensity,” Zhang and his colleagues wrote in their paper. “Moreover, the framework uses the Kalman filter and adaptively adjusts the information sharing factor according to the light condition. In particular, the framework uses fault detection and multi-sensors information to stably achieve day/night leader-following.”

A unique feature of the leader-following framework introduced by the researchers is its use of a fault detection and isolation algorithm, designed to significantly improve performance in both daytime and nighttime conditions. This algorithm relies on data collected by several different sensors and on computations run by a detection algorithm, which allow it to adapt to high-frequency vibrations, different levels of illumination and possible visual interference caused by reflective materials in the surrounding environment.
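A heavily simplified illustration of that fusion idea is below: camera and LiDAR estimates of the leader’s position are blended with weights that shift toward LiDAR as illumination drops. This is a toy inverse-variance blend with a light-dependent sharing factor, not the paper’s adaptive federated Kalman filter.

```python
# Simplified illustration of the fusion idea described above: position
# estimates of the leader from a camera and from LiDAR are combined with
# weights that shift toward LiDAR as illumination drops. This is a toy
# inverse-variance blend, not the paper's adaptive federated Kalman filter.

def fuse_leader_estimate(cam_pos, cam_var, lidar_pos, lidar_var, light_level):
    """light_level in [0, 1]: 1 = bright daylight, 0 = darkness."""
    beta_cam = 0.2 + 0.8 * light_level      # sharing factor: trust camera less in the dark
    w_cam = beta_cam / cam_var
    w_lidar = (1.0 - beta_cam) / lidar_var
    return (w_cam * cam_pos + w_lidar * lidar_pos) / (w_cam + w_lidar)

print(fuse_leader_estimate(2.0, 0.05, 2.3, 0.10, light_level=0.9))  # daytime
print(fuse_leader_estimate(2.0, 0.50, 2.3, 0.10, light_level=0.1))  # nighttime
```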

Zhang, Guo and their colleagues evaluated their proposed framework in a series of trials using SDU-150, a quadruped robot developed at Shandong University. These tests yielded very promising results, as the robot was able to identify leaders reliably and effectively in various scenarios. The robot was tested in both indoor and outdoor environments, at day and at night and under different lighting conditions.

In the future, the leader-following framework developed by this team of researchers could help to improve the leader-following abilities of other existing and newly developed robots. In addition, it could potentially inspire the development of similar approaches designed to enhance the ability of robots to detect and track specific targets under different lighting conditions.

“The next step will combine sensor fusion with deep learning to perform data-level multisensor fusion, which greatly improves the detection accuracy and adapts to the high-precision operating situation,” the researchers conclude in their paper.

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
