RT/ Allowing robots to explore on their own

Paradigm · Aug 10, 2023 · 28 min read

Robotics biweekly vol.79, 25th July — 10th August

TL;DR

  • Scientists have developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.
  • A new technique enables a human to efficiently fine-tune a robot that failed to complete a desired task with very little effort on the part of the human. Their system uses algorithms, counterfactual explanations, and feedback from the user to generate synthetic data it uses to quickly fine-tune the robot.
  • The new robot can swim under the sand and dig itself out too, thanks to two front limbs that mimic the oversized flippers of turtle hatchlings. It’s the only robot that is able to travel in sand at a depth of 5 inches. It can also travel at a speed of 1.2 millimeters per second — roughly 4 meters, or 13 feet, per hour. This may seem slow but is comparable to other subterranean animals like worms and clams.
  • Soft robotic grippers could greatly increase productivity in many fields. However, currently, existing designs are overly complex and expensive. A research team has developed ROSE, a novel embracing soft gripper inspired by the blooming and closing of rose flowers. Bearing a surprisingly simple, inexpensive, and scalable design, ROSE can pick up many kinds of objects without damaging them, even in challenging environments and conditions.
  • Researchers have made groundbreaking advancements in bionics with the development of a new electric variable-stiffness artificial muscle. This innovative technology possesses self-sensing capabilities and has the potential to revolutionize soft robotics and medical applications. The artificial muscle seamlessly transitions between soft and hard states, while also sensing forces and deformations.
  • Flexible displays that can change color, convey information and even send veiled messages via infrared radiation are now possible, thanks to new research. Engineers inspired by the morphing skins of animals like chameleons and octopuses have developed capillary-controlled robotic flapping fins to create switchable optical and infrared light multipixel displays that are 1,000 times more energy efficient than light-emitting devices.
  • Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos and recreate the dish itself.
  • For the first time, a person with an arm amputation can manipulate each finger of a bionic hand as if it was his own. Thanks to revolutionary surgical and engineering advancements that seamlessly merge humans with machines, this breakthrough offers new hope and possibilities for people with amputations worldwide. A study presents the first documented case of an individual whose body was surgically modified to incorporate implanted sensors and a skeletal implant. AI algorithms then translated the user’s intentions into the movement of the prosthesis.
  • Scientists have successfully developed an AI model to accurately classify cardiac functions and valvular heart diseases from chest radiographs.
  • Researchers have presented important first steps in building underwater navigation robots.

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Representation granularity enables time-efficient autonomous exploration in large, complex worlds

by C. Cao, H. Zhu, Z. Ren, H. Choset, J. Zhang in Science Robotics

A research group in Carnegie Mellon University’s Robotics Institute is creating the next generation of explorers — robots.

The Autonomous Exploration Research Team has developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.

“You can set it in any environment, like a department store or a residential building after a disaster, and off it goes,” said Ji Zhang, a systems scientist in the Robotics Institute. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don’t even have to step into the space. Just let the robots explore and map the environment.”

The three-layer architecture for a robotic system.

The team has worked on exploration systems for more than three years. They’ve explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus. The system’s computers and sensors can be attached to nearly any robotic platform, transforming it into a modern-day explorer. The group uses a modified motorized wheelchair and drones for much of its testing.

Robots can explore in three modes using the group’s systems. In one mode, a person can control the robot’s movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. In another mode, a person can select a point on a map and the robot will navigate to that point. The third mode is pure exploration. The robot sets off on its own, investigates the entire space and creates a map.

“This is a very flexible system to use in many applications, from delivery to search-and-rescue,” said Howie Choset, a professor in the Robotics Institute.

The group combined a 3D scanning lidar sensor, forward-looking camera and inertial measurement unit sensors with an exploration algorithm to enable the robot to know where it is, where it has been and where it should go next. The resulting systems are substantially more efficient than previous approaches, creating more complete maps while reducing the algorithm run time by half.
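For intuition on what "figuring out where to go next" can look like, here is a minimal, generic sketch of frontier-style goal selection on a 2D occupancy grid. It is purely illustrative and is not the CMU team's planner; the grid encoding and the nearest-frontier rule are assumptions made for the example.

```python
# Generic frontier-style exploration sketch (illustrative only, not the CMU planner).
# Grid convention assumed here: -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

def find_frontiers(grid):
    """Return free cells that border at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            neighborhood = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighborhood == -1).any():
                frontiers.append((r, c))
    return frontiers

def next_goal(grid, robot_pos):
    """Pick the closest frontier cell as the next exploration goal."""
    frontiers = find_frontiers(grid)
    if not frontiers:
        return None  # nothing unknown left to explore: the map is complete
    return min(frontiers,
               key=lambda f: np.hypot(f[0] - robot_pos[0], f[1] - robot_pos[1]))

# Tiny example: a mostly unknown map with one explored corridor
grid = np.full((5, 5), -1)   # everything unknown
grid[2, 0:3] = 0             # a corridor the robot has already mapped
grid[1, 1] = 1               # one wall cell
print(next_goal(grid, robot_pos=(2, 0)))
```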

The new systems work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures. A version of the group’s exploration system powered Team Explorer, an entry from CMU and Oregon State University in DARPA’s Subterranean Challenge. Team Explorer placed fourth in the final competition but won the Most Sectors Explored Award for mapping more of the route than any other team.

“All of our work is open-sourced. We are not holding anything back. We want to strengthen society with the capabilities of building autonomous exploration robots,” said Chao Cao, a Ph.D. student in robotics and the lead operator for Team Explorer. “It’s a fundamental capability. Once you have it, you can do a lot more.”

Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation

by Andi Peng, Aviv Netanyahu, Mark Ho, Tianmin Shu, Andreea Bobu, Julie Shah, Pulkit Agrawal in arXiv

Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug. So, the robot fails.

“Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT.

Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.

When a robot fails, the system uses an algorithm to generate counterfactual explanations that describe what needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a certain color. It shows these counterfactuals to the human and asks for feedback on why the robot failed. Then the system utilizes this feedback and the counterfactual explanations to generate new data it uses to fine-tune the robot. Fine-tuning involves tweaking a machine-learning model that has already been trained to perform one task, so it can perform a second, similar task.

The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. The robots trained with this framework performed better, while the training process consumed less of a human’s time. This framework could help robots learn faster in new environments without requiring a user to have technical knowledge. In the long run, this could be a step toward enabling general-purpose robots to efficiently perform daily tasks for the elderly or individuals with disabilities in a variety of settings.

Robots often fail due to distribution shift — the robot is presented with objects and spaces it did not see during training, and it doesn’t understand what to do in this new environment. One way to retrain a robot for a specific task is imitation learning. The user could demonstrate the correct task to teach the robot what to do. If a user tries to teach a robot to pick up a mug, but demonstrates with a white mug, the robot could learn that all mugs are white. It may then fail to pick up a red, blue, or “Tim-the-Beaver-brown” mug. Training a robot to recognize that a mug is a mug, regardless of its color, could take thousands of demonstrations.

“I don’t want to have to demonstrate with 30,000 mugs. I want to demonstrate with just one mug. But then I need to teach the robot so it recognizes that it can pick up a mug of any color,” Peng says.

To accomplish this, the researchers’ system determines what specific object the user cares about (a mug) and what elements aren’t important for the task (perhaps the color of the mug doesn’t matter). It uses this information to generate new, synthetic data by changing these “unimportant” visual concepts. This process is known as data augmentation.

The framework has three steps. First, it shows the task that caused the robot to fail. Then it collects a demonstration from the user of the desired actions and generates counterfactuals by searching over all features in the space that show what needed to change for the robot to succeed. The system shows these counterfactuals to the user and asks for feedback to determine which visual concepts do not impact the desired action. Then it uses this human feedback to generate many new augmented demonstrations. In this way, the user could demonstrate picking up one mug, but the system would produce demonstrations showing the desired action with thousands of different mugs by altering the color. It uses these data to fine-tune the robot. Creating counterfactual explanations and soliciting feedback from the user is critical for the technique to succeed, Peng says.
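As a concrete illustration of the counterfactual and augmentation steps, the sketch below proposes minimal changes to a failed scene and then expands a single demonstration by randomizing a visual concept the user has marked as irrelevant (here, mug color). The function names, the scene encoding, and the color list are hypothetical; this is a hedged sketch of the idea, not the MIT system's code.

```python
# Hedged sketch of the counterfactual + augmentation idea described above.
# All names and the scene encoding are hypothetical, not the MIT system's code.
import random

COLORS = ["white", "red", "blue", "green", "brown", "yellow"]

def generate_counterfactuals(failed_scene):
    """Propose minimal changes that might have let the robot succeed."""
    return [dict(failed_scene, mug_color=c)
            for c in COLORS if c != failed_scene["mug_color"]]

def augment_demo(demo, irrelevant_concepts, n=1000):
    """Expand one demonstration by randomizing concepts the user marked irrelevant."""
    augmented = []
    for _ in range(n):
        new_demo = dict(demo)
        if "mug_color" in irrelevant_concepts:
            new_demo["mug_color"] = random.choice(COLORS)
        augmented.append(new_demo)
    return augmented

# 1. The robot failed on a red mug; show the user what could have been different.
print(generate_counterfactuals({"object": "mug", "mug_color": "red"})[:2])

# 2. The user says color is irrelevant; one demo becomes many synthetic demos.
demo = {"action": "pick_up", "object": "mug", "mug_color": "white"}
synthetic = augment_demo(demo, irrelevant_concepts={"mug_color"})
print(len(synthetic), synthetic[0])
```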

Toward Robotic Sensing and Swimming in Granular Environments using Underactuated Appendages

by Shivam Chopra, Drago Vasile, Saurabh Jadhav, Michael T Tolley, Nick Gravish in Advanced Intelligent Systems

This robot can swim under the sand and dig itself out too, thanks to two front limbs that mimic the oversized flippers of turtle hatchlings.

It’s the only robot able to travel through sand at a depth of 5 inches. It can also travel at a speed of 1.2 millimeters per second, roughly 4 meters (13 feet) per hour. This may seem slow, but it is comparable to other subterranean animals such as worms and clams. The robot is equipped with force sensors at the end of its limbs that allow it to detect obstacles while in motion. It can operate untethered and be controlled via WiFi.

Robots that can move through sand face significant challenges, such as dealing with much higher forces than robots that move through air or water, and they are damaged more easily. However, the potential benefits of solving locomotion in sand include inspection of grain silos, measurement of soil contaminants, seafloor digging, extraterrestrial exploration, and search and rescue.

Overview of the robot.

The robot is the result of several experiments conducted by a team of roboticists at the University of California San Diego to better understand sand and how robots could travel through it. Sand is particularly challenging because of the friction between sand grains that leads to large forces; difficulty sensing obstacles; and the fact that it switches between behaving like a liquid and a solid depending on the context.

The team believed that observing animals would be key to developing a bot that can swim in sand and dig itself out of sand as well. After considering worms, they landed on sea turtle hatchlings, which have enlarged front fins that allow them to surface after hatching. Turtle-like flippers can generate large propulsive forces; allow the robot to steer; and have the potential to detect obstacles. Scientists still do not fully understand how robots with flipper-like appendages move within sand. The research team at UC San Diego conducted extensive simulations and testing, and finally landed on a tapered body design and a shovel-shaped nose.

“We needed to build a robot that is both strong and streamlined,” said Shivam Chopra, lead author of the paper and a Ph.D. student in the research group of professor Nick Gravish at the Jacobs School of Engineering at UC San Diego.

The bot detects obstacles by monitoring changes in the torque generated by the movement of its flippers. It can detect obstacles above its body, but not below or directly in front of it. To keep the robot at level depth in the sand, researchers designed two foil-like surfaces, which they call terrafoils, on the sides of the bot’s nose. This allowed them to control lift, as the robot had a tendency to keep its nose pointed toward the surface.
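The torque-monitoring idea lends itself to a simple sketch: flag an obstacle when flipper torque rises well above a running baseline. The window size, threshold factor, and torque values below are illustrative assumptions, not numbers from the paper.

```python
# Hedged sketch of torque-based obstacle detection: flag an obstacle when the
# measured flipper torque rises well above a running baseline. Window size and
# threshold factor are illustrative choices, not values from the paper.
from collections import deque

class TorqueObstacleDetector:
    def __init__(self, window=50, threshold_factor=1.5):
        self.history = deque(maxlen=window)   # recent torque samples (N*m)
        self.threshold_factor = threshold_factor

    def update(self, torque):
        """Return True if this sample looks like an obstacle strike."""
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            if torque > self.threshold_factor * baseline:
                return True
        self.history.append(torque)
        return False

detector = TorqueObstacleDetector()
stream = [0.20 + 0.01 * (i % 3) for i in range(60)] + [0.45]  # spike at the end
hits = [i for i, t in enumerate(stream) if detector.update(t)]
print(hits)  # expect only the index of the final spike
```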

ROSE: Rotation-based Squeezing Robotic Gripper toward Universal Handling of Objects

by Son Tien Bui, Shinya Kawano, Van Anh Ho in Robotics: Science and Systems 2023

Although grasping objects is a relatively straightforward task for us humans, there is a lot of mechanics involved in this simple task. Picking up an object requires fine control of the fingers, of their positioning, and of the pressure each finger applies, which in turn necessitates intricate sensing capabilities. It’s no wonder that robotic grasping and manipulation is a very active research area within the field of robotics.

Today, industrial robotic hands have replaced humans in various complex and hazardous activities, including in restaurants, farms, factories, and manufacturing plants. In general, soft robotic grippers are better suited for tasks in which the objects to be picked up are fragile, such as fruits and vegetables. However, while soft robots are promising as harvesting tools, they usually share a common disadvantage: their price tag. Most soft robotic gripper designs require the intricate assembly of multiple pieces. This drives up development and maintenance costs.

Fortunately, a research team from the Japan Advanced Institute of Science and Technology (JAIST), led by Associate Professor Van Anh Ho, has come up with a groundbreaking solution to these problems. Taking a leaf from nature, they have developed an innovative soft robotic gripper called ‘ROSE,’ which stands for ‘Rotation-based Squeezing Gripper.’

What makes ROSE so impressive is its design. The soft gripping part has the shape of a cylindrical funnel or sleeve and is connected to a hard circular base, which in turn is attached to the shaft of an actuator. The funnel must be placed over the object meant to be picked up, covering a decent portion of its surface area. Then, the actuator makes the base turn, which causes the flexible funnel’s skin to wrap tightly around the object. This mechanism was loosely inspired by the changing shapes of roses, which bloom during the day and close up during the night.

ROSE offers substantial advantages compared to more conventional grippers. First, it is much less expensive to manufacture. The hard parts can all be 3D-printed, whereas the funnel itself can be easily produced using a mold and liquid silicone rubber. This ensures that the design is easily scalable and is suitable for mass production.

Second, ROSE can easily pick up a wide variety of objects without complex control and sensing mechanisms. Unlike grippers that rely on finger-like structures, ROSE’s sleeve applies a gentler, more uniform pressure. This makes ROSE better suited for handling fragile produce, such as strawberries and pears, as well as slippery objects. Weighing less than 200 grams, the gripper can achieve an impressive payload-to-weight ratio of 6812%.
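To put that ratio in perspective, taking the sub-200-gram body mass at face value, a 6812% payload-to-weight ratio works out to roughly:

$$\text{payload} \approx 68.12 \times 0.2\ \text{kg} \approx 13.6\ \text{kg}$$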

Third, ROSE is extremely durable and sturdy. The team showed that it could successfully continue to pick up objects even after 400,000 trials. Moreover, the funnel still works properly in the presence of significant cracks or cuts.

“The proposed gripper excels in demanding scenarios, as evidenced by its ability to withstand a severe test in which we cut the funnel into four separate sections at full height,” remarks Assoc. Prof. Ho. “This test underscores the gripper’s exceptional resilience and optimal performance in challenging conditions.”

Finally, ROSE can be endowed with sensing capabilities. The researchers achieved this by placing multiple cameras on top of the circular base, pointing at the inside of the funnel, which was covered in markers. The cameras track the positions of these markers, and image-processing algorithms analyze their displacement. This promising approach allows the size and shape of the grasped object to be estimated.
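A much-simplified sketch of that sensing idea is shown below: compare the markers' current positions (as seen by the internal cameras) with their rest positions to gauge how far the funnel skin has deformed around the object. Marker detection is assumed to happen elsewhere, and all coordinates are made up for illustration.

```python
# Simplified sketch of the marker-based sensing idea; not the ROSE paper's code.
# Marker detection is assumed to be done elsewhere; coordinates are made up.
import numpy as np

def estimate_deformation(rest_markers, current_markers):
    """Mean and max displacement of markers from their rest positions (mm)."""
    rest = np.asarray(rest_markers, dtype=float)
    cur = np.asarray(current_markers, dtype=float)
    displacements = np.linalg.norm(cur - rest, axis=1)
    return displacements.mean(), displacements.max()

rest = [(0, 10), (5, 12), (10, 10), (15, 12)]       # marker (x, y) at rest
grasping = [(0, 13), (5, 16), (10, 14), (15, 15)]   # markers pushed outward by an object
mean_d, max_d = estimate_deformation(rest, grasping)
print(f"mean displacement {mean_d:.1f} mm, max {max_d:.1f} mm")
```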

The research team notes that ROSE could be an enticing option for various applications, including harvesting operations and sorting items in factories. It could also find a home in cluttered environments such as farms, professional kitchens, and warehouses.

“The ROSE gripper holds significant potential to revolutionize gripping applications and gain widespread acceptance across various fields,” concludes Assoc. Prof. Ho. “Its straightforward yet robust and dependable design is set to inspire researchers and manufacturers to embrace it for a broad variety of gripping tasks in the near future.”

An Electric Self‐Sensing and Variable‐Stiffness Artificial Muscle

by Chen Liu, James J. C. Busfield, Ketao Zhang in Advanced Intelligent Systems

Researchers from Queen Mary University of London have made groundbreaking advancements in bionics with the development of a new electric variable-stiffness artificial muscle. Published in Advanced Intelligent Systems, this innovative technology possesses self-sensing capabilities and has the potential to revolutionize soft robotics and medical applications. The artificial muscle seamlessly transitions between soft and hard states, while also sensing forces and deformations. With flexibility and stretchability similar to natural muscle, it can be integrated into intricate soft robotic systems and adapt to various shapes. By adjusting voltages, the muscle rapidly changes its stiffness and can monitor its own deformation through resistance changes. The fabrication process is simple and reliable, making it ideal for a range of applications, including aiding individuals with disabilities or patients in rehabilitation training.

In a study, researchers from Queen Mary University of London have made significant advancements in the field of bionics with the development of a new type of electric variable-stiffness artificial muscle that possesses self-sensing capabilities. This innovative technology has the potential to revolutionize soft robotics and medical applications.

Muscle contraction hardening is not only essential for enhancing strength but also enables rapid reactions in living organisms. Taking inspiration from nature, the team of researchers at QMUL’s School of Engineering and Materials Science has successfully created an artificial muscle that seamlessly transitions between soft and hard states while also possessing the remarkable ability to sense forces and deformations.

Dr. Ketao Zhang, a Lecturer at Queen Mary and the lead researcher, explains the importance of variable stiffness technology in artificial muscle-like actuators. “Empowering robots, especially those made from flexible materials, with self-sensing capabilities is a pivotal step towards true bionic intelligence,” says Dr. Zhang.

The cutting-edge artificial muscle developed by the researchers exhibits flexibility and stretchability similar to natural muscle, making it ideal for integration into intricate soft robotic systems and for adapting to various geometric shapes. Able to withstand over 200% stretch along its length, this flexible actuator with a striped structure demonstrates exceptional durability. By applying different voltages, the artificial muscle can rapidly adjust its stiffness, achieving continuous modulation with a more than 30-fold change in stiffness. Its voltage-driven nature gives it a significant advantage in response speed over other types of artificial muscles. Additionally, the muscle can monitor its own deformation through resistance changes, eliminating the need for additional sensors, simplifying control mechanisms, and reducing costs.
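The self-sensing principle can be sketched with a gauge-factor-style relation between relative resistance change and strain. The constant and the readings below are hypothetical and only illustrate the idea of reading deformation from resistance.

```python
# Hedged sketch of the self-sensing idea: infer strain from the relative change
# in electrode resistance via a gauge-factor-style relation (dR/R0 = k * strain).
# The constant k and the readings below are hypothetical, for illustration only.
def strain_from_resistance(r_now, r_rest, k=2.0):
    """Estimate strain (0.0 = no stretch, 1.0 = 100% stretch) from resistance."""
    return (r_now - r_rest) / (r_rest * k)

R0 = 1200.0          # ohms, resistance of the relaxed muscle (hypothetical)
for r in (1200.0, 1500.0, 2400.0, 6000.0):
    print(f"R = {r:6.0f} ohm -> strain = {strain_from_resistance(r, R0):.2f}")
```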

The approximate fabrication process of the SSVS-AM.

The fabrication process for this self-sensing artificial muscle is simple and reliable. Carbon nanotubes are mixed with liquid silicone using ultrasonic dispersion technology and coated uniformly with a film applicator to create the thin layered cathode, which also serves as the sensing part of the artificial muscle. The anode is cut directly from a soft metal mesh, and the actuation layer is sandwiched between the cathode and the anode. After the liquid materials cure, a complete self-sensing variable-stiffness artificial muscle is formed.

The potential applications of this flexible variable stiffness technology are vast, ranging from soft robotics to medical applications. The seamless integration with the human body opens up possibilities for aiding individuals with disabilities or patients in performing essential daily tasks. By integrating the self-sensing artificial muscle, wearable robotic devices can monitor a patient’s activities and provide resistance by adjusting stiffness levels, facilitating muscle function restoration during rehabilitation training.

“While there are still challenges to be addressed before these medical robots can be deployed in clinical settings, this research represents a crucial stride towards human-machine integration,” highlights Dr. Zhang. “It provides a blueprint for the future development of soft and wearable robots.”

Polymorphic display and texture integrated systems controlled by capillarity

by Jonghyun Ha, Yun Seong Kim, Chengzhang Li, Jonghyun Hwang, Sze Chai Leung, Ryan Siu, Sameh Tawfick in Science Advances

Flexible displays that can change color, convey information and even send veiled messages via infrared radiation are now possible, thanks to new research from the University of Illinois Urbana-Champaign. Engineers inspired by the morphing skins of animals like chameleons and octopuses have developed capillary-controlled robotic flapping fins to create switchable optical and infrared light multipixel displays that are 1,000 times more energy efficient than light-emitting devices.

The new study, led by mechanical science and engineering professor Sameh Tawfick, demonstrates how bendable fins and fluids can switch between straight and bent, and between hot and cold, by controlling the volume and temperature of tiny fluid-filled pixels. Varying the volume of fluid within the pixels changes the direction in which the flaps flip — similar to old-fashioned flip clocks — and varying the temperature allows the pixels to communicate via infrared energy.

Tawfick’s interest in the interaction of elastic and capillary forces — or elasto-capillarity — started as a graduate student, spanned the basic science of hair wetting and led to his research in soft robotic displays at Illinois.

“An everyday example of elasto-capillarity is what happens to our hair when we get in the shower,” Tawfick said. “When our hair gets wet, it sticks together and bends or bundles as capillary forces are applied and released when it dries out.”

Flapping fins driven by capillarity and hydrodynamics.

In the lab, the team created small boxes, or pixels, a few millimeters in size, that contain fins made of a flexible polymer that bend when the pixels are filled with fluid and drained using a system of tiny pumps. The pixels can have single or multiple fins and are arranged into arrays that form a display to convey information, Tawfick said.

“We are not limited to cubic pixel boxes, either,” Tawfick said. “The fins can be arranged in various orientations to create different images, even along curved surfaces. The control is precise enough to achieve complex motions, like simulating the opening of a flower bloom.”

The study reports that another feature of the new displays is the ability to send two simultaneous signals — one that can be seen with the human eye and another that can only be seen with an infrared camera.

“Because we can control the temperature of these individual droplets, we can display messages that can only be seen using an infrared device,” Tawfick said. “Or we can send two different messages at the same time.”

However, there are a few limitations to the new displays, Tawfick said. While building the new devices, the team found that the tiny pumps needed to control the pixel fluids were not commercially available, and the entire device is sensitive to gravity — meaning that it only works while in a horizontal position.

“Once we turn the display by 90 degrees, the performance is greatly degraded, which is detrimental to applications like billboards and other signs intended for the public,” Tawfick said. “The good news is, we know that when liquid droplets become small enough, they become insensitive to gravity, like when you see a rain droplet sticking on your window and it doesn’t fall. We have found that if we use fluid droplets that are five times smaller, gravity will no longer be an issue.”

The team said that because the science behind gravity’s effect on droplets is well understood, it will provide the focal point for their next application of the emerging technology.

Tawfick said he is very excited to see where this technology is headed because it brings a fresh idea to the big market for large reflective displays. “We have developed a whole new breed of displays that require minimal energy, are scalable and even flexible enough to be placed onto curved surfaces.”

Recognition of Human Chef’s Intentions for Incremental Learning of Cookbook by Robotic Salad Chef

by Grzegorz Sochacki, Arsen Abdulali, Narges Khadem Hosseini, Fumiya Iida in IEEE Access

Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos, and recreate the dish itself.

The researchers, from the University of Cambridge, programmed their robotic chef with a ‘cookbook’ of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it. In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.

Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill. Human cooks can learn new recipes through observation, whether that’s watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.

“We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can — by identifying the ingredients and how they go together in the dish,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author.

Sochacki, a PhD candidate in Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory, and his colleagues devised eight simple salad recipes and filmed themselves making them. They then used a publicly available neural network to train their robot chef. The neural network had already been programmed to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana and orange).

Schematics of the robot assessing novelty of human chef demonstration and learning new recipe when demonstration does not match any recipe in its cookbook.

Using computer vision techniques, the robot analysed each frame of video and was able to identify the different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and each recipe.
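An illustrative sketch of this vector-matching step: encode each cookbook recipe and each observed demonstration as an ingredient-count vector, pick the recipe with the highest cosine similarity, and treat a low best score as a new recipe. The encoding and the 0.9 threshold here are assumptions made for the example, not the paper's exact method.

```python
# Illustrative sketch of recipe matching by vector similarity (not the paper's code).
import numpy as np

INGREDIENTS = ["broccoli", "carrot", "apple", "banana", "orange"]

def to_vector(counts):
    """Turn an ingredient-count dict into a fixed-length vector."""
    return np.array([counts.get(i, 0) for i in INGREDIENTS], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cookbook = {
    "salad_1": to_vector({"apple": 2, "carrot": 2}),
    "salad_2": to_vector({"banana": 1, "orange": 1, "apple": 1}),
}

def classify(demo_counts, threshold=0.9):
    """Return the best-matching recipe, or flag the demo as a new recipe."""
    demo = to_vector(demo_counts)
    best_name, best_score = max(((name, cosine(demo, vec))
                                 for name, vec in cookbook.items()),
                                key=lambda x: x[1])
    return (best_name, best_score) if best_score >= threshold else ("new recipe", best_score)

# A double portion of salad_1 still matches salad_1 (cosine ignores scale)
print(classify({"apple": 3, "carrot": 3}))
# A broccoli-only demonstration matches nothing well, so it is treated as new
print(classify({"broccoli": 2}))
```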

By correctly identifying the ingredients and the actions of the human chef, the robot could determine which of the recipes was being prepared. The robot could infer that if the human demonstrator was holding a knife in one hand and a carrot in the other, the carrot would then get chopped up.

Of the 16 videos it watched, the robot recognised the correct recipe 93% of the time, even though it only detected 83% of the human chef’s actions. The robot was also able to detect that slight variations in a recipe, such as making a double portion or normal human error, were variations and not a new recipe. The robot also correctly recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.

“It’s amazing how much nuance the robot was able to detect,” said Sochacki. “These recipes aren’t complex — they’re essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”

The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects, and quickly move back and forth between the person preparing the food and the dish they’re preparing. For example, the robot would struggle to identify a carrot if the human demonstrator had their hand wrapped around it — for the robot to identify the carrot, the human demonstrator had to hold up the carrot so that the robot could see the whole vegetable.

“Our robot isn’t interested in the sorts of food videos that go viral on social media — they’re simply too hard to follow,” said Sochacki. “But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”

Improved control of a prosthetic limb by surgically creating electro-neuromuscular constructs with implanted electrodes

by Jan Zbinden, Paolo Sassu, Enzo Mastinu, Eric J. Earley, Maria Munoz-Novoa, Rickard Brånemark, Max Ortiz-Catalan in Science Translational Medicine

Prosthetic limbs are the most common solution to replace a lost extremity. However, they are hard to control and often unreliable with only a couple of movements available. Remnant muscles in the residual limb are the preferred source of control for bionic hands. This is because patients can contract muscles at will, and the electrical activity generated by the contractions can be used to tell the prosthetic hand what to do, for instance, open or close. A major problem at higher amputation levels, such as above the elbow, is that not many muscles remain to command the many robotic joints needed to truly restore the function of an arm and hand.

A multidisciplinary team of surgeons and engineers has circumvented this problem by reconfiguring the residual limb and integrating sensors and a skeletal implant to connect with a prosthesis electrically and mechanically. By dissecting the peripheral nerves and redistributing them to new muscle targets used as biological amplifiers, the bionic prosthesis can now access much more information so the user can command many robotic joints.

The research was led by Professor Max Ortiz Catalan, Founding Director of the Center for Bionics and Pain Research (CBPR) in Sweden, Head of Neural Prosthetics Research at the Bionics Institute in Australia, and Professor of Bionics at Chalmers University of Technology in Sweden.

“In this article, we show that rewiring nerves to different muscle targets in a distributed and concurrent manner is not only possible but also conducive to improved prosthetic control. A key feature of our work is that we have the possibility to clinically implement more refined surgical procedures and embed sensors in the neuromuscular constructs at the time of the surgery, which we then connect to the electronic system of the prosthesis via an osseointegrated interface. A.I. algorithms take care of the rest.”

Prosthetic limbs are commonly attached to the body by a socket that compresses the residual limb, which causes discomfort and is mechanically unstable. An alternative to socket attachment is to use a titanium implant placed within the residual bone, which becomes strongly anchored — this is known as osseointegration. Such skeletal attachment allows for a comfortable and more efficient mechanical connection of the prosthesis to the body.

“It is rewarding to see that our cutting-edge surgical and engineering innovation can provide such a high level of functionality for an individual with an arm amputation. This achievement is based on over 30 years of gradual development of the concept, in which I am proud to have contributed,” comments Dr. Rickard Brånemark, research affiliate at MIT, associate professor at Gothenburg University, CEO of Integrum, and a leading expert on osseointegration for limb prostheses, who conducted the implantation of the interface.

The surgery took place at the Sahlgrenska University Hospital, Sweden, where CBPR is located. The neuromuscular reconstruction procedure was conducted by Dr. Paolo Sassu, who also led the first hand transplantation performed in Scandinavia.

“The incredible journey we have undertaken together with the bionic engineers at CBPR has allowed us to combine new microsurgical techniques with sophisticated implanted electrodes that provide single-finger control of a prosthetic arm as well as sensory feedback. Patients who have suffered from an arm amputation might now see a brighter future,” says Dr. Sassu, who is presently working at the Istituto Ortopedico Rizzoli in Italy.

Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study

by Daiju Ueda et al. in The Lancet Digital Health

AI (artificial intelligence) may sound like a cold robotic system, but Osaka Metropolitan University scientists have shown that it can deliver heartwarming — or, more to the point, “heart-warning” — support. They unveiled an innovative use of AI that classifies cardiac functions and pinpoints valvular heart disease with unprecedented accuracy, demonstrating continued progress in merging the fields of medicine and technology to advance patient care.

Valvular heart disease, one cause of heart failure, is often diagnosed using echocardiography. This technique, however, requires specialized skills, so there is a corresponding shortage of qualified technicians. Meanwhile, chest radiography is one of the most common tests to identify diseases, primarily of the lungs. Even though the heart is also visible in chest radiographs, little was previously known about the ability of chest radiographs to detect cardiac function or disease. Chest radiographs, or chest X-rays, are performed in many hospitals and require very little time to conduct, making them highly accessible and reproducible. Accordingly, the research team led by Dr. Daiju Ueda, from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine of Osaka Metropolitan University, reckoned that if cardiac function and disease could be determined from chest radiographs, this test could serve as a supplement to echocardiography.

Representative saliency maps for the external test dataset.

Dr. Ueda’s team successfully developed a model that utilizes AI to accurately classify cardiac functions and valvular heart diseases from chest radiographs. Since AI trained on a single dataset faces potential bias, leading to low accuracy, the team aimed for multi-institutional data. Accordingly, a total of 22,551 chest radiographs associated with 22,551 echocardiograms were collected from 16,946 patients at four facilities between 2013 and 2021. With the chest radiographs set as input data and the echocardiograms set as output data, the AI model was trained to learn features connecting both datasets.

The AI model was able to precisely categorize six selected types of valvular heart disease, with the Area Under the Curve (AUC) ranging from 0.83 to 0.92. (AUC is a rating index that indicates the capability of an AI model; it ranges from 0 to 1, and the closer to 1, the better.) The AUC was 0.92 at a 40% cut-off for detecting left ventricular ejection fraction — an important measure for monitoring cardiac function.
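For readers unfamiliar with the metric, here is a tiny, self-contained example of computing AUC with scikit-learn on made-up predictions; it uses no data from the study.

```python
# Toy AUC example on invented scores (nothing to do with the study's data).
from sklearn.metrics import roc_auc_score

# Ground truth (1 = disease present) and the model's predicted probabilities
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y_score = [0.10, 0.30, 0.20, 0.65, 0.35, 0.80, 0.70, 0.90, 0.60, 0.55]

# 1.0 means every diseased case is ranked above every healthy one; 0.5 is chance.
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")  # prints AUC = 0.92
```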

“It took us a very long time to get to these results, but I believe this is significant research,” stated Dr. Ueda. “In addition to improving the efficiency of doctors’ diagnoses, the system might also be used in areas where there are no specialists, in night-time emergencies, and for patients who have difficulty undergoing echocardiography.”

Pleobot: a modular robotic solution for metachronal swimming

by Sara Oliveira Santos, Nils Tack, Yunxing Su, Francisco Cuenca-Jiménez, Oscar Morales-Lopez, P. Antonio Gomez-Valdez, Monica M. Wilhelmus in Scientific Reports

Picture a network of interconnected, autonomous robots working together in a coordinated dance to navigate the pitch-black surroundings of the ocean while carrying out scientific surveys or search-and-rescue missions.

In a new study, a team led by Brown University researchers has presented important first steps in building these types of underwater navigation robots. In the study, the researchers outline the design of a small robotic platform called Pleobot that can serve as both a tool to help researchers understand the krill-like swimming method and as a foundation for building small, highly maneuverable underwater robots.

Pleobot currently consists of three articulated sections that replicate the krill-like swimming technique known as metachronal swimming. To design Pleobot, the researchers took inspiration from krill, which are remarkable aquatic athletes that display mastery in swimming, accelerating, braking and turning. In the study, they demonstrate Pleobot’s ability to emulate the legs of swimming krill and provide new insights into the fluid-structure interactions needed to sustain steady forward swimming. According to the study, Pleobot has the potential to allow the scientific community to understand how to take advantage of 100 million years of evolution to engineer better robots for ocean navigation.

“Experiments with organisms are challenging and unpredictable,” said Sara Oliveira Santos, a Ph.D. candidate at Brown’s School of Engineering and lead author of the new study. “Pleobot allows us unparalleled resolution and control to investigate all the aspects of krill-like swimming that help it excel at maneuvering underwater. Our goal was to design a comprehensive tool to understand krill-like swimming, which meant including all the details that make krill such athletic swimmers.”

Morphology and kinematic parameters of the pleopod.

The effort is a collaboration between Brown researchers in the lab of Assistant Professor of Engineering Monica Martinez Wilhelmus and scientists in the lab of Francisco Cuenca-Jimenez at the Universidad Nacional Autónoma de México.

A major aim of the project is to understand how metachronal swimmers, like krill, manage to function in complex marine environments and perform massive vertical migrations of over 1,000 meters — equivalent to stacking three Empire State Buildings — twice daily.

“We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said Nils Tack, a postdoctoral associate in the Wilhelmus lab. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”

The metachronal swimming technique enables the remarkable maneuverability that krill frequently display, produced by the sequential deployment of their swimming legs in a back-to-front wave-like motion. The researchers believe that in the future, deployable swarm systems could be used to map Earth’s oceans, participate in search-and-recovery missions by covering large areas, or be sent to moons in the solar system, such as Europa, to explore their oceans.
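The back-to-front wave can be captured in a few lines: each leg follows the same sinusoidal stroke but starts slightly later than its posterior neighbour. The amplitude, frequency, and 25% phase lag below are illustrative values, not Pleobot's actual control parameters.

```python
# Minimal metachronal-coordination sketch (illustrative values, not Pleobot's code).
import math

def stroke_angle(t, leg_index, freq_hz=2.0, amplitude_deg=40.0, phase_lag=0.25):
    """Stroke angle (deg) of leg `leg_index` (0 = rearmost) at time t (s).
    Each more anterior leg lags the one behind it, so the power stroke
    travels from back to front."""
    phase = 2 * math.pi * (freq_hz * t - phase_lag * leg_index)
    return amplitude_deg * math.sin(phase)

# Sample part of the stroke period for four legs (rear to front)
for t in [0.0, 0.0625, 0.125]:
    angles = ", ".join(f"{stroke_angle(t, i):6.1f}" for i in range(4))
    print(f"t = {t:.4f} s  angles (rear -> front): {angles}")
```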

“Krill aggregations are an excellent example of swarms in nature: they are composed of organisms with a streamlined body, traveling up to one kilometer each way, with excellent underwater maneuverability,” Wilhelmus said. “This study is the starting point of our long-term research aim of developing the next generation of autonomous underwater sensing vehicles. Being able to understand fluid-structure interactions at the appendage level will allow us to make informed decisions about future designs.”

The researchers can actively control the two leg segments and have passive control of Pleobot’s biramous fins. This is believed to be the first platform that replicates the opening and closing motion of these fins. The construction of the robotic platform was a multi-year project, involving a multi-disciplinary team in fluid mechanics, biology and mechatronics. The researchers built their model at 10 times the scale of krill, which are usually about the size of a paperclip. The platform is primarily made of 3D printable parts and the design is open-access, allowing other teams to use Pleobot to continue answering questions on metachronal swimming not just for krill but for other organisms like lobsters.

In the study, the group reveals the answer to one of the many unknown mechanisms of krill swimming: how they generate lift in order not to sink while swimming forward. If krill are not swimming constantly, they will start sinking because they are a little heavier than water. To avoid this, they still have to create some lift even while swimming forward to be able to remain at that same height in the water, said Oliveira Santos.

Upcoming events

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
