RT/ Soft robotic, wearable device improves walking for individual with Parkinson’s disease

Paradigm · Jan 23, 2024

Robotics & AI biweekly vol. 88, 8th–23rd January

TL;DR

  • Researchers have used a soft, wearable robot to help a person living with Parkinson’s walk without freezing. The robotic garment, worn around the hips and thighs, gives a gentle push to the hips as the leg swings, helping the patient achieve a longer stride. The device completely eliminated the participant’s freezing while walking indoors, allowing them to walk faster and further than they could without the garment’s help.
  • Robots and autonomous vehicles can use 3D point clouds from LiDAR sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers have developed DPPFA-Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather. Their findings will pave the way for more perceptive and capable autonomous systems.
  • People who received gentle electric currents on the back of their heads learned to maneuver a robotic surgery tool in virtual reality and then in a real setting much more easily than people who didn’t receive those nudges, a new study shows.
  • An artificial intelligence-driven system has autonomously learned about certain Nobel Prize-winning chemical reactions and designed a successful laboratory procedure to make them. The AI did so in just a few minutes and correctly on its first attempt. According to the authors, this is the first time that a non-organic intelligence planned, designed and executed this complex reaction that was invented by humans.
  • Scientists show that breathing may be used to control a wearable extra robotic arm in healthy individuals, without hindering control of other parts of the body.
  • Researchers showed that statistical models created by AI can predict very accurately whether people with schizophrenia will respond to a medication. However, the models are highly context-dependent and cannot be generalized.
  • To combat viruses, bacteria and other pathogens, synthetic biology offers new technological approaches whose performance is being validated in experiments. Researchers applied data integration and AI to develop a machine learning approach that can predict the efficacy of CRISPR technologies more accurately than before.
  • Much of the discussion around implementing AI systems focuses on whether an AI application is ‘trustworthy’: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new article poses a different question: What if an AI is just too good?
  • Physician-investigators compared a chatbot’s probabilistic reasoning to that of human clinicians. The findings suggest that AI could serve as useful clinical decision support tools for physicians.
  • The introduction of AI is a significant part of the digital transformation, bringing challenges and changes to job descriptions in management. A new study shows that integrating AI systems into service teams increases the demands imposed on middle management in the financial services field. In that sector, the advent of AI has been fast, and AI applications can take over a large proportion of routine work that was previously done by people.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
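As a sanity check on those figures, the compounding works out as follows. This is a back-of-envelope sketch: the 2018 base value is assumed for illustration, not taken from Statista.

```python
def project_market(base_billion, cagr, years):
    """Compound a base-year market size forward at a fixed CAGR."""
    return base_billion * (1 + cagr) ** years

# An assumed 2018 base of ~$41.7B grown at 26% over 7 years
# lands just over $210B, consistent with the 2025 projection.
print(round(project_market(41.7, 0.26, 7), 1))  # ≈ 210
```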

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Soft robotic apparel to avert freezing of gait in Parkinson’s disease

by Jinsoo Kim, Franchino Porciuncula, Hee Doo Yang, Nicholas Wendel, Teresa Baker, Andrew Chin, Terry D. Ellis, Conor J. Walsh in Nature Medicine

Freezing is one of the most common and debilitating symptoms of Parkinson’s disease, a neurodegenerative disorder that affects more than 9 million people worldwide. When individuals with Parkinson’s disease freeze, they suddenly lose the ability to move their feet, often mid-stride, resulting in a series of staccato stutter steps that get shorter until the person stops altogether. These episodes are one of the biggest contributors to falls among people living with Parkinson’s disease.

Today, freezing is treated with a range of pharmacological, surgical or behavioral therapies, none of which are particularly effective. What if there was a way to stop freezing altogether?

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Boston University Sargent College of Health & Rehabilitation Sciences have used a soft, wearable robot to help a person living with Parkinson’s walk without freezing. The robotic garment, worn around the hips and thighs, gives a gentle push to the hips as the leg swings, helping the patient achieve a longer stride.

The device completely eliminated the participant’s freezing while walking indoors, allowing them to walk faster and further than they could without the garment’s help.

“We found that just a small amount of mechanical assistance from our soft robotic apparel delivered instantaneous effects and consistently improved walking across a range of conditions for the individual in our study,” said Conor Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS and co-corresponding author of the study.

Effects of force levels.

The research demonstrates the potential of soft robotics to treat this frustrating and potentially dangerous symptom of Parkinson’s disease and could allow people living with the disease to regain not only their mobility but their independence.

For over a decade, Walsh’s Biodesign Lab at SEAS has been developing assistive and rehabilitative robotic technologies to improve mobility for individuals post-stroke and those living with ALS or other diseases that impact mobility. Some of that technology, specifically an exosuit for post-stroke gait retraining, received support from the Wyss Institute for Biologically Inspired Engineering, and was licensed and commercialized by ReWalk Robotics.

In 2022, SEAS and Sargent College received a grant from the Massachusetts Technology Collaborative to support the development and translation of next-generation robotics and wearable technologies. The research is centered at the Move Lab, whose mission is to support advances in human performance enhancement with the collaborative space, funding, R&D infrastructure, and experience necessary to turn promising research into mature technologies that can be translated through collaboration with industry partners. This research emerged from that partnership.

“Leveraging soft wearable robots to prevent freezing of gait in patients with Parkinson’s required a collaboration between engineers, rehabilitation scientists, physical therapists, biomechanists and apparel designers,” said Walsh, whose team collaborated closely with that of Terry Ellis, Professor and Physical Therapy Department Chair and Director of the Center for Neurorehabilitation at Boston University.

The team spent six months working with a 73-year-old man with Parkinson’s disease, who — despite using both surgical and pharmacologic treatments — endured substantial and incapacitating freezing episodes more than 10 times a day, causing him to fall frequently. These episodes prevented him from walking around his community and forced him to rely on a scooter to get around outside.

In previous research, Walsh and his team leveraged human-in-the-loop optimization to demonstrate that a soft, wearable device could be used to augment hip flexion and assist in swinging the leg forward to provide an efficient approach to reduce energy expenditure during walking in healthy individuals.

Here, the researchers used the same approach but to address freezing. The wearable device uses cable-driven actuators and sensors worn around the waist and thighs. Using motion data collected by the sensors, algorithms estimate the phase of the gait and generate assistive forces in tandem with muscle movement.
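The authors' controller is not reproduced here, but the general idea of estimating gait phase and shaping an assistive force around the swing phase can be sketched as follows. The function names, the half-sine force profile, and all parameter values are illustrative assumptions, not the published implementation.

```python
import math

def gait_phase(t, last_heel_strike, stride_period):
    """Estimate gait phase in [0, 1) as the elapsed fraction of the stride."""
    return ((t - last_heel_strike) % stride_period) / stride_period

def hip_flexion_assist(phase, peak_force=30.0, onset=0.55, duration=0.25):
    """Half-sine assistive force applied during the swing portion of gait.
    peak_force (N), onset and duration (gait-phase fractions) are assumed."""
    if onset <= phase <= onset + duration:
        return peak_force * math.sin(math.pi * (phase - onset) / duration)
    return 0.0

# Sample the commanded force across one stride: zero in stance,
# rising to a peak mid-swing, then falling back to zero.
profile = [hip_flexion_assist(p / 100) for p in range(100)]
```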

The effect was instantaneous. Without any special training, the patient was able to walk without any freezing indoors and with only occasional episodes outdoors. He was also able to walk and talk without freezing, a rarity without the device.

“Our team was really excited to see the impact of the technology on the participant’s walking,” said Jinsoo Kim, former PhD student at SEAS and co-lead author on the study.

During the study visits, the participant told researchers: “The suit helps me take longer steps and when it is not active, I notice I drag my feet much more. It has really helped me, and I feel it is a positive step forward. It could help me to walk longer and maintain the quality of my life.”

“Our study participants who volunteer their time are real partners,” said Walsh. “Because mobility is difficult, it was a real challenge for this individual to even come into the lab, but we benefited so much from his perspective and feedback.”

The device could also be used to better understand the mechanisms of gait freezing, which is poorly understood.

“Because we don’t really understand freezing, we don’t really know why this approach works so well,” said Ellis. “But this work suggests the potential benefits of a ‘bottom-up’ rather than ‘top-down’ solution to treating gait freezing. We see that restoring almost-normal biomechanics alters the peripheral dynamics of gait and may influence the central processing of gait control.”

Dynamic Point-Pixel Feature Alignment for Multi-modal 3D Object Detection

by Juncheng Wang, Xiangbo Kong, Hiroki Nishikawa, Qiuyou Lian, Hiroyuki Tomiyama in IEEE Internet of Things Journal

Robotics and autonomous vehicles are among the most rapidly growing domains in the technological landscape, with the potential to make work and transportation safer and more efficient. Since both robots and self-driving cars need to accurately perceive their surroundings, 3D object detection methods are an active area of study. Most 3D object detection methods employ LiDAR sensors to create 3D point clouds of their environment. Simply put, LiDAR sensors use laser beams to rapidly scan and measure the distances of objects and surfaces around the source. However, using LiDAR data alone can lead to errors due to the high sensitivity of LiDAR to noise, especially in adverse weather conditions like during rainfall.

To tackle this issue, scientists have developed multi-modal 3D object detection methods that combine 3D LiDAR data with 2D RGB images taken by standard cameras. While the fusion of 2D images and 3D LiDAR data leads to more accurate 3D detection results, it still faces its own set of challenges, with accurate detection of small objects remaining difficult. The problem mainly lies in properly aligning the semantic information extracted independently from the 2D and 3D datasets, which is hard due to issues such as imprecise calibration or occlusion.

Against this backdrop, a research team led by Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan, has developed an innovative approach to make multi-modal 3D object detection more accurate and robust. The proposed scheme, called “Dynamic Point-Pixel Feature Alignment Network” (DPPFA-Net), is described in their paper.

The model comprises an arrangement of multiple instances of three novel modules: the Memory-based Point-Pixel Fusion (MPPF) module, the Deformable Point-Pixel Fusion (DPPF) module, and the Semantic Alignment Evaluator (SAE) module. The MPPF module is tasked with performing explicit interactions between intra-modal features (2D with 2D and 3D with 3D) and cross-modal features (2D with 3D). The use of the 2D image as a memory bank reduces the difficulty in network learning and makes the system more robust against noise in 3D point clouds. Moreover, it promotes the use of more comprehensive and discriminative features.

A new network for 3D object detection.

In contrast, the DPPF module performs interactions only at pixels in key positions, which are determined via a smart sampling strategy. This allows for feature fusion in high resolutions at a low computational complexity. Finally, the SAE module helps ensure semantic alignment between both data representations during the fusion process, which mitigates the issue of feature ambiguity.
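The flavor of the MPPF module's cross-modal interaction, with 2D image features acting as a memory bank that 3D point features query, can be illustrated with a toy cross-attention in NumPy. This is a schematic sketch, not the published architecture; all shapes, names, and the residual-fusion choice are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(point_feats, pixel_feats):
    """Toy cross-attention: each 3D point feature queries the 2D pixel
    features (the 'memory bank') and mixes the result back in residually."""
    scores = point_feats @ pixel_feats.T / np.sqrt(point_feats.shape[1])
    return point_feats + softmax(scores) @ pixel_feats

rng = np.random.default_rng(0)
point_feats = rng.normal(size=(8, 16))   # e.g. 8 LiDAR point features
pixel_feats = rng.normal(size=(32, 16))  # e.g. 32 image pixel features
fused = cross_modal_fusion(point_feats, pixel_feats)  # shape (8, 16)
```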

The researchers tested DPPFA-Net by comparing it to the top performers for the widely used KITTI Vision Benchmark. Notably, the proposed network achieved average precision improvements as high as 7.18% under different noise conditions. To further test the capabilities of their model, the team created a new noisy dataset by introducing artificial multi-modal noise in the form of rainfall to the KITTI dataset. The results show that the proposed network performed better than existing models not only in the face of severe occlusions but also under various levels of adverse weather conditions. “Our extensive experiments on the KITTI dataset and challenging multi-modal noisy cases reveal that DPPFA-Net reaches a new state-of-the-art,” remarks Prof. Tomiyama.
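A crude version of such rain-style corruption of a point cloud, dropping some returns, jittering the survivors, and adding spurious echoes, might look like the sketch below. The paper's exact noise model is not described here, so every parameter is an assumption.

```python
import numpy as np

def add_rain_noise(points, drop_rate=0.1, jitter_std=0.05, seed=0):
    """Crudely mimic rainfall on an (N, 3) point cloud: randomly drop
    some returns, jitter the rest, and add spurious echoes.
    All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) > drop_rate          # lost returns
    kept = points[keep] + rng.normal(0.0, jitter_std, points[keep].shape)
    n_spurious = int(drop_rate * len(points))           # rain echoes
    spurious = rng.uniform(points.min() - 1, points.max() + 1, (n_spurious, 3))
    return np.vstack([kept, spurious])

cloud = np.random.default_rng(1).uniform(-10, 10, (100, 3))
noisy = add_rain_noise(cloud)
```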

Notably, there are various ways in which accurate 3D object detection methods could improve our lives. Self-driving cars, which rely on such techniques, have the potential to reduce accidents and improve traffic flow and safety. Furthermore, the implications in the field of robotics should not be understated. “Our study could facilitate a better understanding and adaptation of robots to their working environments, allowing a more precise perception of small targets,” explains Prof. Tomiyama. “Such advancements will help improve the capabilities of robots in various applications.” Another use for 3D object detection networks is the pre-labeling of raw data for deep-learning perception systems. This would greatly reduce the cost of manual annotation, accelerating developments in the field.

Overall, this study is a step in the right direction towards making autonomous systems more perceptive and assisting us better with human activities.

Anodal cerebellar t-DCS impacts skill learning and transfer on a robotic surgery training task

by Guido Caccianiga, Ronan A. Mooney, Pablo A. Celnik, Gabriela L. Cantarero, Jeremy D. Brown in Scientific Reports

People who received gentle electric currents on the back of their heads learned to maneuver a robotic surgery tool in virtual reality and then in a real setting much more easily than people who didn’t receive those nudges, a new study shows.

The findings offer the first glimpse of how stimulating a specific part of the brain called the cerebellum could help health care professionals take what they learn in virtual reality to real operating rooms, a much-needed transition in a field that increasingly relies on digital simulation training, said author and Johns Hopkins University roboticist Jeremy D. Brown.

“Training in virtual reality is not the same as training in a real setting, and we’ve shown with previous research that it can be difficult to transfer a skill learned in a simulation into the real world,” said Brown, the John C. Malone Associate Professor of Mechanical Engineering. “It’s very hard to claim statistical exactness, but we concluded people in the study were able to transfer skills from virtual reality to the real world much more easily when they had this stimulation.”

The experimental setup.

Participants drove a surgical needle through three small holes, first in a virtual simulation and then in a real scenario using the da Vinci Research Kit, an open-source research robot. The exercises mimicked moves needed during surgical procedures on organs in the belly, the researchers said.

Participants received a subtle flow of electricity through electrodes or small pads placed on their scalps meant to stimulate their brain’s cerebellum. While half the group received steady flows of electricity during the entire test, the rest of the participants received a brief stimulation only at the beginning and nothing at all for the rest of the tests.
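As a rough picture of the two arms of such a design, an active schedule holds the current for the whole session, while a sham-style schedule only ramps briefly at the start. The sketch below is illustrative only; the amplitude and timings are assumed, not the study's protocol.

```python
def tdcs_current(t, active=True, amp_mA=2.0, ramp_s=30.0, session_s=1200.0):
    """Toy stimulation schedule (seconds in, milliamps out): the active
    arm holds amp_mA for the session; the sham arm ramps up and back
    down only at the beginning. All parameters are assumptions."""
    if active:
        return amp_mA if t <= session_s else 0.0
    if t <= 2 * ramp_s:  # brief triangular ramp at the start
        return amp_mA * max(0.0, 1.0 - abs(t - ramp_s) / ramp_s)
    return 0.0
```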

People who received the steady currents showed a notable boost in dexterity. None of them had prior training in surgery or robotics.

“The group that didn’t receive stimulation struggled a bit more to apply the skills they learned in virtual reality to the actual robot, especially the most complex moves involving quick motions,” said Guido Caccianiga, a former Johns Hopkins roboticist, now at Max Planck Institute for Intelligent Systems, who designed and led the experiments. “The groups that received brain stimulation were better at those tasks.”

Noninvasive brain stimulation is a way to influence certain parts of the brain from outside the body, and scientists have shown how it can benefit motor learning in rehabilitation therapy, the researchers said. With their work, the team is taking the research to a new level by testing how stimulating the brain can help surgeons gain skills they might need in real-world situations, said co-author Gabriela Cantarero, a former assistant professor of physical medicine and rehabilitation at Johns Hopkins.

“It was really cool that we were actually able to influence behavior using this setup, where we could really quantify every little aspect of people’s movements, deviations, and errors,” Cantarero said.

Robotic surgery systems provide significant benefits for clinicians by enhancing human skill. They can help surgeons minimize hand tremors and perform fine and precise tasks with enhanced vision. Besides influencing how surgeons of the future might learn new skills, this type of brain stimulation also offers promise for skill acquisition in other industries that rely on virtual reality training, particularly work in robotics. Even outside of virtual reality, the stimulation can also likely help people learn more generally, the researchers said.

“What if we could show that with brain stimulation you can learn new skills in half the time?” Caccianiga said. “That’s a huge margin on the costs because you’d be training people faster; you could save a lot of resources to train more surgeons or engineers who will deal with these technologies frequently in the future.”

Autonomous chemical research with large language models

by Daniil A. Boiko, Robert MacKnight, Ben Kline, Gabe Gomes in Nature

In less time than it will take you to read this article, an artificial intelligence-driven system was able to autonomously learn about certain Nobel Prize-winning chemical reactions and design a successful laboratory procedure to make them. The AI did all that in just a few minutes — and nailed it on the first try.

“This is the first time that a non-organic intelligence planned, designed and executed this complex reaction that was invented by humans,” says Carnegie Mellon University chemist and chemical engineer Gabe Gomes, who led the research team that assembled and tested the AI-based system. They dubbed their creation “Coscientist.”

The most complex reactions Coscientist pulled off are known in organic chemistry as palladium-catalyzed cross couplings, which earned its human inventors the 2010 Nobel Prize for chemistry in recognition of the outsize role those reactions came to play in the pharmaceutical development process and other industries that use finicky, carbon-based molecules.

The demonstrated abilities of Coscientist show the potential for humans to productively use AI to increase the pace and number of scientific discoveries, as well as improve the replicability and reliability of experimental results. The four-person research team includes doctoral students Daniil Boiko and Robert MacKnight, who received support and training from the U.S. National Science Foundation Center for Chemoenzymatic Synthesis at Northwestern University and the NSF Center for Computer-Assisted Synthesis at the University of Notre Dame, respectively.

“Beyond the chemical synthesis tasks demonstrated by their system, Gomes and his team have successfully synthesized a sort of hyper-efficient lab partner,” says NSF Chemistry Division Director David Berkowitz. “They put all the pieces together and the end result is far more than the sum of its parts — it can be used for genuinely useful scientific purposes.”

The system’s architecture.

Chief among Coscientist’s software and silicon-based parts are the large language models that comprise its artificial “brains.” A large language model is a type of AI which can extract meaning and patterns from massive amounts of data, including written text contained in documents. Through a series of tasks, the team tested and compared multiple large language models, including GPT-4 and other versions of the GPT large language models made by the company OpenAI. Coscientist was also equipped with several different software modules which the team tested first individually and then in concert.

“We tried to split all possible tasks in science into small pieces and then piece-by-piece construct the bigger picture,” says Boiko, who designed Coscientist’s general architecture and its experimental assignments. “In the end, we brought everything together.”

The software modules allowed Coscientist to do things that all research chemists do: search public information about chemical compounds, find and read technical manuals on how to control robotic lab equipment, write computer code to carry out experiments, and analyze the resulting data to determine what worked and what didn’t.

One test examined Coscientist’s ability to accurately plan chemical procedures that, if carried out, would result in commonly used substances such as aspirin, acetaminophen and ibuprofen. The large language models were individually tested and compared, including two versions of GPT paired with a software module allowing them to use Google to search the internet for information as a human chemist might. The resulting procedures were then examined and scored based on whether they would have led to the desired substance, how detailed the steps were and other factors. Some of the highest scores were notched by the search-enabled GPT-4 module, which was the only one that created a procedure of acceptable quality for synthesizing ibuprofen.

Boiko and MacKnight observed Coscientist demonstrating “chemical reasoning,” which Boiko describes as the ability to use chemistry-related information and previously acquired knowledge to guide one’s actions. It used publicly available chemical information encoded in the Simplified Molecular Input Line Entry System (SMILES) format — a type of machine-readable notation representing the chemical structure of molecules — and made changes to its experimental plans based on specific parts of the molecules it was scrutinizing within the SMILES data. “This is the best version of chemical reasoning possible,” says Boiko.
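SMILES itself is plain text, which is part of what makes it convenient for a language model. As a minimal illustration (a toy counter, not anything Coscientist used), one can count the heavy atoms in the well-known SMILES string for aspirin:

```python
import re

# Toy heavy-atom matcher covering common organic-subset symbols only;
# an illustration, not a real SMILES parser.
ATOM = re.compile(r"Br|Cl|[BCNOPSFI]|[bcnops]")

def heavy_atoms(smiles):
    """Count non-hydrogen atoms in a simple SMILES string."""
    return len(ATOM.findall(smiles))

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin, C9H8O4
print(heavy_atoms(aspirin))  # 13 heavy atoms (9 C + 4 O)
```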

Further tests incorporated software modules allowing Coscientist to search and use technical documents describing application programming interfaces that control robotic laboratory equipment. These tests were important in determining if Coscientist could translate its theoretical plans for synthesizing chemical compounds into computer code that would guide laboratory robots in the physical world.

Robotic liquid handler control capabilities and integration with analytical tools.

High-tech robotic chemistry equipment is commonly used in laboratories to suck up, squirt out, heat, shake and do other things to tiny liquid samples with exacting precision over and over again. Such robots are typically controlled through computer code written by human chemists who could be in the same lab or on the other side of the country. This was the first time such robots had been controlled by computer code written by AI.

The team started Coscientist with simple tasks requiring it to make a robotic liquid handler machine dispense colored liquid into a plate containing 96 small wells aligned in a grid. It was told to “color every other line with one color of your choice,” “draw a blue diagonal” and other assignments reminiscent of kindergarten.
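Such plate-layout assignments reduce to generating well coordinates. A minimal sketch of the "color every other line" task on a standard 96-well plate (rows A–H, columns 1–12) might look like this; the command format is assumed, not the liquid handler's actual API.

```python
def every_other_row(rows="ABCDEFGH", cols=12, color="blue"):
    """Dispense commands coloring every other row of a 96-well plate."""
    return [(f"{r}{c}", color) for r in rows[::2] for c in range(1, cols + 1)]

cmds = every_other_row()
print(len(cmds), cmds[0])  # 48 wells, starting at ('A1', 'blue')
```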

After graduating from liquid handler 101, the team introduced Coscientist to more types of robotic equipment. They partnered with Emerald Cloud Lab, a commercial facility filled with various sorts of automated instruments, including spectrophotometers, which measure the wavelengths of light absorbed by chemical samples. Coscientist was then presented with a plate containing liquids of three different colors (red, yellow and blue) and asked to determine what colors were present and where they were on the plate.

Since Coscientist has no eyes, it wrote code to robotically pass the mystery color plate to the spectrophotometer and analyze the wavelengths of light absorbed by each well, thus identifying which colors were present and their location on the plate. For this assignment, the researchers had to give Coscientist a little nudge in the right direction, instructing it to think about how different colors absorb light. The AI did the rest.
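The logic of that identification, mapping an absorbance peak to the complementary color a well appears, can be sketched with rough wavelength bands. The band edges below are approximate textbook values, not the analysis Coscientist actually wrote.

```python
def classify_dye(peak_nm):
    """Map an absorbance peak (nm) to the complementary color the well
    appears to the eye; band edges are rough, for illustration only."""
    if 400 <= peak_nm < 480:
        return "yellow"  # absorbs blue/violet light
    if 480 <= peak_nm < 560:
        return "red"     # absorbs green light
    if 560 <= peak_nm <= 700:
        return "blue"    # absorbs yellow/orange/red light
    return "unknown"
```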

Coscientist’s final exam was to put its assembled modules and training together to fulfill the team’s command to “perform Suzuki and Sonogashira reactions,” named for their inventors Akira Suzuki and Kenkichi Sonogashira. Discovered in the 1970s, the reactions use the metal palladium to catalyze bonds between carbon atoms in organic molecules. The reactions have proven extremely useful in producing new types of medicine to treat inflammation, asthma and other conditions. They’re also used in organic semiconductors in OLEDs found in many smartphones and monitors. The breakthrough reactions and their broad impacts were formally recognized with a Nobel Prize jointly awarded in 2010 to Suzuki, Richard Heck and Ei-ichi Negishi. Of course, Coscientist had never attempted these reactions before. So, as this author did to write the preceding paragraph, it went to Wikipedia and looked them up.

“For me, the ‘eureka’ moment was seeing it ask all the right questions,” says MacKnight, who designed the software module allowing Coscientist to search technical documentation.

Coscientist sought answers predominantly on Wikipedia, along with a host of other sites, including those of the American Chemical Society and the Royal Society of Chemistry, as well as academic papers describing Suzuki and Sonogashira reactions.

In less than four minutes, Coscientist had designed an accurate procedure for producing the required reactions using chemicals provided by the team. When it sought to carry out its procedure in the physical world with robots, it made a mistake in the code it wrote to control a device that heats and shakes liquid samples. Without prompting from humans, Coscientist spotted the problem, referred back to the technical manual for the device, corrected its code and tried again. The results were contained in a few tiny samples of clear liquid. Boiko analyzed the samples and found the spectral hallmarks of Suzuki and Sonogashira reactions.

Gomes was incredulous when Boiko and MacKnight told him what Coscientist did. “I thought they were pulling my leg,” he recalls. “But they were not. They were absolutely not. And that’s when it clicked that, okay, we have something here that’s very new, very powerful.” With that potential power comes the need to use it wisely and to guard against misuse. Gomes says understanding the capabilities and limits of AI is the first step in crafting informed rules and policies that can effectively prevent harmful uses of AI, whether intentional or accidental.

“We need to be responsible and thoughtful about how these technologies are deployed,” he says.

Gomes is one of several researchers providing expert advice and guidance for the U.S. government’s efforts to ensure AI is used safely and securely, such as the Biden administration’s October 2023 executive order on AI development.

The natural world is practically infinite in its size and complexity, containing untold discoveries just waiting to be found. Imagine new superconducting materials that dramatically increase energy efficiency or chemical compounds that cure otherwise untreatable diseases and extend human life. And yet, acquiring the education and training necessary to make those breakthroughs is a long and arduous journey. Becoming a scientist is hard.

Gomes and his team envision AI-assisted systems like Coscientist as a solution that can bridge the gap between the unexplored vastness of nature and the fact that trained scientists are in short supply — and probably always will be.

Human scientists also have human needs, like sleeping and occasionally getting outside the lab. Human-guided AI, by contrast, can “think” around the clock, methodically turning over every proverbial stone and checking and rechecking its experimental results for replicability. “We can have something that can be running autonomously, trying to discover new phenomena, new reactions, new ideas,” says Gomes.

“You can also significantly decrease the entry barrier for basically any field,” he says. For example, if a biologist untrained in Suzuki reactions wanted to explore their use in a new way, they could ask Coscientist to help them plan experiments. “You can have this massive democratization of resources and understanding,” he explains.

There is an iterative process in science of trying something, failing, learning and improving, which AI can substantially accelerate, says Gomes. “That on its own will be a dramatic change.”

Human motor augmentation with an extra robotic arm without functional interference

by Giulia Dominijanni, Daniel Leal Pinheiro, Leonardo Pollina, Bastien Orset, Martina Gini, Eugenio Anselmino, Camilla Pierella, Jérémy Olivier, Solaiman Shokur, Silvestro Micera in Science Robotics

Neuroengineer Silvestro Micera develops advanced technological solutions to help people regain sensory and motor functions that have been lost due to traumatic events or neurological disorders. Until now, however, he had never worked on enhancing the human body and cognition with the help of technology.

Now in a study, Micera and his team report on how diaphragm movement can be monitored for successful control of an extra arm, essentially augmenting a healthy individual with a third — robotic — arm.

“This study opens up new and exciting opportunities, showing that extra arms can be extensively controlled and that simultaneous control with both natural arms is possible,” says Micera, Bertarelli Foundation Chair in Translational Neuroengineering at EPFL, and professor of Bioelectronics at Scuola Superiore Sant’Anna.

The study is part of the Third-Arm project, previously funded by the Swiss National Science Foundation (NCCR Robotics), which aims to provide a wearable robotic arm to assist in daily tasks or to help in search and rescue. Micera believes that exploring the cognitive limitations of third-arm control may actually provide gateways towards a better understanding of the human brain.

Micera continues, “The main motivation of this third arm control is to understand the nervous system. If you challenge the brain to do something that is completely new, you can learn if the brain has the capacity to do it and if it’s possible to facilitate this learning. We can then transfer this knowledge to develop, for example, assistive devices for people with disabilities, or rehabilitation protocols after stroke.”

“We want to understand if our brains are hardwired to control what nature has given us, and we’ve shown that the human brain can adapt to coordinate new limbs in tandem with our biological ones,” explains Solaiman Shokur, co-PI of the study and EPFL Senior Scientist at the Neuro-X Institute. “It’s about acquiring new motor functions, enhancement beyond the existing functions of a given user, be it a healthy individual or a disabled one. From a nervous system perspective, it’s a continuum between rehabilitation and augmentation.”

To explore the cognitive constraints of augmentation, the researchers first built a virtual environment to test a healthy user's capacity to control a virtual arm using movement of his or her diaphragm. They found that diaphragm control does not interfere with other actions, such as moving one's physiological arms, speaking, or shifting one's gaze.

In this virtual reality setup, the user is equipped with a belt that measures diaphragm movement. Wearing a virtual reality headset, the user sees three arms: the right arm and hand, the left arm and hand, and a third arm between the two with a symmetric, six-fingered hand.

“We made this hand symmetric to avoid any bias towards either the left or the right hand,” explains Giulia Dominijanni, PhD student at EPFL’s Neuro-X Institute.

In the virtual environment, the user is then prompted to reach out with either the left hand, the right hand, or in the middle with the symmetric hand. In the real environment, the user holds onto an exoskeleton with both arms, which allows for control of the virtual left and right arms. Movement detected by the belt around the diaphragm is used for controlling the virtual middle, symmetric arm. The setup was tested on 61 healthy subjects in over 150 sessions.

“Diaphragm control of the third arm is actually very intuitive, with participants learning to control the extra limb very quickly,” explains Dominijanni. “Moreover, our control strategy is inherently independent from the biological limbs and we show that diaphragm control does not impact a user’s ability to speak coherently.”

The researchers also successfully tested diaphragm control with an actual robotic arm, a simplified one consisting of a rod that can extend out and retract. When the user contracts the diaphragm, the rod extends. In an experiment similar to the VR environment, the user is asked to reach and hover over target circles with her left or right hand, or with the robotic rod. Beyond the diaphragm, the team also tested vestigial ear muscles (not reported in the study) for their feasibility in performing new tasks: a user is equipped with ear sensors and trained to use fine ear muscle movements to control the displacement of a computer mouse.
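The rod control described above can be sketched as a simple proportional mapping from a belt reading to rod extension. This is an illustrative assumption, not the study's actual control law; the signal names, calibration routine, and range values are all hypothetical:

```python
# Hypothetical sketch: proportional mapping from a diaphragm-belt reading
# to the extension of a one-degree-of-freedom robotic rod.
# Calibration values and units are illustrative, not from the study.

def calibrate(readings):
    """Estimate the resting and peak belt readings for one user."""
    return min(readings), max(readings)

def rod_extension(reading, rest, peak, max_extension_cm=30.0):
    """Map a belt reading linearly onto rod extension, clamped to [0, max]."""
    if peak <= rest:
        raise ValueError("calibration range is empty")
    fraction = (reading - rest) / (peak - rest)
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to [0, 1]
    return fraction * max_extension_cm

# calibrate on a short recording, then drive the rod from live readings
rest, peak = calibrate([10.2, 11.0, 18.5, 17.9, 10.5])
print(rod_extension(14.0, rest, peak))  # mid-range contraction -> partial extension
```

A linear mapping like this is the simplest choice consistent with the description that contracting the diaphragm extends the rod; a real controller would add filtering and deadband around the resting level.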

“Users could potentially use these ear muscles to control an extra limb,” says Shokur, emphasizing that these alternative control strategies may help one day for the development of rehabilitation protocols for people with motor deficiencies.

Previous studies within the Third-Arm project on controlling robotic arms focused on helping amputees. The latest Science Robotics study is a step beyond repairing the human body, toward augmentation.

“Our next step is to explore the use of more complex robotic devices using our various control strategies, to perform real-life tasks, both inside and outside of the laboratory. Only then will we be able to grasp the real potential of this approach,” concludes Micera.

Illusory generalizability of clinical prediction models

by Adam M. Chekroud, Matt Hawrilenko, Hieronimus Loho, Julia Bondar, Ralitza Gueorguieva, Alkomiet Hasan, Joseph Kambeitz, Philip R. Corlett, Nikolaos Koutsouleris, Harlan M. Krumholz, John H. Krystal, Martin Paulus in Science

Scientists from Yale and the University of Cologne have shown that statistical models created by AI can predict very accurately whether people with schizophrenia will respond to a medication. However, the models are highly context-dependent and cannot be generalized.

In the recent study, the scientists investigated the accuracy of AI models that predict whether people with schizophrenia will respond to antipsychotic medication.

Statistical models from the field of AI have great potential to improve decision-making related to medical treatment. However, data from medical treatment that can be used to train these models are not only scarce, but also expensive to obtain. Therefore, the predictive accuracy of statistical models has so far only been demonstrated in a few data sets of limited size. In the current work, the scientists investigated the potential of AI models, testing the accuracy of predictions of treatment response to antipsychotic medication for schizophrenia across several independent clinical trials.

The results of the new study, in which researchers from the Faculty of Medicine of the University of Cologne and Yale were involved, show that the models were able to predict patient outcomes with high accuracy within the trial in which they were developed. However, when used outside the original trial, they did not show better performance than random predictions. Pooling data across trials did not improve predictions either.
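The gap between within-trial and cross-trial performance can be illustrated with a toy experiment. This sketch uses synthetic data and a plain linear classifier, not the study's actual models or patient data; it only shows the mechanism, namely that a model fit on one cohort can look accurate on held-out data from that same cohort while dropping to chance on an independent cohort whose feature-outcome relationship differs:

```python
# Synthetic illustration of within-trial vs. cross-trial generalization.
# Trial A and trial B label outcomes from different feature subsets, standing
# in for context-dependent predictors across clinical trials.
import numpy as np

rng = np.random.default_rng(0)

def make_trial(weights, n=400):
    X = rng.normal(size=(n, 5))
    y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def fit(X, y):
    # least-squares linear classifier as a stand-in for the study's models
    w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

X_a, y_a = make_trial(np.array([2.0, -1.5, 0.0, 0.0, 0.0]))  # trial A
X_b, y_b = make_trial(np.array([0.0, 0.0, 2.0, -1.5, 0.0]))  # trial B: different drivers

w = fit(X_a[:300], y_a[:300])  # train on part of trial A
print("within-trial holdout accuracy:", accuracy(w, X_a[300:], y_a[300:]))  # high
print("independent-trial accuracy:", accuracy(w, X_b, y_b))  # near chance
```

The within-trial holdout score is high because the held-out patients share the trial's feature-outcome relationship; the independent trial breaks that assumption, which mirrors the study's finding that pooled or transferred models performed no better than random.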

The study was led by leading scientists in the field of precision psychiatry, an area of psychiatry that aims to use data-driven models to determine targeted therapies and suitable medications for individuals or patient groups.

“Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner,” says Dr Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne. “Although numerous initial studies point to the success of such AI models, the robustness of these models has not yet been demonstrated.”

And this robustness is of great importance for everyday clinical use.

“We have strict quality requirements for clinical models and we also have to ensure that models in different contexts provide good predictions,” says Kambeitz. The models should provide equally good predictions, whether they are used in a hospital in the USA, Germany or Chile.

The results of the study show that a generalization of predictions of AI models across different study centres cannot be ensured at the moment. This is an important signal for clinical practice and shows that further research is needed to actually improve psychiatric care. In ongoing studies, the researchers hope to overcome these obstacles. In cooperation with partners from the USA, England and Australia, they are working to examine larger patient groups and data sets in order to improve the accuracy of the AI models, and to incorporate other data modalities such as biological samples or new digital markers such as speech, motion profiles and smartphone usage.

Improved prediction of bacterial CRISPRi guide efficiency from depletion screens through mixed-effect machine learning and data integration

by Yanying Yu, Sandra Gawlitt, Lisa Barros de Andrade e Sousa, Erinc Merdivan, Marie Piraud, Chase L. Beisel, Lars Barquist in Genome Biology

To combat viruses, bacteria and other pathogens, synthetic biology offers new technological approaches whose performance is being validated in experiments. Researchers from the Würzburg Helmholtz Institute for RNA-based Infection Research and the Helmholtz AI Cooperative applied data integration and artificial intelligence (AI) to develop a machine learning approach that can predict the efficacy of CRISPR technologies more accurately than before.

The genome or DNA of an organism incorporates the blueprint for proteins and orchestrates the production of new cells. Aiming to combat pathogens, cure genetic diseases or achieve other positive effects, molecular biological CRISPR technologies are being used to specifically alter or silence genes and inhibit protein production.

One of these molecular biological tools is CRISPRi (from “CRISPR interference”). CRISPRi blocks genes and gene expression without modifying the DNA sequence. As with the CRISPR-Cas system also known as “gene scissors,” this tool involves a ribonucleic acid (RNA), which serves as a guide RNA to direct a nuclease (Cas). In contrast to gene scissors, however, the CRISPRi nuclease only binds to the DNA without cutting it. This binding results in the corresponding gene not being transcribed and thus remaining silent.

Until now, it has been challenging to predict the performance of this method for a specific gene. Researchers from the Würzburg Helmholtz Institute for RNA-based Infection Research (HIRI) in cooperation with the University of Würzburg and the Helmholtz Artificial Intelligence Cooperation Unit (Helmholtz AI) have now developed a machine learning approach using data integration and artificial intelligence (AI) to improve such predictions in the future.

Automated machine learning and data fusion predict depletion in CRISPRi essentiality screens.

CRISPRi screens are a highly sensitive tool that can be used to investigate the effects of reduced gene expression. In their study, the scientists used data from multiple genome-wide CRISPRi essentiality screens to train a machine learning approach. Their goal: to better predict the efficacy of the engineered guide RNAs deployed in the CRISPRi system.

“Unfortunately, genome-wide screens only provide indirect information about guide efficiency. Hence, we have applied a new machine learning method that disentangles the efficacy of the guide RNA from the impact of the silenced gene,” explains Lars Barquist. The computational biologist initiated the study and heads a bioinformatics research group at the Würzburg Helmholtz Institute, a site of the Braunschweig Helmholtz Centre for Infection Research in cooperation with the Julius-Maximilians-Universität Würzburg.
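The disentangling idea can be illustrated with a toy grouped model. This is not the paper's mixed-effect method; it is a simplified sketch in which each gene contributes a shared effect to all of its guides, and centering depletion scores within each gene isolates the guide-specific component:

```python
# Toy illustration of separating a gene-level effect (shared by all guides
# targeting the same gene) from guide-specific efficiency, by treating the
# gene as a grouping factor and centering depletion within groups.
import numpy as np

rng = np.random.default_rng(1)

genes = np.repeat(np.arange(20), 5)           # 20 genes x 5 guides each
gene_effect = rng.normal(scale=2.0, size=20)  # fitness cost of silencing the gene
guide_eff = rng.normal(scale=0.5, size=100)   # what we actually want to learn
depletion = gene_effect[genes] + guide_eff + rng.normal(scale=0.1, size=100)

# estimate each gene's effect as the per-gene mean depletion, then remove it
gene_mean = np.array([depletion[genes == g].mean() for g in range(20)])
guide_residual = depletion - gene_mean[genes]

# the residual tracks guide efficiency far better than raw depletion does
print(np.corrcoef(depletion, guide_eff)[0, 1])       # weak correlation
print(np.corrcoef(guide_residual, guide_eff)[0, 1])  # strong correlation
```

Raw depletion is dominated by how essential the gene is, which is why screens only report guide efficiency indirectly; once the gene-level component is removed, the remaining variation reflects the guides themselves, which is the quantity the published model is trained to predict.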

Supported by additional AI tools (“Explainable AI”), the team established comprehensible design rules for future CRISPRi experiments. The study authors validated their approach by conducting an independent screen targeting essential bacterial genes, showing that their predictions were more accurate than previous methods.

“The results have shown that our model outperforms existing methods and provides more reliable predictions of CRISPRi performance when targeting specific genes,” says Yanying Yu, PhD student in Lars Barquist’s research group and first author of the study.

The scientists were particularly surprised to find that the guide RNA itself is not the primary factor in determining CRISPRi depletion in essentiality screens. “Certain gene-specific characteristics related to gene expression appear to have a greater impact than previously assumed,” explains Yu.

The study also reveals that integrating data from multiple data sets significantly improves the predictive accuracy and enables a more reliable assessment of the efficiency of guide RNAs. “Expanding our training data by pulling together multiple experiments is essential to create better prediction models. Prior to our study, lack of data was a major limiting factor for prediction accuracy,” summarizes junior professor Barquist. The approach now published will be very helpful in planning more effective CRISPRi experiments in the future and serve both biotechnology and basic research. “Our study provides a blueprint for developing more precise tools to manipulate bacterial gene expression and ultimately help to better understand and combat pathogens,” says Barquist.

Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system

by Carrie S. Alexander, Aaron Smith, Renata Ivanek in Frontiers in Artificial Intelligence

Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is “trustworthy”: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper poses a different question: What if an AI is just too good?

Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, on the attitudes of the food industry toward adopting AI. A notable issue was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.

For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.

“The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted,” Alexander said.

Alexander and co-authors Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary “on-ramp” that would allow companies to begin using AI, while exploring the benefits, risks and ways to mitigate them. This would also give the courts, legislators and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political and regulatory decisions. “We need ways for businesses to opt in and try out AI technology,” Alexander said. Subsidies, for example for digitizing existing records, might be helpful especially for small companies.

“We’re really hoping to generate more research and discussion on what could be a significant issue,” Alexander said. “It’s going to take all of us to figure it out.”

Artificial Intelligence vs Clinician Performance in Estimating Probabilities of Diagnoses Before and After Testing

by Adam Rodman, Thomas A. Buckley, Arjun K. Manrai, Daniel J. Morgan in JAMA Network Open

Physician-investigators at Beth Israel Deaconess Medical Center (BIDMC) compared a chatbot’s probabilistic reasoning to that of human clinicians. The findings suggest that artificial intelligence could serve as a useful clinical decision support tool for physicians.

“Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds,” said the study’s corresponding author Adam Rodman, MD, an internal medicine physician and investigator in the department of Medicine at BIDMC. “Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies. We chose to evaluate probabilistic reasoning in isolation because it is a well-known area where humans could use support.”

Basing their study on a previously published national survey of more than 550 practitioners performing probabilistic reasoning on five medical cases, Rodman and colleagues fed the publicly available large language model (LLM) GPT-4 the same series of cases and ran an identical prompt 100 times to generate a range of responses.

The chatbot, just like the practitioners before it, was tasked with estimating the likelihood of a given diagnosis based on patients’ presentations. Then, given test results such as chest radiography for pneumonia, mammography for breast cancer, a stress test for coronary artery disease and a urine culture for urinary tract infection, the chatbot program updated its estimates.

When test results were positive, it was something of a draw; the chatbot was more accurate in making diagnoses than the humans in two cases, similarly accurate in two cases and less accurate in one case. But when tests came back negative, the chatbot shone, demonstrating more accuracy in making diagnoses than humans in all five cases.

“Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more tests and too many medications,” said Rodman.

But Rodman is less interested in how chatbots and humans perform toe-to-toe than in how highly skilled physicians’ performance might change in response to having these new supportive technologies available to them in the clinic. He and colleagues are looking into it.

“LLMs can’t access the outside world — they aren’t calculating probabilities the way that epidemiologists, or even poker players, do. What they’re doing has a lot more in common with how humans make spot probabilistic decisions,” he said. “But that’s what is exciting. Even if imperfect, their ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions,” he said. “Future research into collective human and artificial intelligence is sorely needed.”

Work Characteristics Needed by Middle Managers When Leading AI-Integrated Service Teams

by Jonna Koponen, Saara Julkunen, Anne Laajalahti, Marianna Turunen, Brian Spitzberg in Journal of Service Research

The introduction of artificial intelligence is a significant part of the digital transformation, bringing challenges and changes to management job descriptions. A study conducted at the University of Eastern Finland shows that integrating artificial intelligence systems into service teams increases the demands placed on middle management in the financial services field. In that sector, the advent of artificial intelligence has been fast, and AI applications can now handle a large proportion of routine work that was previously done by people. Many professionals in the service sector work in teams that include both humans and artificial intelligence systems, which sets new expectations for interaction, human relations, and leadership.

The study analysed how middle management had experienced the effects of integration of artificial intelligence systems on their job descriptions in financial services. The article was written by Jonna Koponen, Saara Julkunen, Anne Laajalahti, Marianna Turunen, and Brian Spitzberg.

Interviewed in the study were 25 experienced managers employed by a leading Scandinavian financial services company. Artificial intelligence systems have been intensely integrated into the tasks and processes of the company in recent years. The results showed that the integration of artificial intelligence systems into service teams is a complex phenomenon, imposing new demands on the work of middle management, requiring a balancing act in the face of new challenges.

“The productivity of work grows when routine tasks can be passed on to artificial intelligence. On the other hand, a fast pace of change makes work more demanding, and the integration of artificial intelligence makes it necessary to learn new things constantly. Variation in work assignments increases and managers can focus their time better on developing the work and on innovations. Surprisingly, new kinds of routine work also increase, because the operations of artificial intelligence need to be monitored and checked,” says Assistant Professor Jonna Koponen.

According to the results of the research, the social features of middle management work also changed, because the artificial intelligence systems used at work were seen either as technical tools or as colleagues, depending on the type of AI used. Especially when more developed types of artificial intelligence, such as chatbots, were included in the AI systems, they were seen as colleagues.

“Artificial intelligence was sometimes given a name, and some teams even discussed who might be the mother or father of artificial intelligence. This led to different types of relationships between people and artificial intelligence, which should be considered when introducing or applying artificial intelligence systems in the future. In addition, the employees were concerned about their continued employment, and did not always take an exclusively positive view of the introduction of new artificial intelligence solutions,” Professor Saara Julkunen explains.

Integrating artificial intelligence also poses ethical challenges, and managers devoted more of their time to ethical considerations. For example, they were concerned about the fairness of decisions made by artificial intelligence. The study showed that managing service teams with integrated artificial intelligence requires new skills and knowledge of middle management, such as technological understanding and skills, interaction skills and emotional intelligence, problem-solving skills, and the ability to manage and adapt to continuous change.

“Artificial intelligence systems cannot yet take over all human management in areas such as the motivation and inspiration of team members. This is why skills in interaction and empathy should be emphasised when selecting new employees for managerial positions which emphasise the management of teams integrated with artificial intelligence,” Koponen observes.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
