RT/ ‘Brainless’ robot can navigate complex obstacles

Paradigm · 28 min read · Sep 22, 2023

Robotics biweekly vol.82, 8th September — 22nd September

TL;DR

  • Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a ‘brainless’ soft robot that can navigate more complex and dynamic environments.
  • The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but researchers have now harnessed the biological concept for application in artificial intelligence to develop the first artificial, multisensory integrated neuron.
  • A research team has developed groundbreaking ‘soft valve’ technology — an all-in-one solution that integrates sensors and control valves while maintaining complete softness.
  • Researchers have developed small robotic devices that can change how they move through the air by ‘snapping’ into a folded position during their descent. Each device has an onboard battery-free actuator, a solar power-harvesting circuit and controller to trigger these shape changes in mid-air.
  • A team of scientists developed a new machine learning model for discovering critical-element-free permanent magnet materials based on the predicted Curie temperature of new material combinations.
  • Many of today’s artificial intelligence systems loosely mimic the human brain. In a new paper, researchers suggest that another branch of biology — ecology — could inspire a whole new generation of AI to be more powerful, resilient, and socially responsible.
  • An artificial intelligence with the ability to look inward and fine tune its own neural network performs better when it chooses diversity over lack of diversity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.
  • Even when humans see AI-based assistants purely as tools, they ascribe partial responsibility for decisions to them, as a new study shows.
  • Investigators found that ChatGPT was about 72 percent accurate in overall clinical decision making, from coming up with possible diagnoses to making final diagnoses and care management decisions.
  • A machine learning model provides a quick method for determining the composition of solid chemical mixtures using only photographs of the sample.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista
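
As a quick check on the arithmetic behind such projections, a compound annual growth rate simply compounds a starting value forward. The sketch below is purely illustrative: it assumes the quoted ~26 percent rate applies across the whole 2018 to 2025 window shown in the chart and back-solves the implied 2018 baseline; the actual Statista series may differ.

    # Illustrative CAGR arithmetic only; not Statista's underlying data.
    def project(value, cagr, years):
        """Compound a starting value forward at a fixed annual growth rate."""
        return value * (1 + cagr) ** years

    cagr = 0.26          # ~26% compound annual growth rate (from the text)
    target_2025 = 210.0  # just under 210 billion USD by 2025 (from the text)
    years = 2025 - 2018  # assumed projection window matching the chart

    # Implied 2018 baseline if the same rate held over the whole window.
    implied_2018 = target_2025 / (1 + cagr) ** years
    print(f"Implied 2018 market size: ~{implied_2018:.0f} billion USD")
    print(f"Check: {project(implied_2018, cagr, years):.0f} billion USD in 2025")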

Latest News & Research

Physically intelligent autonomous soft robotic maze escaper

by Yao Zhao, Yaoye Hong, Yanbin Li, Fangjie Qi, Haitao Qing, Hao Su, Jie Yin in Science Advances

Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a “brainless” soft robot that can navigate more complex and dynamic environments.

“In our earlier work, we demonstrated that our soft robot was able to twist and turn its way through a very simple obstacle course,” says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University. “However, it was unable to turn unless it encountered an obstacle. In practical terms this meant that the robot could sometimes get stuck, bouncing back and forth between parallel obstacles.

“We’ve developed a new soft robot that is capable of turning on its own, allowing it to make its way through twisty mazes, even negotiating its way around moving obstacles. And it’s all done using physical intelligence, rather than being guided by a computer.”

Self-escaping performances of the twisted, helical, and hybrid twist-helical LCE ribbons from a simple parallel confined space on a hot surface.

Physical intelligence refers to dynamic objects — like soft robots — whose behavior is governed by their structural design and the materials they are made of, rather than being directed by a computer or human intervention.

As with the earlier version, the new soft robots are made of ribbon-like liquid crystal elastomers. When the robots are placed on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion; the warmer the surface, the faster the robot rolls.

However, while the previous version of the soft robot had a symmetrical design, the new robot has two distinct halves. One half of the robot is shaped like a twisted ribbon that extends in a straight line, while the other half is shaped like a more tightly twisted ribbon that also twists around itself like a spiral staircase.

This asymmetrical design means that one end of the robot exerts more force on the ground than the other end. Think of a plastic cup with a mouth wider than its base: roll it across a table and it won’t travel in a straight line; it traces an arc instead. That’s due to its asymmetrical shape.
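
The same intuition can be captured with a toy differential-rolling model (an analogy for intuition, not the kinematics worked out in the paper): if the two ends of a rolling body advance at different speeds, the body pivots about a point offset toward the slower side, so its midpoint traces an arc.

    # Toy differential-rolling analogy: two ends advancing at different speeds
    # trace an arc, like a tapered cup rolled on its side.
    def turning_radius(v_fast, v_slow, width):
        """Arc radius traced by the midpoint of a body whose two ends advance
        at v_fast and v_slow while separated by `width` (same length units)."""
        if v_fast == v_slow:
            return float("inf")  # equal speeds -> straight-line rolling
        return width * (v_fast + v_slow) / (2 * (v_fast - v_slow))

    # Hypothetical numbers, purely for illustration.
    print(turning_radius(v_fast=2.0, v_slow=1.0, width=0.05))  # 0.075 m arc
    print(turning_radius(v_fast=1.0, v_slow=1.0, width=0.05))  # inf (straight)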

“The concept behind our new robot is fairly simple: because of its asymmetrical design, it turns without having to come into contact with an object,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “So, while it still changes directions when it does come into contact with an object — allowing it to navigate mazes — it cannot get stuck between parallel objects. Instead, its ability to move in arcs allows it to essentially wiggle its way free.”

The researchers demonstrated the ability of the asymmetrical soft robot design to navigate more complex mazes — including mazes with moving walls — and fit through spaces narrower than its body size. The researchers tested the new robot design on both a metal surface and in sand.

“This work is another step forward in helping us develop innovative approaches to soft robot design — particularly for applications where soft robots would be able to harvest heat energy from their environment,” Yin says.

A bio-inspired visuotactile neuron for multisensory integration

by Muhtasim Ul Karim Sadaf, Najam U Sakib, Andrew Pannone, Harikrishnan Ravichandran, Saptarshi Das in Nature Communications

The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms that wood is burning. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but Penn State researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron. Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work in Nature Communications.

“Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

Multisensory integration.

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

“This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It’s the equivalent of seeing an “on” light on the stove and feeling heat coming off of a burner — seeing the light on doesn’t necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand’s response. In this case, the researchers measured the artificial neuron’s version of this by tracking the signaling outputs that resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimulus was encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor — or a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it. They found that the sensory response of the neuron — simulated as electrical output — increased when both visual and tactile signals were weak.

“Interestingly, this effect resonates remarkably well with its biological counterpart — a visual memory naturally enhances the sensitivity to tactile stimulus,” said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. “When cues are weak, you need to combine them to better understand the information, and that’s what we saw in the results.”
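
That super-additive behavior can be illustrated with a toy saturating neuron (a conceptual sketch only, not the physics of the molybdenum disulfide device): near threshold, the response to two weak cues presented together exceeds the sum of the responses to each cue alone.

    import math

    def response(stimulus, threshold=1.0, gain=5.0):
        """Toy sigmoidal neuron response to a combined stimulus level."""
        return 1.0 / (1.0 + math.exp(-gain * (stimulus - threshold)))

    weak_visual, weak_tactile = 0.4, 0.4   # hypothetical weak cue strengths

    unimodal_sum = response(weak_visual) + response(weak_tactile)
    bimodal = response(weak_visual + weak_tactile)

    print(f"sum of unimodal responses: {unimodal_sum:.3f}")  # ~0.095
    print(f"bimodal response:          {bimodal:.3f}")        # ~0.269
    # For weak cues the combined response exceeds the sum of the parts,
    # i.e. the integration is super-additive, as reported for the device.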

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

“The super additive summation of weak visual and tactile cues is the key accomplishment of our research,” said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. “For this work, we only looked into two senses. We’re working to identify the proper scenario to incorporate more senses and see what benefits they may offer.”

A soft, self-sensing tensile valve for perceptive soft robots

by Jun Kyu Choe, Junsoo Kim, Hyeonseo Song, Joonbum Bae, Jiyun Kim in Nature Communications

Soft inflatable robots have emerged as a promising paradigm for applications that require inherent safety and adaptability. However, integrating sensing and control systems into these robots without compromising their softness, form factor, or capabilities has posed significant challenges. Addressing this obstacle, a research team jointly led by Professor Jiyun Kim (Department of New Material Engineering, UNIST) and Professor Joonbum Bae (Department of Mechanical Engineering, UNIST) has developed groundbreaking “soft valve” technology — an all-in-one solution that integrates sensors and control valves while maintaining complete softness.

Traditionally, soft robot bodies coexisted with rigid electronic components for perception purposes. The study conducted by this research team introduces a novel approach to overcome this limitation by creating soft analogs of sensors and control valves that operate without electricity. The resulting tube-shaped part serves dual functions: detecting external stimuli and precisely controlling driving motion using only air pressure. By eliminating the need for electricity-dependent components, these all-soft valves enable safe operation underwater or in environments where sparks may pose risks — while simultaneously reducing weight burdens on robotic systems. Moreover, each component is inexpensive at approximately 800 Won.

“Previous soft robots had flexible bodies but relied on hard electronic parts for stimulus detection sensors and drive control units,” explained Professor Kim. “Our study focuses on making both sensors and drive control parts using soft materials.”

Soft self-sensing tensile valve (STV) transducing strain into manageable proportional output pressures.

The research team showcased various applications utilizing this groundbreaking technology. They created universal tongs capable of delicately picking up fragile items such as potato chips — preventing breakage caused by excessive force exerted by conventional rigid robot hands. Additionally, they successfully employed these all-soft components to develop wearable elbow assist robots designed to reduce muscle burden caused by repetitive tasks or strenuous activities involving arm movements. The elbow support automatically adjusts according to the angle at which an individual’s arm is bent — a breakthrough contributing to a 63% average decrease in the force exerted on the elbow when wearing the robot.

The soft valve operates by utilizing air flow within a tube-shaped structure. When tension is applied to one end of the tube, a helically wound thread inside compresses it, controlling inflow and outflow of air. This accordion-like motion allows for precise and flexible movements without relying on electrical power. Furthermore, the research team confirmed that by programming different structures or numbers of threads within the tube, they could accurately control airflow variations. This programmability enables customized adjustments to suit specific situations and requirements — providing flexibility in driving unit response even with consistent external forces applied to the end of the tube.
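
One rough way to picture this programmability is as a strain-to-pressure transfer function whose gain and saturation point are set by the thread geometry. The sketch below is a hypothetical linear-with-saturation model for intuition only, not the calibration reported in the paper.

    def valve_output_pressure(strain, supply_kpa, gain, max_strain):
        """Toy soft-valve model: output pressure rises with applied strain until
        the helical thread fully throttles the tube. `gain` and `max_strain`
        stand in for the thread count/geometry that the authors program into
        the tube (hypothetical parameters)."""
        strain = max(0.0, min(strain, max_strain))
        return supply_kpa * min(1.0, gain * strain / max_strain)

    # Two hypothetical thread "programs" with different gains.
    for gain in (1.0, 2.0):
        curve = [round(valve_output_pressure(s / 10, 100.0, gain, 0.5), 1)
                 for s in range(6)]
        print(f"gain={gain}: {curve}")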

“These newly developed components can be easily employed using material programming alone, eliminating electronic devices,” expressed Professor Bae with excitement about this development. “This breakthrough will significantly contribute to advancements in various wearable systems.”

This groundbreaking soft valve technology marks a significant step toward fully soft, electronics-free robots capable of autonomous operation — a crucial milestone for enhancing safety and adaptability across numerous industries.

Solar-powered shape-changing origami microfliers

by Kyle Johnson, Vicente Arroyos, Amélie Ferran, Raul Villanueva, Dennis Yin, Tilboon Elberier, Alberto Aliseda, Sawyer Fuller, Vikram Iyer, Shyamnath Gollakota in Science Robotics

Researchers at the University of Washington have developed small robotic devices that can change how they move through the air by “snapping” into a folded position during their descent.

When these “microfliers” are dropped from a drone, they use a Miura-ori origami fold to switch from tumbling and dispersing outward through the air to dropping straight to the ground. To spread out the fliers, the researchers control the timing of each device’s transition using a few methods: an onboard pressure sensor (estimating altitude), an onboard timer or a Bluetooth signal.

Microfliers weigh about 400 milligrams — about half as heavy as a nail — and can travel the distance of a football field when dropped from 40 meters (about 131 feet) in a light breeze. Each device has an onboard battery-free actuator, a solar power-harvesting circuit and controller to trigger these shape changes in mid-air. Microfliers also have the capacity to carry onboard sensors to survey temperature, humidity and other conditions while soaring.
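
The descent control amounts to a simple trigger: fold when the estimated altitude drops below a target, or when a backup timer or a radio command fires first. The sketch below is not the actual firmware; it only illustrates that logic, estimating altitude from the onboard pressure sensor with the standard barometric formula, and every threshold is a made-up illustration value.

    def altitude_m(pressure_pa, sea_level_pa=101_325.0):
        """Standard barometric approximation of altitude from air pressure."""
        return 44_330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

    def should_fold(pressure_pa, elapsed_s, ble_command,
                    target_altitude_m=20.0, timeout_s=15.0):
        """Trigger the Miura-ori fold on whichever condition fires first:
        altitude threshold, backup timer, or Bluetooth command."""
        return (altitude_m(pressure_pa) <= target_altitude_m
                or elapsed_s >= timeout_s
                or ble_command)

    # Example: ~35 m estimated altitude, still above the 20 m target -> False.
    print(should_fold(pressure_pa=100_900.0, elapsed_s=3.0, ble_command=False))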

“Using origami opens up a new design space for microfliers,” said co-senior author Vikram Iyer, UW assistant professor in the Paul G. Allen School of Computer Science & Engineering. “We combine the Miura-ori fold, which is inspired by geometric patterns found in leaves, with power harvesting and tiny actuators to allow our fliers to mimic the flight of different leaf types in mid-air. In its unfolded flat state, our origami structure tumbles chaotically in the wind, similar to an elm leaf. But switching to the folded state changes the airflow around it and enables a stable descent, similarly to how a maple leaf falls. This highly energy efficient method allows us to have battery-free control over microflier descent, which was not possible before.”

The circuits are assembled and patterned directly onto the flexible material that makes up the microfliers, as shown here. Credit: Mark Stone/University of Washington

These robotic systems overcome several design challenges. The devices:

  • are stiff enough to avoid accidentally transitioning to the folded state before the signal.
  • transition between states rapidly. The devices’ onboard actuators need only about 25 milliseconds to initiate the folding.
  • change shape while untethered from a power source. The microfliers’ power-harvesting circuit uses sunlight to provide energy to the actuator.

The current microfliers can only transition in one direction — from the tumbling state to the falling state. This switch allows researchers to control the descent of multiple microfliers at the same time, so they disperse in different directions on their way down. Future devices will be able to transition in both directions, the researchers said. This added functionality will allow for more precise landings in turbulent wind conditions.

Physics-Informed Machine-Learning Prediction of Curie Temperatures and Its Promise for Guiding the Discovery of Functional Magnetic Materials

by Prashant Singh, Tyler Del Rose, Andriy Palasyuk, Yaroslav Mudryk in Chemistry of Materials

A team of scientists from Ames National Laboratory developed a new machine learning model for discovering critical-element-free permanent magnet materials. The model predicts the Curie temperature of new material combinations. It is an important first step in using artificial intelligence to predict new permanent magnet materials. This model adds to the team’s recently developed capability for discovering thermodynamically stable rare earth materials.

High performance magnets are essential for technologies such as wind energy, data storage, electric vehicles, and magnetic refrigeration. These magnets contain critical materials such as cobalt and rare earth elements like neodymium and dysprosium. These materials are in high demand but have limited availability, which is motivating researchers to find ways to design new magnetic materials that use fewer critical materials.

Machine learning (ML) is a form of artificial intelligence driven by computer algorithms that use data and trial and error to continually improve their predictions. The team used experimental data on Curie temperatures and theoretical modeling to train the ML algorithm. The Curie temperature is the maximum temperature at which a material maintains its magnetism.

“Finding compounds with the high Curie temperature is an important first step in the discovery of materials that can sustain magnetic properties at elevated temperatures,” said Yaroslav Mudryk, a scientist at Ames Lab and senior leader of the research team. “This aspect is critical for the design of not only permanent magnets but other functional magnetic materials.”

According to Mudryk, discovering new materials is a challenging activity because the search is traditionally based on experimentation, which is expensive and time-consuming. However, using a ML method can save time and resources.

Prashant Singh, a scientist at Ames Lab and member of the research team, explained that a major part of this effort was to develop an ML model using fundamental science. The team trained their ML model using experimentally known magnetic materials. The information about these materials establishes a relationship between several electronic and atomic structure features and Curie temperature. These patterns give the computer a basis for finding potential candidate materials.
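
As an illustration of this kind of model, the sketch below sets up a generic supervised regressor over composition-derived descriptors. It is not the authors' physics-informed pipeline, and the feature matrix and target values here are random placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Placeholder data: each row is a known magnetic compound, columns are
    # electronic/atomic-structure descriptors (e.g. valence-electron count).
    rng = np.random.default_rng(0)
    X = rng.random((200, 6))
    y = 300 + 900 * X[:, 0] + 50 * rng.standard_normal(200)  # fake Curie temps (K)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("cross-validated R^2:", round(float(scores.mean()), 3))

    model.fit(X, y)
    candidate = rng.random((1, 6))  # descriptors of a hypothetical Ce-Zr-Fe alloy
    print("predicted Curie temperature (K):",
          round(float(model.predict(candidate)[0]), 1))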

To test the model, the team used compounds based on cerium, zirconium, and iron. This idea was proposed by Andriy Palasyuk, a scientist at Ames Lab and member of the research team. He wanted to focus on unknown magnet materials based on earth-abundant elements. “The next super magnet must not only be superb in performance, but also rely on abundant domestic components,” said Palasyuk.

Palasyuk worked with Tyler Del Rose, another scientist at Ames Lab and member of the research team, to synthesize and characterize the alloys. They found that the ML model was successful in predicting the Curie temperature of material candidates. This success is an important first step in creating a high-throughput way of designing new permanent magnets for future technological applications.

A synergistic future for AI and ecology

by Barbara A. Han, Kush R. Varshney, Shannon LaDeau, Ajit Subramaniam, Kathleen C. Weathers, Jacob Zwart in Proceedings of the National Academy of Sciences

Many of today’s artificial intelligence systems loosely mimic the human brain. In a new paper, researchers suggest that another branch of biology — ecology — could inspire a whole new generation of AI to be more powerful, resilient, and socially responsible.

The paper argues for a synergy between AI and ecology that could both strengthen AI and help to solve complex global challenges, such as disease outbreaks, loss of biodiversity, and climate change impacts. The idea arose from the observation that AI can be shockingly good at certain tasks, but still far from useful at others — and that AI development is hitting walls that ecological principles could help it to overcome.

“The kinds of problems that we deal with regularly in ecology are not only challenges that AI could benefit from in terms of pure innovation — they’re also the kinds of problems where if AI could help, it could mean so much for the global good,” explained Barbara Han, a disease ecologist at Cary Institute of Ecosystem Studies, who co-led the paper along with IBM Research’s Kush Varshney. “It could really benefit humankind.”

Ecologists — Han included — are already using artificial intelligence to search for patterns in large data sets and to make more accurate predictions, such as whether new viruses might be capable of infecting humans, and which animals are most likely to harbor those viruses. However, the new paper argues that there are many more possibilities for applying AI in ecology, such as in synthesizing big data and finding missing links in complex systems.

Scientists typically try to understand the world by comparing two variables at a time — for example, how does population density affect the number of cases of an infectious disease? The problem is that, like most complex ecological systems, predicting disease transmission depends on many variables, not just one, explained co-author Shannon LaDeau, a disease ecologist at Cary Institute. Ecologists don’t always know what all of those variables are, they’re limited to the ones that can be easily measured (as opposed to social and cultural factors, for example), and it’s hard to capture how those different variables interact.

“Compared to other statistical models, AI can incorporate greater amounts of data and a diversity of data sources, and that might help us discover new interactions and drivers that we may not have thought were important,” said LaDeau. “There is a lot of promise for developing AI to better capture more types of data, like the socio-cultural insights that are really hard to boil down to a number.”

In helping to uncover these complex relationships and emergent properties, artificial intelligence could generate unique hypotheses to test and open up whole new lines of ecological research, said LaDeau.

The number of papers per year for the query on Web of Science: ((TS=(“artificial intelligence” OR “machine learning”))) AND WC=(Ecology OR “environmental sciences”).

Artificial intelligence systems are notoriously fragile, with potentially devastating consequences, such as misdiagnosing cancer or causing a car crash. The incredible resilience of ecological systems could inspire more robust and adaptable AI architectures, the authors argue. In particular, Varshney said that ecological knowledge could help to solve the problem of mode collapse in artificial neural networks, the AI systems that often power speech recognition, computer vision, and more.

“Mode collapse is when you’re training an artificial neural network on something, and then you train it on something else and it forgets the first thing that it was trained on,” he explained. “By better understanding why mode collapse does or doesn’t happen in natural systems, we may learn how to make it not happen in AI.”

Inspired by ecological systems, a more robust AI might include feedback loops, redundant pathways, and decision-making frameworks. These flexibility upgrades could also contribute to a more ‘general intelligence’ for AIs that could enable reasoning and connection-making beyond the specific data that the algorithm was trained on.

Ecology could also help to reveal why AI-driven large language models, which power popular chatbots such as ChatGPT, show emergent behaviors that are not present in smaller language models. These behaviors include ‘hallucinations’ — when an AI generates false information. Because ecology examines complex systems at multiple levels and in holistic ways, it is good at capturing emergent properties such as these and can help to reveal the mechanisms behind such behaviors. Furthermore, the future evolution of artificial intelligence depends on fresh ideas. The CEO of OpenAI, the creators of ChatGPT, has said that further progress will not come from simply making models bigger.

“There will have to be other inspirations, and ecology offers one pathway for new lines of thinking,” said Varshney.

While ecology and artificial intelligence have been advancing in similar directions independently, the researchers say that closer and more deliberate collaboration could yield not-yet-imagined advances in both fields. Resilience offers a compelling example for how both fields could benefit by working together. For ecology, AI advancements in measuring, modeling, and predicting natural resilience could help us to prepare for and respond to climate change. For AI, a clearer understanding of how ecological resilience works could inspire more resilient AIs that are then even better at modeling and investigating ecological resilience, representing a positive feedback loop.

Closer collaboration also promises to promote greater social responsibility in both fields. Ecologists are working to incorporate diverse ways of understanding the world from Indigenous and other traditional knowledge systems, and artificial intelligence could help to merge these different ways of thinking. Finding ways to integrate different types of data could help to improve our understanding of socio-ecological systems, de-colonize the field of ecology, and correct biases in AI systems.

“AI models are built on existing data, and are trained and retrained when they go back to the existing data,” said co-author Kathleen Weathers, a Cary Institute ecosystem scientist. “When we have data gaps that exclude women over 60, people of color, or traditional ways of knowing, we are creating models with blindspots that can perpetuate injustices.”

Achieving convergence between AI and ecology research will require building bridges between these two siloed disciplines, which currently use different vocabularies, operate within different scientific cultures, and have different funding sources. The new paper is just the beginning of this process.

Neuronal diversity can improve machine learning for physics and beyond

by Anshul Choudhary, Anil Radhakrishnan, John F. Lindner, Sudeshna Sinha, William L. Ditto in Scientific Reports

An artificial intelligence with the ability to look inward and fine tune its own neural network performs better when it chooses diversity over lack of diversity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.

“We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.

Schematic of the stochastic gradient descent meta-learning nested loops. Neural-network weights and biases θ adjust to lower losses L(θ, θs) during an inner loop, while periodically the sub-network weights θs open extra dimensions and themselves adjust to allow even lower losses during an outer loop. Rainbow colors code time t.

Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network. Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.
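
A heavily simplified sketch of the idea (not the NAIL group's actual implementation): give each neuron a learnable mixture over several activation "types" and adjust those mixtures in a slower outer loop while the ordinary weights train in the inner loop. Everything below, including the toy task, is made up for illustration.

    import torch
    import torch.nn as nn

    class DiverseLayer(nn.Module):
        """Linear layer whose neurons each learn a mixture of activation types
        (tanh, ReLU, sine): a crude stand-in for neuronal diversity."""
        def __init__(self, n_in, n_out):
            super().__init__()
            self.linear = nn.Linear(n_in, n_out)
            # One mixture logit per neuron per activation type (meta-parameters).
            self.mix_logits = nn.Parameter(torch.zeros(n_out, 3))

        def forward(self, x):
            z = self.linear(x)
            acts = torch.stack([torch.tanh(z), torch.relu(z), torch.sin(z)], dim=-1)
            mix = torch.softmax(self.mix_logits, dim=-1)  # per-neuron diversity
            return (acts * mix).sum(dim=-1)

    # Toy regression task: inner loop trains weights, slower outer loop adjusts
    # the activation mixtures (a crude nested meta-learning scheme).
    torch.manual_seed(0)
    x = torch.linspace(-2, 2, 256).unsqueeze(1)
    y = torch.sin(3 * x)

    net = nn.Sequential(DiverseLayer(1, 32), DiverseLayer(32, 1))
    weights = [p for n, p in net.named_parameters() if "mix_logits" not in n]
    mixes = [p for n, p in net.named_parameters() if "mix_logits" in n]
    inner_opt = torch.optim.Adam(weights, lr=1e-2)
    outer_opt = torch.optim.Adam(mixes, lr=1e-2)

    for step in range(2000):
        loss = nn.functional.mse_loss(net(x), y)
        inner_opt.zero_grad(); outer_opt.zero_grad()
        loss.backward()
        inner_opt.step()
        if step % 10 == 0:  # the outer loop runs less often than the inner loop
            outer_opt.step()
    print("final loss:", float(loss))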

“Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.

“Our AI could also decide between diverse or homogenous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”

The team tested the AI’s accuracy by asking it to perform a standard numerical classifying exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogenous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy. According to Ditto, the diversity-based AI is up to 10 times more accurate than conventional AI in solving more complicated problems, such as predicting a pendulum’s swing or the motion of galaxies.

“We have shown that if you give an AI the ability to look inward and learn how it learns it will change its internal structure — the structure of its artificial neurons — to embrace diversity and improve its ability to learn and solve problems efficiently and more accurately,” Ditto says. “Indeed, we also observed that as the problems become more complex and chaotic the performance improves even more dramatically over an AI that does not embrace diversity.”

Intelligence brings responsibility — Even smart AI assistants are held responsible

by Louis Longin, Bahador Bahrami, Ophelia Deroy in iScience

Even when humans see AI-based assistants purely as tools, they ascribe partial responsibility for decisions to them, as a new study shows.

Future AI-based systems may navigate autonomous vehicles through traffic with no human input. Research has shown that people judge such futuristic AI systems to be just as responsible as humans when they make autonomous traffic decisions. However, real-life AI assistants are far removed from this kind of autonomy. They provide human users with supportive information such as navigation and driving aids. So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? A team led by Louis Longin from the Chair of Philosophy of Mind has now investigated how people assess responsibility in these cases.

“We all have smart assistants in our pockets,” says Longin. “Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver’s seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential.”

Experimental design and expectations.

A philosopher specialized in the interaction between humans and AI, Longin, working in collaboration with his colleague Dr. Bahador Bahrami and Prof. Ophelia Deroy, Chair of Philosophy of Mind, investigated how 940 participants judged a human driver using either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. Participants also indicated whether they saw the navigation aid as responsible, and to what degree they considered it a tool.

The results reveal an ambivalence: Participants strongly asserted that smart assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them. No such division of responsibility occurred for the non-AI powered instrument.

No less surprising for the authors was that the smart assistants were also considered more responsible for positive rather than negative outcomes. “People might apply different moral standards for praise and blame. When a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems,” suggests Dr. Bahrami, who is an expert on collective responsibility.

In the study, the authors found no difference between smart assistants that used language and those that alerted their users with a tactile vibration of the wheel. “The two provided the same information in this case, ‘Hey, careful, something ahead,’ but of course, ChatGPT in practice gives much more information,” says Ophelia Deroy, whose research examines our conflicting attitudes toward artificial intelligence as a form of animist beliefs. In relation to the additional information provided by novel language-based AI systems like ChatGPT, Deroy adds: “The richer the interaction, the easier it is to anthropomorphize.”

“In sum, our findings support the idea that AI assistants are seen as something more than mere recommendation tools but remain nonetheless far from human standards,” says Longin.

The authors believe that the findings of the new study will have a far-reaching impact on the design and social discourse around AI assistants: “Organizations that develop and release smart assistants should think about how social and moral norms are affected,” Longin concludes.

Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study

by Arya Rao, Michael Pang, John Kim, Meghana Kamineni, Winston Lie, Anoop K Prasad, Adam Landman, Keith Dreyer, Marc D Succi in Journal of Medical Internet Research

A new study led by investigators from Mass General Brigham has found that ChatGPT was about 72 percent accurate in overall clinical decision making, from coming up with possible diagnoses to making final diagnoses and care management decisions. The large-language model (LLM) artificial intelligence chatbot performed equally well in both primary care and emergency settings across all medical specialties.

“Our paper comprehensively assesses decision support via ChatGPT from the very beginning of working with a patient through the entire care scenario, from differential diagnosis all the way through testing, diagnosis, and management,” said corresponding author Marc Succi, MD, associate chair of innovation and commercialization and strategic innovation leader at Mass General Brigham and executive director of the MESH Incubator. “No real benchmarks exist, but we estimate this performance to be at the level of someone who has just graduated from medical school, such as an intern or resident. This tells us that LLMs in general have the potential to be an augmenting tool for the practice of medicine and support clinical decision making with impressive accuracy.”

Changes in artificial intelligence technology are occurring at a fast pace and transforming many industries, including health care. But the capacity of LLMs to assist in the full scope of clinical care has not yet been studied. In this comprehensive, cross-specialty study of how LLMs could be used in clinical advisement and decision making, Succi and his team tested the hypothesis that ChatGPT would be able to work through an entire clinical encounter with a patient and recommend a diagnostic workup, decide the clinical management course, and ultimately make the final diagnosis.

Experimental workflow for determining ChatGPT accuracy in solving clinical vignettes. Panel A: Schematic of user interface with ChatGPT for this experiment. Blue boxes indicate prompts given to ChatGPT and green boxes indicate ChatGPT responses. Nonitalicized text indicates information given to ChatGPT without a specific question attached. Panel B: Schematic of experimental workflow. Prompts were developed from Merck Sharpe & Dohme (MSD) vignettes and converted to ChatGPT-compatible text input. Questions requiring the interpretation of images were removed. Three independent users tested each prompt. Two independent scorers calculated scores for all outputs; these were compared to generate a consensus score. diag: diagnostic questions; diff: differential diagnoses; dx: diagnosis questions; HPI: history of present illness; mang: management questions; PE: physical exam; ROS: review of systems.

The study was done by pasting successive portions of 36 standardized, published clinical vignettes into ChatGPT. The tool first was asked to come up with a set of possible, or differential, diagnoses based on the patient’s initial information, which included age, gender, symptoms, and whether the case was an emergency. ChatGPT was then given additional pieces of information and asked to make management decisions as well as give a final diagnosis — simulating the entire process of seeing a real patient. The team compared ChatGPT’s accuracy on differential diagnosis, diagnostic testing, final diagnosis, and management in a structured blinded process, awarding points for correct answers and using linear regressions to assess the relationship between ChatGPT’s performance and each vignette’s demographic information.
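
In outline, the evaluation loop looks something like the schematic below. This is not the authors' code: ask_chatgpt is a placeholder for whatever chat-completion call is used, and the grader stands in for the two blinded scorers described above.

    STAGES = ["differential diagnosis", "diagnostic testing",
              "final diagnosis", "clinical management"]

    def ask_chatgpt(prompt: str) -> str:
        """Placeholder for a chat-completion API call (hypothetical)."""
        raise NotImplementedError

    def score_vignette(vignette_sections, answer_keys, grader):
        """Feed successive portions of one vignette to the model and award
        points for correct answers at each stage of the workup."""
        context, earned, possible = "", 0, 0
        for stage, section, key in zip(STAGES, vignette_sections, answer_keys):
            context += "\n" + section          # reveal the next chunk of the case
            reply = ask_chatgpt(f"{context}\n\nQuestion ({stage}): ...")
            points, max_points = grader(reply, key)
            earned, possible = earned + points, possible + max_points
        return earned / possible               # per-vignette accuracy

    # Overall accuracy is then averaged over the 36 vignettes (~72% reported).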

The researchers found that overall, ChatGPT was about 72 percent accurate and that it was best in making a final diagnosis, where it was 77 percent accurate. It was lowest-performing in making differential diagnoses, where it was only 60 percent accurate. And it was only 68 percent accurate in clinical management decisions, such as figuring out what medications to treat the patient with after arriving at the correct diagnosis. Other notable findings from the study included that ChatGPT’s answers did not show gender bias and that its overall performance was steady across both primary and emergency care.

“ChatGPT struggled with differential diagnosis, which is the meat and potatoes of medicine when a physician has to figure out what to do,” said Succi. “That is important because it tells us where physicians are truly experts and adding the most value — in the early stages of patient care with little presenting information, when a list of possible diagnoses is needed.”

The authors note that before tools like ChatGPT can be considered for integration into clinical care, more benchmark research and regulatory guidance is needed. Next, Succi’s team is looking at whether AI tools can improve patient care and outcomes in hospitals’ resource-constrained areas.

The emergence of artificial intelligence tools in health has been groundbreaking and has the potential to positively reshape the continuum of care. Mass General Brigham, as one of the nation’s top integrated academic health systems and largest innovation enterprises, is leading the way in conducting rigorous research on new and emerging technologies to inform the responsible incorporation of AI into care delivery, workforce support, and administrative processes.

“Mass General Brigham sees great promise for LLMs to help improve care delivery and clinician experience,” said co-author Adam Landman, MD, MS, MIS, MHS, chief information officer and senior vice president of digital at Mass General Brigham. “We are currently evaluating LLM solutions that assist with clinical documentation and draft responses to patient messages with focus on understanding their accuracy, reliability, safety, and equity. Rigorous studies like this one are needed before we integrate LLM tools into clinical care.”

Machine Learning-Based Analysis of Molar and Enantiomeric Ratios and Reaction Yields Using Images of Solid Mixtures

by Yuki Ide, Hayato Shirakura, Taichi Sano, Muthuchamy Murugavel, Yuya Inaba, Sheng Hu, Ichigaku Takigawa, Yasuhide Inokuma in Industrial & Engineering Chemistry Research

A machine learning model provides a quick method for determining the composition of solid chemical mixtures using only photographs of the sample.

Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Due to their similar appearance, it’s an easy mistake to make. Checking with the naked eye is similarly used in chemistry labs to provide quick, initial assessments of reactions; however, just like in the kitchen, the human eye has its limitations and can be unreliable. To address this, researchers at the Institute of Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University led by Professor Yasuhide Inokuma have developed a machine learning model that can distinguish the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.

The model was designed and developed using mixtures of sugar and salt as a test case. The team employed a combination of random cropping, flipping and rotating of the original photographs in order to create a larger number of sub images for training and testing. This enabled the model to be developed using only 300 original images for training. The trained model was roughly twice as accurate as the naked eye of even the most expert member of the team.
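
A minimal sketch of that augmentation step, using torchvision transforms (the exact crop sizes, angles, and probabilities used by the Hokkaido team are assumptions here):

    from torchvision import transforms

    # Random crop/flip/rotate pipeline that turns each original photograph
    # into many distinct training sub-images.
    augment = transforms.Compose([
        transforms.RandomCrop(224),             # random sub-image of each photo
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomVerticalFlip(p=0.5),
        transforms.RandomRotation(degrees=90),
        transforms.ToTensor(),
    ])

    # e.g. expand ~300 originals into thousands of training crops:
    # from PIL import Image
    # img = Image.open("mixture_photo.jpg")
    # crops = [augment(img) for _ in range(20)]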

“I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”

After the successful test case, researchers applied this model to the evaluation of different chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.

The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. Reaction yield was also analyzed, determining the progress of a thermal decarboxylation reaction.

The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.

“We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.”

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
