RT/ Shape-changing smart speaker lets users mute different areas of a room

Paradigm · Published in Paradigm · Oct 12, 2023 · 29 min read

Robotics biweekly vol.83, 22nd September — 12th October

TL;DR

  • A team has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers.
  • Patterns of chemical interactions are thought to create patterns in nature such as stripes and spots. A new study shows that the mathematical basis of these patterns also governs how a sperm’s tail moves.
  • Scientists have created a non-invasive movement tracking method called GlowTrack that uses fluorescent dye markers to train artificial intelligence to capture movement, from a single mouse digit to the human hand. GlowTrack has applications spanning biology, robotics, medicine, and beyond.
  • Researchers propose a unified, scalable framework to measure agricultural greenhouse gas emissions.
  • New physics-based self-learning machines could replace the current artificial neural networks and save energy.
  • Scientists have shown that their steerable lung robot can autonomously maneuver the intricacies of the lung, while avoiding important lung structures.
  • A new study shows that human instruction is still necessary to detect and compensate for unintended, and sometimes negative, changes in neurosurgeon behavior after virtual reality AI training. This finding has implications for other fields of training.
  • Researchers combined soft microactuators with high-energy-density chemical fuel to create an insect-scale quadrupedal robot that is powered by combustion and can outrace, outlift, outflex and outleap its electric-driven competitors.
  • A new study has found that people can learn to use supernumerary robotic arms as effectively as working with a partner in just one hour of training.
  • Researchers have designed a robot which can change form to tackle varying scenarios.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
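As a quick sanity check on those figures (illustrative arithmetic only, not Statista's methodology), the CAGR relation implies a 2018 baseline of roughly 40 billion U.S. dollars:

```python
# Rough sanity check of the figures above (illustrative arithmetic only).
# CAGR relation: future_value = present_value * (1 + rate) ** years
target_2025 = 210.0      # billion U.S. dollars, figure cited above
cagr = 0.26              # roughly 26 percent, figure cited above
years = 2025 - 2018

implied_2018_base = target_2025 / (1 + cagr) ** years
print(f"Implied 2018 market size: ~{implied_2018_base:.0f} billion U.S. dollars")
# prints roughly 42, i.e. a ~26% CAGR from ~42 billion in 2018 reaches ~210 billion by 2025
```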

Latest News & Research

Creating speech zones with self-distributing acoustic swarms

by Malek Itani, Tuochao Chen, Takuya Yoshioka, Shyamnath Gollakota in Nature Communications

In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.

The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.

A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, the microphones, each about an inch in diameter, automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.

Creating speech zones using our acoustic swarms.

“If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”

Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.

The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible since greater distances make differentiating and locating people speaking easier. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.

“If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” said co-lead author Tuochao Chen, a UW doctoral student in the Allen School. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room.”
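The cue underlying this separation and localization is the tiny difference in arrival time across the spread-out microphones. As a rough illustration of that cue alone (a minimal sketch, not the team's deep-learning pipeline; the signal parameters below are made up), the snippet estimates the delay between two microphone recordings by cross-correlation:

```python
import numpy as np

def estimate_delay(sig_ref, sig_delayed, sample_rate):
    """Estimate how much later sig_delayed arrives than sig_ref (in seconds)
    via cross-correlation; a positive value means it arrives later."""
    corr = np.correlate(sig_delayed, sig_ref, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_ref) - 1)
    return lag / sample_rate

# Toy example: the same chirp reaching a second microphone 1 ms later.
fs = 16_000
t = np.arange(0, 0.1, 1 / fs)
chirp = np.sin(2 * np.pi * (200 + 2000 * t) * t)
delay_samples = 16                                   # 1 ms at 16 kHz
mic_far = np.concatenate([np.zeros(delay_samples), chirp])[: len(chirp)]

print(estimate_delay(chirp, mic_far, fs))            # ~0.001 seconds
```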

Acoustic swarm dispersal.

The team tested the robots in offices, living rooms and kitchens with groups of three to five people speaking. Across all these environments, the system could discern different voices within 1.6 feet (50 centimeters) of each other 90% of the time, without prior information about the number of speakers. The system was able to process three seconds of audio in 1.82 seconds on average — fast enough for live streaming, though a bit too long for real-time communications such as video calls.

As the technology progresses, researchers say, acoustic swarms might be deployed in smart homes to better differentiate people talking with smart speakers. That could potentially allow only people sitting on a couch, in an “active zone,” to vocally control a TV, for example.

Researchers plan to eventually make microphone robots that can move around rooms, instead of being limited to tables. The team is also investigating whether the speakers can emit sounds that allow for real-world mute and active zones, so people in different parts of a room can hear different audio. The current study is another step toward science fiction technologies, such as the “cone of silence” in “Get Smart” and “Dune,” the authors write.

Of course, any technology that evokes comparison to fictional spy tools will raise questions of privacy. Researchers acknowledge the potential for misuse, so they have included guards against this: The microphones navigate with sound, not an onboard camera like other similar systems. The robots are easily visible and their lights blink when they’re active. Instead of processing the audio in the cloud, as most smart speakers do, the acoustic swarms process all the audio locally, as a privacy constraint. And even though some people’s first thoughts may be about surveillance, the system can be used for the opposite, the team says.

“It has the potential to actually benefit privacy, beyond what current smart speakers allow,” Itani said. “I can say, ‘Don’t record anything around my desk,’ and our system will create a bubble 3 feet around me. Nothing in this bubble would be recorded. Or if two groups are speaking beside each other and one group is having a private conversation, while the other group is recording, one conversation can be in a mute zone, and it will remain private.”

The reaction-diffusion basis of animated patterns in eukaryotic flagella

by James F. Cass, Hermes Bloomfield-Gadêlha in Nature Communications

Patterns of chemical interactions are thought to create patterns in nature such as stripes and spots. This new study shows that the mathematical basis of these patterns also governs how a sperm’s tail moves.

The findings reveal that the movement of flagella, such as sperm tails and cilia, follows the same template for pattern formation that was discovered by the famous mathematician Alan Turing. Flagellar undulations make stripe patterns in space-time, generating waves that travel along the tail to drive the sperm and microbes forward.

Alan Turing is best known for helping to break the Enigma code during WWII. However, he also developed a theory of pattern formation that predicted that chemical patterns may appear spontaneously with only two ingredients: chemicals spreading out (diffusing) and reacting together. Turing first proposed this so-called reaction-diffusion theory of pattern formation.

Turing helped to pave the way for a whole new type of enquiry using reaction-diffusion mathematics to understand natural patterns. Today, these chemical patterns first envisioned by Turing are called Turing patterns. Although not yet proven by experimental evidence, these patterns are thought to govern many patterns across nature, such as leopard spots, the whorl of seeds in the head of a sunflower, and patterns of sand on the beach. Turing’s theory can be applied to various fields, from biology and robotics to astrophysics.

Modelling overview.

Mathematician Dr Hermes Gadêlha, head of the Polymaths Lab, and his PhD student James Cass conducted this research in the School of Engineering Mathematics and Technology at the University of Bristol. Gadêlha explained: “Live spontaneous motion of flagella and cilia is observed everywhere in nature, but little is known about how they are orchestrated.

“They are critical in health and disease, reproduction, evolution, and survivorship of almost every aquatic microorganism on Earth.”

The team was inspired by recent observations in low viscosity fluids that the surrounding environment plays a minor role on the flagellum. They used mathematical modelling, simulations, and data fitting to show that flagellar undulations can arise spontaneously without the influence of their fluid environment. Mathematically this is equivalent to Turing’s reaction-diffusion system that was first proposed for chemical patterns.

In the case of sperm swimming, chemical reactions of molecular motors power the flagellum, and bending movement diffuses along the tail in waves. The level of generality between visual patterns and patterns of movement is striking and unexpected, and shows that only two simple ingredients are needed to achieve highly complex motion.
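To make the "two ingredients" point concrete, here is a minimal, generic reaction-diffusion sketch, a FitzHugh-Nagumo-type excitable medium rather than the paper's flagellar model, in which a local kick produces a travelling wave, i.e. stripes in space-time:

```python
import numpy as np

# Minimal 1-D reaction-diffusion toy (FitzHugh-Nagumo-type): two coupled fields
# that merely react and diffuse produce a travelling wave. A generic sketch,
# not the flagellar model from the paper.
n, dx, dt = 400, 0.5, 0.01
D = 1.0                        # diffusion of the activator
a, b, eps = 0.7, 0.8, 0.08     # classic excitable-regime parameters

u = -1.2 * np.ones(n)          # activator, started near its resting state
v = -0.62 * np.ones(n)         # recovery variable
u[:10] = 1.5                   # a local kick at the left end launches a pulse

def laplacian(f):
    fp = np.pad(f, 1, mode="edge")            # zero-flux boundaries
    return (fp[2:] + fp[:-2] - 2 * f) / dx**2

for step in range(1, 12001):
    u_new = u + dt * (u - u**3 / 3 - v + D * laplacian(u))
    v_new = v + dt * (eps * (u + a - b * v))
    u, v = u_new, v_new
    if step % 4000 == 0:
        # the excited region (u > 0) creeps rightwards: a travelling wave
        front = np.max(np.where(u > 0)[0]) if np.any(u > 0) else 0
        print(f"t = {step * dt:5.1f}  wave front near x = {front * dx:5.1f}")
```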

Dr Gadêlha added: “We show that this mathematical ‘recipe’ is followed by two very distant species — bull sperm and Chlamydomonas (a green algae that is used as a model organism across science), suggesting that nature replicates similar solutions.

“Travelling waves emerge spontaneously even when the flagellum is uninfluenced by the surrounding fluid. This means that the flagellum has a fool-proof mechanism to enable swimming in low viscosity environments, which would otherwise be impossible for aquatic species.

“It is the first time that model simulations compare well with experimental data. We are grateful to the researchers who made their data freely available, without which we would not have been able to proceed with this mathematical study.”

These findings may be used in future to better understand fertility issues associated with abnormal flagellar motion and other ciliopathies, diseases caused by ineffective cilia in the human body. This could also be further explored for robotic applications, artificial muscles, and animated materials, as the team discovered a simple ‘mathematical recipe’ for making patterns of movement.

Dr Gadêlha is also a member of the SoftLab at Bristol Robotics Laboratory (BRL), where he uses pattern formation mathematics to innovate the next generation of soft-robots.

“In 1952, Turing unlocked the reaction-diffusion basis of chemical patterns,” said Dr Gadêlha. “We show that the ‘atom’ of motion in the cellular world, the flagellum, uses Turing’s template to shape, instead, patterns of movement driving tail motion that pushes sperm forwards.

“Although this is a step closer to mathematically decoding spontaneous animation in nature, our reaction-diffusion model is far too simple to fully capture all the complexity. Other models may exist, in the space of models, with equal or even better fits with experiments, whose existence we simply have no knowledge of yet, and thus substantially more research is still needed!”

Large-scale capture of hidden fluorescent labels for training generalizable markerless motion capture models

by Daniel J. Butler, Alexander P. Keim, Shantanu Ray, Eiman Azim in Nature Communications

Movement offers a window into how the brain operates and controls the body. From clipboard-and-pen observation to modern artificial intelligence-based techniques, tracking human and animal movement has come a long way. Current cutting-edge methods utilize artificial intelligence to automatically track parts of the body as they move. However, training these models is still time-intensive and limited by the need for researchers to manually mark each body part hundreds to thousands of times.

Now, Associate Professor Eiman Azim and team have created GlowTrack, a non-invasive movement tracking method that uses fluorescent dye markers to train artificial intelligence. GlowTrack is robust, time-efficient, and high definition — capable of tracking a single digit on a mouse’s paw or hundreds of landmarks on a human hand. The technique has applications spanning from biology to robotics to medicine and beyond.

“Over the last several years, there has been a revolution in tracking behavior as powerful artificial intelligence tools have been brought into the laboratory,” says Azim, senior author and holder of the William Scandling Developmental Chair. “Our approach makes these tools more versatile, improving the ways we capture diverse movements in the laboratory. Better quantification of movement gives us better insight into how the brain controls behavior and could aid in the study of movement disorders like amyotrophic lateral sclerosis (ALS) and Parkinson’s disease.”

Current methods to capture animal movement often require researchers to manually and repeatedly mark body parts on a computer screen — a time-consuming process subject to human error and time constraints. Human annotation means that these methods can usually only be used in a narrow testing environment, since artificial intelligence models specialize to the limited amount of training data they receive. For example, if the light, orientation of the animal’s body, camera angle, or any number of other factors were to change, the model would no longer recognize the tracked body part.

Hidden fluorescent labels for training versatile landmark detectors.

To address these limitations, the researchers used fluorescent dye to label parts of the animal or human body. With these “invisible” fluorescent dye markers, an enormous amount of visually diverse data can be created quickly and fed into the artificial intelligence models without the need for human annotation. Once fed this robust data, these models can be used to track movements across a much more diverse set of environments and at a resolution that would be far more difficult to achieve with manual human labeling.

This opens the door for easier comparison of movement data between studies, as different laboratories can use the same models to track body movement across a variety of situations. According to Azim, comparison and reproducibility of experiments are essential in the process of scientific discovery.

“Fluorescent dye markers were the perfect solution,” says first author Daniel Butler, a Salk bioinformatics analyst. “Like the invisible ink on a dollar bill that lights up only when you want it to, our fluorescent dye markers can be turned on and off in the blink of an eye, allowing us to generate a massive amount of training data.”
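Reduced to its simplest form, the trick the quote alludes to is that the dye frame provides the label automatically. The sketch below is hypothetical code, not the published GlowTrack pipeline: it thresholds a fluorescence frame and uses the blob centroid as the keypoint label for the paired visible-light frame.

```python
import numpy as np

def label_from_fluorescence(fluor_frame, threshold=0.5):
    """Return the (row, col) centroid of the brightest dye blob in a
    fluorescence frame, to serve as an automatic keypoint label for the
    paired visible-light frame. Hypothetical sketch, not the published code."""
    mask = fluor_frame > threshold * fluor_frame.max()
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Toy example: a synthetic 64x64 frame with a bright spot centred at (40, 22).
frame = np.zeros((64, 64))
frame[38:43, 20:25] = 1.0
print(label_from_fluorescence(frame))   # -> (40.0, 22.0)
```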

In the future, the team is excited to support diverse applications of GlowTrack and pair its capabilities with other tracking tools that reconstruct movements in three dimensions, and with analysis approaches that can probe these vast movement datasets for patterns.

“Our approach can benefit a host of fields that need more sensitive, reliable, and comprehensive tools to capture and quantify movement,” says Azim. “I am eager to see how other scientists and non-scientists adopt these methods, and what unique, unforeseen applications might arise.”

A scalable framework for quantifying field-level agricultural carbon outcomes

by Kaiyu Guan, Zhenong Jin, Bin Peng, Jinyun Tang, et al in Earth-Science Reviews

Increased government investment in climate change mitigation is prompting agricultural sectors to find reliable methods for measuring their contribution to climate change. With that in mind, a team led by scientists at the University of Illinois Urbana-Champaign proposed a supercomputing solution to help measure individual farm field-level greenhouse gas emissions.

Although locally tested in the Midwest, the new approach can be scaled up to national and global levels and help the industry grasp the best practices for reducing emissions. The new study, directed by natural resources and environmental sciences professor Kaiyu Guan, synthesized more than 25 of the group’s previous studies to quantify greenhouse gas emissions produced by U.S. farmland. The findings were completed in collaboration with partners from the University of Minnesota, Lawrence Berkeley National Laboratory and Project Drawdown, a climate solutions nonprofit organization.

“There are many farming practices that can go a long way to reduce greenhouse gas emissions, but the scientific community has struggled to find a consistent method for measuring how well these practices work,” Guan said.

Guan’s team built a solution based on “agricultural carbon outcomes,” which it defines as the related changes in greenhouse gas emissions from farmers adopting climate mitigation practices like cover cropping, precision nitrogen fertilizer management and use of controlled drainage techniques.

Conceptual diagram of quantifying carbon outcomes at the field level for agroecosystems.

“We developed what we call a ‘system of systems’ solution, which means we integrated a variety of sensing techniques and combined them with advanced ecosystem models,” said Bin Peng, co-author of the study and a senior research scientist at the U. of I. Institute for Sustainability, Energy and Environment. “For example, we fuse ground-based imaging with satellite imagery and process that data with algorithms to generate information about crop emissions before and after farmers adopt various mitigation practices.”

“Artificial intelligence also plays a critical role in realizing our ambitious goals to quantify every field’s carbon emission,” said Zhenong Jin, a professor at the University of Minnesota who co-led the study. “Unlike traditional model-data fusion approaches, we used knowledge-guided machine learning, which is a new way to bring together the power of sensing data, domain knowledge and artificial intelligence techniques.”
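As a loose illustration of what "knowledge-guided" can mean in practice (an assumed toy formulation, not the authors' implementation; all names and values below are invented), a model can be trained to fit sensed data while being penalised whenever its predictions violate a known domain constraint:

```python
import torch

# Hedged sketch of a knowledge-guided loss: fit the observations while
# penalising predictions that break a simple, assumed mass-balance constraint
# (total predicted flux cannot exceed carbon inputs). Illustrative only.
def knowledge_guided_loss(pred_fluxes, observed_total, carbon_inputs, alpha=0.1):
    pred_total = pred_fluxes.sum(dim=1)
    data_loss = torch.mean((pred_total - observed_total) ** 2)   # fit the sensors
    violation = torch.relu(pred_total - carbon_inputs)           # domain knowledge
    return data_loss + alpha * torch.mean(violation ** 2)

# Toy batch of 4 fields, 3 flux components each (arbitrary units).
pred = torch.tensor([[1.0, 0.5, 0.2], [0.3, 0.3, 0.1], [2.0, 1.0, 0.5], [0.2, 0.1, 0.1]])
obs = torch.tensor([1.6, 0.8, 3.0, 0.5])
inputs = torch.tensor([2.0, 1.0, 3.0, 1.0])
print(knowledge_guided_loss(pred, obs, inputs))
```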

The study also details how emissions and agricultural practices data can be cross-checked against economic, policy and carbon market data to find best-practice and realistic greenhouse gas mitigation solutions locally to globally — especially in economies struggling to farm in an environmentally conscious manner. To compute the vast amount of information from millions of individual farms, the team is using supercomputing platforms available at the National Center for Supercomputing Applications. “Access to the resources at NCSA allows for this monumental task,” Guan said.

“The real beauty of our tool is that it is both very generic and scalable, meaning it can be applied to virtually any agricultural system in any country to obtain reliable emissions data using our targeted procedure and techniques,” Peng said.

The challenge of this work will be encouraging widespread adoption of the system, the researchers said.

“Given the U.S. government’s $19 billion investment in the Inflation Reduction Act and the upcoming Farm Bill, farmers will be able to adopt more conservation practices,” Guan said. “This work will help researchers and policymakers to ‘speak the same language’ by using this tool that we believe is very valuable in this time of increasing government investment in climate mitigation.”

“Bringing more scientific rigor to estimating emissions on farmlands is a huge task. We need credible tools that are simple and practical,” said Paul West, a senior scientist at Project Drawdown and a collaborator on this research. “Our research brings us a big step closer to meeting the challenge.”

Self-Learning Machines Based on Hamiltonian Echo Backpropagation

by Víctor López-Pastor, Florian Marquardt in Physical Review X

Artificial intelligence not only affords impressive performance, but also creates significant demand for energy. The more demanding the tasks for which it is trained, the more energy it consumes. Víctor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which artificial intelligence could be trained much more efficiently. Their approach relies on physical processes instead of the digital artificial neural networks currently used.

The amount of energy required to train GPT-3, which makes ChatGPT an eloquent and apparently well-informed chatbot, has not been revealed by OpenAI, the company behind that artificial intelligence (AI). According to the German statistics company Statista, the training would require about 1,000 megawatt hours, roughly as much as 200 German households with three or more people consume annually. While this energy expenditure has allowed GPT-3 to learn whether the word ‘deep’ is more likely to be followed by the word ‘sea’ or ‘learning’ in its data sets, by all accounts it has not understood the underlying meaning of such phrases.

In order to reduce the energy consumption of computers, and particularly AI-applications, in the past few years several research institutions have been investigating an entirely new concept of how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it in fact has little to do with them as artificial neural networks run on conventional digital computers. This means that the software, or more precisely the algorithm, is modelled on the brain’s way of working, but digital computers serve as the hardware. They perform the calculation steps of the neuronal network in sequence, one after the other, differentiating between processor and memory.

“The data transfer between these two components alone devours large quantities of energy when a neural network trains hundreds of billions of parameters, i.e. synapses, with up to one terabyte of data,” says Florian Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen. The human brain is entirely different and would probably never have been evolutionarily competitive had it worked with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating.

Different types of physical learning machines.

The brain is characterized by undertaking the numerous steps of a thought process in parallel and not sequentially. The nerve cells, or more precisely the synapses, are both processor and memory combined. Various systems around the world are being treated as possible candidates for the neuromorphic counterparts to our nerve cells, including photonic circuits utilizing light instead of electrons to perform calculations. Their components serve simultaneously as switches and memory cells.

Together with Víctor López-Pastor, a doctoral student at the Max Planck Institute for the Science of Light, Florian Marquardt has now devised an efficient training method for neuromorphic computers.

“We have developed the concept of a self-learning physical machine,” explains Florian Marquardt. “The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimized by the process itself.”

When training conventional artificial neural networks, external feedback is necessary to adjust the strengths of the many billions of synaptic connections. “Not requiring this feedback makes the training much more efficient,” says Florian Marquardt. Implementing and training an artificial intelligence on a self-learning physical machine would not only save energy, but also computing time.

“Our method works regardless of which physical process takes place in the self-learning machine, and we do not even need to know the exact process,” explains Florian Marquardt. “However, the process must fulfil a few conditions.” Most importantly, it must be reversible, meaning it must be able to run forwards or backwards with a minimum of energy loss. “In addition, the physical process must be non-linear, meaning sufficiently complex,” says Florian Marquardt. Only non-linear processes can accomplish the complicated transformations between input data and results. A pinball rolling over a plate without colliding with another is a linear action. However, if it is disturbed by another, the situation becomes non-linear.
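The reversibility requirement can be illustrated with a toy Hamiltonian system: integrate it forward with a time-reversible scheme, flip the momentum, and the dynamics retrace themselves back to the starting state. The sketch below shows only that ingredient, not the full Hamiltonian-echo training scheme from the paper:

```python
import numpy as np

# Toy illustration of reversibility: a pendulum integrated with the
# time-reversible leapfrog scheme can be run forward and then "echoed"
# backward (momentum flipped) to recover its initial state. This shows only
# the reversible-dynamics ingredient, not the full training scheme.
def leapfrog(q, p, dt, steps, dV=lambda q: np.sin(q)):
    for _ in range(steps):
        p -= 0.5 * dt * dV(q)
        q += dt * p
        p -= 0.5 * dt * dV(q)
    return q, p

q0, p0 = 0.3, 0.0
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=5000)    # forward pass
q2, p2 = leapfrog(q1, -p1, dt=0.01, steps=5000)   # echo: flip momentum, rerun
print(abs(q2 - q0), abs(p2 + p0))                 # ~0 up to floating-point rounding
```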

Examples of reversible, non-linear processes can be found in optics. Indeed, Víctor López-Pastor and Florian Marquardt are already collaborating with an experimental team developing an optical neuromorphic computer. This machine processes information in the form of superimposed light waves, whereby suitable components regulate the type and strength of the interaction. The researchers’ aim is to put the concept of the self-learning physical machine into practice. “We hope to be able to present the first self-learning physical machine in three years,” says Florian Marquardt. By then, there should be neural networks which think with many more synapses and are trained with significantly larger amounts of data than today’s.

As a consequence there will likely be an even greater desire to implement neural networks outside conventional digital computers and to replace them with efficiently trained neuromorphic computers. “We are therefore confident that self-learning physical machines have a strong chance of being used in the further development of artificial intelligence,” says the physicist.

Autonomous medical needle steering in vivo

by Alan Kuntz, Maxwell Emerson, Tayfun Efe Ertop, Inbar Fried, Mengyu Fu, Janine Hoelscher, Margaret Rox, Jason Akulian, Erin A. Gillaspie, Yueh Z. Lee, Fabien Maldonado, Robert J. Webster, Ron Alterovitz in Science Robotics

Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.

Their research has reached a new milestone. In a new paper, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.

“This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”

The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.

The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.

As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.

Overview of the semiautonomous medical robot’s three stages in the lungs.

To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Using this 3D model, once the needle has been positioned for launch, the AI-driven software instructs it to automatically travel from “Point A” to “Point B” while avoiding important structures.

“The autonomous steerable needle we’ve developed is highly compact, but the system is packed with a suite of technologies that allow the needle to navigate autonomously in real-time,” said Alterovitz, the principal investigator on the project and senior author on the paper. “It’s akin to a self-driving car, but it navigates through lung tissue, avoiding obstacles like significant blood vessels as it travels to its destination.”
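In spirit, the planning problem is to get from “Point A” to “Point B” through a 3D map of the lung without entering no-go voxels. The sketch below is a generic grid-based, A*-style search illustrating only that idea; the team's actual planner additionally handles needle curvature limits, tissue deformation and breathing motion:

```python
import heapq

def plan_path(grid, start, goal):
    """Best-first (A*-style) search on a 3-D voxel grid.
    grid[z][y][x] == 1 marks a no-go voxel (e.g. a vessel or airway);
    returns a collision-free, not necessarily shortest, voxel path."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    h = lambda p: sum(abs(a - b) for a, b in zip(p, goal))   # Manhattan heuristic
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        z, y, x = node
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (z + dz, y + dy, x + dx)
            if (0 <= nb[0] < nz and 0 <= nb[1] < ny and 0 <= nb[2] < nx
                    and grid[nb[0]][nb[1]][nb[2]] == 0 and nb not in seen):
                seen.add(nb)
                heapq.heappush(frontier, (len(path) + h(nb), nb, path + [nb]))
    return None

# Toy map: a 3x5x5 block with a wall at x == 2 that has a single gap.
grid = [[[0] * 5 for _ in range(5)] for _ in range(3)]
for z in range(3):
    for y in range(5):
        grid[z][y][2] = 1
grid[1][2][2] = 0                       # the gap the path must thread through
print(plan_path(grid, (1, 0, 0), (1, 4, 4)))
```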

The needle can also account for respiratory motion. Unlike other organs, the lungs are constantly expanding and contracting in the chest cavity. This can make targeting especially difficult in a living, breathing subject. According to Akulian, it’s like shooting at a moving target.

The researchers tested their robot while the laboratory model performed intermittent breath holding. Every time the subject’s breath is held, the robot is programmed to move forward.

“There remain some nuances in terms of the robot’s ability to acquire targets and then actually get to them effectively,” said Akulian, who is also a member of the UNC Lineberger Comprehensive Cancer Center, “and while there’s still a lot of work to be done, I’m very excited about continuing to push the boundaries of what we can do for patients with the world-class experts that are here.”

“We plan to continue creating new autonomous medical robots that combine the strengths of robotics and AI to improve medical outcomes for patients facing a variety of health challenges while providing guarantees on patient safety,” added Alterovitz.

AI in Surgical Curriculum Design and Unintended Outcomes for Technical Competencies in Simulation Training

by Ali M. Fazlollahi, Recai Yilmaz, Alexander Winkler-Schwartz, Nykan Mirchi, Nicole Ledwos, Mohamad Bakhaidar, Ahmad Alsayegh, Rolando F. Del Maestro in JAMA Network Open

Virtual reality simulators can help learners improve their technical skills faster and with no risk to patients. In the field of neurosurgery, they allow medical students to practice complex operations before using a scalpel on a real patient. When combined with artificial intelligence, these tutoring systems can offer tailored feedback like a human instructor, identifying areas where the students need to improve and making suggestions on how to achieve expert performance.

A new study from the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University, however, shows that human instruction is still necessary to detect and compensate for unintended, and sometimes negative, changes in neurosurgeon behaviour after virtual reality AI training.

In the study, 46 medical students performed a tumour removal procedure on a virtual reality simulator. Half of them were randomly selected to receive instruction from an AI-powered intelligent tutor called the Virtual Operative Assistant (VOA), which uses a machine learning algorithm to teach surgical techniques and provide personalized feedback. The other half served as a control group by receiving no feedback. The students’ work was then compared to performance benchmarks selected by a team of established neurosurgeons.

When the results were compared, the AI-tutored students had caused 55 per cent less damage to healthy tissues than the control group. AI-tutored students also showed a 59 per cent reduction in average distance between instruments in each hand and applied 46 per cent less maximum force, both important safety measures.

However, AI-tutored students also showed some negative outcomes. For example, their dominant hand movements had 50 per cent lower velocity and 45 per cent lower acceleration than the control group, making their operations less efficient. The speed at which they removed tumour tissue was also 29 per cent lower in the AI-tutored group than the control group.

These unintended outcomes underline the importance of human instructors in the learning process, to promote both safety and efficiency in students.

Performance in the Learning Objectives of the Virtual Operative Assistant (VOA) Curriculum.

“AI systems are not perfect,” says Ali Fazlollahi, a medical student researcher at the Neurosurgical Simulation and Artificial Intelligence Learning Centre and the study’s first author. “Achieving mastery will still require some level of apprenticeship from an expert. Programs adopting AI will enable learners to monitor their competency and focus their intraoperative learning time with instructors more efficiently and on their individual tailored learning goals. We’re currently working towards finding an optimal hybrid mode of instruction in a crossover trial.”

Fazlollahi says his findings have implications beyond neurosurgery because many of the same principles are applied in other fields of skills training.

“This includes surgical education, not just neurosurgery, and also a range of other fields from aviation to military training and construction,” he says. “Using AI alone to design and run a technical skills curriculum can lead to unintended outcomes that will require oversight from human experts to ensure excellence in training and patient care.”

“Intelligent tutors powered by AI are becoming a valuable tool in the evaluation and training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “However, it is essential that surgical educators are an integral part of the development, application, and monitoring of these AI systems to maximize their ability to increase the mastery of neurosurgical skills and improve patient outcomes.”

Powerful, soft combustion actuators for insect-scale robots

by Cameron A. Aubin, Ronald H. Heisser, Ofek Peretz, Julia Timko, Jacqueline Lo, E. Farrell Helbling, Sadaf Sobhani, Amir D. Gat, Robert F. Shepherd in Science

Cornell researchers combined soft microactuators with high-energy-density chemical fuel to create an insect-scale quadrupedal robot that is powered by combustion and can outrace, outlift, outflex and outleap its electric-driven competitors.

The lead author is postdoctoral researcher Cameron Aubin, Ph.D. ’23. The project was led by Rob Shepherd, associate professor of mechanical and aerospace engineering in Cornell Engineering, whose Organic Robotics Lab has previously used combustion to create a braille display for electronics.

As anyone who has witnessed an ant carry off food from a picnic knows, insects are far stronger than their puny size suggests. However, robots at that scale have yet to reach their full potential. One of the challenges is “motors and engines and pumps don’t really work when you shrink them down to this size,” Aubin said, so researchers have tried to compensate by creating bespoke mechanisms to perform such functions. So far, the majority of these robots have been tethered to their power sources — which usually means electricity.

“We thought using a high-energy-density chemical fuel, just like we would put in an automobile, would be one way that we could increase the onboard power and performance of these robots,” he said. “We’re not necessarily advocating for the return of fossil fuels on a large scale, obviously. But in this case, with these tiny, tiny robots, where a milliliter of fuel could lead to an hour of operation, instead of a battery that is too heavy for the robot to even lift, that’s kind of a no brainer.”

While the team has yet to create a fully untethered model — Aubin says they are halfway there — the current iteration “absolutely throttles the competition, in terms of their force output.”

The four-legged robot, which is just over an inch long and weighs the equivalent of one and a half paperclips, is 3D-printed with a flame-resistant resin. The body contains a pair of separated combustion chambers that lead to the four actuators, which serve as the feet. Each actuator/foot is a hollow cylinder capped with a piece of silicone rubber, like a drum skin, on the bottom. When offboard electronics are used to create a spark in the combustion chambers, premixed methane and oxygen are ignited, the combustion reaction inflates the drum skin, and the robot pops up into the air.

This combustion-powered quadrupedal robot is capable of multi-gait movements and can leap 60 centimeters in the air, or roughly 20 times its body length.

The robot’s actuators are capable of reaching 9.5 newtons of force, compared to approximately 0.2 newtons for those of other similarly sized robots. It also operates at frequencies greater than 100 hertz, achieves displacements of 140% and can lift 22 times its body weight.

“Being powered by combustion allows them to do a lot of things that robots at this scale haven’t been able to do at this point,” Aubin said. “They can navigate really difficult terrains and clear obstacles. It’s an incredible jumper for its size. It’s also really fast on the ground. All of that is due to the force density and the power density of these fuel-driven actuators.”

The actuator design also enables a high degree of control. By essentially turning a knob, the operator can adjust the speed and frequency of sparking, or vary the fuel feed in real time, triggering a dynamic range of responses. A little fuel and some high-frequency sparking makes the robot skitter across the ground. Add a bit more fuel and less sparking and the robot will slow down and hop. Crank the fuel all the way up and give it one good spark and the robot will leap 60 centimeters in the air, roughly 20 times its body length, according to Aubin.
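Viewed as a control interface, this amounts to mapping a desired behaviour onto a spark frequency and a fuel rate. The lookup below is purely illustrative; the values and the function are invented, not the paper's calibrated operating points:

```python
# Hypothetical sketch of the control mapping described above: the operator's
# "knob" selects a spark frequency and fuel rate for each behaviour.
GAITS = {
    # behaviour: (spark_frequency_hz, fuel_rate)  fuel_rate in arbitrary units
    "skitter": (100.0, 0.2),   # high-frequency sparking, a little fuel
    "hop":     (20.0,  0.5),   # less sparking, a bit more fuel
    "leap":    (1.0,   1.0),   # one strong spark, fuel cranked all the way up
}

def set_gait(gait):
    freq_hz, fuel = GAITS[gait]
    print(f"{gait}: spark at {freq_hz:g} Hz, fuel rate {fuel:.1f}")
    return freq_hz, fuel

set_gait("skitter")
set_gait("leap")
```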

“To do all those multi-gait movements is something that you don’t typically see with robots at this scale,” Aubin said. “They’re either crawlers or jumpers, but not both.”

The researchers envision stringing together even more actuators in parallel arrays so they can produce both very fine and very forceful articulations on the macro scale. The team also plans to continue work on creating an untethered version. That goal will require a shift from a gaseous fuel to a liquid fuel that the robot can carry on board, along with smaller electronics.

“Everybody points to these insect-scale robots as being things that could be used for search and rescue, exploration, environmental monitoring, surveillance, navigation in austere environments,” Aubin said. “We think that the performance increases that we’ve given this robot using these fuels bring us closer to reality where that’s actually possible.”

Can Training Make Three Arms Better Than Two Heads for Trimanual Coordination?

by Yanpei Huang, Jonathan Eden, Ekaterina Ivanova, Etienne Burdet in IEEE Open Journal of Engineering in Medicine and Biology

One hour of training is enough for people to carry out a task alone with supernumerary robotic arms as effectively as with a partner, a study finds.

A new study by researchers at Queen Mary University of London, Imperial College London and The University of Melbourne has found that people can learn to use supernumerary robotic arms as effectively as working with a partner in just one hour of training.

The study investigated the potential of supernumerary robotic arms to help people perform tasks that require more than two hands. The idea of human augmentation with additional artificial limbs has long featured in science fiction, such as Doctor Octopus in The Amazing Spider-Man (1963).

“Many tasks in daily life, such as opening a door while carrying a big package, require more than two hands,” said Dr Ekaterina Ivanova, lead author of the study from Queen Mary University of London. “Supernumerary robotic arms have been proposed as a way to allow people to do these tasks more easily, but until now, it was not clear how easy they would be to use.”

Comparison of solo trimanual and dyad operation of a task requiring simultaneous fine control of three virtual hands (VHs).

The study involved 24 participants who were asked to perform a variety of tasks with a supernumerary robotic arm. The participants were either given one hour of training in how to use the arm, or they were asked to work with a partner.

The results showed that the participants who had received training on the supernumerary arm performed the tasks just as well as the participants who were working with a partner. This suggests that supernumerary robotic arms can be a viable alternative to working with a partner, and that people can learn to use them effectively in a relatively short amount of time.

“Our findings are promising for the development of supernumerary robotic arms,” said Dr Ivanova. “They suggest that these arms could be used to help people with a variety of tasks, such as surgery, industrial work, or rehabilitation.”

Tetraflex: A Multigait Soft Robot for Object Transportation in Confined Environments

by P. Wharton et al in IEEE Robotics and Automation Letters

A team at the University of Bristol, based at the Bristol Robotics Laboratory, has built a tetrahedron-shaped robot with flexible piping, known as Tetraflex, that can move through small gaps or over challenging terrain. It can also encapsulate fragile objects such as an egg and transport them safely within its soft body.

The findings show that the Tetraflex robot is capable of locomoting in multiple different ways. This makes the robot potentially useful for mobility in challenging or confined environments, such as navigating rubble to reach survivors of an earthquake, performing oil rig inspections or even exploring other planets.

The object transport capability demonstrated also adds another dimension to potential applications. This could be used to pick up and transport payloads from otherwise inaccessible locations, helping with ecological surveying or in nuclear decommissioning.

Lead author Peter Wharton from Bristol’s School of Engineering Mathematics and Technology explained, “The robot is composed of soft struts connected by rigid nodes. Each strut is formed of an airtight rubber bellow and the length of the strut can be controlled by varying the air pressure within the bellow.

Robot. Credit: Peter Wharton.

“Higher pressures cause the bellow to extend, and lower pressures cause it to contract. By controlling the pressure in each bellow simultaneously we can control the robot shape and size change.

“After this, it was simply a matter of experimenting with different patterns of shape change that would generate useful motions such as rolling or crawling along a surface.”

Their design uses soft struts which can change length freely and independently. By changing the lengths of the struts by the right amount and in the right sequence, the robot can generate multiple gaits (such as rolling or crawling), change its size, and even envelop and transport payloads.
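In software terms, a gait then reduces to a repeating sequence of per-strut pressure setpoints. The sketch below is hypothetical (the pressures, timing and the send_pressures() stub are invented, not the team's control software), but it captures the control pattern described above:

```python
import time

# Hedged sketch of the control pattern: a gait as a repeating sequence of
# per-strut pressure setpoints. All values here are invented for illustration.
NUM_STRUTS = 6                    # a tetrahedron has six edges, i.e. six bellows

ROLL_GAIT = [                     # one row per phase: target pressure (kPa) per strut
    [120, 80, 80, 120, 80, 80],
    [80, 120, 80, 80, 120, 80],
    [80, 80, 120, 80, 80, 120],
]

def send_pressures(pressures):
    """Stub standing in for the real pressure-regulator interface."""
    assert len(pressures) == NUM_STRUTS
    print("set pressures:", pressures)

def run_gait(gait, cycles=2, dwell_s=0.5):
    for _ in range(cycles):
        for phase in gait:
            send_pressures(phase)     # command the bellows into this shape
            time.sleep(dwell_s)       # let the robot settle before the next phase

run_gait(ROLL_GAIT)
```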

Peter said, “I would say these capabilities are a natural consequence of working with such a versatile structure and we hope that other interesting capabilities can be developed in the future.

“The most exciting aspect of this study for me is the versatility of Tetraflex and how we might be able to use these robots to explore challenging terrain and achieve tasks in areas humans cannot access. The multiple gaits available to Tetraflex and object transport capability show this versatility well.”

The team has already enjoyed some success, entering an earlier version of Tetraflex in the RoboSoft 2022 Locomotion Competition in Edinburgh and coming third, demonstrating movement over sand, through small gaps and between obstacles. After exploring some of Tetraflex’s capabilities in locomotion and object transport, they now plan to apply machine learning algorithms that could allow them to explore movement patterns much more thoroughly, as well as optimize their current ones.

He added, “There could be some really creative and effective ways of moving around or interacting with the environment that we haven’t yet discovered.”

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
