RT/ Robotic proxy brings remote users to life in real time

Published in Paradigm · May 16, 2023 · 26 min read

Robotics biweekly vol.74, 29th April — 16th May

TL;DR

  • Researchers have developed a robot, called ReMotion, that occupies physical space on a remote user’s behalf, automatically mirroring the user’s movements in real time and conveying key body language that is lost in standard virtual environments.
  • An experiment in which two people play a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.
  • Engineers have discovered a new way to program robots to help people with dementia locate medicine, glasses, phones and other objects they need but have lost.
  • As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.
  • Intrigued by whether many limbs could be helpful for locomotion, a team of physicists, engineers, and mathematicians developed a new theory of multilegged locomotion and created many-legged robotic models, discovering that a robot with redundant legs could move across uneven surfaces without any additional sensing or control technology, just as the theory predicted.
  • Automation uncovers combinations of amino acids that feed two bacterial species and could tell us much more about the 90% of bacteria that humans have hardly studied. An artificial intelligence system enables robots to conduct autonomous scientific experiments — as many as 10,000 per day — potentially driving a drastic leap forward in the pace of discovery in areas from medicine to agriculture to environmental science.
  • Mechanically responsive molecular crystals are extremely useful in soft robotics, which requires a versatile actuation technology. Crystals driven by the photothermal effect are particularly promising for achieving high-speed actuation. However, the response (bending) observed in these crystals is usually small. Now, scientists address this issue by inducing large resonated natural vibrations in anisole crystals with UV light illumination at the natural vibration frequency of the crystal.
  • A lab has developed a deep neural network that improves the accuracy of their unique devices for detecting pathogen biomarkers.
  • Research found ChatGPT correctly answered 46 per cent of questions from a study resource commonly used by physicians when preparing for board certification in ophthalmology. When researchers conducted the same test one month later, ChatGPT scored more than 10 percentage points higher.
  • Researchers recently set out to explore what happens when live fish are placed in the same environment as a robotic fish. Their findings could both inform the development of fish-inspired robots and shed some new light on the behavior of real fish.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment

by Mose Sakashita, Ruidong Zhang, Xiaoyi Li, Hyunju Kim, Michael Russo, Cheng Zhang, Malte F. Jung, François Guimbretière in Association for Computing Machinery CHI Conference on Human Factors in Computing Systems

Cornell University researchers have developed a robot, called ReMotion, that occupies physical space on a remote user’s behalf, automatically mirroring the user’s movements in real time and conveying key body language that is lost in standard virtual environments.

“Pointing gestures, the perception of another’s gaze, intuitively knowing where someone’s attention is — in remote settings, we lose these nonverbal, implicit cues that are very important for carrying out design activities,” said Mose Sakashita, a doctoral student of information science.

Sakashita is the lead author of “ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment,” which he presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hamburg, Germany. “With ReMotion, we show that we can enable rapid, dynamic interactions through the help of a mobile, automated robot.”

Mose Sakashita, a doctoral student in the field of information science, with the ReMotion robot.

The lean, nearly six-foot-tall device is outfitted with a monitor for a head, omnidirectional wheels for feet and game-engine software for brains. It automatically mirrors the remote user’s movements — thanks to another Cornell-made device, NeckFace, which the remote user wears to track head and body movements. The motion data is then streamed to the ReMotion robot in real time.
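The article doesn’t detail the transport layer, but the pipeline it describes (a wearable tracker whose pose stream drives the robot) can be sketched. Here is a minimal, hypothetical Python version; the message format, network endpoint, and function names are all assumptions, not the actual ReMotion/NeckFace implementation:

```python
import json
import socket
import time

ROBOT_ADDR = ("192.168.1.42", 9999)  # assumed network endpoint of the robot

def read_tracker_pose():
    """Stand-in for the NeckFace-style tracker output (values are dummies)."""
    return {"yaw": 12.5, "pitch": -3.0, "x": 0.4, "y": 1.1, "t": time.time()}

def stream_poses(rate_hz=60):
    """Send pose packets at a fixed rate; UDP keeps latency low."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        pose = read_tracker_pose()
        sock.sendto(json.dumps(pose).encode(), ROBOT_ADDR)
        time.sleep(1.0 / rate_hz)

# On the robot side, each packet would be mapped to omnidirectional wheel
# commands and monitor (head) orientation so the device mirrors the user.
```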

Telepresence robots are not new, but remote users generally need to steer them manually, distracting from the task at hand, researchers said. Other options such as virtual reality and mixed reality collaboration can also require an active role from the user and headsets may limit peripheral awareness, researchers added.

In a small study, nearly all participants reported having a better connection with their remote teammates when using ReMotion compared to an existing telerobotic system. Participants also reported significantly higher shared attention among remote collaborators.

In its current form, ReMotion only works with two users in a one-on-one remote environment, and each user must occupy physical spaces of identical size and layout. In future work, ReMotion developers intend to explore asymmetrical scenarios, like a single remote team member collaborating virtually via ReMotion with multiple teammates in a larger room.

With further development, Sakashita says ReMotion could be deployed in virtual collaborative environments as well as in classrooms and other educational settings.

The social consequences of Machine Allocation Behavior: Fairness, interpersonal perceptions and performance

by Houston Claure, Seyun Kim, René F. Kizilcec, Malte Jung in Computers in Human Behavior

A Cornell University-led experiment in which two people play a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.

Most studies on algorithmic fairness focus on the algorithm or the decision itself, but researchers sought to explore the relationships among the people affected by the decisions.

“We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people,” said Malte Jung, associate professor of information science, whose group conducted the study. “We want to understand how that influences the way people perceive one another and behave towards each other. We see more and more evidence that machines mess with the way we interact with each other.”

In an earlier study, researchers had a robot choose which person to give a block to, and studied each individual’s reactions to the machine’s allocation decisions.

“We noticed that every time the robot seemed to prefer one person, the other one got upset,” said Jung. “We wanted to study this further, because we thought that, as machines making decisions becomes more a part of the world — whether it be a robot or an algorithm — how does that make a person feel?”

Using open-source software, Houston Claure — the study’s first author and postdoctoral researcher at Yale University — developed a two-player version of Tetris, in which players manipulate falling geometric blocks in order to stack them without leaving gaps before the blocks pile to the top of the screen. Claure’s version, Co-Tetris, allows two people (one at a time) to work together to complete each round.

An “allocator” — either human or AI, and the players were told which — determines which player takes each turn. Jung and Claure devised their experiment so that players would have either 90% of the turns (the “more” condition), 10% (“less”) or 50% (“equal”).
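As a concrete illustration of these conditions, here is a minimal sketch in Python. The real Co-Tetris allocator is not published in this article; the function below only reproduces the 90/10/50 split:

```python
import random

CONDITIONS = {"more": 0.9, "less": 0.1, "equal": 0.5}

def allocate_turn(condition: str) -> str:
    """Return which player ('A' or 'B') receives the next block."""
    p_a = CONDITIONS[condition]
    return "A" if random.random() < p_a else "B"

turns = [allocate_turn("more") for _ in range(1000)]
print(turns.count("A") / len(turns))  # ~0.9: player A gets about 90% of turns
```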

The researchers found, predictably, that those who received fewer turns were acutely aware that their partner got significantly more. But they were surprised to find that feelings about it were largely the same regardless of whether a human or an AI was doing the allocating.

The effect of these decisions is what the researchers have termed “machine allocation behavior” — similar to the established phenomenon of “resource allocation behavior,” the observable behavior people exhibit based on allocation decisions. Jung said machine allocation behavior is “the concept that there is this unique behavior that results from a machine making a decision about how something gets allocated.”

The researchers also found that fairness didn’t automatically lead to better game play and performance. In fact, equal allocation of turns led, on average, to a worse score than unequal allocation.

“If a strong player receives most of the blocks,” Claure said, “the team is going to do better. And if one person gets 90%, eventually they’ll get better at it than if two average players split the blocks.”

Where is My Phone?: Towards Developing an Episodic Memory Model for Companion Robots to Track Users’ Salient Objects

by Juhi Shah, Ali Ayub, Chrystopher L. Nehaniv, Kerstin Dautenhahn in HRI ’23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction

Engineers at the University of Waterloo have discovered a new way to program robots to help people with dementia locate medicine, glasses, phones and other objects they need but have lost.

And while the initial focus is on assisting a specific group of people, the technology could someday be used by anyone who has searched high and low for something they’ve misplaced.

“The long-term impact of this is really exciting,” said Dr. Ali Ayub, a post-doctoral fellow in electrical and computer engineering. “A user can be involved not just with a companion robot but a personalized companion robot that can give them more independence.”

Ayub and three colleagues were struck by the rapidly rising number of people coping with dementia, a condition that restricts brain function, causing confusion, memory loss and disability. Many of these individuals repeatedly forget the location of everyday objects, which diminishes their quality of life and places additional burdens on caregivers. Engineers believed a companion robot with an episodic memory of its own could be a game-changer in such situations. And they succeeded in using artificial intelligence to create a new kind of artificial memory.

The research team began with a Fetch mobile manipulator robot, which has a camera for perceiving the world around it. Next, using an object-detection algorithm, they programmed the robot to detect, track and keep a memory log of specific objects in its camera view through stored video. With the robot capable of distinguishing one object from another, it can record the time and date objects enter or leave its view.
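The paper’s memory model is more involved, but the core bookkeeping this paragraph describes (log when a tracked object enters or leaves view, then answer “where was it last seen?”) can be sketched. A minimal Python version, with class and field names that are illustrative rather than taken from the paper:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Sighting:
    label: str                      # e.g. "glasses", "phone"
    location: str                   # e.g. room name or map coordinates
    entered: datetime
    left: Optional[datetime] = None

class EpisodicMemory:
    def __init__(self):
        self.log: list[Sighting] = []

    def object_entered(self, label: str, location: str) -> None:
        self.log.append(Sighting(label, location, datetime.now()))

    def object_left(self, label: str) -> None:
        for s in reversed(self.log):        # close the most recent open sighting
            if s.label == label and s.left is None:
                s.left = datetime.now()
                return

    def last_seen(self, label: str):
        """Answer the user query: when and where was this object last observed?"""
        for s in reversed(self.log):
            if s.label == label:
                return s.location, (s.left or s.entered)
        return None

memory = EpisodicMemory()
memory.object_entered("glasses", "coffee table")
memory.object_left("glasses")
print(memory.last_seen("glasses"))
```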

Researchers then developed a graphical interface to enable users to choose objects they want to be tracked and, after typing the objects’ names, search for them on a smartphone app or computer. Once that happens, the robot can indicate when and where it last observed the specific object. Tests have shown the system is highly accurate. And while some individuals with dementia might find the technology daunting, Ayub said caregivers could readily use it. Moving forward, researchers will conduct user studies with people without disabilities, then people with dementia.

Suspicious Minds: the Problem of Trust and Conversational Agents

by Jonas Ivarsson, Oskar Lindwall in Computer Supported Cooperative Work (CSCW)

As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes a long time for people to realize they are interacting with a technical system.

In collaboration with Jonas Ivarsson, Professor of Informatics, he has been exploring how individuals interpret and relate to situations where one of the parties might be an AI agent. Their article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.

Ivarsson provides an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.

Their study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot. The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.

In the case of the would-be fraudster calling the “older man,” the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to identify that we are interacting with a computer.

The researchers propose creating AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency. Communication with others involves not only deception but also relationship-building and joint meaning-making. The uncertainty of whether one is talking to a human or a computer affects this aspect of communication. While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively impacted.

Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations and audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.

Multilegged matter transport: A framework for locomotion on noisy landscapes

by Baxi Chong, Juntao He, Daniel Soto, Tianyu Wang, Daniel Irvine, Grigoriy Blekherman, Daniel I. Goldman in Science

Centipedes are known for their wiggly walk. With tens to hundreds of legs, they can traverse any terrain without stopping.

“When you see a scurrying centipede, you’re basically seeing an animal that inhabits a world that is very different than our world of movement,” said Daniel Goldman, the Dunn Family Professor in the School of Physics. “Our movement is largely dominated by inertia. If I swing my leg, I land on my foot and I move forward. But in the world of centipedes, if they stop wiggling their body parts and limbs, they basically stop moving instantly.”

Intrigued to see if the many limbs could be helpful for locomotion in this world, a team of physicists, engineers, and mathematicians at the Georgia Institute of Technology are using this style of movement to their advantage. They developed a new theory of multilegged locomotion and created many-legged robotic models, discovering the robot with redundant legs could move across uneven surfaces without any additional sensing or control technology as the theory predicted. These robots can move over complex, bumpy terrain — and there is potential to use them for agriculture, space exploration, and even search and rescue.


To understand why a multilegged robot was so successful at locomotion, the researchers drew on mathematician Claude Shannon’s communication theory, which demonstrates how to transmit signals reliably over distance. The theory suggests that one way to ensure a message gets from point A to point B on a noisy line is not to send it as an analog signal, but to break it into discrete digital units and repeat these units with an appropriate code.
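The analogy is easy to make concrete. The toy Python simulation below (my illustration, not from the paper) sends a bit through a noisy channel with and without repetition; majority voting over redundant copies drives the error rate down, which is the role the redundant legs play on rough ground:

```python
import random

def noisy_channel(bit: int, flip_prob: float = 0.2) -> int:
    """Flip the bit with some probability, like a noisy transmission line."""
    return bit ^ 1 if random.random() < flip_prob else bit

def send_with_repetition(bit: int, repeats: int) -> int:
    received = [noisy_channel(bit) for _ in range(repeats)]
    return 1 if sum(received) > repeats / 2 else 0   # majority vote

def error_rate(repeats: int, trials: int = 100_000) -> float:
    return sum(send_with_repetition(1, repeats) != 1 for _ in range(trials)) / trials

for r in (1, 3, 7, 15):
    print(r, error_rate(r))   # the error rate falls rapidly as redundancy grows
```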

“We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” said Baxi Chong, a physics postdoctoral researcher. “So, we started this project to see what would happen if we had more legs on the robot: four, six, eight legs, and even 16 legs.”

A team led by Chong, including School of Mathematics postdoctoral fellow Daniel Irvine and Professor Greg Blekherman, developed a theory proposing that adding leg pairs to the robot increases its ability to move robustly over challenging surfaces — a concept they call spatial redundancy. This redundancy lets the robot’s legs succeed on their own, without sensors to interpret the environment. If one leg falters, the abundance of legs keeps it moving regardless. In effect, the robot becomes a reliable system for transporting itself, and even a load, from A to B across difficult or “noisy” landscapes. The concept is comparable to guaranteeing punctuality on wheeled transport by making the track or rail smooth enough, except that here the environment does not have to be engineered.

“With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing. There are many reasons for such a sensor-free approach: sensors can be expensive and fragile, or the environment can change so fast that there isn’t enough sensor-controller response time.”

To test this, Juntao He, a Ph.D. student in robotics, conducted a series of experiments in which he and Daniel Soto, a master’s student in the George W. Woodruff School of Mechanical Engineering, built terrains to mimic an inconsistent natural environment. He then tested the robot while increasing its number of legs by two each time, starting with six and eventually expanding to 16. As the leg count increased, the robot moved more agilely across the terrain, even without sensors, as the theory predicted. Eventually, they tested the robot outdoors on real terrain, where it was able to traverse a variety of environments.

“It’s truly impressive to witness the multilegged robot’s proficiency in navigating both lab-based terrains and outdoor environments,” Juntao said. “While bipedal and quadrupedal robots heavily rely on sensors to traverse complex terrain, our multilegged robot utilizes leg redundancy and can accomplish similar tasks with open-loop control.”

The researchers are already applying their discoveries to farming. Goldman has co-founded a company that aspires to use these robots to weed farmland where weedkillers are ineffective.

“They’re kind of like a Roomba but outside for complex ground,” Goldman said. “A Roomba works because it has wheels that function well on flat ground. Until the development of our framework, we couldn’t confidently predict locomotor reliability on bumpy, rocky, debris-ridden terrain. We now have the beginnings of such a scheme, which could be used to ensure that our robots traverse a crop field in a certain amount of time.”

The researchers also want to refine the robot. They know why the centipede robot framework is functional, but now they’re determining the optimal number of legs to achieve motion without sensing in a way that is cost-effective yet still retains the benefits.

“In this paper, we asked, ‘How do you predict the minimum number of legs to achieve such tasks?’” Chong said. “Currently we only prove that the minimum number exists, but we don’t know the exact number of legs needed. Further, we need to better understand the tradeoff between energy, speed, power, and robustness in such a complex system.”

BacterAI maps microbial metabolism without prior knowledge

by Adam C. Dama, Kevin S. Kim, Danielle M. Leyva, Annamarie P. Lunkes, Noah S. Schmid, Kenan Jijakli, Paul A. Jensen in Nature Microbiology

An artificial intelligence system enables robots to conduct autonomous scientific experiments — as many as 10,000 per day — potentially driving a drastic leap forward in the pace of discovery in areas from medicine to agriculture to environmental science.

That artificial intelligence platform, dubbed BacterAI, mapped the metabolism of two microbes associated with oral health — with no baseline information to start with. Bacteria consume some combination of the 20 amino acids needed to support life, but each species requires specific nutrients to grow. The University of Michigan (U-M) team wanted to know which amino acids the beneficial microbes in our mouths need, so that their growth can be promoted.

“We know almost nothing about most of the bacteria that influence our health. Understanding how bacteria grow is the first step toward reengineering our microbiome,” said Paul Jensen, U-M assistant professor of biomedical engineering who was at the University of Illinois when the project started.

Media selected by BacterAI are not random.

Figuring out the combination of amino acids that bacteria like is tricky, however. With each of the 20 amino acids either present or absent, there are 2^20 (more than a million) possible combinations. Yet BacterAI was able to discover the amino acid requirements for the growth of both Streptococcus gordonii and Streptococcus sanguinis.

To find the right formula for each species, BacterAI tested hundreds of combinations of amino acids per day, honing its focus and changing combinations each morning based on the previous day’s results. Within nine days, it was producing accurate predictions 90% of the time. Unlike conventional approaches that feed labeled data sets into a machine-learning model, BacterAI creates its own data set through a series of experiments. By analyzing the results of previous trials, it comes up with predictions of what new experiments might give it the most information. As a result, it figured out most of the rules for feeding bacteria with fewer than 4,000 experiments.
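The general pattern here is active learning: train on the results so far, then run the experiments the model is least sure about. The Python sketch below illustrates that loop under stated assumptions (a hypothetical hidden growth rule, a random-forest learner, uncertainty sampling); it is not BacterAI’s actual agent:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_experiment(media):               # stand-in for the robotic assay
    essential = [0, 3, 7]                # hypothetical required amino acids
    return int(all(media[i] for i in essential))

X = rng.integers(0, 2, size=(50, 20))    # day 0: random media (20 binary features)
y = np.array([run_experiment(m) for m in X])

for day in range(9):                     # each "morning", design new media
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.integers(0, 2, size=(5000, 20))
    p = model.predict_proba(candidates)[:, 1]
    batch = candidates[np.argsort(np.abs(p - 0.5))[:100]]  # most uncertain picks
    X = np.vstack([X, batch])
    y = np.concatenate([y, [run_experiment(m) for m in batch]])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
test = rng.integers(0, 2, size=(2000, 20))
truth = np.array([run_experiment(m) for m in test])
print("accuracy:", (model.predict(test) == truth).mean())  # rule learned from ~950 trials
```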

“When a child learns to walk, they don’t just watch adults walk and then say ‘Ok, I got it,’ stand up, and start walking. They fumble around and do some trial and error first,” Jensen said. “We wanted our AI agent to take steps and fall down, to come up with its own ideas and make mistakes. Every day, it gets a little better, a little smarter.”

Little to no research has been conducted on roughly 90% of bacteria, and the amount of time and resources needed to learn even basic scientific information about them using conventional methods is daunting. Automated experimentation can drastically speed up these discoveries. The team ran up to 10,000 experiments in a single day. But the applications go beyond microbiology. Researchers in any field can set up questions as puzzles for AI to solve through this kind of trial and error.

“With the recent explosion of mainstream AI over the last several months, many people are uncertain about what it will bring in the future, both positive and negative,” said Adam Dama, a former engineer in the Jensen Lab and lead author of the study. “But to me, it’s very clear that focused applications of AI like our project will accelerate everyday research.”

Photothermally induced natural vibration for versatile and high-speed actuation of crystals

by Yuki Hagiwara, Shodai Hasebe, Hiroki Fujisawa, Junko Morikawa, Toru Asahi, Hideko Koshima in Nature Communications

Mechanically responsive molecular crystals are extremely useful in soft robotics, which requires a versatile actuation technology. Crystals driven by the photothermal effect are particularly promising for achieving high-speed actuation. However, the response (bending) observed in these crystals is usually small. Now, scientists from Japan address this issue by inducing large resonated natural vibrations in anisole crystals with UV light illumination at the natural vibration frequency of the crystal.

Every material possesses a unique natural vibration frequency, such that when an external periodic force is applied to the material close to this frequency, the vibrations are greatly amplified. In the parlance of physics, this phenomenon is known as “resonance.” Resonance is ubiquitous in daily life and, depending on the context, can be desirable or undesirable. For instance, musical instruments like the guitar rely on resonance for sound amplification. On the other hand, buildings and bridges are more likely to collapse in an earthquake if the ground vibration frequency matches their natural frequency.
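The textbook driven, damped oscillator makes this concrete; the steady-state amplitude below is a standard result, included here only to show why driving at the natural frequency amplifies the response:

```latex
% Steady-state amplitude of a damped oscillator (mass m, natural angular
% frequency \omega_0, damping \gamma) driven by a periodic force F_0\cos(\omega t):
\[
  A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + (\gamma\omega)^2}}
\]
% For small damping, A(\omega) peaks sharply as the drive frequency \omega
% approaches \omega_0: this is resonance.
```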

Interestingly, natural vibration has not received much attention in material actuation, which relies on the action of mechanically responsive crystals. Versatile actuation technologies are highly desirable in the field of soft robotics. Although crystal actuation based on processes like photoisomerisation and phase transitions has been widely studied, these processes lack versatility since they require specific crystals to work. One way to improve versatility is to employ photothermal crystals, which bend due to light-induced heating. While promising for achieving high-speed actuation, the bending angle is usually small (<0.5°), making the actuation inefficient.

Bending behaviour of 1β crystal III by the photothermal effect and the natural vibration (390 Hz) by irradiation with UV light (375 nm, 1456 mW cm⁻²).

Now, a team of scientists from Waseda University and Tokyo Institute of Technology in Japan has managed to overcome this drawback with nothing more than the age-old phenomenon of resonated natural vibration. The team, led by Dr. Hideko Koshima from Waseda University, used 2,4-dinitroanisole β-phase crystals (1β) to demonstrate large-angle, photothermally resonated, high-speed bending induced by pulsed UV irradiation.

“Initially, the goal of this research was to create crystals that bend largely due to the photothermal effect. Therefore, we chose the 2,4-dinitroanisole (1) β-phase crystal (1β), which has a large thermal expansion coefficient,” explains Koshima, speaking of the team’s motivation behind the study. “We serendipitously discovered fast and small natural vibration induced by the photothermal effect. Furthermore, we achieved high-speed and large bending by photothermally resonating the natural vibration.”

In their work, the team first cooled a methanol solution of commercially available anisole 1 to obtain hexagonal, rod-shaped single crystals. They irradiated these with a pulsed UV laser at a wavelength of 375 nm and observed the bending response of the crystal using a digital high-speed microscope. They found that, under UV irradiation, the rod-shaped crystals showed a fast natural vibration at 390 Hz with a large photothermal bending of nearly 1°, larger than the value of 0.2° previously reported in other crystals. Further, the bending angle due to the natural vibration increased to nearly 4° when irradiated with pulsed UV light at 390 Hz (the crystal’s natural frequency). In addition to this large bending, the team observed a high response frequency of 700 Hz along with the highest energy conversion efficiency recorded to date.

These findings were further confirmed through simulations performed by the team. To their excitement, the simulation results showed excellent agreement with experimental data.

“Our findings show that any light-absorbing crystal can exhibit high-speed, versatile actuation through resonated natural vibrations. This can open doors to the applications of photothermal crystals, leading eventually to real-life soft robots with high-speed actuation capability and perhaps a society with humans and robots living in harmony,” concludes Koshima.

Machine learning at the edge for AI-enabled multiplexed pathogen detection

by Vahid Ganjalizadeh, Gopikrishnan G. Meena, Matthew A. Stott, Aaron R. Hawkins, Holger Schmidt in Scientific Reports

Sophisticated systems for the detection of biomarkers — molecules such as DNA or proteins that indicate the presence of a disease — are crucial for real-time diagnostic and disease-monitoring devices.

Holger Schmidt, distinguished professor of electrical and computer engineering at UC Santa Cruz, and his group have long been focused on developing unique, highly sensitive devices called optofluidic chips to detect biomarkers. Schmidt’s graduate student Vahid Ganjalizadeh led an effort to use machine learning to enhance these systems by improving their ability to accurately classify biomarkers. The deep neural network he developed classifies particle signals with 99.8 percent accuracy in real time, on a system that is relatively cheap and portable for point-of-care applications, as shown in a new paper.

When taking biomarker detectors into the field or a point-of-care setting such as a health clinic, the signals received by the sensors may not be as high quality as those in a lab or a controlled environment. This may be due to a variety of factors, such as the need to use cheaper chips to bring down costs, or environmental characteristics such as temperature and humidity.

Deep Neural Network and dataset.

To address the challenges of a weak signal, Schmidt and his team developed a deep neural network that can identify the source of that weak signal with high confidence. The researchers trained the neural network with known training signals, teaching it to recognize potential variations it could see, so that it can recognize patterns and identify new signals with very high accuracy. First, a parallel cluster wavelet analysis (PCWA) approach designed in Schmidt’s lab detects that a signal is present. Then, the neural network processes the potentially weak or noisy signal, identifying its source. This system works in real time, so users are able to receive results in a fraction of a second.
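In outline, the pipeline is detect-then-classify. The sketch below is a generic stand-in for that data flow (a matched-filter detector in place of the lab’s PCWA, and a placeholder classifier); `raw_trace`, the pulse template, and the trained model are assumed to exist:

```python
import numpy as np

def detect_events(signal: np.ndarray, template: np.ndarray, threshold: float = 4.0):
    """Stage 1: correlate the trace with an expected pulse shape and threshold."""
    score = np.correlate(signal, template, mode="same")
    score = (score - score.mean()) / score.std()
    return np.where(score > threshold)[0]           # candidate event centers

def classify_segment(segment: np.ndarray, model):
    """Stage 2: hand the raw (possibly noisy) segment to the classifier."""
    return model.predict(segment[None, :])          # e.g. a small neural network

# Usage sketch (raw_trace, gaussian_pulse, and nn are assumed inputs):
# events = detect_events(raw_trace, gaussian_pulse)
# labels = [classify_segment(raw_trace[i - 64:i + 64], nn) for i in events]
```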

“It’s all about making the most of possibly low quality signals, and doing that really fast and efficiently,” Schmidt said.

A smaller version of the neural network model can run on portable devices. In the paper, the researchers ran the system on a Google Coral Dev Board, a relatively cheap edge device for accelerated execution of artificial intelligence algorithms. This means the system also requires less power for processing than other techniques.
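For context, the usual route to running a small network on a Coral board is TensorFlow Lite conversion with full-integer quantization, followed by Edge TPU compilation. The sketch below shows that standard path, not necessarily the authors’ exact toolchain; `model` and `calibration_batches` are assumed to exist:

```python
import tensorflow as tf

def representative_data():
    for batch in calibration_batches:       # sample inputs used for calibration
        yield [batch]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("classifier.tflite", "wb") as f:
    f.write(converter.convert())

# Afterwards: `edgetpu_compiler classifier.tflite` produces a model the
# board's Edge TPU executes locally, with no cloud round-trip.
```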

“Unlike some research that requires running on supercomputers to do high-accuracy detection, we proved that even a compact, portable, relatively cheap device can do the job for us,” Ganjalizadeh said. “It makes it available, feasible, and portable for point-of-care applications.”

The entire system is designed to be used completely locally, meaning the data processing can happen without internet access, unlike other systems that rely on cloud computing. This also provides a data security advantage, because results can be produced without the need to share data with a cloud server provider. It is also designed to be able to give results on a mobile device, eliminating the need to bring a laptop into the field.

“You can build a more robust system that you could take out to under-resourced or less-developed regions, and it still works,” Schmidt said.

This improved system will work for any other biomarkers Schmidt’s lab’s systems have been used to detect in the past, such as COVID-19, Ebola, flu, and cancer biomarkers. Although they are currently focused on medical applications, the system could potentially be adapted for the detection of any type of signal.

To push the technology further, Schmidt and his lab members plan to add even more dynamic signal processing capabilities to their devices. This will simplify the system and combine the processing techniques needed to detect signals at both low and high concentrations of molecules. The team is also working to bring discrete parts of the setup into the integrated design of the optofluidic chip.

Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment

by Andrew Mihalache, Marko M. Popovic, Rajeev H. Muni in JAMA Ophthalmology

A study of ChatGPT found the artificial intelligence tool correctly answered fewer than half of the questions from a study resource commonly used by physicians when preparing for board certification in ophthalmology.

The study, led by St. Michael’s Hospital, a site of Unity Health Toronto, found ChatGPT correctly answered 46 per cent of questions when the test was first conducted in Jan. 2023. When researchers conducted the same test one month later, ChatGPT scored more than 10 percentage points higher. The potential of AI in medicine and exam preparation has garnered excitement since ChatGPT became publicly available in Nov. 2022. It has also raised concerns about the potential for incorrect information and cheating in academia. ChatGPT is free, available to anyone with an internet connection, and works in a conversational manner.

“ChatGPT may have an increasing role in medical education and clinical practice over time, however it is important to stress the responsible use of such AI systems,” said Dr. Rajeev H. Muni, principal investigator of the study and a researcher at the Li Ka Shing Knowledge Institute at St. Michael’s. “ChatGPT as used in this investigation did not answer sufficient multiple choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.”

Researchers used a dataset of practice multiple choice questions from the free trial of OphthoQuestions, a common resource for board certification exam preparation. To ensure ChatGPT’s responses were not influenced by concurrent conversations, entries or conversations with ChatGPT were cleared prior to inputting each question and a new ChatGPT account was used. Questions that used images and videos were not included because ChatGPT only accepts text input. Of 125 text-based multiple-choice questions, ChatGPT answered 58 (46 per cent) questions correctly when the study was first conducted in Jan. 2023. Researchers repeated the analysis on ChatGPT in Feb. 2023, and the performance improved to 58 per cent.
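The study itself was run through the ChatGPT web interface, with conversations cleared and a fresh account used so that no context carried over between questions. As a rough programmatic analogue only, the sketch below scripts the same one-question-per-conversation protocol against the chat API (openai-python < 1.0 style, as available in early 2023); the model name and prompt format are assumptions:

```python
import openai

def ask(question: str, choices: list[str]) -> str:
    prompt = question + "\n" + "\n".join(choices) + "\nAnswer with a single letter."
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],  # fresh conversation per question
    )
    return resp.choices[0].message.content.strip()[0]

# exam = [(question_text, choices, answer_key), ...]   # 125 text-only items
# score = sum(ask(q, c) == key for q, c, key in exam) / len(exam)
# 58 of 125 correct reproduces the reported 46 per cent (58 / 125 ≈ 0.464).
```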

“ChatGPT is an artificial intelligence system that has tremendous promise in medical education. Though it provided incorrect answers to board certification questions in ophthalmology about half the time, we anticipate that ChatGPT’s body of knowledge will rapidly evolve,” said Dr. Marko Popovic, a co-author of the study and a resident physician in the Department of Ophthalmology and Vision Sciences at the University of Toronto.

ChatGPT closely matched how trainees answer questions, and selected the same multiple-choice response as the most common answer provided by ophthalmology trainees 44 per cent of the time. ChatGPT selected the multiple-choice response that was least popular among ophthalmology trainees 11 per cent of the time, second least popular 18 per cent of the time, and second most popular 22 per cent of the time.

“ChatGPT performed most accurately on general medicine questions, answering 79 per cent of them correctly. On the other hand, its accuracy was considerably lower on questions for ophthalmology subspecialties. For instance, the chatbot answered 20 per cent of questions correctly on oculoplastics and zero per cent correctly from the subspecialty of retina. The accuracy of ChatGPT will likely improve most in niche subspecialties in the future,” said Andrew Mihalache, lead author of the study and undergraduate student at Western University.

Proactivity of fish and leadership of self-propelled robotic fish during interaction

by Ziye Zhou et al. in Bioinspiration & Biomimetics

In recent decades, engineers have created a wide range of robotic systems inspired by animals, including four-legged robots, as well as systems inspired by snakes, insects, squid and fish. Studies exploring the interactions between these robots and their biological counterparts, however, are still relatively rare. Researchers at Peking University and China Agricultural University recently set out to explore what happens when live fish are placed in the same environment as a robotic fish. Their findings could both inform the development of fish-inspired robots and shed some new light on the behavior of real fish.

“Our research team has been focusing on the development of self-propelled robotic fish for a considerable amount of time,” Dr. Junzhi Yu, one of the researchers who carried out the study, told Tech Xplore. “During our field experiments, we observed an exciting phenomenon where live fish were observed following the swimming robotic fish. We are eager to further explore the underlying principles behind this phenomenon and gain a deeper understanding of this ‘fish following’ behavior.”

The key objective of the recent work by Dr. Yu and his colleagues was to gather new insight about how fish interact with the fish-like robots developed by their lab, as this could benefit both the robotics and biology research communities. The robotic fish used in their experiments was carefully designed to replicate the appearance, body shape, and movements of koi fish, large and colorful freshwater fish originating from Eastern Asia.

Relationship between the attraction ratio and the Strouhal number (Sr) of the robotic fish during interaction. When Sr is less than 1.95, the attraction ratio is negatively correlated with Sr (Spearman rank correlation ρ = −0.266, p = 0.033); when Sr exceeds 1.95, the attraction ratio jumps to a higher level of 0.4 to 0.6.

“Using the central pattern generator (CPG) model, we have developed a control system that generates rhythm signals for the oscillations of our system’s two concatenated joints,” Dr. Yu explained. “These signals drive the flexible caudal fin to swing and produce an anti-Kármán vortex street, which enables our robotic fish to achieve body-caudal fin (BCF) motion similar to that of koi fish. This design allows our robotic fish to swim efficiently under its own propulsion, making it an ideal tool for studying fish behavior.”
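A CPG’s steady-state output for two joints can be sketched as a pair of phase-lagged oscillations. The Python snippet below is only an illustration of that idea; the frequency, amplitudes, and lag are chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def cpg_joint_angles(t: float, freq: float = 2.0,
                     amp=(0.35, 0.5), lag: float = np.pi / 3):
    """Return (joint1, joint2) angles in radians at time t (seconds)."""
    phase = 2 * np.pi * freq * t
    return amp[0] * np.sin(phase), amp[1] * np.sin(phase - lag)

ts = np.linspace(0.0, 1.0, 200)
angles = np.array([cpg_joint_angles(t) for t in ts])
# The phase-lagged pair makes an undulation travel toward the caudal fin,
# shedding the reverse (anti-Kármán) vortex street that generates thrust.
```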

In their experiments, Dr. Yu and his colleagues placed one or two prototypes of their koi-like robot in the same tank as one or more live fish. They then observed how the fish behaved in the presence of the robot and assessed whether their behavior varied based on how many other live fish were present in the tank with them.

“The most notable achievement of our study is the analysis of experiments on quantity variation and parameter variation,” Dr. Yu said. “Through extensive experimentation, we discovered that live fish exhibit significantly lower proactivity when alone, and the most proactive case is one where a robotic fish is interacting with two real fish. In addition, our experiments on parameter variation indicated that live fish may respond more proactively to robotic fish that swim with high frequency and low amplitude, but they may also move together with the robotic fish at high frequency and high amplitude.”

The researchers’ observations shed an interesting new light on the collective behavior of fish, which could potentially guide the design of additional fish-like robots. Their work could also inspire other teams to explore the interactions between live animals and their robotic counterparts. This could in turn help to better understand the social behavior of these animals and how they would respond to robots if they were eventually introduced in their environment.

“One promising direction for the further development of robotic fish is the use of flexible materials such as dielectric elastomer to create silent and vibration-free propulsion technology,” Dr. Yu added. “This will enable us to achieve a higher level of bionic interaction between robotic fish and real fish, opening new possibilities for studying aquatic environments and marine life. With continued research and development in this area, we hope to develop some commercial products for interactive demonstrations.”

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
