RT/ Choosing exoskeleton settings like a radio station

Published in Paradigm · Nov 2, 2023

Robotics biweekly vol.84, 19th October — 2nd November

TL;DR

  • Taking inspiration from music streaming services, a team of engineers has designed the simplest way for users to program their own exoskeleton assistance settings.
  • Robotic prosthetic ankles that are controlled by nerve impulses allow amputees to move more ‘naturally,’ improving their stability, according to a new study.
  • Investigators found that an AI algorithm can detect an abnormal heart rhythm in people not yet showing symptoms. The algorithm, which identified hidden signals in common medical diagnostic testing, may help doctors better prevent strokes and other cardiovascular complications in people with atrial fibrillation — the most common type of heart rhythm disorder.
  • Researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds. The algorithm, tested in real time, achieved a 99% success rate.
  • For the first time, big data and AI are being used to model hidden patterns in nature, not just for one bird species, but for entire ecological communities across continents. And the models follow each species’ full annual life cycle, from breeding to fall migration to nonbreeding grounds, and back north again during spring migration.
  • Can AI get hungry? Develop a taste for certain foods? Not yet, but a team of researchers is developing a novel electronic tongue that mimics how taste influences what we eat based on both needs and wants, providing a possible blueprint for AI that processes information more like a human being.
  • Researchers developed the first AI to date that can intelligently design robots from scratch by compressing billions of years of evolution into mere seconds. It’s not only fast but also runs on a lightweight computer and designs wholly novel structures from scratch — without human-labeled, bias-filled datasets.
  • Robots helped achieve a major breakthrough in our understanding of how insect flight evolved. The study is a result of a six-year long collaboration between roboticists and biophysicists.
  • Hybrid insect computer robots could pioneer a new future for robotics. It involves using electrical stimuli to control an insect’s movement. Now, an international research group has conducted a study on the relationship between electrical stimulation in stick insects’ leg muscles and the resulting torque (the twisting force that causes the leg to move).
  • As the internet quickly fills with viral videos of futuristic robots darting and racing around like the animals they’re built to mimic, researchers say that there’s an element of their movement’s programming that should not be overlooked: rhythm.

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista

Latest News & Research

User preference optimization for control of ankle exoskeletons using sample efficient active learning

by Ung Hee Lee, Varun S. Shetty, Patrick W. Franks, Jie Tan, Georgios Evangelopoulos, Sehoon Ha, Elliott J. Rouse in Science Robotics

Taking inspiration from music streaming services, a team of engineers at the University of Michigan, Google and Georgia Tech has designed the simplest way for users to program their own exoskeleton assistance settings.

Of course, what’s simple for the users is more complex underneath, as a machine learning algorithm repeatedly offers pairs of assistance profiles that are most likely to be comfortable for the wearer. The user then selects one of these two, and the predictor offers another assistance profile that it believes might be better. This approach enables users to set the exoskeleton assistance based on their preferences using a very simple interface, conducive to implementing on a smartwatch or phone.
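As a rough sketch of that loop (not the authors' implementation), the process can be organized as below; the parameter ranges, the linear preference model and the `ask_user` prompt are all illustrative placeholders:

```python
import random

def random_profile() -> dict:
    """Sample a plausible assistance profile within illustrative (not real) hardware limits."""
    return {
        "peak_torque": random.uniform(10, 60),  # N*m
        "peak_time":   random.uniform(40, 60),  # % of stride
        "rise_time":   random.uniform(10, 30),  # % of stride
        "fall_time":   random.uniform(5, 20),   # % of stride
    }

def preference_score(profile: dict, weights: dict) -> float:
    """Stand-in for the learned preference model (a neural network in the paper)."""
    return sum(weights[k] * profile[k] for k in profile)

def update_model(weights: dict, winner: dict, loser: dict, lr: float = 0.01) -> None:
    """Nudge the model so the chosen profile scores higher than the rejected one."""
    for k in weights:
        weights[k] += lr * (winner[k] - loser[k])

def ask_user(a: dict, b: dict) -> dict:
    """Placeholder for the smartwatch/phone prompt; here the choice is random."""
    return random.choice([a, b])

weights = {k: 0.0 for k in random_profile()}
current_best = random_profile()
for _ in range(50):   # the study capped users at 50 choices
    # Offer the candidate the current model believes is most likely to be preferred.
    challenger = max((random_profile() for _ in range(5)),
                     key=lambda p: preference_score(p, weights))
    chosen = ask_user(current_best, challenger)
    rejected = challenger if chosen is current_best else current_best
    update_model(weights, chosen, rejected)
    current_best = chosen
```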

“It’s essentially like Pandora music,” said Elliott Rouse, U-M associate professor of robotics and mechanical engineering and corresponding author of the study. “You give it feedback, a thumbs up or thumbs down, and it curates a radio station based on your feedback. This is a similar idea, but it’s with exoskeleton assistance settings. In both cases, we are creating a model of the user’s preferences and using this model to optimize the user’s experience.”

Setup of a graphical user interface (GUI).

The team tested the approach with 14 participants, each wearing a pair of ankle exoskeletons as they walked at a steady pace of about 2.3 miles per hour. The volunteers could take as much time as they wanted between choices, although they were limited to 50 choices. Most participants were choosing the same assistance profile repeatedly by the 45th decision. After 50 rounds, the experimental team began testing the users to see whether the final assistance profile was truly the best — pairing it against 10 randomly generated (but plausible) profiles. On average, participants chose the settings suggested by the algorithm about nine out of 10 times, which highlights the accuracy of the proposed approach.

“By using clever algorithms and a touch of AI, our system figures out what users want with easy yes-or-no questions,” said Ung Hee Lee, a recent U-M doctoral graduate from mechanical engineering and first author of the study, now at the robotics company Nuro. “I’m excited that this approach will make wearable robots comfortable and easy to use, bringing them closer to becoming a normal part of our day-to-day life.”

The control algorithm manages four exoskeleton settings: how much assistance to give (peak torque), how long to go between peaks (timing), and how the exoskeleton both ramps up and reduces the assistance on either side of each peak. This assistance approach is based on how our calf muscle adds force to propel us forward in each step. Rouse reports that few groups are enabling users to set their own exoskeleton settings.
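To make those four settings concrete, the sketch below turns them into a torque-versus-stride curve; the cosine ramps are an illustrative shaping choice, not the published controller's equations:

```python
import numpy as np

def assistance_profile(peak_torque, peak_time, rise_time, fall_time, n=101):
    """Torque (N*m) at each percent of the stride, shaped by the four settings.

    The curve ramps up over `rise_time` percent of the stride, peaks at
    `peak_time`, and ramps back down over `fall_time`.
    """
    t = np.linspace(0, 100, n)                      # percent of stride
    torque = np.zeros(n)
    start, end = peak_time - rise_time, peak_time + fall_time
    rising = (t >= start) & (t <= peak_time)
    falling = (t > peak_time) & (t <= end)
    torque[rising] = peak_torque * 0.5 * (1 - np.cos(np.pi * (t[rising] - start) / rise_time))
    torque[falling] = peak_torque * 0.5 * (1 + np.cos(np.pi * (t[falling] - peak_time) / fall_time))
    return t, torque

# Example profile: 40 N*m peaking at 50% of the stride (numbers are invented).
t, tau = assistance_profile(peak_torque=40, peak_time=50, rise_time=25, fall_time=10)
```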

“In most cases, controllers are tuned based on biomechanical or physiological results. The researchers are adjusting the settings on their laptops, minimizing the user’s metabolic rate. Right now, that’s the gold standard for exoskeleton assessment and control,” Rouse said.

“I think our field overemphasizes testing with metabolic rate. People are actually very insensitive to changes in their own metabolic rate, so we’re developing exoskeletons to do something that people can’t actually perceive.”

In contrast, user preference approaches not only focus on what users can perceive but also enable them to prioritize qualities that they feel are valuable. The study builds on the team’s previous effort to enable users to apply their own settings to an ankle exoskeleton. In that study, users had a touchscreen grid that put the level of assistance on one axis and the timing of the assistance on another. Users tried different points on the grid until they found one that worked well for them.

Once users had discovered what was comfortable, over the course of a couple of hours, they were then able to find their settings on the grid within a couple of minutes. The new study cuts down that longer period of discovering which settings feel best as well as offering two new parameters: how the assistance ramps up and down.

The data from that earlier study were used to feed the machine learning predictor. An evolutionary algorithm produced variations based on the assistance profiles that those earlier users preferred, and then the predictor — a neural network — ranked those assistance profiles. With each choice the users made, new potential assistance profiles were generated, ranked and presented to the user alongside their previous choice.
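A compressed sketch of that generate-and-rank step; the mutation rule and the toy scoring function below stand in for the paper's evolutionary algorithm and neural-network predictor:

```python
import random

def mutate(profile: dict, scale: float = 0.1) -> dict:
    """Produce a variation of a preferred profile, as an evolutionary algorithm might."""
    return {k: v * (1 + random.uniform(-scale, scale)) for k, v in profile.items()}

def rank(candidates: list, score) -> list:
    """Order candidates by predicted preference (the paper uses a neural network here)."""
    return sorted(candidates, key=score, reverse=True)

preferred = {"peak_torque": 40.0, "peak_time": 50.0, "rise_time": 25.0, "fall_time": 10.0}
candidates = [mutate(preferred) for _ in range(20)]
toy_score = lambda p: -abs(p["peak_torque"] - 45)   # purely illustrative scorer
next_offer = rank(candidates, toy_score)[0]          # shown to the user against their last choice
```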

Neural prosthesis control restores near-normative neuromechanics in standing postural control

by Aaron Fleming, Wentao Liu, He (Helen) Huang in Science Robotics

Robotic prosthetic ankles that are controlled by nerve impulses allow amputees to move more “naturally,” improving their stability, according to a new study from North Carolina State University and the University of North Carolina at Chapel Hill.

“This work focused on ‘postural control,’ which is surprisingly complicated,” says Helen Huang, corresponding author of the study and the Jackson Family Distinguished Professor in the Joint Department of Biomedical Engineering at NC State and UNC.

“Basically, when we are standing still, our bodies are constantly making adjustments in order to keep us stable. For example, if someone bumps into us when we are standing in line, our legs make a wide range of movements that we are not even necessarily aware of in order to keep us upright. We work with people who have lower limb amputations, and they tell us that achieving this sort of stability with prosthetic devices is a significant challenge. And this study demonstrates that robotic prosthetic ankles which are controlled using electromyographic (EMG) signals are exceptionally good at allowing users to achieve this natural stability.” EMG signals are the electrical signals recorded from an individual’s muscles.

Experimental platform illustration.

The new study builds on previous work, which demonstrated that neural control of a powered prosthetic ankle can restore a range of abilities, including standing on challenging surfaces and squatting. For this study, the researchers worked with five people who had amputations below the knee on one leg. Study participants were fitted with a prototype robotic prosthetic ankle that responds to EMG signals that are picked up by sensors on the leg.

“Basically, the sensors are placed over the muscles at the site of the amputation,” says Aaron Fleming, co-author of the study and recent Ph.D. graduate from NC State. “When a study participant thinks about moving the amputated limb, this sends electrical signals through the residual muscle in the lower limb. The sensors pick these signals up through the skin and translate those signals into commands for the prosthetic device.”
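In very rough terms, the pipeline described above turns a raw EMG trace into a torque command for the ankle. The sketch below is a generic envelope-based mapping with invented gains and thresholds, not the study's controller:

```python
import numpy as np

def emg_to_torque(emg: np.ndarray, fs: float = 1000.0,
                  gain: float = 25.0, threshold: float = 0.05) -> np.ndarray:
    """Map a raw EMG trace (volts) to an ankle torque command (N*m).

    Steps: remove offset, rectify, low-pass smooth to get an envelope,
    then scale anything above a small activation threshold.
    """
    centered = emg - np.mean(emg)
    rectified = np.abs(centered)
    window = int(0.1 * fs)                      # 100 ms moving-average envelope
    envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
    activation = np.clip(envelope - threshold, 0, None)
    return gain * activation

# Example: half a second of synthetic "muscle activity" with a burst in the second half.
t = np.linspace(0, 0.5, 500)
fake_emg = 0.2 * np.sin(2 * np.pi * 80 * t) * (t > 0.25)
torque_cmd = emg_to_torque(fake_emg)
```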

The researchers conducted general training for study participants using the prototype device, so that they were somewhat familiar with the technology. Study participants were then tasked with responding to an “expected perturbation,” meaning they had to respond to something that might throw off their balance. In everyday life, this could be something like catching a ball or picking up your groceries. However, in order to replicate the conditions precisely over the course of the study, the researchers developed a mechanical system designed to challenge the stability of participants. Study participants were asked to respond to the expected perturbation under two conditions: using the prosthetic devices they normally used; and using the robotic prosthetic prototype.

“We found that study participants were significantly more stable when using the robotic prototype,” Fleming says. “They were less likely to stumble or fall.”

“Specifically, the robotic prototype allowed study participants to change their postural control strategy,” says Huang. “For people who have their intact lower limb, postural stability starts at the ankle. For people who have lost their lower limb, they normally have to compensate for lacking control of the ankle. We found that using the robotic ankle that responds to EMG signals allows users to return to their instinctive response for maintaining stability.”

In a separate portion of the study, researchers asked study participants to sway back and forth while using their normal prosthetic and while using the prototype robotic prosthetic. Study participants were equipped with sensors designed to measure muscle activity across the entire lower body.

“We found that muscle activity patterns in the lower body were very different when people used the two different prostheses,” Huang says. “Basically, muscle activation patterns when using the prototype prosthetic were very similar to the patterns we see in people who have full use of two intact lower limbs. That tells us that the prototype we developed mimics the body’s behavior closely enough to allow people’s ‘normal’ neural patterns to return. This is important, because it suggests that the technology will be somewhat intuitive for users.

“We think this is a clinically significant finding, because postural stability is an important issue for people who use prosthetic devices. We’re now conducting a larger trial with more people to both demonstrate the effects of the technology and identify which individuals may benefit most.”

Deep Learning of Electrocardiograms in Sinus Rhythm From US Veterans to Predict Atrial Fibrillation

by Neal Yuan, Grant Duffy, Sanket S. Dhruva, Adam Oesterle, Cara N. Pellegrini, John Theurer, Marzieh Vali, Paul A. Heidenreich, Salomeh Keyhani, David Ouyang in JAMA Cardiology

Investigators from the Smidt Heart Institute at Cedars-Sinai found that an artificial intelligence (AI) algorithm can detect an abnormal heart rhythm in people not yet showing symptoms.

The algorithm, which identified hidden signals in common medical diagnostic testing, may help doctors better prevent strokes and other cardiovascular complications in people with atrial fibrillation — the most common type of heart rhythm disorder. Previously developed algorithms have been primarily used in white populations. This algorithm works in diverse settings and patient populations, including U.S. veterans and underserved populations.

“This research allows for better identification of a hidden heart condition and informs the best way to develop algorithms that are equitable and generalizable to all patients,” said David Ouyang, MD, a cardiologist in the Department of Cardiology in the Smidt Heart Institute at Cedars-Sinai, a researcher in the Division of Artificial Intelligence in Medicine, and senior author of the study. Experts estimate that about 1 in 3 people with atrial fibrillation do not know they have the condition.

In atrial fibrillation, the electrical signals in the heart that regulate the pumping of blood from the upper chambers to the lower chambers are chaotic. This can cause blood in the upper chambers to pool and form blood clots that can travel to the brain and trigger an ischemic stroke.

To create the algorithm, investigators programmed an artificial intelligence tool to study patterns found in electrocardiogram readings. An electrocardiogram is a test that monitors electrical signals from the heart. People who undergo this test have electrodes placed on their body that detect the heart’s electrical activity.

The program was trained to analyze electrocardiogram readings taken between Jan. 1, 1987, and Dec. 31, 2022, from patients seen at two Veterans Affairs health networks. The algorithm was trained on almost a million electrocardiograms and accurately predicted which patients would develop atrial fibrillation within 31 days. The AI model was also applied to medical records from patients at Cedars-Sinai, where it similarly — and accurately — predicted cases of atrial fibrillation within 31 days.
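A minimal sketch of the two ingredients described here: labeling each ECG by whether atrial fibrillation was diagnosed within the next 31 days, and scoring a waveform for risk. The shapes and the toy linear scorer are invented stand-ins for the study's deep network:

```python
from datetime import date
import numpy as np

def label_ecg(ecg_date: date, afib_diagnosis_dates: list, horizon_days: int = 31) -> int:
    """1 if the patient had an atrial fibrillation diagnosis within 31 days after this ECG."""
    return int(any(0 <= (d - ecg_date).days <= horizon_days for d in afib_diagnosis_dates))

def predict_risk(waveform: np.ndarray, weights: np.ndarray) -> float:
    """Toy linear scorer over a flattened waveform; the study used a deep neural network."""
    logit = float(waveform.ravel() @ weights)
    return 1.0 / (1.0 + np.exp(-logit))

# Illustrative usage: a fake 12-lead, 10-second recording at 500 Hz.
waveform = np.random.randn(12, 5000)
weights = np.zeros(waveform.size)
risk = predict_risk(waveform, weights)                    # 0.5 with all-zero weights
label = label_ecg(date(2020, 1, 1), [date(2020, 1, 20)])  # -> 1 (diagnosed 19 days later)
```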

“This study of veterans was geographically and ethnically diverse, indicating that the application of this algorithm could benefit the general population in the U.S.,” said Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine in the Department of Medicine and medical director of the Heart Rhythm Center in the Department of Cardiology. “This research exemplifies one of the many ways that investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine are using AI to address preemptive management of complex and challenging cardiac conditions.”

The investigators plan to continue to study the algorithm as part of prospective clinical trials to learn if it helps identify those at risk for heart attack and stroke. They also plan to develop more AI algorithms.

Trusted Operations of a Military Ground Robot in the Face of Man-in-the-Middle Cyber-Attacks Using Deep Learning Convolutional Neural Networks: Real-Time Experimental Outcomes

by Fendy Santoso, Anthony Finn in IEEE Transactions on Dependable and Secure Computing

Australian researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds.

In an experiment using deep learning neural networks to simulate the behaviour of the human brain, artificial intelligence experts from Charles Sturt University and the University of South Australia (UniSA) trained the robot’s operating system to learn the signature of a MitM eavesdropping cyberattack. This is where attackers interrupt an existing conversation or data transfer.

The algorithm, tested in real time on a replica of a United States army combat ground vehicle, was 99% successful in preventing a malicious attack. False positive rates of less than 2% validated the system, demonstrating its effectiveness. UniSA autonomous systems researcher, Professor Anthony Finn, says the proposed algorithm performs better than other recognition techniques used around the world to detect cyberattacks.
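Stripped to its skeleton, the detection idea is to summarize short windows of the robot's network traffic and flag windows whose signature the trained model classifies as an attack. Everything below (the feature set, threshold and function names) is an illustrative stand-in for the paper's convolutional-network detector:

```python
import numpy as np

def featurize(window: list) -> np.ndarray:
    """Summarize a short window of packets (each a dict with 'size' and 'retransmit')."""
    sizes = np.array([p["size"] for p in window], dtype=float)
    retries = sum(p["retransmit"] for p in window)
    return np.array([len(window), sizes.mean(), sizes.std(), retries])

def is_attack(features: np.ndarray, model) -> bool:
    """`model` is any callable returning an attack probability for a feature vector."""
    return model(features) > 0.5

def monitor(stream, model, emergency_stop) -> None:
    """Shut the robot down as soon as a man-in-the-middle signature is detected."""
    for window in stream:
        if is_attack(featurize(window), model):
            emergency_stop()
            break

# Illustrative usage with a dummy model and a two-window traffic stream.
dummy_model = lambda f: 0.9 if f[3] > 5 else 0.1   # "many retransmissions look suspicious"
traffic = [
    [{"size": 100, "retransmit": 0}] * 20,          # benign window
    [{"size": 1400, "retransmit": 1}] * 20,         # 20 retransmissions -> flagged
]
monitor(traffic, dummy_model, emergency_stop=lambda: print("robot halted"))
```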

The GVR-BOT used in the experiment by UniSA and Charles Sturt AI researchers.

Professor Finn and Dr Fendy Santoso from Charles Sturt Artificial Intelligence and Cyber Futures Institute collaborated with the US Army Futures Command to replicate a man-in-the-middle cyberattack on a GVR-BOT ground vehicle and trained its operating system to recognise an attack.

“The robot operating system (ROS) is extremely susceptible to data breaches and electronic hijacking because it is so highly networked,” Prof Finn says.

“The advent of Industry 4.0, marked by the evolution in robotics, automation and the Internet of Things, has demanded that robots work collaboratively, where sensors, actuators and controllers need to communicate and exchange information with one another via cloud services. The downside of this is that it makes them highly vulnerable to cyberattacks.

“The good news, however, is that the speed of computing doubles every couple of years, and it is now possible to develop and implement sophisticated AI algorithms to guard systems against digital attacks.”

Dr Santoso says that despite its tremendous benefits and widespread usage, the robot operating system largely ignores security issues in its coding scheme: its network traffic is unencrypted and its integrity-checking capability is limited.

“Owing to the benefits of deep learning, our intrusion detection framework is robust and highly accurate,” Dr Santoso says. “The system can handle large datasets suitable to safeguard large-scale and real-time data-driven systems such as ROS.”

Prof Finn and Dr Santoso plan to test their intrusion detection algorithm on different robotic platforms, such as drones, whose dynamics are faster and more complex compared to a ground robot.

Deep learning with citizen science data enables estimation of species diversity and composition at continental extents

by Courtney L. Davis, Yiwei Bai, Di Chen, Orin Robinson, Viviana Ruiz‐Gutierrez, Carla P. Gomes, Daniel Fink in Ecology

For the first time, big data and artificial intelligence (AI) are being used to model hidden patterns in nature, not just for one bird species, but for entire ecological communities across continents. And the models follow each species’ full annual life cycle, from breeding to fall migration to nonbreeding grounds, and back north again during spring migration. It begins with the more than 900,000 birders who report their sightings to the Cornell Lab of Ornithology’s eBird program, one of the world’s largest biodiversity science projects. When combined with innovations in technology and artificial intelligence (the same innovations that power self-driving cars and real-time language translation), these sightings are revealing more than ever about patterns of bird biodiversity and the processes that underlie them.
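In spirit, a joint species distribution model shares one set of environmental features across many species and predicts a presence probability for each species on each checklist. The toy numpy version below (random data, a single linear layer) only illustrates that shape of the problem, not the deep model in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: one row per eBird checklist, columns are environmental
# covariates (habitat, elevation, week of year, observer effort, ...).
n_checklists, n_covariates, n_species = 1000, 12, 500
X = rng.normal(size=(n_checklists, n_covariates))

# A joint (multi-species) model emits a presence probability per species per checklist.
W = rng.normal(scale=0.1, size=(n_covariates, n_species))
presence_prob = 1.0 / (1.0 + np.exp(-X @ W))      # shape: (checklists, species)

# Community-level summaries fall out of the joint prediction, e.g. expected richness.
expected_richness = presence_prob.sum(axis=1)
```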

The development and application of this revolutionary computational tool is the result of a collaboration between the Cornell Lab of Ornithology and the Cornell Institute for Computational Sustainability.

“This method uniquely tells us which species occur where, when, with what other species, and under what environmental conditions,” said lead author Courtney Davis, a researcher at the Cornell Lab. “With that type of information, we can identify and prioritize landscapes of high conservation value — vital information in this era of ongoing biodiversity loss.”

Overview of modeling framework used in our study.

“This model is very general and is suitable for various tasks, provided there’s enough data,” said co-author Carla Gomes of the Cornell Institute for Computational Sustainability. “This work on joint bird species distribution modeling is about predicting the presence and absence of species, but we are also developing models to estimate bird abundance — the number of individual birds per species. We’re also aiming to enhance the model by incorporating bird calls alongside visual observations.”

Cross-disciplinary collaborations like this are necessary for the future of biodiversity conservation, according to Daniel Fink, researcher at the Cornell Lab and senior author of the study.

“The task at hand is too big for ecologists to do on their own; we need the expertise of our colleagues in computer science and computational sustainability to develop targeted plans for landscape-scale conservation, restoration, and management around the world.”

An all 2D bio-inspired gustatory circuit for mimicking physiology and psychology of feeding behavior

by Subir Ghosh, Andrew Pannone, Dipanjan Sen, Akshay Wali, Harikrishnan Ravichandran, Saptarshi Das in Nature Communications

Can artificial intelligence (AI) get hungry? Develop a taste for certain foods? Not yet, but a team of Penn State researchers is developing a novel electronic tongue that mimics how taste influences what we eat based on both needs and wants, providing a possible blueprint for AI that processes information more like a human being.

Human behavior is complex, a nebulous compromise and interaction between our physiological needs and psychological urges. While artificial intelligence has made great strides in recent years, AI systems do not incorporate the psychological side of our human intelligence. For example, emotional intelligence is rarely considered as part of AI.

“The main focus of our work was how could we bring the emotional part of intelligence to AI,” said Saptarshi Das, associate professor of engineering science and mechanics at Penn State and corresponding author of the study. “Emotion is a broad field and many researchers study psychology; however, for computer engineers, mathematical models and diverse data sets are essential for design purposes. Human behavior is easy to observe but difficult to measure and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that.”

Das noted that our eating habits are a good example of emotional intelligence and the interaction between the physiological and psychological state of the body. What we eat is heavily influenced by the process of gustation, which refers to how our sense of taste helps us decide what to consume based on flavor preferences. This is different than hunger, the physiological reason for eating.

“If you are someone fortunate to have all possible food choices, you will choose the foods you like most,” Das said. “You are not going to choose something that is very bitter, but likely try for something sweeter, correct?”

Anyone who has felt full after a big lunch and still was tempted by a slice of chocolate cake at an afternoon workplace party knows that a person can eat something they love even when not hungry.

“If you are given food that is sweet, you would eat it in spite of your physiological condition being satisfied, unlike if someone gave you say a hunk of meat,” Das said. “Your psychological condition still wants to be satisfied, so you will have the urge to eat the sweets even when not hungry.”

Biological and bio-inspired gustatory systems for feeding.

While there are still many questions regarding the neuronal circuits and molecular-level mechanisms within the brain that underlie hunger perception and appetite control, Das said, advances such as improved brain imaging have offered more information on how these circuits work in regard to gustation.

Taste receptors on the human tongue convert chemical data into electrical impulses. These impulses are then sent through neurons to the brain’s gustatory cortex, where cortical circuits, an intricate network of neurons in the brain, shape our perception of taste. The researchers have developed a simplified biomimetic version of this process, including an electronic “tongue” and an electronic “gustatory cortex” made with 2D materials, which are materials one to a few atoms thick. The artificial tastebuds comprise tiny, graphene-based electronic sensors called chemitransistors that can detect gas or chemical molecules. The other part of the circuit uses memtransistors, transistors that remember past signals, made with molybdenum disulfide. This allowed the researchers to design an “electronic gustatory cortex” that connects a physiology-driven “hunger neuron,” a psychology-driven “appetite neuron” and a “feeding circuit.”
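A highly simplified sketch of that hunger/appetite/feeding logic; the drives, weights and threshold are invented for illustration and are not the circuit's actual transfer characteristics:

```python
def hunger_drive(blood_sugar: float) -> float:
    """Physiology-driven 'hunger neuron': drive rises as blood sugar falls (both scaled 0..1)."""
    return max(0.0, min(1.0, 1.0 - blood_sugar))

def appetite_drive(sweetness: float, preference: float = 0.8) -> float:
    """Psychology-driven 'appetite neuron': drive rises with how appealing the taste is (0..1)."""
    return max(0.0, min(1.0, sweetness * preference))

def feeding_circuit(blood_sugar: float, sweetness: float, threshold: float = 0.35) -> bool:
    """'Eat' when the combined physiological and psychological drives exceed a threshold."""
    drive = 0.5 * hunger_drive(blood_sugar) + 0.5 * appetite_drive(sweetness)
    return drive > threshold

feeding_circuit(blood_sugar=0.9, sweetness=0.9)   # the chocolate-cake case: full, but still tempted -> True
```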

For instance, when detecting salt, or sodium chloride, the device senses sodium ions, explained Subir Ghosh, a doctoral student in engineering science and mechanics and co-author of the study.

“This means the device can ‘taste’ salt,” Ghosh said.

The properties of the two different 2D materials complement each other in forming the artificial gustatory system.

“We used two separate materials because while graphene is an excellent chemical sensor, it is not great for circuitry and logic, which is needed to mimic the brain circuit,” said Andrew Pannone, graduate research assistant in engineering science and mechanics and co-author of the study. “For that reason, we used molybdenum disulfide, which is also a semiconductor. By combining these nanomaterials, we have taken the strengths from each of them to create the circuit that mimics the gustatory system.”

The process is versatile enough to be applied to all five primary taste profiles: sweet, salty, sour, bitter and umami. Such a robotic gustatory system has promising potential applications, Das said, ranging from AI-curated diets based on emotional intelligence for weight loss to personalized meal offerings in restaurants. The research team’s upcoming objective is to broaden the electronic tongue’s taste range.

“We are trying to make arrays of graphene devices to mimic the 10,000 or so taste receptors we have on our tongue that are each slightly different compared to the others, which enables us to distinguish between subtle differences in tastes,” Das said. “The example I think of is people who train their tongue and become a wine taster. Perhaps in the future we can have an AI system that you can train to be an even better wine taster.”

An additional next step is to make an integrated gustatory chip.

“We want to fabricate both the tongue part and the gustatory circuit in one chip to simplify it further,” Ghosh said. “That will be our primary focus for the near future in our research.”

After that, the researchers said they envision this concept of gustatory emotional intelligence in an AI system translating to other senses, such as visual, audio, tactile and olfactory emotional intelligence to aid development of future advanced AI.

“The circuits we have demonstrated were very simple, and we would like to increase the capacity of this system to explore other tastes,” Pannone said. “But beyond that, we want to introduce other senses and that would require different modalities, and perhaps different materials and/or devices. These simple circuits could be more refined and made to replicate human behavior more closely. Also, as we better understand how our own brain works, that will enable us to make this technology even better.”

Efficient automatic design of robots

by David Matthews, Andrew Spielberg, Daniela Rus, Sam Kriegman, Josh Bongard in Proceedings of the National Academy of Sciences

A team led by Northwestern University researchers has developed the first artificial intelligence (AI) to date that can intelligently design robots from scratch.

To test the new AI, the researchers gave the system a simple prompt: Design a robot that can walk across a flat surface. While it took nature billions of years to evolve the first walking species, the new algorithm compressed evolution to lightning speed — designing a successfully walking robot in mere seconds. But the AI program is not just fast. It also runs on a lightweight personal computer and designs wholly novel structures from scratch. This stands in sharp contrast to other AI systems, which often require energy-hungry supercomputers and colossally large datasets. And even after crunching all that data, those systems are tethered to the constraints of human creativity — only mimicking humans’ past works without an ability to generate new ideas.

“We discovered a very fast AI-driven design algorithm that bypasses the traffic jams of evolution, without falling back on the bias of human designers,” said Northwestern’s Sam Kriegman, who led the work. “We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto! It generated a blueprint for a robot in the blink of an eye that looks nothing like any animal that has ever walked the earth. I call this process ‘instant evolution.’”

Kriegman is an assistant professor of computer science, mechanical engineering and chemical and biological engineering at Northwestern’s McCormick School of Engineering, where he is a member of the Center for Robotics and Biosystems. David Matthews, a scientist in Kriegman’s laboratory, is the paper’s first author. Kriegman and Matthews worked closely with co-authors Andrew Spielberg and Daniela Rus (Massachusetts Institute of Technology) and Josh Bongard (University of Vermont) for several years before their breakthrough discovery.

In early 2020, Kriegman garnered widespread media attention for developing xenobots, the first living robots made entirely from biological cells. Now, Kriegman and his team view their new AI as the next advance in their quest to explore the potential of artificial life. The robot itself is unassuming — small, squishy and misshapen. And, for now, it is made of inorganic materials. But Kriegman says it represents the first step in a new era of AI-designed tools that, like animals, can act directly on the world.

“When people look at this robot, they might see a useless gadget,” Kriegman said. “I see the birth of a brand-new organism.”

Efficient automatic design.

While the AI program can start with any prompt, Kriegman and his team began with a simple request to design a physical machine capable of walking on land. That’s where the researchers’ input ended and the AI took over. The computer started with a block about the size of a bar of soap. It could jiggle but definitely not walk. Knowing that it had not yet achieved its goal, the AI quickly iterated on the design. With each iteration, the AI assessed its design, identified flaws and whittled away at the simulated block to update its structure. Eventually, the simulated robot could bounce in place, then hop forward and then shuffle. Finally, after just nine tries, it generated a robot that could walk half its body length per second — about half the speed of an average human stride. The entire design process — from a shapeless block with zero movement to a full-on walking robot — took just 26 seconds on a laptop.
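The loop itself (start from a block, simulate, assess, modify, repeat) can be sketched generically. The random hill-climbing and toy "simulator" below are placeholders for the paper's far more sophisticated simulation and optimizer:

```python
import random

def simulate_walking_speed(voxels: list) -> float:
    """Stand-in for the physics simulator: rewards designs that are lighter and
    asymmetric front-to-back (purely illustrative, not the paper's objective)."""
    mass = sum(v for row in voxels for v in row)
    front, back = sum(voxels[0]), sum(voxels[-1])
    return (abs(front - back) + 1) / (mass + 1)

def carve(voxels: list) -> list:
    """Propose a design change by removing (or restoring) one voxel of material."""
    new = [row[:] for row in voxels]
    i, j = random.randrange(len(new)), random.randrange(len(new[0]))
    new[i][j] ^= 1
    return new

design = [[1] * 8 for _ in range(8)]        # start from a solid block
score = simulate_walking_speed(design)
for _ in range(9):                          # the article's robot emerged after nine iterations
    candidate = carve(design)
    candidate_score = simulate_walking_speed(candidate)
    if candidate_score > score:             # keep changes that "walk" faster
        design, score = candidate, candidate_score
```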

“Now anyone can watch evolution in action as AI generates better and better robot bodies in real time,” Kriegman said. “Evolving robots previously required weeks of trial and error on a supercomputer, and of course before any animals could run, swim or fly around our world, there were billions upon billions of years of trial and error. This is because evolution has no foresight. It cannot see into the future to know if a specific mutation will be beneficial or catastrophic. We found a way to remove this blindfold, thereby compressing billions of years of evolution into an instant.”

All on its own, AI surprisingly came up with the same solution for walking as nature: Legs. But unlike nature’s decidedly symmetrical designs, AI took a different approach. The resulting robot has three legs, fins along its back, a flat face and is riddled with holes.

“It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman said. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”

To see if the simulated robot could work in real life, Kriegman and his team used the AI-designed robot as a blueprint. First, they 3D printed a mold of the negative space around the robot’s body. Then, they filled the mold with liquid silicone rubber and let it cure for a couple hours. When the team popped the solidified silicone out of the mold, it was squishy and flexible.

Now, it was time to see if the robot’s simulated behavior — walking — was retained in the physical world. The researchers filled the rubber robot body with air, making its three legs expand. When the air deflated from the robot’s body, the legs contracted. By continually pumping air into the robot, it repeatedly expanded then contracted — causing slow but steady locomotion.

While the evolution of legs makes sense, the holes are a curious addition. AI punched holes throughout the robot’s body in seemingly random places. Kriegman hypothesizes that porosity removes weight and adds flexibility, enabling the robot to bend its legs for walking.

“We don’t really know what these holes do, but we know that they are important,” he said. “Because when we take them away, the robot either can’t walk anymore or can’t walk as well.”

Overall, Kriegman is surprised and fascinated by the robot’s design, noting that most human-designed robots either look like humans, dogs or hockey pucks.

“When humans design robots, we tend to design them to look like familiar objects,” Kriegman said. “But AI can create new possibilities and new paths forward that humans have never even considered. It could help us think and dream differently. And this might help us solve some of the most difficult problems we face.”

Although the AI’s first robot can do little more than shuffle forward, Kriegman imagines a world of possibilities for tools designed by the same program. Someday, similar robots might be able to navigate the rubble of a collapsed building, following thermal and vibrational signatures to search for trapped people and animals, or they might traverse sewer systems to diagnose problems, unclog pipes and repair damage. The AI also might be able to design nano-robots that enter the human body and steer through the blood stream to unclog arteries, diagnose illnesses or kill cancer cells.

“The only thing standing in our way of these new tools and therapies is that we have no idea how to design them,” Kriegman said. “Lucky for us, AI has ideas of its own.”

Bridging two insect flight modes in evolution, physiology and robophysics

by Jeff Gau, James Lynch, Brett Aiello, Ethan Wold, Nick Gravish, Simon Sponberg in Nature

Robots built by engineers at the University of California San Diego helped achieve a major breakthrough in understanding how insect flight evolved. The study is a result of a six-year long collaboration between roboticists at UC San Diego and biophysicists at the Georgia Institute of Technology.

The findings focus on how the two different modes of flight evolved in insects. Most insects use their brains to activate their flight muscles each wingstroke, just like we activate the muscles in our legs every stride we take. This is called synchronous flight. But some insects, such as mosquitoes, are able to flap their wings without their nervous system commanding each wingstroke. Instead, the muscles of these animals automatically activate when they are stretched. This is called asynchronous flight. Asynchronous flight is common in some insects across the four major insect groups and lets them flap their wings at great speeds; some mosquitoes, for example, flap their wings more than 800 times a second.

For years, scientists assumed the four groups of insects (bees, flies, beetles and true bugs, or Hemiptera) all evolved asynchronous flight separately. However, a new analysis performed by the Georgia Tech team concludes that asynchronous flight actually evolved together in one common ancestor. Then some groups of insect species reverted to synchronous flight, while others remained asynchronous.

The finding that some insects, such as moths, have evolved from synchronous to asynchronous and then back to synchronous flight led the researchers down a path of investigation that required insect, robot and mathematical experiments. This new evolutionary finding posed two fundamental questions: Do the muscles of moths exhibit signatures of their prior asynchrony? And how can an insect maintain both synchronous and asynchronous properties in its muscles and still be capable of flight?

The ideal specimen to study these questions of synchronous and asynchronous evolution is the Hawkmoth. That’s because moths use synchronous flight, but the evolutionary record tells us they have ancestors with asynchronous flight.

Transitions between synchronous and asynchronous modes in simulation and robotics.

Researchers at Georgia Tech first sought to measure whether signatures of asynchrony can be observed in the Hawkmoth muscle. Through mechanical characterization of the muscle, they discovered that Hawkmoths still retain the physical characteristics of asynchronous flight muscles, even if they are not used.

How can an insect have both synchronous and asynchronous properties and still fly? To answer this question researchers realized that using robots would allow them to perform experiments that could never be done on insects. For example, they would be able to equip the robots with motors that could emulate combinations of asynchronous and synchronous muscles and test what transitions might have occurred during the millions of years of evolution of flight.

The work highlights the potential of robophysics-the practice of using robots to study the physics of living systems, said Nick Gravish, a professor of mechanical and aerospace engineering at the UC San Diego Jacobs School of Engineering and one of the paper’s senior authors.

“We were able to provide an understanding of how the transition between asynchronous and synchronous flight could occur,” Gravish said. “By building a flapping wing robot, we helped provide an answer to an evolutionary question in biology.”

Essentially, if you’re trying to understand how animals, or other things, move through their environment, it is sometimes easier to build a robot that has similar features to these things and moves through the same environment, said James Lynch, who earned his Ph.D. in Gravish’s lab and is one of the lead co-authors of the paper.

“One of the biggest evolutionary findings here is that these transitions are occurring in both directions, and that instead of multiple independent origins of asynchronous muscle, there’s actually only one,” said Brett Aiello, an assistant professor of biology at Seton Hill University and one of the co-first authors. He did the work for his study when he was a postdoctoral researcher in the lab of Georgia Tech professor Simon Sponberg. “From that one independent origin, multiple reversions back to synchrony have occurred.”

Lynch and co-first author Jeff Gau, a Ph.D. student at Georgia Tech, worked together to study moths and take measurements of their muscle activity under flight conditions. They then built a mathematical model of the moth’s wing flapping movements. Lynch took the model back to UC San Diego, where he translated the mathematical model into commands and control algorithms that could be sent to a robot mimicking a moth wing. The robots he built ended up being much bigger than moths and, as a result, easier to observe. That’s because in fluid physics, a very big object moving very slowly through a denser medium (in this case, water) behaves the same way as a very small object moving much faster through a thinner medium (in this case, air).

“We dynamically scaled this robot so that this much larger robot moving much more slowly was representative of a much smaller wing moving much faster,” Lynch said.
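That "dynamic scaling" is essentially Reynolds-number matching: keep Re = v·L/ν the same, and the flow around a big, slow wing in water mimics the flow around a small, fast wing in air. A back-of-the-envelope version with illustrative numbers (not the study's actual wing dimensions):

```python
# Kinematic viscosities (m^2/s); in kinematic terms, air is ~15x more viscous than water.
NU_AIR, NU_WATER = 1.5e-5, 1.0e-6

def reynolds(speed_m_s: float, length_m: float, nu: float) -> float:
    """Reynolds number Re = v * L / nu."""
    return speed_m_s * length_m / nu

def matched_speed(target_re: float, length_m: float, nu: float) -> float:
    """Speed a scaled-up wing must move at to reproduce the target Reynolds number."""
    return target_re * nu / length_m

# A small moth-scale wing beating fast in air ...
re_moth = reynolds(speed_m_s=3.0, length_m=0.05, nu=NU_AIR)          # ~10,000

# ... can be matched by a much larger robot wing moving far more slowly in water.
speed_robot = matched_speed(re_moth, length_m=0.5, nu=NU_WATER)       # ~0.02 m/s
```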

The team made two robots: a large flapper robot modeled after a moth, deployed in water to better understand how the wings worked, and a much smaller flapper robot that operated in air, modeled after Harvard’s RoboBee.

The robot and modeling experiments helped researchers test how an insect could transition from synchronous to asynchronous flight. For example, researchers were able to create a robot with motors that could combine synchronous and asynchronous flight and see if it would actually be able to fly. They found that under the right circumstances, an insect could transition between the two modes gradually and smoothly.

“The robot experiments provided a possible pathway for this evolution and transition,” Gravish said.

Lynch encountered several challenges, including modeling the fluid flow around the robots, and modeling the feedback property of insect muscle when it’s stretched. Lynch was able to solve this by simplifying the model as much as possible while making sure it remained accurate. After several experiments, he also realized he would have to slow down the movements of the bots to keep them stable.

Next steps from the robotics perspective will include working with materials scientists to equip the flappers with muscle-like materials. In addition to helping clarify the evolution and biophysics of insect flight, the work has benefits for robotics. Robots with asynchronous motors can rapidly adapt and respond to the environment, such as during a wind gust or wing collision, Gravish said. The research also could help roboticists design better bots with flapping wings.

“This type of work could help usher in a new era of responsive and adaptive flapping wing systems,” Gravish said.

A hierarchical model for external electrical control of an insect, accounting for inter-individual variation of muscle force properties

by Dai Owaki, Volker Dürr, Josef Schmitz in eLife

Insect cyborgs may sound like science fiction, but it’s a relatively new phenomenon based on using electrical stimuli to control the movement of insects. These hybrid insect computer robots, as they are scientifically called, herald the future of small, highly mobile and efficient devices.

Despite significant progress being made, however, further advances are complicated by the vast differences between different insects’ nervous and muscle systems. In a recent study, an international research group has studied the relationship between electrical stimulation in stick insects’ leg muscles and the resultant torque (the twisting force that makes the leg move).

They focused on three leg muscles that play essential roles in insect movement: one for propulsion, one for joint stiffness, and one for transitioning between standing and swinging the leg. The experiments involved the researchers keeping the body of the stick insects fixed, and electrically stimulating one out of the three leg muscles to produce walking-like movements.

The research was led by Dai Owaki, associate professor at the Department of Robotics at Tohoku University’s Graduate School of Engineering. Experiments were conducted at Bielefeld University, Germany, in a lab run by Professors Volker Dürr and Josef Schmitz.

Experimental setup and joint torque calculation.

“Based on our measurements, we could generate a model that predicted the created torque when different patterns of electrical stimulation were applied to a leg muscle,” points out Owaki. “We also identified a nearly linear relationship between the duration of the electrical stimulation and the torque generated, meaning we could predict how much twisting force we would generate by just looking at the length of the applied electrical pulse.”
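The practical upshot of that near-linear relationship is that a line fitted from a handful of calibration pulses predicts the torque for any new pulse duration; the numbers below are invented for illustration:

```python
import numpy as np

# Illustrative calibration data for one leg muscle of one insect:
# stimulation pulse duration (ms) vs. measured joint torque (arbitrary units).
durations = np.array([20, 40, 60, 80, 100], dtype=float)
torques = np.array([1.1, 2.0, 3.2, 4.1, 5.0])

# The reported relationship is close to linear, so a least-squares line
# fitted from a few measurements predicts torque for new pulse durations.
slope, intercept = np.polyfit(durations, torques, deg=1)

def predicted_torque(duration_ms: float) -> float:
    return slope * duration_ms + intercept

predicted_torque(70)   # ~3.6 with the toy numbers above
```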

Using only a few measurements, Owaki and his collaborators could apply this to each individual insect. As a result of these findings, scientists will be able to refine the motor control of tuned biohybrid robots, making their movements more precise.

While the team knows their insights could lead to adaptable and highly mobile devices with various applications, they still cite some key challenges that need to be addressed. “First, model testing needs to be implemented in free-walking insects, and the electrical stimuli must be refined to mimic natural neuromuscular signals more closely,” adds Owaki.

Locomotion rhythm makes power and speed

by A. Bejan, U. Gunes, H. Almahmoud in Scientific Reports

As the internet quickly fills with viral videos of futuristic robots darting and racing around like the animals they’re built to mimic, Duke researchers say that there’s an element of their movement’s programming that should not be overlooked: rhythm.

When analyzing legs, wings and fins for moving robots or animals in the real world, the mathematics looks fairly straightforward. Limbs with multiple sections of various lengths create different ratios for leverage, bodies with alternate shapes and sizes create drag coefficients and centers of mass, and feet, wings or fins of various shapes and sizes push on the world around them. All of these options create more degrees of freedom in the final design. But until now, say the researchers, nobody was paying much attention to the timing of how they’re all working together.

“Minimizing the amount of work being done by varying the speed over the mover is an idea that’s been around a long time,” said Adrian Bejan, the J.A. Jones Distinguished Professor of Mechanical Engineering at Duke. “But varying the rhythm of that movement — the music of how the pieces move together over time — is a design aspect that has been overlooked, even though it can improve performance.”

To illustrate his point in the paper, Bejan points to natural swimmers such as frogs or humans doing the breaststroke. Their swim gait is characterized by three time intervals: a slow period of reaching forward, a fast period of pushing backward and a static period of coasting. For optimum performance, the lengths of time for those intervals typically go long, fast, long. But in certain situations — outracing or outmaneuvering a predator, for example — the ratios of those periods change drastically.

The universal scaling of steady locomotion in all media on earth.

In the design of robots built to emulate dogs, fish or birds, incorporating different rhythms into their standard cruising movements can make their normal operations more efficient. And those optimal rhythms will, in turn, affect the choices made for all of the other pieces of the overall design.

The work builds on research Bejan published nearly 20 years ago, where he demonstrated that size and speed go hand-in-hand across the entire animal kingdom whether on land, in the air or under water. The physics underlying that work dealt with weight falling forward from a given animal’s height over and over again. In this paper, Bejan shows that his previous work was incomplete, and that all animals, robots and other moving things can further optimize their mechanics by adding an element of rhythm.

“You can — and indeed you should — teach rhythms of movements to competitive swimmers and runners looking for an edge,” Bejan said. “Rhythm increases the number of knobs you can turn when trying to move through the world. It is yet another example of how good design — whether made by humans or through natural evolution — is truly a form of art.”

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
