RT/ Why insects navigate more efficiently than robots

Paradigm
Published in Paradigm
28 min read · Feb 27, 2024

Robotics & AI biweekly vol.90, 9th February — 27th February

TL;DR

  • Engineers study insect navigation to develop energy-efficient robots, leveraging nature-inspired strategies.
  • Scientists introduce ‘simultaneous and heterogeneous multithreading’ (SHMT) to double computer processing speeds by utilizing existing hardware for AI, machine learning, and digital signal processing.
  • Research shows that a ‘swarm’ of over 100 autonomous ground and aerial robots can be supervised effectively by one person without causing undue workload.
  • Robotic sensor with AI reads braille at speeds twice that of human readers, showcasing advancements in tactile communication technology.
  • Physicists create a neural network with active colloidal particles instead of electricity, offering a physical system for artificial intelligence and time series prediction.
  • Inchworm-inspired soft robot developed for transporting loads exceeding 100g at a speed of 9 mm per second, demonstrating precise object placement capabilities.
  • Deep learning-based model enables humanoid robots to sketch pictures, showcasing the potential for robots to actively participate in creative processes.
  • Engineers devise an external method to measure soft robot interaction with the environment, eliminating the need for built-in sensors.
  • Researchers introduce a light-responsive soft material called a liquid crystal elastomer (LCE), opening possibilities for shape-changing “soft machines” in fields like robotics and medicine.
  • Washington State University develops insect-like micro-robots, the smallest, lightest, and fastest fully functional micro-robots known to date.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
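
As a quick back-of-the-envelope illustration of what that forecast implies (the 2018 base value below is back-calculated from the stated CAGR and the 2025 figure, not taken from the Statista chart), the compounding arithmetic looks like this:

```python
# Rough compounding check (illustrative only): assuming ~26% annual growth
# through 2025 and a 2025 market of just under 210 billion USD, back out the
# implied base-year value. The 2018 figure below is derived, not from the source.
cagr = 0.26
market_2025 = 210.0          # billion USD, from the forecast above
years = 2025 - 2018          # compounding periods covered by the chart
implied_2018 = market_2025 / (1 + cagr) ** years
print(f"Implied 2018 market size: {implied_2018:.1f} billion USD")  # ~41.7
```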

Latest News & Research

Finding the gap: neuromorphic motion-vision in dense environments

by Thorben Schoepe, Ella Janotte, Moritz B. Milde, Olivier J. N. Bertrand, Martin Egelhaaf, Elisabetta Chicca in Nature Communications

With a brain the size of a pinhead, insects perform fantastic navigational feats. They avoid obstacles and move through small openings. How do they do this, with their limited brain power? Understanding the inner workings of an insect’s brain can help us in our search towards energy-efficient computing, physicist Elisabetta Chicca of the University of Groningen demonstrates with her most recent result: a robot that acts like an insect.

It’s not easy to make use of the images that come in through your eyes when deciding what your feet or wings should do. A key aspect here is the apparent motion of things as you move.

‘Like when you’re on a train,’ Chicca explains. ‘The trees nearby appear to move faster than the houses far away.’

Insects use this information to infer how far away things are. This works well when moving in a straight line, but reality is not that simple. Moving in curves makes the problem too complex for insects. To keep things manageable for their limited brainpower, they adjust their behaviour: they fly in a straight line, make a turn, and then make another straight line.

Chicca explains: ‘What we learn from this is: if you don’t have enough resources, you can simplify the problem with your behaviour.’

Working principle of the obstacle avoidance network is demonstrated in an example run in a cluttered environment.

In search of the neural mechanism that drives insect behaviour, PhD student Thorben Schoepe developed a model of its neuronal activity and a small robot that uses this model to navigate. All this was done under Chicca’s supervision, and in close collaboration with neurobiologist Martin Egelhaaf of Bielefeld University, who helped to identify the insects’ computational principles.

Schoepe’s model is based on one main principle: always steer towards the area with the least apparent motion. He had his robot drive through a long ‘corridor’, consisting of two walls with a random print on them, and the robot kept to the middle of the corridor, as insects tend to do. In other (virtual) environments, such as a space with obstacles or small openings, Schoepe’s model also showed behaviour similar to that of insects.
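
For readers who want the gist of that steering rule in code, here is a toy, frame-based sketch of ‘steer toward the region of least apparent motion’. It only illustrates the stated principle, not the neuromorphic spiking network the authors built; the flow field, bin count and smoothing are assumptions for the example.

```python
import numpy as np

def steer_from_optic_flow(flow_magnitude, azimuths):
    """Toy version of the rule: head toward the direction of least apparent motion.

    flow_magnitude: 1-D array of apparent-motion magnitudes, one per azimuth bin.
    azimuths:       corresponding viewing directions in radians (0 = straight ahead).
    """
    # Smooth over neighbouring bins so a single noisy estimate does not dominate.
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(flow_magnitude, kernel, mode="same")
    # The bin with the least apparent motion is likely the farthest-away region / gap.
    return azimuths[int(np.argmin(smoothed))]

# Toy usage: uniform clutter everywhere, with an opening slightly to the right.
azimuths = np.linspace(-np.pi / 2, np.pi / 2, 91)
flow = 2.0 + np.cos(azimuths)       # baseline apparent motion across the visual field
flow[55:65] *= 0.2                  # a gap to the right produces little apparent motion
print(f"steer toward {steer_from_optic_flow(flow, azimuths):.2f} rad")
```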

‘The model is so good’, Chicca concludes, ‘that once you set it up, it will perform in all kinds of environments. That’s the beauty of this result.’

The fact that a robot can navigate in a realistic environment is not new. Rather, the model gives insight into how insects do the job, and how they manage to do things so efficiently.

sEMD response, Drosophila T4/T5 neuron response and robotic agent.

Chicca explains: ‘Much of Robotics is not concerned with efficiency. We humans tend to learn new tasks as we grow up, and within Robotics this is reflected in the current trend of machine learning. But insects are able to fly immediately from birth. An efficient way of doing that is hardwired in their brains.’ In a similar way, you could make computers more efficient. Chicca shows a chip that her research group has previously developed: a strip with a surface area smaller than a key on your keyboard. In the future, she hopes to incorporate this specific insect behaviour in a chip as well.

She comments: ‘Instead of using a general-purpose computer with all its possibilities, you can build specific hardware; a tiny chip that does the job, keeping things much smaller and energy-efficient.’

Simultaneous and Heterogeneous Multithreading

by Kuan-Chieh Hsu, Hung-Wei Tseng in 56th Annual IEEE/ACM International Symposium on Microarchitecture

Imagine doubling the processing power of your smartphone, tablet, personal computer, or server using the existing hardware already in these devices.

Hung-Wei Tseng, a UC Riverside associate professor of electrical and computer engineering, has laid out a paradigm shift in computer architecture to do just that in a recent paper. Tseng explained that today’s computer devices increasingly have graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units as essential components. These components process information separately, moving information from one processing unit to the next, which in effect creates a bottleneck.

In their paper, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce what they call “simultaneous and heterogeneous multithreading” or SHMT. They describe their development of a proposed SHMT framework on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator. The system achieved a 1.96 times speedup and a 51% reduction in energy consumption.
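
To make the contrast with conventional, sequential hand-offs between units concrete, here is a deliberately simplified, hypothetical sketch of the SHMT idea: partition the workload and let every processing unit run its share at the same time. The run_on_* functions are placeholders standing in for the CPU, GPU, and accelerator back ends, not the authors’ framework or any real accelerator API.

```python
# Hypothetical illustration of simultaneous heterogeneous execution (not SHMT itself):
# instead of passing data CPU -> GPU -> accelerator in sequence, split the work into
# chunks and launch all of them concurrently on different units.
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(chunk):        # stand-in for work scheduled on the ARM cores
    return [x * 2 for x in chunk]

def run_on_gpu(chunk):        # stand-in for the same kernel offloaded to the GPU
    return [x * 2 for x in chunk]

def run_on_accelerator(chunk):  # stand-in for the TPU-style hardware accelerator
    return [x * 2 for x in chunk]

def shmt_style_dispatch(data):
    # Partition the workload and run every chunk on a different unit at the same time.
    third = len(data) // 3
    chunks = [data[:third], data[third:2 * third], data[2 * third:]]
    backends = [run_on_cpu, run_on_gpu, run_on_accelerator]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = pool.map(lambda pair: pair[0](pair[1]), zip(backends, chunks))
    return [y for part in results for y in part]

print(shmt_style_dispatch(list(range(12))))
```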


“You don’t have to add new processors because you already have them,” Tseng said.

The implications are huge. Simultaneous use of existing processing components could reduce computer hardware costs while also reducing carbon emissions from the energy produced to keep servers running in warehouse-size data processing centers. It also could reduce the need for scarce freshwater used to keep servers cool. Tseng’s paper, however, cautions that further investigation is needed to answer several questions about system implementation, hardware support, code optimization, and what kind of applications stand to benefit the most, among other issues.

Can A Single Human Supervise A Swarm of 100 Heterogeneous Robots?

by Julie Adams, Joshua Hamell, Phillip Walker in Field Robotics

Research involving Oregon State University has shown that a “swarm” of more than 100 autonomous ground and aerial robots can be supervised by one person without subjecting the individual to an undue workload.

The findings represent a big step toward efficiently and economically using swarms in a range of roles from wildland firefighting to package delivery to disaster response in urban environments.

“We don’t see a lot of delivery drones yet in the United States, but there are companies that have been deploying them in other countries,” said Julie A. Adams of the OSU College of Engineering. “It makes business sense to deploy delivery drones at a scale, but it will require a single person be responsible for very large numbers of these drones. I’m not saying our work is a final solution that shows everything is OK, but it is the first step toward getting additional data that would facilitate that kind of a system.”

The results stem from the Defense Advanced Research Projects Agency’s program known as OFFSET, short for Offensive Swarm-Enabled Tactics. Adams was part of a group that received an OFFSET grant in 2017. Over the course of the four-year project, researchers deployed swarms of up to 250 autonomous vehicles (multi-rotor aerial drones and ground rovers) able to gather information in “concrete canyon” urban surroundings where line-of-sight, satellite-based communication is impaired by buildings.

Drone in hand, photo by Karl Maasdam

The information the swarms collect during their missions at military urban training sites has the potential to help keep U.S. troops and civilians safer. Adams was a co-principal investigator on one of two swarm system integrator teams that developed the system infrastructure and integrated the work of other teams focused on swarm tactics, swarm autonomy, human-swarm teaming, physical experimentation and virtual environments.

“The project required taking off-the-shelf technologies and building the autonomy needed for them to be deployed by a single human called the swarm commander,” said Adams, the associate director for deployed systems and policy at OSU’s Collaborative Robotics and Intelligent Systems Institute. “That work also required developing not just the needed systems and the software, but also the user interface for that swarm commander to allow a single human to deploy these ground and aerial systems.”

Collaborators with Smart Information Flow Technologies developed a virtual reality interface called I3 that lets the commander control the swarm with high-level directions.

“The commanders weren’t physically driving each individual vehicle, because if you’re deploying that many vehicles, they can’t — a single human can’t do that,” Adams said. “The idea is that the swarm commander can select a play to be executed and can make minor adjustments to it, like a quarterback would in the NFL. The objective data from the trained swarm commanders demonstrated that a single human can deploy these systems in built environments, which has very broad implications beyond this project.”

Testing took place at multiple Department of Defense Combined Armed Collective Training Facilities. Each multiday field exercise introduced additional vehicles, and every 10 minutes swarm commanders provided information about their workload and how stressed or fatigued they were.

During the final field exercise, featuring more than 100 vehicles, the commanders’ workload levels were also assessed through physiological sensors that fed information into an algorithm that estimates someone’s sensory channel workload levels and their overall workload.

“The swarm commanders’ workload estimate did cross the overload threshold frequently, but just for a few minutes at a time, and the commander was able to successfully complete the missions, often under challenging temperature and wind conditions,” Adams said.

High-Speed Tactile Braille Reading via Biomimetic Sliding Interactions

by Parth Potdar, David Hardman, Elijah Almanzor, Fumiya Iida in IEEE Robotics and Automation Letters

Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers.

The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy. Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips.

Human fingertips are remarkably sensitive and help us gather information about the world around us. Our fingertips can detect tiny changes in the texture of a material or help us know how much force to use when grasping an object: for example, picking up an egg without breaking it or a bowling ball without dropping it. Reproducing that level of sensitivity in a robotic hand, in an energy-efficient way, is a big engineering challenge. In Professor Fumiya Iida’s lab in Cambridge’s Department of Engineering, researchers are developing solutions to this and other skills that humans find easy, but robots find difficult.

“The softness of human fingertips is one of the reasons we’re able to grip things with the right amount of pressure,” said Parth Potdar from Cambridge’s Department of Engineering and an undergraduate at Pembroke College, the paper’s first author. “For robotics, softness is a useful characteristic, but you also need lots of sensor information, and it’s tricky to have both at once, especially when dealing with flexible or deformable surfaces.”

Braille is an ideal test for a robot ‘fingertip’ as reading it requires high sensitivity, since the dots in each representative letter pattern are so close together. The researchers used an off-the-shelf sensor to develop a robotic braille reader that more accurately replicates human reading behaviour.

“There are existing robotic braille readers, but they only read one letter at a time, which is not how humans read,” said co-author David Hardman, also from the Department of Engineering. “Existing robotic braille readers work in a static way: they touch one letter pattern, read it, pull up from the surface, move over, lower onto the next letter pattern, and so on. We want something that’s more realistic and far more efficient.”

The robotic sensor the researchers used has a camera in its ‘fingertip’, and reads by using a combination of the information from the camera and the sensors.

“This is a hard problem for roboticists as there’s a lot of image processing that needs to be done to remove motion blur, which is time and energy-consuming,” said Potdar.

The team developed machine learning algorithms so the robotic reader would be able to ‘deblur’ the images before the sensor attempted to recognise the letters. They trained the algorithm on a set of sharp images of braille with fake blur applied. After the algorithm had learned to deblur the letters, they used a computer vision model to detect and classify each character. Once the algorithms were incorporated, the researchers tested their reader by sliding it quickly along rows of braille characters. The robotic braille reader could read at 315 words per minute at 87% accuracy, which is twice as fast and about as accurate as a human Braille reader.
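
Conceptually, the reading pipeline has two learned stages: deblur the frames captured while sliding, then detect and classify the characters. The sketch below is a schematic reconstruction of that flow, not the authors’ code; `deblur_net` and `char_detector` are hypothetical pre-trained stand-ins, and the synthetic horizontal blur mimics the kind of fake blur described above.

```python
import numpy as np

def read_braille_frame(frame, deblur_net, char_detector):
    """frame: HxW grayscale image captured while sliding along a braille line."""
    sharp = deblur_net(frame)                 # remove motion blur (learned on synthetic blur)
    detections = char_detector(sharp)         # [(bounding_box, letter), ...]
    detections.sort(key=lambda d: d[0][0])    # order characters left to right
    return "".join(letter for _box, letter in detections)

def add_fake_motion_blur(sharp_image, kernel_width=9):
    """Apply a horizontal blur to a sharp braille image, mimicking the sliding sensor."""
    kernel = np.ones((1, kernel_width)) / kernel_width
    pad = kernel_width // 2
    padded = np.pad(sharp_image, ((0, 0), (pad, pad)), mode="edge")
    blurred = np.zeros_like(sharp_image, dtype=float)
    for col in range(sharp_image.shape[1]):
        blurred[:, col] = (padded[:, col:col + kernel_width] * kernel).sum(axis=1)
    return blurred
```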

“Considering that we used fake blur to train the algorithm, it was surprising how accurate it was at reading braille,” said Hardman. “We found a nice trade-off between speed and accuracy, which is also the case with human readers.”

“Braille reading speed is a great way to measure the dynamic performance of tactile sensing systems, so our findings could be applicable beyond braille, for applications like detecting surface textures or slippage in robotic manipulation,” said Potdar.

Harnessing synthetic active particles for physical reservoir computing

by Xiangzun Wang, Frank Cichos in Nature Communications

Artificial intelligence using neural networks performs calculations digitally with the help of microelectronic chips. Physicists at Leipzig University have now created a type of neural network that works not with electricity but with so-called active colloidal particles. In their publication, the researchers describe how these microparticles can be used as a physical system for artificial intelligence and the prediction of time series.

“Our neural network belongs to the field of physical reservoir computing, which uses the dynamics of physical processes, such as water surfaces, bacteria or octopus tentacle models, to make calculations,” says Professor Frank Cichos, whose research group developed the network with the support of ScaDS.AI. One of five new AI centres in Germany, the research centre, with sites in Leipzig and Dresden, has been funded since 2019 as part of the German government’s AI Strategy and is supported by the Federal Ministry of Education and Research and the Free State of Saxony.

“In our realization, we use synthetic self-propelled particles that are only a few micrometres in size,” explains Cichos. “We show that these can be used for calculations and at the same time present a method that suppresses the influence of disruptive effects, such as noise, in the movement of the colloidal particles.” Colloidal particles are particles that are finely dispersed in their dispersion medium (solid, gas or liquid).

Experimental realization.

For their experiments, the physicists developed tiny units made of plastic and gold nanoparticles, in which one particle rotates around another, driven by a laser. These units have certain physical properties that make them interesting for reservoir computing.

“Each of these units can process information, and many units make up the so-called reservoir. We change the rotational motion of the particles in the reservoir using an input signal. The resulting rotation contains the outcome of a calculation,” explains Dr Xiangzun Wang. “Like many neural networks, the system needs to be trained to perform a particular calculation.”

The researchers were particularly interested in noise. “Because our system contains extremely small particles in water, the reservoir is subject to strong noise, similar to the noise that all molecules in a brain are subject to,” says Professor Cichos.

“This noise, Brownian motion, severely disrupts the functioning of the reservoir computer and usually requires a very large reservoir to remedy. In our work, we have found that using past states of the reservoir can improve computer performance, allowing smaller reservoirs to be used for certain computations under noisy conditions.”
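
The ‘use past states to fight noise’ idea carries over to reservoir computing in software. The following is a small self-contained analogue (it simulates a generic noisy random reservoir, not the colloidal system; all sizes and noise levels are illustrative): the readout is trained on the current reservoir state concatenated with a few delayed states, which lets a small, noisy reservoir still predict a time series.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_delays, noise = 20, 3, 0.05        # small noisy reservoir + 3 past states

# Fixed random reservoir (software stand-in for the driven colloidal particles).
W_in = rng.normal(scale=0.5, size=(n_res, 1))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep the dynamics stable

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_res @ x + W_in.ravel() * u_t) + noise * rng.normal(size=n_res)
        states.append(x.copy())
    return np.array(states)

# Task: one-step-ahead prediction of a simple time series (a sine wave).
t = np.arange(400)
u = np.sin(0.1 * t)
X = run_reservoir(u)

# Readout features = current state concatenated with `n_delays` past states.
feats = np.hstack([np.roll(X, d, axis=0) for d in range(n_delays + 1)])[n_delays:-1]
target = u[n_delays + 1:]
W_out, *_ = np.linalg.lstsq(feats, target, rcond=None)
pred = feats @ W_out
print("prediction RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```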

Cichos adds that this has not only contributed to the field of information processing with active matter, but has also yielded a method that can optimise reservoir computation by reducing noise.

Controlling a peristaltic robot inspired by inchworms

by Yanhong Peng et al in Biomimetic Intelligence and Robotics

Soft robots inspired by animals can help to tackle real-world problems in efficient and innovative ways. Roboticists have been working to continuously broaden and improve these robots’ capabilities, as this could open new avenues for the automation of tasks in various settings.

Researchers at Nagoya University and Tokyo Institute of Technology recently introduced a soft robot inspired by inchworms that can carry loads of more than 100 g at a speed of approximately 9 mm per second. This robot could be used to transport objects and place them in precise locations.

“Previous research in the field provided foundational insights but also highlighted limitations, such as the slow transportation speeds and low load capacities of inchworm-inspired robots,” Yanhong Peng told Tech Xplore. “For example, existing models demonstrated capabilities for transporting objects at speeds significantly lower than the 8.54 mm/s achieved in this study, with limited ability to handle loads above 40 grams.”

The inchworm-inspired robot designs introduced so far leverage various actuation mechanisms, including dielectric elastomer actuators, shape memory alloys and soft pneumatic actuators. While these mechanisms can effectively reproduce the movements of inchworms, they often limit the speed and load capacity of robots.

Credit: Peng et al.

“This paper emerged from the intersection of soft robotics and biomimicry, particularly inspired by the movement mechanisms of inchworms,” Peng said. “Prior research efforts in soft robotics have explored various actuation methods (pneumatic, phototropic, and electrohydrodynamic) and materials (fabrics, resins, polymer gels) to mimic the adaptability and multifunctionality of biological organisms. These efforts have aimed to overcome the limitations of traditional rigid robots, such as their lack of flexibility and inability to handle delicate tasks.”

As part of their recent study, Peng and his colleagues set out to develop a new inchworm-inspired robotic system with enhanced transport capabilities. To do this, they explored how different parameters, such as the number of activated body sections, the size and materials of objects carried, the air pressure supply, and the command execution rate, impacted their robot’s performance.

“The inchworm-inspired robot mimics the unique ‘Ω’-shaped movement of an inchworm by alternately contracting and extending its body, using McKibben artificial muscles for propulsion,” Peng explained. “This design allows for efficient object transport over various surfaces, achieving high speeds and load capacities while maintaining the adaptability and simplicity characteristic of soft robotics.”

The researchers created a prototype of their robot and tested it in a series of controlled experiments within a laboratory setting. Their results were highly promising, as the robot was found to outperform other previously introduced inchworm-inspired robots in terms of both speed and load capacity.

“Our inchworm-inspired robot can transport objects at a maximum speed of 8.54 mm/s while handling loads exceeding 100 grams, significantly surpassing previous models in speed and load capacity,” Peng said. “This advancement not only demonstrates the potential of biomimetic designs in robotics for improving efficiency and adaptability in transport tasks but also opens new avenues for practical applications in delicate object transportation and automated logistics.”

The new robot created by this team of researchers could soon be introduced and evaluated in real-world settings, to validate its ability to transport objects with high speed. Meanwhile, Peng and his colleagues will try to further improve their system using deep learning techniques and other state-of-the-art computational models.

“In the future, we plan to integrate deep learning techniques and large language models to enhance the control and adaptability of inchworm-inspired robots,” Peng added. “By leveraging deep learning algorithms, these robots could learn to autonomously adjust their movement strategies based on environmental conditions and object characteristics, improving their performance in complex, real-world scenarios.

“Additionally, large language models could facilitate natural language communication with the robots, enabling intuitive and user-friendly interaction for a wider range of applications.”

Deep Robot Sketching: An application of Deep Q-Learning Networks for human-like sketching

by Raul Fernandez-Fernandez et al in Cognitive Systems Research

The rapid advancement of deep learning algorithms and generative models has enabled the automated production of increasingly striking AI-generated artistic content. Most of this AI-generated art, however, is created by algorithms and computational models, rather than by physical robots.

Researchers at Universidad Complutense de Madrid (UCM) and Universidad Carlos III de Madrid (UC3M) recently developed a deep learning-based model that allows a humanoid robot to sketch pictures, similarly to how a human artist would. Their paper offers a remarkable demonstration of how robots could actively engage in creative processes.

“Our idea was to propose a robot application that could attract the scientific community and the general public,” Raúl Fernandez-Fernandez, co-author of the paper, told Tech Xplore. “We thought about a task that could be shocking to see a robot performing, and that was how the concept of doing art with a humanoid robot came to us.”

Most existing robotic systems designed to produce sketches or paintings essentially work like printers, reproducing images that were previously generated by an algorithm. Fernandez-Fernandez and his colleagues, on the other hand, wished to create a robot that leverages deep reinforcement learning techniques to create sketches stroke by stroke, similar to how humans would draw them.

“The goal of our study was not to make a painting robot application that could generate complex paintings, but rather to create a robust physical robot painter,” Fernandez-Fernandez said. “We wanted to improve on the robot control stage of painting robot applications.”

In the past few years, Fernandez-Fernandez and his colleagues have been trying to devise advanced and efficient algorithms to plan the actions of creative robots. Their new paper builds on these recent research efforts, combining approaches that they found to be particularly promising.

“This work was inspired by two key previous works,” Fernandez-Fernandez said. “The first of these is one of our previous research efforts, where we explored the potential of the Quick, Draw! dataset for training robotic painters. The second work introduced Deep-Q-Learning as a way to perform complex trajectories that could include complex features like emotions.”

Generated waypoints obtained during the execution of the flower sketch using the DQN framework. Credit: Fernandez-Fernandez et al.

The new robotic sketching system presented by the researchers is based on a Deep-Q-Learning framework first introduced in a previous paper by Zhou and colleagues. Fernandez-Fernandez and his colleagues improved this framework to carefully plan the actions of robots, allowing them to complete complex manual tasks in a wide range of environments.

“The neural network is divided into three parts that can be seen as three different interconnected networks,” Fernandez-Fernandez explained. “The global network extracts the high-level features of the full canvas. The local network extracts low-level features around the painting position. The output network takes as input the features extracted by the convolutional layers (from the global and local networks) to generate the next painting positions.”

Fernandez-Fernandez and his collaborators also informed their model via two additional channels that provide distance-related and painting-tool information (i.e., the position of the tool with respect to the canvas). Collectively, all these features guided the training of their network, enhancing its sketching skills. To further improve their system’s human-like painting skills, the researchers also introduced a pre-training step based on a so-called random stroke generator.
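
A rough PyTorch sketch of that three-branch layout is below. It is an illustrative reconstruction from the description above, not the authors’ released code; the tensor sizes, layer widths, number of candidate painting positions and the two extra scalar inputs are all assumptions.

```python
import torch
import torch.nn as nn

class SketchDQN(nn.Module):
    def __init__(self, n_actions=64):
        super().__init__()
        # Global branch: high-level features of the whole canvas (plus reference image).
        self.global_net = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Local branch: low-level features of a patch around the current pen position.
        self.local_net = nn.Sequential(
            nn.Conv2d(2, 16, 3), nn.ReLU(),
            nn.Conv2d(16, 32, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(),
        )
        # Output branch: combines both feature sets plus the extra scalar channels
        # (distance / tool-position info) and scores the candidate next positions.
        self.output_net = nn.Sequential(
            nn.Linear(32 * 16 + 32 * 4 + 2, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, canvas, patch, tool_state):
        # canvas: (B, 2, 84, 84); patch: (B, 2, 11, 11); tool_state: (B, 2).
        feats = torch.cat(
            [self.global_net(canvas), self.local_net(patch), tool_state], dim=1)
        return self.output_net(feats)   # Q-value per candidate next painting position

q = SketchDQN()
scores = q(torch.zeros(1, 2, 84, 84), torch.zeros(1, 2, 11, 11), torch.zeros(1, 2))
print(scores.shape)  # torch.Size([1, 64])
```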

“We use double Q-learning to avoid the overestimation problem and a custom reward function for its training,” Fernandez-Fernandez said. “In addition to this, we introduced an additional sketch classification network to extract the high-level features of the sketch and use its output as the reward in the last steps of a painting epoch. This network provides some flexibility to the painting since the reward generated by it does not depend on the reference canvas but the category.”

As they were trying to automate sketching using a physical robot, the researchers had to also devise a strategy to translate the distances and positions observed in AI-generated images into a canvas in the real world. To achieve this, they generated a discretized virtual space within the physical canvas, in which the robot could move and directly translate the painting positions provided by the model.

“I think the most relevant achievement of this work is the introduction of advanced control algorithms within a real robot painting application,” Fernandez-Fernandez said. “With this work, we have demonstrated that the control step of painting robot applications can be improved with the introduction of these algorithms. We believe that DQN frameworks have the capability and level of abstraction to achieve original and high-level applications out of the scope of classical problems.”

The recent work by this team of researchers is a fascinating example of how robots could create art in the real world, via actions that more closely resemble those of human artists. Fernandez-Fernandez and his colleagues hope that the deep learning-based model they developed will inspire further studies, potentially contributing to the introduction of control policies that allow robots to tackle increasingly complex tasks.

“In this line of work, we have developed a framework using Deep Q-Learning to extract the emotions of a human demonstrator and transfer it to a robot,” Fernandez-Fernandez added. “In this recent paper, we take advantage of the feature extraction capabilities of DQN networks to treat emotions as a feature that can be optimized and defined within the reward of a standard robot task, and the results are quite impressive. In future works, we aim to introduce similar ideas that enhance robot control applications beyond classical robot control problems.”

A retrofit sensing strategy for soft fluidic robots

by Shibo Zou et al in Nature Communications

With a brief squeeze, you know whether an avocado, peach, or tomato is ripe. This is what a soft robot hand also does, for example, during automated harvesting. However, up until now, such a gripper needed sensors in its ‘fingers’ to determine whether the fruit was ripe enough.

Shibo Zou and Bas Overvelde from the AMOLF Soft Robotic Matter Group have developed an external method to measure the interaction of soft robots with their environment that does not require built-in sensors. Furthermore, the technique can be easily applied to a range of existing soft robots.

“How can we enable soft robots to feel something without the need for built-in sensors? We can now do that by externally measuring the pressure,” said Bas Overvelde. Together with researcher Shibo Zou from his group and researchers from Eindhoven University of Technology, he developed a system to externally measure how soft robot fingers respond to interactions with the environment.

The researchers do that by measuring the air pressure required to move the robot fingers. That is because the fingers grip something by being blown up like a balloon. If they encounter something during the gripping process, it takes more force to blow them up, just like it takes you more effort to blow up a balloon if you squeeze it at the same time. The amount of extra force required is something that can be measured externally, which means that sensors are no longer required in the robot fingers.

Zou explains, “Effectively, we use one balloon, our measurement system, to blow up another balloon, the inflatable robot fingers, while they are gripping something. At the same time, we measure how much pressure this requires. This tells us something about the surface or object that the soft robot is gripping. The system uses this information to determine what it should do next.”
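
In control terms, the external sensing amounts to comparing the supply-pressure trace during a grip against a free-air baseline and acting on the excess. A minimal, made-up sketch of that decision rule (the thresholds and units are illustrative, not AMOLF’s calibration) could look like this:

```python
import numpy as np

def classify_grip(pressure_trace, baseline_trace,
                  contact_threshold=2.0, ripe_threshold=6.0):   # kPa, illustrative
    """Compare one inflation cycle against the free-air baseline of the same regulator."""
    excess = np.asarray(pressure_trace) - np.asarray(baseline_trace)
    peak_excess = excess.max()
    if peak_excess < contact_threshold:
        return "no object"               # fingers inflated as freely as in open air
    if peak_excess < ripe_threshold:
        return "soft / overripe object"  # little resistance to inflation
    return "firm / ripe object"          # object pushes back hard against the fingers

baseline = np.linspace(0, 10, 50)                                # free-air inflation curve
gripping = baseline + np.clip(np.linspace(-5, 8, 50), 0, None)   # extra pressure on contact
print(classify_grip(gripping, baseline))                         # -> "firm / ripe object"
```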

One of the soft grippers adapted by the researchers can pluck tomatoes. The robot grips a tomato with four soft fingers and then plucks it by rotating them. By measuring the pressure during the gripping process, the robot knows whether it is holding the tomato properly. The robot arm can also sort tomatoes. In the case of an overripe tomato, the system measures less pressure than for a ripe tomato and, as a result, puts the overripe tomato aside. All of these decisions and actions are determined by the pressure regulator that the researchers developed. Thanks to this external device, the fingers no longer require sensors.

Overvelde says, “Sensors are hard and that is not ideal for soft robot fingers. Moreover, for each application, you need to develop new sensors that must also be suitable for use in food or medical applications, for example. We have developed a plug-and-play (retrofit) system and demonstrated that it can be used on a wide range of soft robots without the need for many adjustments.”

Suction cup gripping tissue-like soft material. Credit: AMOLF / Delft University of Technology

Picking tomatoes is just one application. For example, in collaboration with researchers including Aimée Sakes from Delft University of Technology, the team also demonstrated that the new system can be used to externally control a miniature suction cup used in a minimally invasive medical procedure, such as endoscopy.

“Consequently, this suction cup can now measure what kind of tissue it is gripping, which results in less tissue damage,” says Sakes.

Other soft robots that the measurement system has been applied to can determine an object’s dimensions, shape or roughness.

Overvelde concludes, “The system’s strength is that it can easily be applied to a wide range of soft robots. Now, we want to develop other types of measurements in addition to stiffness or size, such as weight.”

Movement with light: Photoresponsive shape morphing of printed liquid crystal elastomers

by Michael J. Ford et al in Matter

Researchers at Lawrence Livermore National Laboratory have furthered the development of a new type of soft material that can change shape in response to light, a discovery that could advance “soft machines” for a variety of fields, from robotics to medicine.

The novel material, called a liquid crystal elastomer (LCE), is made by incorporating liquid crystals into the molecular structure of a stretchable material. By adding gold nanorods to the LCE material, scientists and engineers created photo-responsive inks and 3D-printed structures that could be made to bend, crawl, and move when exposed to a laser that causes localized heating in the material.

As described in their paper, the LLNL team, along with their collaborators from Harvard University, North Carolina State University, and the University of Pennsylvania, used a direct ink writing printing technique to build a variety of light-responsive objects, including cylinders that could roll, asymmetric “crawlers” that could go forward and lattice structures that oscillated. By combining shape morphing with photoresponsivity, researchers said the new type of material could change the way people think about machines and materials.

“At LLNL, we’ve focused on developing static materials and architectures for some time,” said principal investigator Caitlyn Krikorian (Cook). “We’ve made these complex types of structures like hierarchical lattices, and we’ve even started exploring more responsive materials, like shape memory polymers, that have a one-time shape memory response. But the Lab really hadn’t delved deep into creating architectures that can go from a 3D-to-3D type of shape change. This project is starting to show how architecture and these novel materials can have unique modes of actuation that we haven’t researched before.”

Researchers said the new material could be used to create a “soft machine” — a type of machine made from these flexible LCE composite materials — capable of responding to external stimuli and even mimicking the movements and behaviors of living organisms. Soft robots made of shape-morphing material could crawl, swim, or fly and explore environments that are too difficult or dangerous for humans to access, like caves or outer space. Soft machines could also be used in medical applications, such as implantable devices that can adapt to the body’s movements, or prosthetic limbs that move like natural limbs, and other applications that aren’t possible with machines made from rigid materials, like metal or plastic.

Adding gold nanorods to the material, the researchers created photo-responsive inks and 3D-printed structures that could be made to bend, crawl, and move when exposed to a laser light. Credit: Michael Ford

“Rigid robots maybe wouldn’t be ideal for humans to interact with, so we need systems and materials that are more compliant,” said the paper’s lead author, Michael Ford, who began working on responsive materials while a postdoc at Carnegie Mellon University.

“You start with components that make up our robots, and one of those components is an actuator. That’s where these materials come in; they could potentially be an actuator. It reduces computational complexity; you’re making a material that gets rid of onboard electronics and replacing them with a single material that can do all those things. That will allow you to put more computational complexity into another component or drive power to other sensors that you wouldn’t have been able to do with traditional rigid materials.”

Researchers said the movement of the LCE material is driven primarily by a process known as photothermal actuation, which involves converting light energy into thermal energy, resulting in a mechanical response from the material. Driven by the interaction between light, gold nanorods, and the LCE matrix, the process enables the printed structures to exhibit dynamic and reversible movements in response to external stimuli.

“When you have this composite material (in this case, these gold nanorods in these liquid-crystal elastomers), it has a photothermal effect,” Cook explained. “With [infrared] light, it creates a heating effect, which causes the aligned molecules to become misaligned. During that misalignment process, if there’s uniform heating, you’ll have a global shape change. But in this case, we can have localized heat change, which is how you can get those localized regions of shape morphing to do things like locomotion.”

In the study, researchers used a computer vision system involving cameras and tracking software to control the movement of a printed cylinder. The tracking system monitored the position of the rolling cylinder and continuously adjusted the position of the laser to raster the edge of the cylinder. This continuous tracking and adjustment allowed for the cylinder to maintain its rolling motion in a controlled manner.

By leveraging computer vision with the photothermal actuation of the cylinder, the researchers achieved a sophisticated level of manipulation of the soft machine’s movement, showcasing the potential for advanced control systems in the field of soft robotics and soft machines. The team also showed that responsivity could be controlled so the soft machines could perform useful tasks, such as a moving cylinder carrying a wire.
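
A bare-bones version of that sense-and-aim loop could look like the sketch below, where `camera`, `detect_cylinder` and `point_laser` are hypothetical hardware and vision interfaces standing in for the actual setup; it only illustrates the control idea described above.

```python
import time

def track_and_drive(camera, detect_cylinder, point_laser,
                    edge_offset=0.8, period_s=0.02, n_steps=500):
    """Keep the laser rastering the cylinder's edge as it rolls (illustrative loop)."""
    for _ in range(n_steps):
        frame = camera.read()                          # grab the latest camera frame
        x_center, y_center, radius = detect_cylinder(frame)
        # Aim slightly off-center so only one side of the cylinder heats up,
        # producing the asymmetric actuation that keeps it rolling.
        point_laser(x_center - edge_offset * radius, y_center)
        time.sleep(period_s)                           # simple fixed control rate
```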

“[Lead author Ford] did some awesome work in using computer vision to control the locomotion of the printed cylinder and using a rastering laser to force it to move,” said co-author Elaine Lee. “But once you start to get into much more complex motion — like using various rastering speeds and light intensities on a printed lattice, causing it to move in various different modes — those were actually outside of what our high-performance computing (HPC) simulations were able to predict, because those codes are expecting uniform heating or stimuli on that lattice.”

“So, using computer vision and machine learning to learn the actuation speeds and what doses of light can cause locomotion from that printed architecture will push us a lot further in understanding how our materials will respond.”

Researchers said there are still some challenges that need to be overcome before the material can be used in practical applications. The team found that structures they created could flip over or exhibit other unpredictable motions, thereby making it difficult to design specific modes of motion.

They said they will continue to work on models that can describe the complex motion to design future machines better and develop new materials and manufacturing techniques to create soft machines that are more durable, reliable, and efficient for a variety of applications. New control systems and computer algorithms also could enable soft machines to move and interact with their environment in a more intelligent and autonomous way, they said.

Cook said the team is looking at incorporating responses to different types of stimuli, beyond thermal and light stimuli, into areas like humidity and energy absorption and conditions that the material might experience in space. She added that the team is looking at starting a new Strategic Initiative at the Lab to focus on autonomous materials and “move the needle” towards sentient materials.

A New 1-mg Fast Unimorph SMA-Based Actuator for Microrobotics

by Conor K. Trygstad, Xuan-Truc Nguyen, Néstor O. Pérez-Arancibia in Proceedings of the IEEE Robotics and Automation Society’s International Conference on Intelligent Robots and Systems

Two insect-like robots, a mini-bug and a water strider, developed at Washington State University, are the smallest, lightest and fastest fully functional micro-robots ever known to be created.

Such miniature robots could someday be used for work in areas such as artificial pollination, search and rescue, environmental monitoring, micro-fabrication or robotic-assisted surgery. As the team reports in the proceedings of the IEEE Robotics and Automation Society’s International Conference on Intelligent Robots and Systems, the mini-bug weighs in at eight milligrams while the water strider weighs 55 milligrams. Both can move at about six millimeters a second.

“That is fast compared to other micro-robots at this scale although it still lags behind their biological relatives,” said Conor Trygstad, a PhD student in the School of Mechanical and Materials Engineering and lead author on the work. An ant typically weighs up to five milligrams and can move at almost a meter per second.

The WaterStrider weighs 55 milligrams and can move at 6 millimeters per second (photo by Bob Hubner, WSU Photo Services).

The key to the tiny robots is their tiny actuators that make the robots move. Trygstad used a new fabrication technique to miniaturize the actuator down to less than a milligram, the smallest ever known to have been made.

“The actuators are the smallest and fastest ever developed for micro-robotics,” said Néstor O. Pérez-Arancibia, Flaherty Associate Professor in Engineering at WSU’s School of Mechanical and Materials Engineering who led the project.

The actuator uses a material called a shape memory alloy that is able to change shapes when it’s heated. It is called ‘shape memory’ because it remembers and then returns to its original shape. Unlike a typical motor that would move a robot, these alloys don’t have any moving parts or spinning components.

“They’re very mechanically sound,” said Trygstad. “The development of the very lightweight actuator opens up new realms in micro-robotics.”

Shape memory alloys are not generally used for large-scale robotic movement because they are too slow. In the case of the WSU robots, however, the actuators are made of two tiny shape memory alloy wires that are 1/1000 of an inch in diameter. With a small amount of current, the wires can be heated up and cooled easily, allowing the robots to flap their fins or move their feet at up to 40 times per second. In preliminary tests, the actuator was also able to lift more than 150 times its own weight. Compared to other technologies used to make robots move, the SMA technology also requires only a very small amount of electricity or heat to make them move.
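
As an illustration of why such a drive can stay simple, here is a hypothetical timing loop for an antagonistic pair of SMA wires: current pulses alternate between the two wires at the commanded flapping frequency, heating and contracting each in turn. The `set_wire_current` interface and the current value are assumptions, not the WSU controller.

```python
import time

def flap(set_wire_current, frequency_hz=40, cycles=100, pulse_current_a=0.05):
    """Alternately pulse two SMA wires to produce a flapping motion (illustrative)."""
    half_period = 1.0 / (2 * frequency_hz)           # time each wire spends heating
    for _ in range(cycles):
        set_wire_current("wire_A", pulse_current_a)  # heat wire A -> it contracts
        set_wire_current("wire_B", 0.0)              # wire B cools and relaxes
        time.sleep(half_period)
        set_wire_current("wire_A", 0.0)
        set_wire_current("wire_B", pulse_current_a)  # now heat wire B
        time.sleep(half_period)
```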

“The SMA system requires a lot less sophisticated systems to power them,” said Trygstad.

Trygstad, an avid fly fisherman, has long observed water striders and would like to further study their movements. While the WSU water strider robot does a flat flapping motion to move itself, the natural insect does a more efficient rowing motion with its legs, which is one of the reasons that the real thing can move much faster.

The researchers would like to copy another insect and develop a water strider-type robot that can move across the top of the water surface as well as just under it. They are also working to use tiny batteries or catalytic combustion to make their robots fully autonomous and untethered from a power supply.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
