RT/ The physical intelligence of ant and robot collectives

Paradigm

Published in Paradigm · 31 min read · Dec 30, 2022

Robotics biweekly vol.65, 15th December — 30th December

TL;DR

  • Researchers took inspiration from ants to design a team of relatively simple robots that can work collectively to perform complex tasks using only a few basic parameters.
  • Artificial intelligence and robot-assisted labs could help speed the search for better battery materials. Scientists give the lay of the land in the quest for electrolytes that could enable revolutionary battery chemistries.
  • A small tax on robots, as well as on trade generally, will help reduce income inequality in the U.S., according to economists.
  • Researchers have developed a method that allows a flapping-wing robot to land autonomously on a horizontal perch using a claw-like mechanism. The innovation could significantly expand the scope of robot-assisted tasks.
  • Enhancing the virtual experience with the touch sensation has become a hot topic, but today’s haptic devices typically remain bulky and tangled with wires. Researchers have now developed an advanced wireless haptic interface system, called WeTac, worn on the hand, which has soft, ultrathin features and collects personalized tactile sensation data to provide a vivid touch experience in the metaverse.
  • A new gelatinous robot that crawls, powered by nothing more than temperature change and clever design, brings ‘a kind of intelligence’ to the field of soft robotics.
  • A new model describes how biological or technical systems form complex structures equipped with signal-processing capabilities that allow the systems to respond to stimulus and perform functional tasks without external guidance.
  • Self-driving cars need to implement efficient, effective, and accurate detection systems to provide a safe and reliable experience to their users. To this end, an international research team has now developed an end-to-end neural network that, in conjunction with Internet-of-Things technology, detects objects with high accuracy (> 96%) in both 2D and 3D. The new method outperforms current state-of-the-art methods and paves the way for new 2D and 3D detection systems for autonomous vehicles.
  • Experimental data is often not only highly dimensional, but also noisy and full of artefacts. This makes it difficult to interpret the data. Now a team has designed software that uses self-learning neural networks to compress the data in a smart way and reconstruct a low-noise version in the next step. This enables it to recognize correlations that would otherwise not be discernible. The software has now been successfully used in photon diagnostics at the FLASH free electron laser at DESY. But it is suitable for very different applications in science.
  • Researchers recently created AstroSLAM, a SLAM-based algorithm that could allow spacecraft to navigate more autonomously. The new solution could be particularly useful when space systems are navigating around a small celestial body, such as an asteroid.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
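Those two figures can be sanity-checked with simple compound-growth arithmetic. This is a back-of-envelope sketch, not Statista's methodology, and the implied 2018 base is an inference, not a figure from the source:

```python
def cagr_project(start_value, annual_rate, years):
    """Value after compounding `start_value` at `annual_rate` for `years`."""
    return start_value * (1 + annual_rate) ** years

# What 2018 base is consistent with ~26% CAGR reaching ~$210B in 2025
# (seven years of growth)? Roughly $42 billion.
implied_2018 = 210 / (1 + 0.26) ** 7
```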

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Dynamics of cooperative excavation in ant and robot collectives

by S Ganga Prasath, Souvik Mandal, Fabio Giardina, Jordan Kennedy, Venkatesh N Murthy, L Mahadevan in eLife

Individual ants are relatively simple creatures and yet a colony of ants can perform really complex tasks, such as intricate construction, foraging and defense.

Recently, Harvard researchers took inspiration from ants to design a team of relatively simple robots that can work collectively to perform complex tasks using only a few basic parameters.

“This project continued along an abiding interest in understanding the collective dynamics of social insects such as termites and bees, especially how these insects can manipulate the environment to create complex functional architectures,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and Physics, and senior author of the paper.

The research team began by studying how black carpenter ants work together to excavate out of and escape from a soft corral.

“At first, the ants inside the corral moved around randomly, communicating via their antennae before they started working together to escape the corral,” said S Ganga Prasath, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences and one of the lead authors of the paper.

Ants primarily rely on their antennae to interact with the environment and other ants, a process termed antennation. The researchers observed that the ants would spontaneously congregate around areas where they interacted more often. Once a few ants started tunneling into the corral, others quickly joined in. Over time, excavation at one such location proceeded faster than at others and the ants eventually tunneled out of the corral.

From these observations, Mahadevan and his team identified two relevant parameters for understanding the excavation task of the ants: the strength of collective cooperation, and the rate of excavation. Numerical simulations of mathematical models that encode these parameters showed that the ants can successfully excavate only when they cooperate with each other sufficiently strongly while simultaneously excavating efficiently.

Driven by this understanding and building upon the models, the researchers built robotic ants, nicknamed RAnts, to see if they could work together to escape a similar corral. Instead of chemical pheromones, the RAnts used “photormones,” fields of light that are left behind by the roving RAnts that mimic pheromone fields or antennation.

The RAnts were programmed only via simple local rules: to follow the gradient of the photormone field, avoid other robots where photormone density was high, and pick up obstacles where photormone density was high and drop them where it was low. These three rules enabled the RAnts to quickly escape their confinement and, just as importantly, allowed the researchers to explore regimes of behavior that were hard to observe with real ants.
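As a rough illustration of how such rules compose (this is a sketch, not the authors' controller; the grid representation, thresholds `high` and `low`, and step size are all invented for the example), a single RAnt's update might look like:

```python
import numpy as np

def rant_step(idx, carrying, photormone, high=0.7, low=0.2, step=0.5):
    """One hypothetical control update for a single RAnt.

    idx        -- (i, j) grid indices of the robot
    carrying   -- True if the robot currently holds an obstacle
    photormone -- 2D array of light-field intensity in [0, 1]
    """
    i, j = idx
    # Rule 1: climb the local photormone gradient.
    gi, gj = np.gradient(photormone)
    direction = np.array([gi[i, j], gj[i, j]])
    norm = np.linalg.norm(direction)
    if norm > 0:
        direction = direction / norm
    local = photormone[i, j]
    # Rule 2: avoid crowding -- turn away where the field is very high.
    if local > high and not carrying:
        direction = -direction
    # Rule 3: pick up obstacles where the field is high,
    # drop them where it is low.
    if local > high:
        carrying = True
    elif local < low and carrying:
        carrying = False
    new_pos = np.array([i, j], dtype=float) + step * direction
    return new_pos, carrying
```

On a field that increases left to right, a robot in the bright region picks up an obstacle and a laden robot in the dim region drops it, which is the self-reinforcing loop that concentrates digging at one site.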

“We showed how the cooperative completion of tasks can arise from simple rules and similar such behavioral rules can be applied to solve other complex problems such as construction, search and rescue and defense,” said Prasath.

Mahadevan and his team studied real ants and developed parameters and a model to understand the excavation task of the ants. Driven by this understanding and building upon the models, the researchers built robotic ants, nicknamed RAnts. (Credit: Mahadevan Lab/Harvard SEAS)

This approach is highly flexible and robust to errors in sensing and control. It could be scaled up and applied to teams of dozens or hundreds of robots using a range of different types of communication fields. It’s also more resilient than other approaches to collaborative problem solving — even if a few individual robotic units fail, the rest of the team can complete the task.

“Our work, combining lab experiments, theory and robotic mimicry, highlights the role of a malleable environment as a communication channel, whereby self-reinforcing signals lead to the emergence of cooperation and thereby the solution of complex problems. Even without global representation, planning or optimization, the interplay between simple local rules at the individual level and the embodied physics of the collective leads to intelligent behavior and is thus likely to be relevant more broadly,” said Mahadevan.

Designing better electrolytes

by Y. Shirley Meng, Venkat Srinivasan, Kang Xu in Science

Looking at the future of battery materials. Designing a battery is a three-part process. You need a positive electrode, you need a negative electrode, and — importantly — you need an electrolyte that works with both electrodes.

An electrolyte is the battery component that transfers ions — charge-carrying particles — back and forth between the battery’s two electrodes, causing the battery to charge and discharge. For today’s lithium-ion batteries, electrolyte chemistry is relatively well-defined. For future generations of batteries being developed around the world and at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, however, the question of electrolyte design is wide open.

“While we are locked into a particular concept for electrolytes that will work with today’s commercial batteries, for beyond-lithium-ion batteries the design and development of different electrolytes will be crucial,” said Shirley Meng, chief scientist at the Argonne Collaborative Center for Energy Storage Science (ACCESS) and professor of molecular engineering at the Pritzker School of Molecular Engineering of The University of Chicago. “Electrolyte development is one key to the progress we will achieve in making these cheaper, longer-lasting and more powerful batteries a reality, and taking one major step towards continuing to decarbonize our economy.”

In a new paper published in Science, Meng and colleagues laid out their vision for electrolyte design in future generations of batteries. Even relatively small departures from today’s batteries will require a rethinking of electrolyte design, according to Meng. Switching from a nickel-containing oxide to a sulfur-based material as the main constituent of a lithium-ion battery’s positive electrode could yield significant performance benefits and reduce costs if scientists can figure out how to rejigger the electrolyte, she said.

Artificial intelligence and robot-powered labs could speed the search for new battery electrolytes, a crucial component of next-generation battery chemistries. (Image by Shutterstock/KanawatTH.)

For other beyond-lithium-ion battery chemistries, like rechargeable sodium-ion or lithium-oxygen, scientists will similarly have to devote considerable attention to the question of the electrolyte. One major factor that scientists are considering in the development of new electrolytes is how they tend to form an intermediary layer called an interphase, which harnesses the reactivity of the electrodes.

“Interphases are crucially important to the functioning of a battery because they control how the selective ions flow into and out of the electrodes,” Meng said. “Interphases function like a gate to the rest of the battery; if your gate doesn’t function properly, the selective transport doesn’t work.”

The near-term goal, according to the team, is to design electrolytes with the right chemical and electrochemical properties to enable the optimal formation of interphases at both the battery’s positive and negative electrodes. Ultimately, however, researchers believe that they may be able to develop a group of solid electrolytes that would be stable at extreme (both high and low) temperatures and enable batteries with high energy to have much longer lifetimes.

“A solid-state electrolyte for an all-solid battery will be a game changer,” said Venkat Srinivasan, director of ACCESS, deputy director of the Joint Center for Energy Storage Research, and co-author on the paper. “The key to a solid-state battery is a metal anode, but its performance is currently limited by the formation of needle-like structures called dendrites that can short out the battery. By finding a solid electrolyte that prevents or inhibits dendrite formation, we may be able to realize the benefits of some really exciting battery chemistries.”

In order to speed up their hunt for electrolyte breakthroughs, scientists have turned to the power of advanced characterization and artificial intelligence (AI) to search digitally through many more possible candidates, accelerating what had been a slow and painstaking process of laboratory synthesis. “High performance computing and artificial intelligence are allowing us to identify the best descriptors and characteristics that will enable the tailored design of various electrolytes for specific uses,” Meng said. “Instead of looking at a few dozen electrolyte possibilities a year in the lab, we’re looking at many thousands with the aid of computation.”

“Electrolytes have billions of possible combinations of components — salts, solvents and additives — that we can play with,” Srinivasan said. “To make that number into something more manageable, we’re beginning to really use the power of AI, machine learning and automated laboratories.”

The automated laboratories to which Srinivasan referred will incorporate a robot-driven experimental regime. In this way, machines can work unassisted, performing ever more carefully refined and calibrated experiments to eventually determine which combination of components will form the ideal electrolyte. “Automated discovery can dramatically increase the power of our research, as machines can work around the clock and reduce the potential for human error,” he said.
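The screening loop the researchers describe can be sketched in a few lines. Everything here is hypothetical: the component lists are common lithium-ion electrolyte ingredients used only as placeholders, and the scoring function stands in for a trained surrogate model:

```python
import itertools
import random

# Placeholder component libraries -- a real campaign draws on curated
# chemical databases with thousands of entries.
SALTS = ["LiPF6", "LiFSI", "LiTFSI"]
SOLVENTS = ["EC", "DMC", "EMC", "DME"]
ADDITIVES = ["FEC", "VC", "none"]

def surrogate_score(salt, solvent, additive):
    """Stand-in for an ML model predicting, e.g., ionic conductivity
    or interphase stability. Here it is just a seeded random number."""
    random.seed(hash((salt, solvent, additive)) & 0xFFFF)
    return random.random()

def screen(top_k=5):
    """Rank every salt/solvent/additive combination and return the
    top-k candidates to hand off to an automated synthesis robot."""
    candidates = itertools.product(SALTS, SOLVENTS, ADDITIVES)
    scored = [(surrogate_score(*c), c) for c in candidates]
    scored.sort(reverse=True)
    return [c for _, c in scored[:top_k]]
```

The point of the sketch is the workflow, not the chemistry: a cheap computational filter prunes a combinatorial space so that the slow, expensive robotic experiments only run on the most promising formulations.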

Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation

by Arnaud Costinot, Iván Werning in The Review of Economic Studies

What if the U.S. placed a tax on robots? The concept has been publicly discussed by policy analysts, scholars, and Bill Gates (who favors the notion). Because robots can replace jobs, the idea goes, a stiff tax on them would give firms incentive to help retain workers, while also compensating for a dropoff in payroll taxes when robots are used. Thus far, South Korea has reduced incentives for firms to deploy robots; European Union policymakers, on the other hand, considered a robot tax but did not enact it.

Now a study by MIT economists scrutinizes the existing evidence and suggests the optimal policy in this situation would indeed include a tax on robots, but only a modest one. The same applies to taxes on foreign trade that would also reduce U.S. jobs, the research finds.

“Our finding suggests that taxes on either robots or imported goods should be pretty small,” says Arnaud Costinot, an MIT economist, and co-author of a published paper detailing the findings. “Although robots have an effect on income inequality … they still lead to optimal taxes that are modest.”

Specifically, the study finds that a tax on robots should range from 1 percent to 3.7 percent of their value, while trade taxes would be from 0.03 percent to 0.11 percent, given current U.S. income taxes.

“We came into this not knowing what would happen,” says Iván Werning, an MIT economist and the other co-author of the study. “We had all the potential ingredients for this to be a big tax, so that by stopping technology or trade you would have less inequality, but … for now, we find a tax in the one-digit range, and for trade, even smaller taxes.”

A key to the study is that the scholars did not start with an a priori idea about whether or not taxes on robots and trade were merited. Rather, they applied a “sufficient statistic” approach, examining empirical evidence on the subject. For instance, one study by MIT economist Daron Acemoglu and Boston University economist Pascual Restrepo found that in the U.S. from 1990 to 2007, adding one robot per 1,000 workers reduced the employment-to-population ratio by about 0.2 percentage points; each robot added in manufacturing replaced about 3.3 workers, while the increase in workplace robots lowered wages about 0.4 percent.
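To make those estimates concrete, here is a back-of-envelope use of the numbers cited above. The function and its inputs are illustrative arithmetic, not part of either paper's model:

```python
def displacement_estimate(robots_per_thousand, workforce):
    """Rough application of the Acemoglu-Restrepo estimates cited above:
    one robot per 1,000 workers lowers the employment-to-population
    ratio by ~0.2 percentage points, and each robot added in
    manufacturing replaces ~3.3 workers."""
    ratio_drop_pp = 0.2 * robots_per_thousand
    robots_added = robots_per_thousand * workforce / 1000
    workers_displaced = 3.3 * robots_added
    return ratio_drop_pp, workers_displaced
```

For example, two robots per thousand workers across a hypothetical one-million-person workforce would mean 2,000 robots, a ~0.4 percentage-point drop in the employment-to-population ratio, and roughly 6,600 displaced workers.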

In conducting their policy analysis, Costinot and Werning drew upon that empirical study and others. They built a model to evaluate a few different scenarios, and included levers like income taxes as other means of addressing income inequality.

“We do have these other tools, though they’re not perfect, for dealing with inequality,” Werning says. “We think it’s incorrect to discuss these taxes on robots and trade as if they are our only tools for redistribution.”

Still more specifically, the scholars used wage distribution data across all five income quintiles in the U.S. — the top 20 percent, the next 20 percent, and so on — to evaluate the need for robot and trade taxes. Where empirical data indicates technology and trade have changed that wage distribution, the magnitude of that change helped produce the robot and trade tax estimates Costinot and Werning suggest. This has the benefit of simplicity; the overall wage numbers help the economists avoid making a model with too many assumptions about, say, the exact role automation might play in a workplace.

“I think where we are methodologically breaking ground, we’re able to make that connection between wages and taxes without making super-particular assumptions about technology and about the way production works,” Werning says. “It’s all encoded in that distributional effect. We’re asking a lot from that empirical work. But we’re not making assumptions we cannot test about the rest of the economy.”

Costinot adds: “If you are at peace with some high-level assumptions about the way markets operate, we can tell you that the only objects of interest driving the optimal policy on robots or Chinese goods should be these responses of wages across quantiles of the income distribution, which, luckily for us, people have tried to estimate.”

Apart from its bottom-line tax numbers, the study contains some additional conclusions about technology and income trends. Perhaps counterintuitively, the research concludes that after many more robots are added to the economy, the impact that each additional robot has on wages may actually decline. At a future point, robot taxes could then be reduced even further.

“You could have a situation where we deeply care about redistribution, we have more robots, we have more trade, but taxes are actually going down,” Costinot says. If the economy is relatively saturated with robots, he adds, “That marginal robot you are getting in the economy matters less and less for inequality.”

The study’s approach could also be applied to subjects besides automation and trade. There is increasing empirical work on, for instance, the impact of climate change on income inequality, as well as similar studies about how migration, education, and other things affect wages. Given the increasing empirical data in those fields, the kind of modeling Costinot and Werning perform in this paper could be applied to determine, say, the right level for carbon taxes, if the goal is to sustain a reasonable income distribution.

“There are a lot of other applications,” Werning says. “There is a similar logic to those issues, where this methodology would carry through.” That suggests several other future avenues of research related to the current paper. In the meantime, people who have envisioned a steep tax on robots are “qualitatively right, but quantitatively off,” Werning concludes.

How ornithopters can perch autonomously on a branch

by Zufferey, R., Tormo-Barbero, J., Feliu-Talegón, D. et al. in Nature Communications

A bird landing on a branch makes the maneuver look like the easiest thing in the world, but in fact, the act of perching involves an extremely delicate balance of timing, high-impact forces, speed, and precision. It’s a move so complex that no flapping-wing robot (ornithopter) has been able to master it, until now.

Raphael Zufferey, a postdoctoral fellow in the Laboratory of Intelligent Systems (LIS) and Biorobotics Lab (BioRob) in the School of Engineering, is the first author on a recent paper describing the unique landing gear that makes such perching possible. He built and tested it in collaboration with colleagues at the University of Seville, Spain, where the 700-gram ornithopter itself was developed as part of the European project GRIFFIN.

Overview and demonstration of the perching method.

“This is the first phase of a larger project. Once an ornithopter can master landing autonomously on a tree branch, then it has the potential to carry out specific tasks, such as unobtrusively collecting biological samples or measurements from a tree. Eventually, it could even land on artificial structures, which could open up further areas of application,” Zufferey says.

He adds that the ability to land on a perch could provide a more efficient way for ornithopters — which, like many unmanned aerial vehicles (UAVs), have limited battery life — to recharge using solar energy, potentially making them ideal for long-range missions.

“This is a big step toward using flapping-wing robots, which as of now can really only do free flights, for manipulation tasks and other real-world applications,” he says.

Avionics and control of the P-Flap.

The engineering problems involved in landing an ornithopter on a perch without any external commands required managing many factors that nature has already so perfectly balanced. The ornithopter had to be able to slow down significantly as it perched, while still maintaining flight. The claw needed to be strong enough to grasp the perch and support the weight of the robot, without being so heavy that it could not be held aloft. “That’s one reason we went with a single claw rather than two,” Zufferey notes. Finally, the robot needed to be able to perceive its environment and the perch in front of it in relation to its own position, speed, and trajectory.

The researchers achieved all this by equipping the ornithopter with a fully on-board computer and navigation system, which was complemented by an external motion-capture system to help it determine its position. The ornithopter’s leg-claw appendage was finely calibrated to compensate for the up-and-down oscillations of flight as it attempted to home in on and grasp the perch. The claw itself was designed to absorb the robot’s forward momentum upon impact, and to close quickly and firmly to support its weight. Once perched, the robot remains on the perch without energy expenditure. Even with all these factors to consider, Zufferey and his colleagues succeeded, ultimately building not just one but two claw-footed ornithopters to replicate their perching results.
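The two pieces of logic described above can be sketched as follows. Both functions and all parameter values are invented for illustration; the actual controller fuses on-board state estimates with motion capture and is far more involved:

```python
import math

def should_close_claw(perch_distance_m, approach_speed_mps,
                      trigger_dist=0.05, max_speed=1.5):
    """Hypothetical claw trigger: close once the perch is within reach
    and the ornithopter has bled off enough speed for the claw to
    absorb the remaining momentum."""
    return perch_distance_m <= trigger_dist and approach_speed_mps <= max_speed

def leg_setpoint(t, base_angle_rad, flap_hz=4.0, osc_amp_rad=0.1):
    """Hypothetical feed-forward term: phase the leg-claw angle against
    the wingbeat to counter the up-and-down oscillation of flapping
    flight while approaching the perch."""
    return base_angle_rad - osc_amp_rad * math.sin(2 * math.pi * flap_hz * t)
```

The design tension the article describes lives in those thresholds: trigger too early or too fast and the claw cannot absorb the impact; too late or too slow and the ornithopter stalls short of the branch.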

Looking ahead, Zufferey is already thinking about how their device could be expanded and improved, especially in an outdoor setting.

“At the moment, the flight experiments are carried out indoors, because we need to have a controlled flight zone with precise localization from the motion capture system. In the future, we would like to increase the robot’s autonomy to perform perching and manipulation tasks outdoors in a more unpredictable environment.”

Encoding of tactile information in hand via skin-integrated wireless haptic interface

by Kuanming Yao, Jingkun Zhou, Qingyun Huang, Mengge Wu, et al. in Nature Machine Intelligence

Enhancing the virtual experience with the touch sensation has become a hot topic, but today’s haptic devices typically remain bulky and tangled with wires. A team led by City University of Hong Kong (CityU) researchers recently developed an advanced wireless haptic interface system, called WeTac, worn on the hand, which has soft, ultrathin features and collects personalised tactile sensation data to provide a vivid touch experience in the metaverse.

The system has application potential in gaming, sports and skills training, social activities, and remote robotic controls. “Touch feedback has great potential, along with visual and auditory information, in virtual reality (VR), so we kept trying to make the haptic interface thinner, softer, more compact and wireless, so that it could be freely used on the hand, like a second skin,” said Dr Yu Xinge, Associate Professor in the Department of Biomedical Engineering (BME) at CityU, who led the research. Together with Professor Li Wenjung, Chair Professor in the Department of Mechanical Engineering (MNE), Dr Wang Lidai, Associate Professor in the Department of Biomedical Engineering (BME) and other collaborators, Dr Yu’s team developed WeTac, an ultra-flexible, wireless, integrated skin VR system.

Illustration of the QI-compatible wireless charging ability.

Existing haptic gloves rely mostly on bulky pumps and air ducts, powered and controlled through a bunch of cords and cables, which severely hinder the immersive experience of VR and augmented reality (AR) users. The newly developed WeTac overcomes these shortcomings with its soft, ultrathin, skin-integrated wireless electrotactile system.

The system comprises two parts: a miniaturised soft driver unit, attached to the forearm as a control panel, and a hydrogel-based electrode hand patch as the haptic interface. The entire driver unit weighs only 19.2 g and is small enough (5 cm × 5 cm × 2.1 mm) to be mounted on the arm. It uses Bluetooth low energy (BLE) wireless communication and a small rechargeable lithium-ion battery. The hand patch is only 220 µm to 1 mm thick, with electrodes on the palm. It exhibits great flexibility and guarantees effective feedback in various poses and gestures.

Current control module performance and monitoring results.

“Electrotactile stimulation is a good method to provide effective virtual touch for users,” Dr Yu explained. “However, as individuals have different sensitivities, the same feedback strength might be felt differently in different users’ hands. So we need to customise the feedback parameters accordingly to provide a universal tool for all users to eliminate another major bottleneck in the current haptic technology.”

The ultra-soft feature of WeTac allows the threshold currents to be easily mapped for individual users to determine the optimised parameters for each part of the hand. Based on the personalised threshold data, electrotactile feedback can be delivered to any part of the hand on demand in a proper intensity range to avoid causing pain or being too weak to be felt. In this way, virtual tactile information, including spatial and temporal sequences, can be faithfully reproduced over the whole hand.
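The personalisation step described above amounts to clamping each requested stimulus into a per-pixel window between the user's sensation threshold and pain threshold. A minimal sketch, with an invented pixel id and made-up threshold values:

```python
def safe_current(pixel, requested_mA, thresholds):
    """Clamp a requested stimulation current into the user's
    personalised window for one electrode pixel: strong enough to be
    felt, weak enough not to cause pain.

    thresholds -- dict mapping pixel id -> (sensation_mA, pain_mA),
                  measured during per-user calibration
    """
    sensation_mA, pain_mA = thresholds[pixel]
    return max(sensation_mA, min(requested_mA, pain_mA))
```

With a hypothetical calibration of `{"palm_03": (0.8, 2.4)}`, a 3.0 mA request is limited to 2.4 mA and a 0.5 mA request is raised to 0.8 mA, so every pixel stays inside the band the user can actually feel comfortably.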

The WeTac patches are worn on the hands to provide programmable spatio-temporal feedback patterns, with 32 electrotactile stimulation pixels on the palm instead of the fingertips only. The average centre-to-centre distance between the electrodes is about 13mm, providing wide coverage over the whole hand. The device has several built-in safety measures to protect users from electric shock, and the temperature of the device is maintained in a relatively low range of 27 to 35.5°C to avoid causing any thermal discomfort during continuous operation.

WeTac has been successfully integrated into VR and AR scenarios, and synchronised with robotic hands through BLE communication. With its miniature size, wearable and wireless format, and sensitivity-oriented feedback strategy, WeTac makes tactile feedback in the hand much easier and more user-friendly. Users can feel virtual objects in different scenarios, such as grasping a tennis ball in sports training, touching a cactus, or feeling a mouse running on the hand in social activities, virtual gaming, etc.

“We believe that this is a powerful tool for providing virtual touching, and is inspiring for the development of the metaverse, human-machine interface (HMI), and other fields,” said Dr Yu.

Untethered unidirectionally crawling gels driven by asymmetry in contact forces

by Aishwarya Pantula, Bibekananda Datta, Yupin Shi, Margaret Wang, Jiayu Liu, Siming Deng, Noah J. Cowan, Thao D. Nguyen, David H. Gracias in Science Robotics

A new gelatinous robot that crawls, powered by nothing more than temperature change and clever design, brings “a kind of intelligence” to the field of soft robotics.

“It seems very simplistic but this is an object moving without batteries, without wiring, without an external power supply of any kind — just on the swelling and shrinking of gel,” said senior author David Gracias, a professor of chemical and biomolecular engineering at Johns Hopkins University. “Our study shows how the manipulation of shape, dimensions and patterning of gels can tune morphology to embody a kind of intelligence for locomotion.”

Robots are made almost exclusively of hard materials like metals and plastics, a fundamental obstacle in the push to create robots that are, if not more human-like, then at least better suited to biomedical applications.

Image credit: Aishwarya Pantula / Johns Hopkins University

Water-based gels, which feel like gummy bears, are one of the most promising materials in the field of soft robotics. Researchers have previously demonstrated that gels that swell or shrink in response to temperature can be used to create smart structures. Here, the Johns Hopkins team demonstrated, for the first time, how swelling and shrinking of gels can be strategically manipulated to move robots forward and backward on flat surfaces, or to essentially have them crawl in certain directions with an undulating, wave-like motion.

The gelbots, which were created by 3D printing for this work, would be easy to mass produce. Gracias foresees a range of practical future applications, including moving on surfaces through the human body to deliver targeted medicines. They could also be marine robots, patrolling and monitoring the ocean’s surface.

Gracias hopes to train the gelbots to crawl in response to variations in human biomarkers and biochemicals. He also plans to test other worm and marine organism-inspired shapes and forms and would like to incorporate cameras and sensors on their bodies.

Multi-scale organization in communicating active matter

by Alexander Ziepke, Ivan Maryshev, Igor S. Aranson, Erwin Frey in Nature Communications

From a distance, they looked like clouds of dust. Yet, the swarm of microrobots in author Michael Crichton’s bestseller “Prey” was self-organized. It acted with rudimentary intelligence, learning, evolving and communicating with itself to grow more powerful.

A new model by a team of researchers led by Penn State and inspired by Crichton’s novel describes how biological or technical systems form complex structures equipped with signal-processing capabilities that allow the systems to respond to stimulus and perform functional tasks without external guidance.

“Basically, these little nanobots become self-organized and self-aware,” said Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State, explaining the plot of Crichton’s book. The novel inspired Aronson to study the emergence of collective motion among interacting, self-propelled agents.

Schematics of the agent-based model for communicating active matter and summary of collective dynamic states.

Aronson and a team of physicists from Ludwig-Maximilians-Universität (LMU) in Munich have developed a new model to describe how biological or synthetic systems form complex structures equipped with minimal signal-processing capabilities that allow the systems to respond to stimuli and perform functional tasks without external guidance. The findings have implications in microrobotics and for any field involving functional, self-assembled materials formed by simple building blocks, Aronson said. For example, robotics engineers could create swarms of microrobots capable of performing complex tasks such as pollutant scavenging or threat detection.

“If we look to nature, we see that many living creatures rely on communication and teamwork because it enhances their chances of survival,” Aronson said.

The computer model conceived by researchers from Penn State and Ludwig-Maximilians-Universität München predicted that communications by small, self-propelled agents lead to intelligent-like collective behavior. The study demonstrated that communications dramatically expand an individual unit’s ability to form complex functional states akin to living systems. The team built their model to mimic the behavior of social amoebae, single-cell organisms that can form complex structures by communicating through chemical signals. They studied one phenomenon in particular. When food becomes scarce, the amoebae emit a messenger chemical known as cyclic adenosine monophosphate (cAMP), which induces the amoebae to gather in one place and form a multicellular aggregate.
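The essence of such signal-mediated aggregation can be captured in a toy agent-based simulation. This is a minimal sketch, not the authors' model: each agent emits a cAMP-like signal and drifts up the summed signal gradient of the others, and every parameter below is invented for illustration:

```python
import numpy as np

def amoeba_step(positions, signal=1.0, noise=0.0, dt=0.1, soften=0.1, rng=None):
    """One synchronous update of a toy chemotaxis swarm.

    positions -- (N, 2) array of agent coordinates
    Each agent moves toward every signalling neighbour with a softened
    1/r attraction, so the population aggregates over time.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        grad = np.zeros(2)
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = q - p
            r = np.linalg.norm(d)
            # softened attraction avoids divergence at small separations
            grad += signal * d / (r**2 + soften)
        new_positions[i] = p + dt * grad + noise * rng.standard_normal(2)
    return new_positions
```

Iterating this step pulls scattered agents into a single cluster, which is the aggregation behavior the amoebae show when food runs out; richer versions add signal relay, delays, and self-propulsion.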

Hierarchical self-organization and information processing.

“The phenomenon is well known,” co-author Erwin Frey of Ludwig-Maximilians-Universität München said in a release. “Before now, however, no research group has investigated how information processing, at a general level, affects the aggregation of systems of agents when individual agents — in our case, amoebae — are self-propelled.”

For decades, scientists have been pursuing a better understanding of “active matter”: biological or synthetic systems that transform energy stored in the environment (e.g., a nutrient) into mechanical motion and form larger structures by means of self-organization. Taken individually, the material has no intelligence or functionality, but collectively, the material is capable of responding to its environment with a kind of emergent intelligence, Aronson explained. It’s an ancient concept with futuristic applications.

Aristotle articulated the theory of emergence some 2,370 years ago in his treatise “Metaphysics.” His language is commonly paraphrased as “the whole is greater than the sum of the parts.” In the not-so-distant future, Aronson says, research into emergent systems could lead to cell-sized nanobots that self-organize inside the body to combat viruses, or swarms of autonomous microrobots that can coordinate in complex formations without a pilot.

“We typically talk about artificial intelligence as some kind of sentient android with elevated thinking,” Aronson said. “What I’m working on is distributed artificial intelligence. Each element doesn’t have any intelligence, but once they come together, they’re capable of collective response and decision-making.”

There is currently a great demand for distributed artificial intelligence in the field of robotics, Aronson explained.

“If you’re designing a robot in the most cost-effective way possible, you don’t want to make it too complex,” he said. “We want to make small robots that are very simple, just a few transistors, that when working together have the same functionality as a complex machine, but without the expensive, complicated machinery. This discovery will open new avenues for applications of active matter in nanoscience and robotics.”

Aronson explained that from a practical standpoint, distributed artificial intelligence could be used in any kind of substance that has microscopically dispersed particles suspended within it. It could be deployed within the body to deliver a drug to fight disease or activate tiny electronic circuits in mass-manufactured microrobots.

“Despite its importance, the role of communication in the context of active matter remains largely unexplored,” the researchers wrote. “We identify the decision-making machinery of the individual active agents as the driving mechanism for the collectively controlled self-organization of the system.”

A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles

by Imran Ahmed, Gwanggil Jeon, Abdellah Chehri in IEEE Transactions on Intelligent Transportation Systems

Self-driving cars, or autonomous vehicles, have long been earmarked as the next-generation mode of transport. To enable the autonomous navigation of such vehicles in different environments, many different technologies relating to signal processing, image processing, artificial intelligence, deep learning, edge computing, and the Internet of Things (IoT) need to be implemented.

One of the largest concerns around the popularization of autonomous vehicles is that of safety and reliability. In order to ensure a safe driving experience for the user, it is essential that an autonomous vehicle accurately, effectively, and efficiently monitors and distinguishes its surroundings as well as potential threats to passenger safety.

To this end, autonomous vehicles employ high-tech sensors, such as Light Detection and Ranging (LiDAR), radar, and RGB cameras, that produce large amounts of data as RGB images and 3D measurement points, known as a “point cloud.” The quick and accurate processing and interpretation of this collected information is critical for the identification of pedestrians and other vehicles. This can be realized through the integration of advanced computing methods and Internet-of-Things (IoT) technology into these vehicles, which allows for fast, on-site data processing and more efficient navigation of various environments and obstacles.

Image of a Google self-driving car.

In a recent study, a group of international researchers, led by Professor Gwanggil Jeon from Incheon National University, Korea, has now developed a smart IoT-enabled end-to-end system for 3D object detection in real time based on deep learning and specialized for autonomous driving situations.

“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” explains Prof. Jeon. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he elaborates.

The team fed the collected RGB images and point cloud data as input to YOLOv3, which, in turn, outputs classification labels and bounding boxes with confidence scores. They then tested its performance with the Lyft dataset. The early results revealed that YOLOv3 achieved an extremely high accuracy of detection (>96%) for both 2D and 3D objects, outperforming other state-of-the-art detection models.
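A standard step in any YOLO-style pipeline like the one described is filtering the raw boxes the network emits: low-confidence detections are dropped and overlapping duplicates on the same object are merged by non-maximum suppression (NMS). The sketch below shows that post-processing step in plain NumPy; it is a generic illustration, not the authors’ code, and the thresholds are typical defaults rather than values from the paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; format is [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def filter_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence boxes, then greedily keep the highest-scoring
    box and suppress any remaining box that overlaps it too much (NMS)."""
    keep = scores >= conf_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(scores)[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return boxes[kept], scores[kept]

# Two near-duplicate boxes on one object, plus a distinct box elsewhere.
boxes = np.array([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
scores = np.array([0.90, 0.80, 0.95])
kept_boxes, kept_scores = filter_detections(boxes, scores)
```

In the example, the two heavily overlapping boxes are collapsed into the higher-scoring one, leaving two final detections.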

The method can be applied to autonomous vehicles, autonomous parking, autonomous delivery, and future autonomous robots, as well as in applications where object and obstacle detection, tracking, and visual localization are required.

“At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” highlights Prof. Jeon. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5–10 years,” he concludes optimistically.

Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics

by Gregor Hartmann, Gesa Goetzke, Stefan Düsterer, Peter Feuer-Forson, Fabiano Lever, David Meier, Felix Möller, Luis Vera Ramirez, Markus Guehr, Kai Tiedtke, Jens Viefhaus, Markus Braune in Scientific Reports

Experimental data are often not only highly dimensional, but also noisy and full of artefacts, which makes them difficult to interpret. Now a team at HZB has designed software that uses self-learning neural networks to compress the data in a smart way and then reconstruct a low-noise version. This makes it possible to recognise correlations that would otherwise not be discernible. The software has now been successfully used in photon diagnostics at the FLASH free-electron laser at DESY, but it is suitable for a wide range of other applications in science.

More is not always better; sometimes it is a problem. With highly complex data, which have many dimensions due to their numerous parameters, correlations are often no longer recognisable, especially since experimentally obtained data are additionally distorted and noisy due to influences that cannot be controlled.

Now, new software based on artificial intelligence methods can help. It is a special class of neural networks (NNs) that experts call a “disentangled variational autoencoder” (β-VAE). Put simply, the first NN takes care of compressing the data, while the second NN subsequently reconstructs the data. “In the process, the two NNs are trained so that the compressed form can be interpreted by humans,” explains Dr Gregor Hartmann. The physicist and data scientist supervises the Joint Lab on Artificial Intelligence Methods at HZB, which is run by HZB together with the University of Kassel.
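The two-network scheme Hartmann describes is trained against the β-VAE objective: a reconstruction term for the decoder, plus a β-weighted KL divergence that pushes the encoder’s latent code toward a factorized unit Gaussian, which is what encourages interpretable, disentangled factors. The sketch below (a generic illustration, not the HZB code) writes that loss and the reparameterization trick in NumPy, assuming a Gaussian latent with mean `mu` and log-variance `log_var`.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, so that in a trained model gradients
    flow through the encoder outputs rather than through the random draw."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Per-sample objective: squared reconstruction error plus a
    beta-weighted KL divergence between the Gaussian latent N(mu, sigma^2)
    and the unit Gaussian prior. beta > 1 is what pressures the latent
    dimensions toward independent, disentangled factors."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0, axis=-1)
    return recon + beta * kl

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
# Perfect reconstruction with a prior-matching latent costs nothing...
zero_loss = beta_vae_loss(x, x, np.zeros((2, 3)), np.zeros((2, 3)))
# ...while a latent drifting away from the prior is penalized beta-fold.
drift_loss = beta_vae_loss(x, x, np.ones((2, 3)), np.zeros((2, 3)))
```

Raising β trades reconstruction fidelity for a more strongly factorized latent space, which is the knob that makes the compressed representation human-interpretable.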

Eleven representative single-shot time-of-flight spectra (samples) obtained by the four OPIS electron spectrometers (eTOF 0–3): The grey traces at the bottom show no photo electron signal, whereas the other ten traces above contain Ne 2p photoelectron lines with successively longer time-of-flights, indicating decreasing FEL photon energy.

Google DeepMind had already proposed the use of β-VAEs in 2017. Many experts assumed that the application in the real world would be challenging, as non-linear components are difficult to disentangle. “After several years of learning how the NNs learn, it finally worked,” says Hartmann. β-VAEs are able to extract the underlying core principle from data without prior knowledge.

The structure of the β-VAE-network.

In the study now published, the group used the software to determine the photon energy of FLASH from single-shot photoelectron spectra. “We succeeded in extracting this information from noisy electron time-of-flight data, and much better than with conventional analysis methods,” says Hartmann. Even data with detector-specific artefacts can be cleaned up this way.

“The method is really good when it comes to impaired data,” Hartmann emphasises. The programme is even able to reconstruct tiny signals that were not visible in the raw data. Such networks can help uncover unexpected physical effects or correlations in large experimental data sets. “AI-based intelligent data compression is a very powerful tool, not only in photon science,” says Hartmann.

In total, Hartmann and his team spent three years developing the software. “But now, it is more or less plug and play. We hope that soon many colleagues will come with their data and we can support them.”

AstroSLAM: Autonomous Monocular Navigation in the Vicinity of a Celestial Small Body — Theory and Experiments

by Mehregan Dor et al in arXiv

Simultaneous localization and mapping (SLAM) is a promising technology that can be used to improve the navigation of autonomous systems, helping them to map their surrounding environment and track other objects within it. So far, it has primarily been applied to terrestrial vehicles and mobile robots, yet it could also potentially be expanded to spacecraft.

Researchers at Georgia Institute of Technology (Georgia Tech) and the NASA Goddard Space Flight Center recently created AstroSLAM, a SLAM-based algorithm that could allow spacecraft to navigate more autonomously. The new solution could be particularly useful in instances where space systems are navigating around a small celestial body, such as an asteroid.

“Our recent work is part of a NASA-funded ESI (Early-Stage Innovations) program whose objective was to make future spacecraft destined for deep-space missions (e.g., visiting and surveying asteroids) more autonomous,” said Panagiotis Tsiotras, one of the researchers who carried out the study.

“This problem is of great interest since, owing to the large distances from Earth, it is difficult to execute the required maneuvers around the asteroid in a real-time manner. Instead, the current process requires a large team of human operators on the ground to downlink the images captured from the spacecraft and to analyze them offline to create digital terrain maps, which amounts to carefully choreographing the spacecraft maneuvers.”

AstroSLAM in action, showing navigation solution and asteroid surface landmark map.

Ensuring that spacecraft move in desired ways around asteroids is a laborious, tedious and time-consuming task for human agents on Earth. A model that can autonomously reconstruct the shape of nearby asteroids and navigate the spacecraft with minimal intervention from Earth would thus be incredibly valuable, as it could facilitate and potentially speed up deep-space missions.

AstroSLAM, the solution developed by Tsiotras and his colleagues, can autonomously generate the location and orientation of spacecraft relative to that of nearby asteroids or other small celestial bodies. It achieves this by analyzing a sequence of images taken from a camera onboard the spacecraft as it is orbiting the celestial body of interest.

“AstroSLAM, as its name suggests, is based on SLAM, a methodology that has so far been used with great success in terrestrial mobile robots, but which we have now extended to the space environment,” Tsiotras explained. “Our model can also generate a 3D shape representation of small celestial bodies and estimate their size and gravitational parameters. The algorithm is the culmination of more than five years of work in vision-based relative navigation for spacecraft in my group, the Dynamics and Control Systems Laboratory at Georgia Tech.”

Autonomous operations in the vicinity of a celestial small body. Credit: Dor et al.

AstroSLAM can estimate the relative position and orientation of spacecraft in full autonomy. This information can then be used to plan and execute various maneuvers in orbit, including landing on a nearby celestial body. The model can also generate images of the 3D shape of the nearby celestial body, estimating its size and gravitational parameters.

“One of the novelties of AstroSLAM is that it takes into account the motion constraints stemming from the orbital dynamics, thus providing a much more accurate navigation solution,” Tsiotras said.

“AstroSLAM reduces a spacecraft’s reliance on the human ground crew to run complex computations, thus increasing its autonomy and relative navigation capabilities. Even if we continue to rely on existing well-tested methodologies for the foreseeable future, the proposed approach can also serve as a ‘back-up’ solution in case the primary approach fails, as it relies on just a single camera.”
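The orbital-dynamics constraint Tsiotras mentions amounts to using two-body gravity, rather than a generic constant-velocity assumption, as the motion model linking camera frames. A minimal sketch of such a motion model (illustrative only, not the AstroSLAM implementation) is a point-mass gravity propagator, here integrated with a classical RK4 step:

```python
import numpy as np

def two_body_step(r, v, mu, dt):
    """One RK4 step of point-mass (two-body) gravity: the kind of
    dynamics constraint a SLAM motion model can impose between frames.
    r, v are 3-vectors; mu is the body's gravitational parameter."""
    def deriv(state):
        pos, vel = state[:3], state[3:]
        acc = -mu * pos / np.linalg.norm(pos) ** 3
        return np.concatenate([vel, acc])
    s = np.concatenate([r, v])
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[:3], s[3:]

# A circular orbit (mu = 1, radius 1) should stay on its circle: any
# drift in |r| over many steps would violate the dynamics constraint.
r, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
for _ in range(100):
    r, v = two_body_step(r, v, mu=1.0, dt=0.01)
radius_drift = abs(np.linalg.norm(r) - 1.0)
```

Feeding predictions like this into the estimator constrains the camera poses to physically feasible trajectories, which is one reason a dynamics-aware SLAM solution can be more accurate than a purely visual one.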

The researchers evaluated their technology in a series of tests, using real data captured by NASA during legacy space missions and high-fidelity artificial data generated using a spacecraft simulator at Georgia Tech. Their findings were very promising, suggesting that AstroSLAM could eventually enable the autonomous operation of spacecraft in various scenarios.

“We are currently working on improving the image processing step of AstroSLAM (e.g., salient feature detection and tracking), by leveraging a state-of-the-art neural-network architecture trained on a large database of real images of asteroids from prior NASA missions to detect more reliable, salient surface features,” Tsiotras added. “Once integrated with AstroSLAM, this work is expected to increase the reliability and robustness against incorrect measurements (outliers) and difficult illumination conditions.”

Tsiotras and his colleagues are now also working to allow the model to merge images from visible light and infrared light, to attain even better performances. Finally, they wish to extend their approach to operational scenarios in which the images would be captured by multiple spacecraft in orbit concurrently.

“Small celestial bodies, such as asteroids, comets, and planetary moons, are fascinating and scientifically-valuable targets for exploration,” said Kenneth Getzandanner, co-author of the paper and flight dynamics lead for Space Science Mission Operations at the NASA Goddard Space Flight Center.

“Missions to these objects, however, present unique challenges to navigation and operations given the object’s small size and the magnitude of perturbing forces relative to gravity. Recent small body missions, including the Origins, Spectral Interpretation, Resource Identification — Security Regolith Explorer (OSIRIS-REx) at the near-Earth asteroid 101955 Bennu, exemplify these challenges and require extensive characterization campaigns and significant ground-in-the-loop interaction. Technologies such as AstroSLAM are useful for simplifying operations, reducing reliance on ground assets and personnel for near real-time operations, and enabling more ambitious mission concepts and near-surface sorties.”

Videos

  • A self-healing soft robot finger developed by VUB-imec Brubotics and FYSC, sending “MERRY XMAS” to the world in Morse code.
  • This year, our robot Husky was very busy working for the European Space Agency (ESA). But will he have to spend Christmas alone, apart from his robot friends at the FZI — alone on the moon? His friends want to change that! So, they train very hard to reunite with Husky! Will they succeed?
  • In this festive robotic Christmas sketch, a group of highly advanced robots come together to celebrate the holiday season. The “Berliner Hochschule für Technik” wishes a merry Christmas and a happy new year!
  • Robots like Digit are purpose-built to do tasks in environments made for humans. We aren’t trying to just mimic the look of people or make a humanoid robot. Every design and engineering decision is looked at through a function-first lens. To easily walk into warehouses and work alongside people, to do the kinds of dynamic reaching, carrying, and walking that we do, Digit has some similar characteristics. Our Co-Founder and Chief Technology Officer Jonathan Hurst discusses the difference between humanoid and human-centric robotics.

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
