RT/ Will future computers run on human brain cells?

Paradigm · Published in Paradigm
26 min read · Mar 3, 2023

Robotics biweekly vol.69, 17th February — 3rd March

TL;DR

  • A ‘biocomputer’ powered by human brain cells could be developed within our lifetime, according to researchers who expect such technology to exponentially expand the capabilities of modern computing and create novel fields of study.
  • A study has investigated the potential of artificial intelligence to address societal megatrends and analyzed its proposed solutions to these global challenges. Artificial intelligence can offer understandable insights into the complex and cross-cutting issues of megatrends, and into how different areas could change and benefit if AI systems are deployed.
  • A tiny robot that could one day help doctors perform surgery was inspired by the incredible gripping ability of geckos and the efficient locomotion of inchworms.
  • Octopus arms coordinate nearly infinite degrees of freedom to perform complex movements such as reaching, grasping, fetching, crawling, and swimming. How these animals achieve such a wide range of activities remains a source of mystery, amazement, and inspiration. Part of the challenge comes from the intricate organization and biomechanics of the internal muscles.
  • Using artificial intelligence, researchers can now follow cell movement across time and space. The method could be very helpful for developing more effective cancer medications.
  • Could an app tell if a first date is just not that into you? Engineers say the technology might not be far off. They trained a computer to identify the type of conversation two people were having based on their physiological responses alone.
  • Study finds that just 8% of all depictions of AI professionals from a century of film are women — and half of these are shown as subordinate to men. Cinema promotes AI as the product of lone male geniuses with god complexes, say researchers. Cultural perceptions influence career choices and recruitment, they argue, with the AI industry suffering from severe gender imbalance, risking development of discriminatory technology.
  • A new system represents the first time that the capabilities of conventional beam-scanning lidar systems have been combined with those of a newer 3D approach known as flash lidar. The nonmechanical 3D lidar system is compact enough to fit in the palm of the hand and solves issues of detecting and tracking poorly reflective objects.
  • Researchers have realized a new soft robot inspired by the biology of earthworms, which is able to crawl thanks to soft actuators that elongate or squeeze when air passes through them or is drawn out.
  • Researchers have recently developed a new computational method that could detect DDoS attacks more effectively and reliably. This method is based on a long short-term memory (LSTM) model, a type of recurrent neural network (RNN) that can learn to detect long-term dependencies in event sequences.
  • Robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
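As a quick sanity check on that forecast, compounding a base-year figure at a constant annual rate reproduces the headline number. (The ~41.6 billion U.S. dollar base for 2018 is inferred by working backwards from the forecast, not taken from the Statista chart.)

```python
def project(base, cagr, years):
    """Compound a base value at a constant annual growth rate (CAGR)."""
    return base * (1 + cagr) ** years

# Inferred base of ~41.6 bn USD in 2018, growing at 26% per year
# over the 7 years to 2025:
size_2025 = project(41.6, 0.26, 7)  # ≈ 210 bn USD, matching the forecast
```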

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish

by Lena Smirnova et al. in Frontiers in Science

A “biocomputer” powered by human brain cells could be developed within our lifetime, according to Johns Hopkins University researchers who expect such technology to exponentially expand the capabilities of modern computing and create novel fields of study. The team outlines their plan for “organoid intelligence”.

“Computing and artificial intelligence have been driving the technology revolution but they are reaching a ceiling,” said Thomas Hartung, a professor of environmental health sciences at the Johns Hopkins Bloomberg School of Public Health and Whiting School of Engineering who is spearheading the work. “Biocomputing is an enormous effort of compacting computational power and increasing its efficiency to push past our current technological limits.”

For nearly two decades scientists have used tiny organoids, lab-grown tissue resembling fully grown organs, to experiment on kidneys, lungs, and other organs without resorting to human or animal testing. More recently Hartung and colleagues at Johns Hopkins have been working with brain organoids, orbs the size of a pen dot with neurons and other features that promise to sustain basic functions like learning and remembering.

“This opens up research on how the human brain works,” Hartung said. “Because you can start manipulating the system, doing things you cannot ethically do with human brains.”

Magnified image of a brain organoid produced in Thomas Hartung’s lab, dyed to show neurons in magenta, cell nuclei in blue, and other supporting cells in red and green. Image: Jesse Plotkin/Johns Hopkins University

Hartung began to grow and assemble brain cells into functional organoids in 2012 using cells from human skin samples reprogrammed into an embryonic stem cell-like state. Each organoid contains about 50,000 cells, about the size of a fruit fly’s nervous system. He now envisions building a futuristic computer with such brain organoids.

Computers that run on this “biological hardware” could in the next decade begin to alleviate energy-consumption demands of supercomputing that are becoming increasingly unsustainable, Hartung said. Even though computers process calculations involving numbers and data faster than humans, brains are much smarter in making complex logical decisions, like telling a dog from a cat.

“The brain is still unmatched by modern computers,” Hartung said. “Frontier, the latest supercomputer in Kentucky, is a $600 million, 6,800-square-foot installation. Only in June of last year did it exceed, for the first time, the computational capacity of a single human brain — but using a million times more energy.”

Architecture of an OI system for biological computing. At the core of OI is the 3D brain cell culture (organoid) that performs the computation. The learning potential of the organoid is optimized by culture conditions and enrichment by cells and genes critical for learning (including IEGs). The scalability, viability, and durability of the organoid are supported by integrated microfluidic systems. Various types of input can be provided to the organoid, including electrical and chemical signals, synthetic signals from machine sensors, and natural signals from connected sensory organoids (e.g. retinal).

It might take decades before organoid intelligence can power a system as smart as a mouse, Hartung said. But by scaling up production of brain organoids and training them with artificial intelligence, he foresees a future where biocomputers support superior computing speed, processing power, data efficiency, and storage capabilities.

“It will take decades before we achieve the goal of something comparable to any type of computer,” Hartung said. “But if we don’t start creating funding programs for this, it will be much more difficult.”

3D microfluidic devices to support scalability and long-term homeostasis of brain organoids. (A) Cells within brain organoids require perfusion with oxygen, nutrients, and growth factors, as well as the removal of waste products, to provide conditions approximating physiologic homeostasis. Passive diffusion penetrates to a depth of only around 300 μm, and so necrosis occurs at the core of larger organoids owing to starvation. This prevents brain organoids from being scaled up to the size and complexity required for OI research and limits their durability. (B) 3D microfluidic systems enable greater scalability and durability by providing controlled perfusion throughout larger organoids. They also enable 3D spatiotemporal dosing of chemicals for signaling purposes.

Organoid intelligence could also revolutionize drug testing research for neurodevelopmental disorders and neurodegeneration, said Lena Smirnova, a Johns Hopkins assistant professor of environmental health and engineering who co-leads the investigations.

“We want to compare brain organoids from typically developed donors versus brain organoids from donors with autism,” Smirnova said. “The tools we are developing towards biological computing are the same tools that will allow us to understand changes in neuronal networks specific for autism, without having to use animals or to access patients, so we can understand the underlying mechanisms of why patients have these cognition issues and impairments.”

To assess the ethical implications of working with organoid intelligence, a diverse consortium of scientists, bioethicists, and members of the public has been embedded within the team.

Artificial Intelligence and Ten Societal Megatrends: An Exploratory Study Using GPT-3

by Daniela Haluza, David Jungwirth in Systems

A study by the Medical University of Vienna has investigated the potential of artificial intelligence (AI) to address societal megatrends and analyzed its proposed solutions to these global challenges. Artificial intelligence can offer understandable insights into the complex and cross-cutting issues of megatrends, and into how different areas could change and benefit if AI systems are deployed.

The study by Daniela Haluza and David Jungwirth of MedUni Vienna’s Center for Public Health used OpenAI’s Generative Pre-Trained Transformer 3 (GPT-3), a more powerful version of the currently popular ChatGPT chatbot, to analyze the potential of AI for societal megatrends. These are major global issues such as digitization, urbanization, globalization, climate change, automation, mobility, global health issues, aging population, emerging markets, and sustainability. Interaction with the AI was done by entering questions, and the generated responses were analyzed. The study concluded that AI can significantly improve understanding of these megatrends by providing insights into how they might evolve over time and what solutions might be implemented.

“Our exploratory study shows that the AI GPT-3 provides easy-to-understand insights into the complex and cross-cutting matters of the megatrends, and into how different areas could change and benefit if AI systems are deployed,” Haluza explains. “In addition, GPT-3 has illustrated several solution ideas for each of the ten societal megatrends and provided suggestions for further scientific research in these areas,” Jungwirth adds.

The author team notes that while much work remains to be done before the use of AI tools such as GPT-3 will have a tangible impact on societal megatrends, there is ample evidence to suggest that they will have a positive impact if used correctly. The researchers also suggest that further research should be conducted on how best to use new AI technologies to address these challenges.

GPT-3’s agreement on contributing to a scientific article.

The study also acknowledges that while AI systems are becoming increasingly sophisticated, they are not yet infallible and can still make mistakes or produce incorrect results. Haluza takes a realistic perspective on the current hype surrounding artificial intelligence. “One problem is also that the AI GPT-3 only provides useful answers if the question is very precisely formulated, and even then it sometimes simply invents content without labeling it as such. Garbage in, garbage out.”

The study’s findings suggest that an AI could be useful for tasks such as abbreviating text and creating summaries. However, the authors argue that an ethical discussion about the broader use of AI systems for writing scientific research papers is long overdue and should lead to adjusted journal policies: possibly restrictions on future co-authorships with AIs, the introduction of mandatory tools for reviewing AI-generated content, or a refusal to allow AIs to collaborate on scientific articles at all.

Gecko-and-inchworm-inspired untethered soft robot for climbing on walls and ceilings

by Jian Sun, Lukas Bauman, Li Yu, Boxin Zhao in Cell Reports Physical Science

A tiny robot that could one day help doctors perform surgery was inspired by the incredible gripping ability of geckos and the efficient locomotion of inchworms.

The new robot, developed by engineers at the University of Waterloo, utilizes ultraviolet (UV) light and magnetic force to move on any surface, even up walls and across ceilings. It is the first soft robot of its kind that doesn’t require connection to an external power supply, enabling remote operation and versatility for potential applications such as assisting surgeons and searching otherwise inaccessible places.

“This work is the first time a holistic soft robot has climbed on inverted surfaces, advancing state-of-the-art soft robotics innovation,” said Dr. Boxin Zhao, a professor of chemical engineering. “We are optimistic about its potential, with much more development, in several different fields.”

Constructed from a smart material, the robot — dubbed the GeiwBot by researchers because of the creatures that inspired it — can be altered at the molecular level to mimic how geckos stick and unstick powerful grippers on their feet. That enables the robot — about four centimetres long, three millimetres wide and one millimetre thick — to climb on a vertical wall and across the ceiling without being tethered to a power source.

Zhao and his research team constructed the robot using liquid crystal elastomers and synthetic adhesive pads. A light-responsive polymer strip simulates the arching and stretching motion of an inchworm, while gecko-inspired magnet pads at either end do the gripping.

“Even though there are still limitations to overcome, this development represents a significant milestone for utilizing biomimicry and smart materials for soft robots,” said Zhao, the University of Waterloo Endowed Chair in Nanotechnology. “Nature is a great source of inspiration and nanotechnology is an exciting way to apply its lessons.”

An untethered soft robot paves the way for potential surgical applications via remote operation inside the human body and for sensing or searching in dangerous or hard-to-reach places during rescue operations. The next step for researchers is to develop a solely light-driven climbing soft robot that doesn’t require a magnetic field and uses near-infrared radiation instead of UV light to improve biocompatibility.

Energy-shaping control of a muscular octopus arm moving in three dimensions

by Heng-Sheng Chang, Udit Halder, Chia-Hsien Shih, Noel Naughton, Mattia Gazzola, Prashant G. Mehta in Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences

Octopus arms coordinate nearly infinite degrees of freedom to perform complex movements such as reaching, grasping, fetching, crawling, and swimming. How these animals achieve such a wide range of activities remains a source of mystery, amazement, and inspiration. Part of the challenge comes from the intricate organization and biomechanics of the internal muscles.

This problem was tackled in a multidisciplinary project led by Prashant Mehta and Mattia Gazzola, professors of mechanical science & engineering at the University of Illinois Urbana-Champaign. As reported, the two researchers and their groups have developed a physiologically accurate model of octopus arm muscles. “Our model, the first of its kind, not only provides insight into the biological problem, but a framework for design and control of soft robots going forward,” Mehta said.

The impressive capabilities of octopus arms have long served as an inspiration for the design and control of soft robots. Such soft robots have the potential to perform complex tasks in unstructured environments while operating safely around humans, with applications ranging from agriculture to surgery.

Graduate student Heng-Sheng Chang, the study’s lead author, explained that soft-bodied systems like octopuses’ arms present a major modeling and control challenge.

“They are driven by three major internal muscle groups — longitudinal, transverse, and oblique — that cause the arm to deform in several modes — shearing, extending, bending, and twisting,” he said. “This endows the soft muscular arms with significant freedom, unlike their rigid counterparts.”

Simulation of an octopus grasping a cylinder.

The team’s key insight was to express the arm musculature using a stored energy function, a concept borrowed from the theory of continuum mechanics. Postdoctoral scholar and corresponding author Udit Halder explained that “The arm rests at the minimum of an energy landscape. Muscle actuations modify the stored energy function, thus shifting the equilibrium position of the arm and guiding the motion.”

Interpreting the muscles through stored energy dramatically simplifies the arm’s control design. In particular, the study outlines an energy-shaping control methodology to compute the muscle activations needed for manipulation tasks such as reaching and grasping. When the approach was demonstrated numerically in the software environment Elastica, the model produced remarkably life-like motion in a simulated three-dimensional octopus arm. Moreover, according to Halder, “Our work offers mathematical guarantees of performance that are often lacking in alternative approaches, including machine learning.”
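The core idea of shifting an equilibrium by reshaping a stored energy function can be illustrated with a toy one-dimensional sketch. (The quadratic energy well and gradient-descent settling below are illustrative stand-ins, not the paper’s actual arm model.)

```python
def equilibrium(energy_grad, q0, lr=0.1, steps=1000):
    """Find a local energy minimum by gradient descent: a toy stand-in
    for the arm settling into its equilibrium configuration."""
    q = q0
    for _ in range(steps):
        q -= lr * energy_grad(q)
    return q

def make_grad(target, stiffness=1.0):
    """Gradient of a quadratic energy well centred at `target`.
    'Muscle actuation' here means moving the well's minimum."""
    return lambda q: stiffness * (q - target)

rest = equilibrium(make_grad(0.0), q0=2.0)       # arm relaxes to 0.0
actuated = equilibrium(make_grad(1.5), q0=rest)  # actuation shifts it to 1.5
```

Shifting the energy minimum, rather than commanding positions directly, is what lets the controller guarantee that the arm converges to the desired configuration.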

“Our work is part of a larger ecosystem of ongoing collaborations at the University of Illinois,” Mehta said. “Upstream, there are biologists who perform experiments on octopuses. Downstream, there are roboticists who are taking these mathematical ideas and applying them to real soft robots.”

Geometric deep learning reveals the spatiotemporal features of microscopic motion

by Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo in Nature Machine Intelligence

The enormous amount of data obtained by filming biological processes using a microscope has previously been an obstacle for analyses. Using artificial intelligence (AI), researchers at the University of Gothenburg can now follow cell movement across time and space. The method could be very helpful for developing more effective cancer medications.

Studying the movements and behaviours of cells and biological molecules under a microscope provides fundamental information for better understanding processes pertaining to our health. Studies of how cells behave in different scenarios are important for developing new medical technologies and treatments.

“In the past two decades, optical microscopy has advanced significantly. It enables us to study biological life down to the smallest detail in both space and time. Living systems move in every possible direction and at different speeds,” says Jesús Pineda, doctoral student at the University of Gothenburg and first author of the scientific article.

Spatiotemporal characterization of trajectories using MAGIK.

Advancements have given today’s researchers such large amounts of data that analysis is nearly impossible. But now, researchers at the University of Gothenburg have developed an AI method combining graph theory and neural networks that can pick out reliable information from video clips. Graph theory is a mathematical structure that is used to describe the relationships between different particles in the studied sample. It is comparable to a social network in which the particles interact and influence one another’s behaviour directly or indirectly.
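The graph idea can be sketched very simply: treat each detection as a node, and link detections in consecutive frames that lie close enough to plausibly be the same particle. (This is a crude stand-in for the study’s MAGIK method; the distance threshold and coordinates below are made up.)

```python
from math import dist

def build_graph(frames, max_jump=5.0):
    """Link particle detections in consecutive frames into graph edges.
    frames: one list of (x, y) detections per video frame.
    Returns edges as ((frame, index), (frame + 1, index)) pairs."""
    edges = []
    for t in range(len(frames) - 1):
        for i, p in enumerate(frames[t]):
            for j, q in enumerate(frames[t + 1]):
                if dist(p, q) <= max_jump:
                    edges.append(((t, i), (t + 1, j)))
    return edges

# Two frames, two particles each; each particle moves only slightly,
# so the graph links each detection to its likely successor.
frames = [[(0.0, 0.0), (10.0, 0.0)], [(1.0, 0.0), (10.0, 1.0)]]
edges = build_graph(frames)
```

In the actual method, a neural network then scores such candidate edges to decide which links form real trajectories.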

“The AI method uses the information in the graph to adapt to different situations and can solve multiple tasks in different experiments. For example, our AI can reconstruct the path that individual cells or molecules take when moving to achieve a certain biological function. This means that researchers can test the effectiveness of different medications and see how well they work as potential cancer treatments,” says Jesús Pineda.

Automated Classification of Dyadic Conversation Scenarios using Autonomic Nervous System Responses

by Iman Chatterjee, Maja Gorsic, Mohammad S. Hossain, Joshua D. Clapp, Vesna D. Novak in IEEE Transactions on Affective Computing

Could an app tell if a first date is just not that into you? Engineers at the University of Cincinnati say the technology might not be far off. They trained a computer — using data from wearable technology that measures respiration, heart rates and perspiration — to identify the type of conversation two people were having based on their physiological responses alone.

Researchers studied a phenomenon in which people’s heart rates, respiration and other autonomic nervous system responses become synchronized when they talk or collaborate. Known as physiological synchrony, this effect is stronger when two people engage deeply in a conversation or cooperate closely on a task.
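One simple way to quantify synchrony between two physiological signals is their Pearson correlation. (This is a deliberate simplification; the study used richer autonomic features, and the heart-rate samples below are made up.)

```python
def synchrony(x, y):
    """Pearson correlation between two equal-length time series,
    used here as a crude physiological-synchrony index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

hr_a = [72, 74, 75, 73, 76, 78]  # hypothetical heart-rate samples
hr_b = [70, 72, 74, 72, 75, 77]
r = synchrony(hr_a, hr_b)  # close to 1.0: the series rise and fall together
```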

“Physiological synchrony shows up even when people are talking over Zoom,” said study co-author Vesna Novak, an associate professor of electrical engineering in UC’s College of Engineering and Applied Science.

In experiments with human participants, the computer was able to differentiate four different conversation scenarios with as much as 75% accuracy. The study is one of the first of its kind to train artificial intelligence to recognize aspects of a conversation based on the participants’ physiology alone. Lead author and UC doctoral student Iman Chatterjee said a computer could give you honest feedback about your date — or yourself.

“The computer could tell if you’re a bore,” Chatterjee said. “A modified version of our system could measure the level of interest a person is taking in the conversation, how compatible the two of you are and how engaged the other person is in the conversation.”

Chatterjee said physiological synchrony is likely an evolutionary adaptation. Humans evolved to share and collaborate with each other, which manifests even at a subconscious level, he said.

“It is certainly no coincidence,” he said. “We only notice physiological synchrony when we measure it, but it probably creates a better level of coordination.”

Studies have shown that physiological synchrony can predict how well two people will work together to accomplish a task. The degree of synchrony also correlates with how much empathy a patient perceives in a therapist or the level of engagement students feel with their teachers.

“You could probably use our system to determine which people in an organization work better together in a group and which are naturally antagonistic,” Chatterjee said.

This aspect of affective computing holds huge potential for providing real-time feedback for educators, therapists or even autistic people, Novak said.

“There are a lot of potential applications in this space. We’ve seen it pitched to look for implicit bias. You might not even be aware of these biases,” Novak said.

Who makes AI? Gender and portrayals of AI scientists in popular film, 1920–2020

by Stephen Cave, Kanta Dihal, Eleanor Drage, Kerry McInerney in Public Understanding of Science

Cinematic depictions of the scientists behind artificial intelligence over the last century are so heavily skewed towards men that a dangerous “cultural stereotype” has been established — one that may contribute to the shortage of women now working in AI development.

Cultural perceptions influence career choices and recruitment, the researchers argue, and the AI industry already suffers from a severe gender imbalance, risking the development of discriminatory technology.

Researchers from the University of Cambridge argue that such cultural tropes and a lack of female representation affects career aspirations and sector recruitment. Without enough women building AI there is a high risk of gender bias seeping into the algorithms set to define the future, they say.

The team from the University’s Leverhulme Centre for the Future of Intelligence (LCFI) whittled down over 1,400 films to the 142 most influential cinematic works featuring artificial intelligence between 1920 and 2020, and identified 116 characters they classed as “AI professionals.” Of these, 92% of all AI scientists and engineers on screen were men, with representations of women consisting of a total of eight scientists and one CEO. This is higher than the percentage of men in the current AI workforce (78%).

Researchers argue that films such as Iron Man and Ex Machina promote cultural perceptions of AI as the product of lone male geniuses. Of the meagre eight female AI scientists to come out of 100 years of cinema, four were still depicted as inferior or subservient to men. The first major film to put a female AI creator on screen did not come until the 1997 comedy Austin Powers: International Man of Mystery, with the over-the-top Frau Farbissina and her ‘Fembots’. This dearth of on-screen depictions may be linked to a lack of women behind the camera. Depending on how the directors’ gender is counted, not a single influential film with an AI plotline was directed solely by a woman.

“Gender inequality in the AI industry is systemic and pervasive,” said co-author Dr Kanta Dihal from LCFI at Cambridge. “Mainstream films are an enormously influential source and amplifier of the cultural stereotypes that help dictate who is suited to a career in AI.”

“Our cinematic stock-take shows that women are grossly underrepresented as AI scientists on screen. We need to be careful that these cultural stereotypes do not become a self-fulfilling prophecy as we enter the age of artificial intelligence.”

Representations of AI scientists and engineers in influential mainstream and science fiction films 1920–2020 by gender.

The researchers found that a third (37 individuals) of cinema’s AI scientists are presented as “geniuses” — and of these, just one is a woman. In fact, 14% of all AI professionals on film are portrayed as former child prodigies of some kind.

The LCFI team point to previous research showing that people across age groups associate exceptional intellectual ability with men — the “brilliance bias” — and argue that the stereotype of AI scientists as genius visionaries “entrenches” beliefs that women are not suited for AI-related careers.

“Genius is not a neutral concept,” said co-author Dr Stephen Cave, director of LCFI. “Genius is an idea based in gendered and racialised notions of intelligence, historically shaped by a white male elite. Some influential technologists, such as Elon Musk, have deliberately cultivated ‘genius’ personas that are explicitly based on cinematic characters such as Iron Man.”

Dihal and Cave, along with their LCFI colleagues — and hosts of the Good Robot podcast — Dr Eleanor Drage and Dr Kerry McInerney, also catalogue the way in which cinema’s male scientists create human-like AI as a form of emotional compensation. Some 22% of the male AI scientists or engineers throughout cinematic history create human-like AI to “fulfil their desires”: replacing lost loved ones, building ideal lovers, or creating AI copies of themselves.

“Cinema has long used narratives of artificial intelligence to perpetuate male fantasies, whether it’s the womb envy of a lone genius creating in his own image, or the god complex of returning the dead to life or constructing obedient women,” said LCFI co-author Dr Kerry McInerney.

All this is further exacerbated by the overwhelmingly “male milieu” of many AI movies, argue researchers — with AI often shown as a product of male-dominated corporations or the military. The LCFI team argue that the current state of female representation in the AI industry is grim. Globally, only 22% of AI professionals are women (compared to 39% across all STEM fields). Over 80% of all AI professors are men, with women comprising just 12% of authors at AI conferences.

“Women are often confined to lower-paid, lower-status roles such as software quality assurance, rather than prestigious sub-fields such as machine learning,” said LCFI co-author Dr Eleanor Drage.

“This is not just about inequality in one industry. The marginalisation of women could contribute to AI products that actively discriminate against women — as we have seen with past technologies. Given that science fiction shapes reality, this imbalance has the potential to be dangerous as well as unfair.”

While some may question whether on-screen representation truly influences the real world, the LCFI team point to research showing that nearly two-thirds (63%) of women in STEM say that Dr Dana Scully, the scientist protagonist on legendary TV show The X Files, served as an early role model.

The eight female AI scientists and engineers (and one CEO) from a century of cinema:

  • Quintessa, the female alien in Transformers: The Last Knight (2017)
  • Shuri in Avengers: Infinity War (2018)
  • Evelyn Caster in Transcendence (2014)
  • Ava in The Machine (2013)
  • Dr Brenda Bradford in Inspector Gadget (1999)
  • Dr Susan Calvin in I, Robot (2004)
  • Dr Dahlin in Ghost in the Shell (2017)
  • Frau Farbissina in Austin Powers: International Man of Mystery (1997)
  • Smiler, a female emoji in The Emoji Movie (2017)

Non-mechanical three-dimensional LiDAR system based on flash and beam-scanning dually modulated photonic crystal lasers

by Menaka De Zoysa, Ryoichi Sakata, Kenji Ishizaki, Takuya Inoue, Masahiro Yoshida, John Gelleta, Yoshiyuki Mineyama, Tomoyuki Akahori, Satoshi Aoyama, Susumu Noda in Optica

Our roads might one day be safer thanks to a completely new type of system that overcomes some of lidar’s limitations. Lidar, which uses pulsed lasers to map objects and scenes, helps autonomous robots, vehicles and drones to navigate their environment. The new system represents the first time that the capabilities of conventional beam-scanning lidar systems have been combined with those of a newer 3D approach known as flash lidar.

Investigators led by Susumu Noda from Kyoto University in Japan describe their new nonmechanical 3D lidar system, which fits in the palm of the hand. They also show that it can be used to measure the distance of poorly reflective objects and automatically track the motion of these objects.

“With our lidar system, robots and vehicles will be able to reliably and safely navigate dynamic environments without losing sight of poorly reflective objects such as black metallic cars,” said Noda. “Incorporating this technology into cars, for example, would make autonomous driving safer.”

The new system is possible thanks to a unique light source the researchers developed called a dually modulated photonic-crystal laser (DM-PCSEL). Because this light source is chip-based it could eventually enable the development of an on-chip all-solid-state 3D lidar system.

“The DM-PCSEL integrates non-mechanical, electronically controlled beam scanning with flash illumination used in flash lidar to acquire a full 3D image with a single flash of light,” said Noda. “This unique source allows us to achieve both flash and scanning illumination without any moving parts or bulky external optical elements, such as lenses and diffractive optical elements.”

(a) Photograph of the developed 3D ToF-LiDAR system implementing DM-PCSEL-based flash and beam-scanning laser sources. A business card is placed in front of the system for perspective. (b) Driving circuits for the DM-PCSEL-based flash and beam-scanning laser sources. Schematic diagrams illustrating the circuit for each laser source are also shown.

Lidar systems map objects within view by illuminating those objects with laser beams and then calculating the distance of those objects by measuring the beams’ time of flight (ToF) — the time it takes for the light to travel to objects, be reflected and then return to the system. Most lidar systems in use and under development rely on moving parts such as motors to scan the laser beam, making these systems bulky, expensive and unreliable.
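The time-of-flight principle itself reduces to one line of arithmetic: the round-trip time of the pulse, multiplied by the speed of light and halved. A minimal sketch (the numbers are illustrative, not from the paper):

```python
# Time-of-flight ranging: a lidar derives distance from the round-trip
# travel time of a light pulse.  Illustrative values only.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to a target given the pulse's round-trip time in seconds."""
    # Halve the path length: the light travels to the object and back.
    return C * round_trip_s / 2.0

# A pulse returning after roughly 66.7 ns corresponds to a target ~10 m away.
print(tof_distance(66.7e-9))
```

Nanosecond-scale timing resolution is what makes centimeter-scale ranging possible, which is why the choice of detector (ToF camera versus single-photon avalanche photodiode) matters so much for range.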

One non-mechanical approach, known as flash lidar, simultaneously illuminates and evaluates the distances of all objects in the field of view with a single broad, diffuse beam of light. However, flash lidar systems can’t be used to measure the distances of poorly reflective objects like black metallic cars due to the very small amount of light reflected from these objects. These systems also tend to be large because of the external lenses and optical elements needed to create the flash beam.

To address these critical limitations, the researchers developed the DM-PCSEL light source. It has both a flash source that can illuminate a wide 30°×30° field of view and a beam-scanning source that provides spot illumination with 100 narrow laser beams. They incorporated the DM-PCSEL into a 3D lidar system, which allowed them to measure the distances of many objects simultaneously using wide flash illumination while also selectively illuminating poorly reflective objects with a more concentrated beam of light. The researchers also installed a ToF camera to perform distance measurements and developed software that enables automatic tracking of the motion of poorly reflective objects using beam-scanning illumination.
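The hybrid strategy described above can be sketched as a simple control loop: take a wide flash frame first, then re-measure only the pixels whose return signal was too weak with a concentrated beam. All names and thresholds below are hypothetical illustrations, not the authors' actual firmware:

```python
# Sketch of a hybrid flash + beam-scanning acquisition strategy.
# Hypothetical data structures; the real system works on ToF camera frames.
def acquire_depth_map(flash_frame, beam_measure, min_signal=0.1):
    """flash_frame maps pixel -> (distance, signal_strength) from one flash.
    beam_measure re-measures a single pixel with a narrow spot beam."""
    depth = {}
    for pixel, (dist, signal) in flash_frame.items():
        if signal >= min_signal:
            depth[pixel] = dist  # flash return was strong enough
        else:
            # Poorly reflective target (e.g. a black car): spot-illuminate it.
            depth[pixel] = beam_measure(pixel)
    return depth

# Toy frame: the second pixel returned almost no light under flash.
frame = {(0, 0): (2.0, 0.9), (0, 1): (None, 0.02)}
print(acquire_depth_map(frame, beam_measure=lambda p: 2.1))
```

The design choice is a trade-off: the flash covers the whole 30°×30° field in one shot, while the 100 narrow beams concentrate optical power only where the flash fails.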

a) Microscope image of the backside electrodes of the DM-PCSEL flash laser source.

“Our DM-PCSEL-based 3D lidar system lets us range highly reflective and poorly reflective objects simultaneously,” said Noda. “The lasers, ToF camera and all associated components required to operate the system were assembled in a compact manner, resulting in a total system footprint that is smaller than a business card.”

The researchers demonstrated the new lidar system by using it to measure the distances of poorly reflective objects placed on a table in a lab. They also showed that the system can automatically recognize poorly reflective objects and track their movement using selective illumination.

The researchers are now working to demonstrate the system in practical applications, such as the autonomous movement of robots and vehicles. They also want to see if replacing the ToF camera with a more optically sensitive single-photon avalanche photodiode array would allow the measurement of objects across even longer distances.

An earthworm-like modular soft robot for locomotion in multi-terrain environments

by Riddhi Das et al in Scientific Reports

Researchers at Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) in Genoa have developed a new soft robot inspired by the biology of earthworms, which crawls thanks to soft actuators that elongate when air is pumped into them and squeeze when air is drawn out.

The prototype is a starting point for developing devices for underground exploration, as well as for search and rescue operations in confined spaces and the exploration of other planets. Nature offers many examples of animals, such as snakes, earthworms, snails, and caterpillars, that use both the flexibility of their bodies and their ability to generate physical traveling waves along the length of their body to move through and explore different environments. Some of their movements also resemble those of plant roots.

Taking inspiration from nature and, at the same time, revealing new biological phenomena while developing new technologies is the main goal of the BioInspired Soft robotics lab coordinated by Barbara Mazzolai, and this earthworm-like robot is the latest invention coming from her group.

The new soft robot inspired by the biology of earthworms.

The creation of the earthworm-like robot was made possible by a thorough understanding of earthworm locomotion mechanics. Earthworms use alternating contractions of muscle layers to propel themselves both below and above the soil surface, generating retrograde peristaltic waves. The individual segments of their body (metameres) contain a specific quantity of fluid that controls the internal pressure used to exert forces and to perform independent, localized and variable movement patterns.

IIT researchers studied the morphology of earthworms and found a way to mimic their muscle movements, their constant-volume coelomic chambers and the function of their bristle-like hairs (setae) with soft robotic solutions. The team developed a peristaltic soft actuator (PSA) that implements the antagonistic muscle movements of earthworms: from a neutral position it elongates when air is pumped into it and compresses when air is extracted from it. The robotic earthworm's body consists of five PSA modules connected in series by interlinks. The current prototype is 45 cm long and weighs 605 grams.

Each actuator has an elastomeric skin that encapsulates a known amount of fluid, mimicking the constant volume of internal coelomic fluid in earthworms. When the longitudinal muscles of an individual constant-volume chamber contract, the segment becomes shorter longitudinally and wider circumferentially, exerting radial forces. Antagonistically, when the circumferential muscles contract, the segment becomes longer along the anterior–posterior axis and thinner circumferentially, producing penetration forces along the axis. Each actuator achieves a maximum elongation of 10.97 mm at 1 bar of positive pressure and a maximum compression of 11.13 mm at 0.5 bar of negative pressure, and is unique in its ability to generate both longitudinal and radial forces in a single actuator module.
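The peristaltic gait that coordinates these five modules can be sketched as a phase-shifted cycle: each segment steps through contraction, elongation and a neutral state, with neighbouring segments offset so the wave of contraction travels along the body. The states and timing below are a hypothetical illustration, not the IIT controller:

```python
# Illustrative retrograde peristaltic gait for a 5-segment robot.
# Each PSA module cycles through three pressure states; adjacent
# segments are offset by one phase so a contraction wave travels
# along the body (opposite to the direction of motion).
PHASES = ["contract", "elongate", "neutral"]  # pressure states of one PSA

def gait_step(t: int, n_segments: int = 5):
    """Return the phase of every segment at discrete time step t."""
    return [PHASES[(t + i) % len(PHASES)] for i in range(n_segments)]

for t in range(3):
    print(gait_step(t))  # the contraction wave shifts one segment per step
```

In the physical robot, "contract" and "elongate" would correspond to extracting and pumping air in a given module, with the friction pads anchoring the contracted segments.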

To propel the robot on a planar surface, small passive friction pads inspired by earthworms' setae were attached to its ventral surface. With these pads, the robot achieved a locomotion speed of 1.35 mm/s.

This study not only proposes a new method for developing a peristaltic earthworm-like soft robot but also provides a deeper understanding of locomotion from a bioinspired perspective in different environments. The potential applications for this technology are vast, including underground exploration, excavation, search and rescue operations in subterranean environments and the exploration of other planets. This bioinspired burrowing soft robot is a significant step forward in the field of soft robotics and opens the door for further advancements in the future.

Detecting DDoS attacks using adversarial neural network

by Ali Mustapha et al in Computers & Security

Cybercriminals are coming up with increasingly savvy ways to disrupt online services, access sensitive data or crash internet users' devices. A cyber-attack that has become very common over the past decades is the so-called Distributed Denial of Service (DDoS) attack.

This type of attack involves a series of devices connected to the internet, which are collectively referred to as a “botnet.” This “group” of connected devices is then used to flood a target server or website with “fake” traffic, disrupting its operation and making it inaccessible to legitimate users.

To protect their websites or servers from DDoS attacks, businesses and other users commonly rely on firewalls, anti-malware software or conventional intrusion detection systems. Yet detecting these attacks has become very challenging, as they are increasingly carried out using generative adversarial networks (GANs), machine learning models that can learn to realistically mimic the activity of real users and legitimate user requests. As a result, many existing anti-malware systems ultimately fail to protect users against them.

Researchers at Institut Polytechnique de Paris, Telecom Paris (INFRES) have recently developed a new computational method that could detect DDoS attacks more effectively and reliably. This method is based on a long short-term memory (LSTM) model, a type of recurrent neural network (RNN) that can learn to detect long-term dependencies in event sequences.

“Our research paper was based on the problem of detecting DDoS attacks, a type of cyber-attacks that can cause significant damage to online services and network communication,” Ali Mustapha, one of the researchers who carried out the study, told Tech Xplore. “While previous studies have explored the use of deep learning algorithms to detect DDoS attacks, these approaches may still be vulnerable to attackers who utilize machine learning and deep learning techniques to create adversarial attack traffic capable of bypassing detection systems.”

IDS model architecture. Credit: Mustapha et al

As part of their study, Mustapha and his colleagues set out to devise an entirely new machine learning–based approach that could improve the resilience of DDoS detection systems. The method they proposed is based on two separate models that can be integrated into a single intrusion detection system.

“The first model is designed to determine whether the incoming network traffic is adversarial and block it if it is deemed fraudulent,” Mustapha explained. “Otherwise, it is then forwarded to the second model, which is responsible for identifying whether it constitutes a DDoS attack. Depending on the outcome of this analysis, a corresponding set of rules and an alert system are employed.”
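The two-stage pipeline Mustapha describes can be sketched as a simple gate: stage one screens incoming traffic for adversarial patterns, and only traffic that passes is handed to the stage-two DDoS classifier. The classifiers here are stand-in callables with hypothetical features; in the paper both stages are LSTM networks, which this sketch does not reimplement:

```python
# Minimal sketch of a two-stage intrusion detection pipeline.
# Feature names and thresholds are invented for illustration.
def inspect(flow, is_adversarial, is_ddos):
    """Route one traffic flow through the two detection stages."""
    if is_adversarial(flow):
        return "block: adversarial traffic"   # stage 1 verdict
    if is_ddos(flow):
        return "alert: DDoS attack"           # stage 2 verdict
    return "allow"                            # passed both checks

# Toy example: a flow with an abnormally high packet rate.
flow = {"pkts_per_s": 50_000, "gan_score": 0.1}
verdict = inspect(flow,
                  is_adversarial=lambda f: f["gan_score"] > 0.5,
                  is_ddos=lambda f: f["pkts_per_s"] > 10_000)
print(verdict)
```

Separating the two decisions is what lets the system respond differently to adversarially crafted traffic (block it outright) versus ordinary DDoS traffic (trigger the rule set and alerting described above).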

The DDoS detection tool proposed by this team of researchers has numerous advantages over other intrusion detection systems developed in the past. Most notably, it is robust and can detect DDoS attacks with high levels of accuracy, it is adaptable, and it could also be tailored to meet the unique needs of specific businesses or users. In addition, it can be easily deployed by internet service providers (ISPs), while protecting them against both standard and adversarial DDoS attacks.

“Our study yielded several noteworthy results and accomplishments,” Mustapha explained. “Initially, we evaluated high-performance models that are trained to identify standard DDoS attacks, testing them against adversarial DDoS attacks generated through Generative Adversarial Networks (GANs). We observed that the models were relatively ineffective at detecting these types of attacks; however, we were able to refine our approach and enhance it to detect these attacks with an accuracy exceeding 91%.”

Initial tests conducted by Mustapha and his colleagues yielded very promising results, as they showed that their system could also detect more sophisticated attacks specifically engineered to fool machine learning algorithms. To demonstrate their tool’s potential further, the researchers also carried out a series of tests in real-time. They found that the system satisfied the real-time DDoS attack detection requirements, extracting and analyzing network packets in a limited amount of time and without causing substantial network traffic delays.

The promising method presented in this paper could soon be integrated within existing and newly developed security systems. In addition, it might inspire the development of similar machine learning techniques for detecting DDoS attacks.

“As we look ahead to future work, it will be essential to assess the efficacy of our IDS when challenged with adversarial attacks generated by alternative models,” Mustapha added. “Additionally, we need to explore the implementation of online learning algorithms, which enable the IDS to continuously update its model in real-time as it analyzes new data. By integrating an incremental update feature, the IDS could retain its effectiveness in detecting evolving attack techniques.”

Upcoming events

ICRA 2023: 29 May–2 June 2023, London, UK

RoboCup 2023: 4–10 July 2023, Bordeaux, France

RSS 2023: 10–14 July 2023, Daegu, Korea

IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
